FRC Ultimate Ascent Vision System

This post is a companion explanation to the KBotics Vision System for the 2013 challenge Ultimate Ascent. The full source code is available at: github.com/QueensFRC/2809_vision2013

I think our vision system this year is pretty unique; I doubt any other teams are doing vision the same way we are. I learned a lot from last year and had two new ideas for this year's vision system. I'm going to start by describing the one we didn't do and end with what we did.

Last year's vision system used raw thresholding on the input image followed by filtering to identify the targets. It did work, but it was highly susceptible to changes in the lighting conditions. We remedied this by saving the images we took on the field and then tuning our vision system in between matches. The system worked well, but I knew we could do better. The trick was going to be getting away from absolute thresholding and using a "relative" approach, and this is where the two new ideas come in.

The first approach requires the LED ring to be wired to either a Spike (relay) or the solenoid breakout board - this gives you control over when the light is on and when it is off (I hope you can see where I'm going with this!). Using some back and forth with the SmartDashboard, have your robot turn the lights off and take an image, then turn the lights back on and take a second image. Assuming you stopped your robot to aim, simply subtract the two images and you should get a very cleanly segmented target. Pretty simple and very effective - I am surprised more teams aren't doing something like this. We tried it out, but we eventually chose a slightly different approach because of the stopped assumption (I wish I'd kept some of the images for this blog post to demonstrate my point!).
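In case it helps anyone, the core of that idea is just a frame difference. Here's a minimal OpenCV sketch of what I mean - the threshold value and the function name segmentByDifference are purely illustrative, this isn't code from our repo:

    #include <opencv2/opencv.hpp>

    // lightsOff / lightsOn are frames grabbed while the robot is stationary,
    // one with the LED ring switched off and one with it switched on.
    cv::Mat segmentByDifference(const cv::Mat& lightsOff, const cv::Mat& lightsOn)
    {
        cv::Mat diff, gray, mask;
        cv::absdiff(lightsOn, lightsOff, diff);   // only pixels that changed when the LEDs came on survive
        cv::cvtColor(diff, gray, cv::COLOR_BGR2GRAY);
        cv::threshold(gray, mask, 40, 255, cv::THRESH_BINARY);  // threshold the *difference*, not absolute brightness
        return mask;   // the bright blobs should be the retro-reflective targets
    }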

Okay, so what did we do? We implemented a two-layer vision system: a low-level system identifies possible targets, and then a machine learning algorithm classifies each candidate as a target or not. The inspiration for this approach came from my new favourite book, Mastering OpenCV with Practical Computer Vision Projects, Chapter 5 - Number Plate Recognition Using SVM and Neural Networks.

The low-level vision system:

The input image:

The low-level vision system runs a Sobel filter over the image to highlight the horizontal edges (we know the targets will have quite strong horizontal edges). The following figure shows what the output from this looks like:
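In OpenCV this step boils down to a single Sobel call on the grayscale frame. A rough sketch, where input is the camera frame (the kernel size and the Otsu threshold are illustrative choices rather than our exact parameters):

    cv::Mat gray, edges;
    cv::cvtColor(input, gray, cv::COLOR_BGR2GRAY);
    // dx = 0, dy = 1: respond to intensity changes in the vertical direction,
    // which highlights horizontal edges like the top and bottom of the target tape.
    cv::Sobel(gray, edges, CV_8U, 0, 1, 3);
    cv::threshold(edges, edges, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);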

Next we run a morphological filter to join the bottom of the target to the top; the result looks like this:
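That morphological step is a standard close with a tall structuring element, something along these lines (continuing from the edges image above; the kernel size is a guess, not our tuned value):

    cv::Mat closed;
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 17));
    // Closing with a tall kernel bridges the gap between the top and bottom
    // edges of each target so it becomes one solid blob.
    cv::morphologyEx(edges, closed, cv::MORPH_CLOSE, kernel);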

The next step is to run a connected components algorithm and keep only the blobs that are roughly the correct size and aspect ratio. The contours that passed our filter are drawn in blue on the following image:
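In OpenCV terms this is findContours plus a size and aspect-ratio check on each blob. A rough sketch, continuing from the closed image above (the area and aspect limits are placeholders, not our actual tuned numbers):

    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(closed.clone(), contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    std::vector<cv::RotatedRect> candidates;
    for (size_t i = 0; i < contours.size(); i++) {
        cv::RotatedRect box = cv::minAreaRect(contours[i]);
        float w = box.size.width, h = box.size.height;
        float longSide  = w > h ? w : h;
        float shortSide = w > h ? h : w;
        // Keep blobs that are roughly the size and shape of a target.
        if (shortSide > 0 && w * h > 500.0f && w * h < 20000.0f &&
            longSide / shortSide > 1.5f && longSide / shortSide < 5.0f)
            candidates.push_back(box);
    }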

From here, each blob that has passed all the filters so far is treated as a candidate target. We rotate each candidate to be oriented horizontally and resize it to a constant size (see the sketch after the figure below for roughly how this works). The low-level system returns all of these normalized sub-images along with their locations. Two such images are shown below:

A segmented target, and a false positive (it looks like a light).
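For anyone wanting to reproduce the normalization step, here is a hedged sketch of how a candidate region can be cut out, rotated upright, and scaled to a fixed size (the 48x16 patch size and the helper name normalizeCandidate are illustrative only; image is the grayscale camera frame):

    cv::Mat normalizeCandidate(const cv::Mat& image, const cv::RotatedRect& box)
    {
        float angle = box.angle;
        cv::Size2f size = box.size;
        if (size.width < size.height) {   // make the long side horizontal
            float tmp = size.width; size.width = size.height; size.height = tmp;
            angle += 90.0f;
        }
        // Rotate the whole frame about the blob centre, then cut out the upright patch.
        cv::Mat rot = cv::getRotationMatrix2D(box.center, angle, 1.0);
        cv::Mat rotated, patch, normalized;
        cv::warpAffine(image, rotated, rot, image.size(), cv::INTER_CUBIC);
        cv::getRectSubPix(rotated, cv::Size(cvRound(size.width), cvRound(size.height)),
                          box.center, patch);
        cv::resize(patch, normalized, cv::Size(48, 16));  // constant size for the classifier
        return normalized;
    }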

As seen above, the low-level vision system is pretty easy to fool - that's why our algorithm has a second part!

The second part of the vision system sends each candidate target through a Support Vector Machine (SVM), which gets the final say on whether it really is a target or not. SVMs are a type of machine learning algorithm; the goal of machine learning is to teach the computer how to perform a task by showing it examples rather than giving it explicit instructions. So, in a separate step before running the vision system, we trained an SVM by showing it many examples of targets and "not-targets", allowing it to learn how to classify new inputs on its own. SVMs are implemented in OpenCV, and the training code is in the trainSVM.cpp file on GitHub (along with a few Python scripts for generating file lists).
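To give a flavour of what that training step involves, here is a hedged sketch using the OpenCV 2.4-era CvSVM API - the variable names, parameters and output file name are made up for illustration, and the real code is in trainSVM.cpp:

    // patches: normalized single-channel candidate images, all the same fixed size.
    // isTarget: 1 for a real target, 0 for a "not-target" example.
    cv::Mat samples((int)patches.size(), (int)patches[0].total(), CV_32FC1);
    cv::Mat labels((int)patches.size(), 1, CV_32FC1);
    for (size_t i = 0; i < patches.size(); i++) {
        cv::Mat row;
        patches[i].convertTo(row, CV_32FC1);
        row.reshape(1, 1).copyTo(samples.row((int)i));   // one flattened patch per row
        labels.at<float>((int)i, 0) = isTarget[i] ? 1.0f : -1.0f;
    }

    CvSVMParams params;
    params.svm_type    = CvSVM::C_SVC;
    params.kernel_type = CvSVM::LINEAR;
    params.term_crit   = cvTermCriteria(CV_TERMCRIT_ITER, 1000, 0.01);

    CvSVM svm;
    svm.train(samples, labels, cv::Mat(), cv::Mat(), params);
    svm.save("target_svm.yml");

    // At runtime each candidate is flattened the same way and classified:
    //   float label = svm.predict(candidateRow);   // returns 1 (target) or -1 (not)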

The neat part about our vision system is that, because we save the candidate targets while we are out on the field, we can add them to the pool of training data in between matches - our algorithm actually gets smarter and improves as we play more matches and gather more data!

Here is a video of the system in action (using my cppVision framework)! At 12 seconds the operator turns the vision system on and you can see the targets light up as the LEDs turn on too. A small green cross-hair appears in the middle of the top target, and when the operator is aligned the full-screen cross-hair changes from red to green.

Queen's ECE Open House and 4th year project demos

It sure has been a busy week for my blog, but so much cool stuff happened this week!

This afternoon I went to the Queen's Electrical and Computer Engineering open house where the 4th year projects were on display. I was very impressed by the quality of the projects this year - absolutely top-notch stuff!

It was really nice to see the projects this year because I knew and taught a lot of the students who are graduating. I had also been advising several groups throughout the completion of their projects. What made me really happy was to see my "influence" on the graduating class - thanks to the Computer Vision class, many groups used OpenCV and other open-source software in their projects instead of C# and MATLAB, and some groups even used Linux to develop their code instead of Windows! One project in particular switched from using C# and the Microsoft Kinect SDK for face tracking and blink detection to using OpenCV Haar cascades and Eigenfaces, thanks to my ELEC 474 lab!

All my best wishes to ECE '13!

ARPool on Discovery Channel's Daily Planet

or - “my second television experience”

Last Thursday I was contacted by the Daily Planet, a daily science news program on the Discovery Channel, to see if we could bring ARPool in for an invention-themed week they were running. I eagerly accepted and, after arranging a few details, showed up on set the following Tuesday with a laptop, projector and camera to set up ARPool!

I think this time went more smoothly (at least I was a bit more relaxed) because I knew more of what to expect from a television set. It's very busy and there are lots of people around; some of them know who you are and what you are supposed to be doing, some of them don't, and it's really tough to figure out who is in charge. It is pretty hectic - I don't know how these people do it every day, because it seems too stressful to me!

Anyway, as always seems to be the case with ARPool, getting it calibrated on time came down to the wire. I have a few more ideas on how we can make this process better for next time, though! (The issue we often have has to do with how we mount the camera relative to the projector: in the past we've tended to prioritize ease of mounting, while in the future I want to prioritize keeping the same configuration as we have in the lab.)

Moving on from the technical aspects to show biz - being on the show was an absolute blast! While we were waiting to start filming, Dan (the host) and I were both mic'd up and he had an earpiece. This made for the weirdest situation: we would be chatting while waiting to start filming, and then suddenly he would start talking to the producer, who was in his ear and listening through his mic! I guess it's not much different than being next to someone who is on the phone, but the lack of hardware and the quick transitions made it pretty wacky!

One thing that really amazed me was how completely unscripted the whole thing was. We did one "dry" interview but pretty much just jumped right into it. The host (and the producer in his ear) keeps the show going in the direction they want by asking the right questions and guiding the conversation. It really reminded me of HBO's The Newsroom, which is a brilliant show that you should watch if you haven't! It also really made me think of all the skills these people have in order to make this happen every day, and how remarkable it all is.

In summary, today was an awesome day and I can't wait to see how the segment turned out! Stay tuned for a link! Check it out:

Response to the video has been great:

I got a tweet from Queen’s University

A Facebook share from Queen's ECE, a mention on the Queen's in the News page, and a shout-out on the ECE-ALLUSERS mailing list!