Hack Labs Kingston Raspberry Pi Weekend

Last week a friend of mine dropped by my lab with a poster for an event he thought I might be interested in. I love that my friends and colleagues now associate me with all things techy and geeky. Anyways, the poster was for a weekend hacking session on Raspberry Pis (RPis), and it just so happened that I had the weekend free, so bam, I was there.

I had a great weekend at Hack Labs Kingston, but before I get into the details of our project I want to talk a bit about the atmosphere. It was amazing. First, the location, dubbed the Boathouse: a strange older wooden building that juts out onto the lake and during the week is home to, I believe, five small science and tech companies. Everywhere you looked out the windows all you could see was ice; it was like being at the cottage for a weekend, except we were right downtown. Inside there were lots of large glass windows and dividers, which most of the companies had written on with window markers for the whole John Nash vibe, which I also liked. All in all, a very cool place to spend a weekend hacking away.

The next big point is the people: so many cool people! I definitely need to get more involved with the hacker/maker culture; these are my peoples! What was great about the people was that they were as passionate as I am about this kind of stuff. You know when you really need to talk about something you find cool but your audience just isn't as into it, and it leaves you a little unsatisfied? Well, none of that at Hack Labs! (at least for me lol)

I feel like I should put a link to the Hack Labs Facebook page so here it is:

*Edit: Hack Labs Kingston is now the Kingston Maker Space.*

https://www.facebook.com/KingstonMakerSpaceInitiative

That was a lot of text so far, so bam, how about some pics of the final product from the weekend:

[Photos of the finished PiArcade: pi-arcade, pi-arcade2, pi-arcade3]

We made a PiArcade!

I was on a team of five, including another programmer, two electrical engineers, and a graphic designer (which explains the nice graphics!), that put this together over the weekend. We wired the retro arcade buttons directly into the GPIO headers on the RPi board. Then we wrote a program that read the GPIO pins and converted the button presses into keyboard input for playing the game. To polish it all off, we wrote some nice bash scripts so we could start and quit the game using the arcade buttons.
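For the curious, the GPIO-to-keyboard part boils down to something like this. This is a rough Python sketch, not our actual code; the pin numbers and key mapping are placeholders, not our real wiring:

```python
# Rough sketch: poll arcade buttons on the RPi GPIO pins and inject
# keyboard events so the game sees them as normal key presses.
# Pin numbers and key choices below are illustrative, not ours.
import time

import RPi.GPIO as GPIO  # GPIO access on the Pi
import uinput            # virtual keyboard (needs the uinput kernel module loaded)

BUTTONS = {
    17: uinput.KEY_LEFT,   # hypothetical wiring: BCM pin -> key
    27: uinput.KEY_RIGHT,
    22: uinput.KEY_SPACE,
}

GPIO.setmode(GPIO.BCM)
for pin in BUTTONS:
    # buttons wired to ground, so use the internal pull-up
    GPIO.setup(pin, GPIO.IN, pull_up_down=GPIO.PUD_UP)

keyboard = uinput.Device(list(BUTTONS.values()))
state = {pin: False for pin in BUTTONS}

while True:
    for pin, key in BUTTONS.items():
        pressed = GPIO.input(pin) == GPIO.LOW  # active low because of the pull-up
        if pressed != state[pin]:
            keyboard.emit(key, 1 if pressed else 0)  # 1 = press, 0 = release
            state[pin] = pressed
    time.sleep(0.01)  # simple polling loop
```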

We used GitHub for our project so all the source is available. This repo probably has the most coherent commit messages of all time… hahaha. github.com/pickle27/25c-to-continue Anyways, if you're curious I'm sure you can easily reproduce our project, although you'll need a bit of Linux know-how because we had some symlinks going on and it required loading a few kernel modules.

It was really fun to work with a team that was so enthusiastic and had such a wide variety of skills. On Sunday everyone presented their projects; we ran our PiArcade with a pygame remake of Millipede (with some modifications) for one of the sponsors who had mentioned it was his favourite classic arcade game. Our team, named "Twenty-Five cents to Continue", took home the grand prize for best weekend project, meaning I'm the proud new owner of my very own Raspberry Pi!

I did learn a few things about the RPi over the weekend. The main thing is that it is not, I repeat, not an Arduino, nor does it replace the Arduino. The Arduino is exceptionally good at interacting with hardware, and while the RPi can interact via the GPIO, that's not its strength. This may change as improvements are made to the software, but that's not the point: the RPi has a lot of strengths the Arduino doesn't have, like video out, support for USB devices, internet access, SSH; the list goes on. My takeaway is to use the RPi for what it is, a tiny computer, and leave the Arduino to deal with the hardware. The coolest projects in the future are going to use both RPis and Arduinos, so learn to use each of them effectively!

Here’s a video of our creation!

I edited this using Kdenlive, and I just have to say: wow, what an excellent program!

FRC Ultimate Ascent Vision System

This post is a companion explanation to the KBotics Vision System for the 2013 challenge Ultimate Ascent. The full source code is available at: github.com/QueensFRC/2809_vision2013

I think our vision system this year is pretty unique; I doubt there are any other teams doing vision the same way we are. I learned a lot from last year, and I had two new ideas for this year's vision system. I'm going to start by describing the one we didn't do and end with what we did.

Last year's vision system used raw thresholding on the input image and then filtering to identify the targets. That did work, but it was highly susceptible to changes in lighting conditions. We remedied this by saving the images we took on the field and then tuning our vision system between matches. The system worked well, but I knew we could do better. The trick was going to be getting away from absolute thresholding and using a "relative" approach, and this is where the two new ideas come in.

The first approach requires the LED ring to be wired to either a Spike (relay) or into the solenoid breakout board, which gives you control over when the light is on and when it is off (I hope you can see where I'm going with this!!). Then, using some back and forth with the SmartDashboard, have your robot turn the lights off and take an image, then turn the lights back on and take a second image. Assuming you stopped your robot to aim, you simply subtract the two images and you should get a very cleanly segmented target. Pretty simple and very effective; I am surprised more teams aren't doing something like this. We tried this out, but we eventually chose a slightly different approach because of the assumption that the robot has stopped (I wish I'd kept some of the images for this blog post to demonstrate my point!).
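The subtraction itself is only a few lines. Here's a rough Python/OpenCV sketch of the idea; the file names and threshold value are placeholders, and in practice the two frames would come straight from the camera after toggling the relay:

```python
# Sketch of the lights-off / lights-on subtraction idea.
import cv2

off_img = cv2.imread("lights_off.png")  # frame with the LED ring off
on_img = cv2.imread("lights_on.png")    # frame with the LED ring on

# Pixels lit by the LED ring change a lot between the two frames;
# everything else (ambient lighting) stays roughly the same.
diff = cv2.absdiff(on_img, off_img)
gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)

# Keep only the pixels that changed significantly -> the retro-reflective target.
_, target_mask = cv2.threshold(gray, 50, 255, cv2.THRESH_BINARY)
cv2.imwrite("target_mask.png", target_mask)
```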

Okay, so what did we do? We implemented a two-layer vision system: a low level system identifies possible targets, and then a machine learning algorithm classifies each candidate as target or not. The inspiration for this approach came from my new favourite book, Mastering OpenCV with Practical Computer Vision Projects, Chapter 5: Number Plate Recognition Using SVM and Neural Networks.

The low level vision system:

The input image:

The low level vision system runs a Sobel filter over the image to highlight the horizontal edges (we know the targets will have quite strong horizontal edges). The following figure shows what the output from this looks like:

Next we run a morphological filter to join the bottom of the target to the top; the result looks like this:
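In code, those two steps are a Sobel derivative in the y direction (which responds to horizontal edges) followed by a morphological close with a tall kernel. Here's a rough Python/OpenCV sketch; our real implementation is C++, and the kernel size and thresholds here are illustrative:

```python
# Sketch of the low level edge + morphology steps.
import cv2

img = cv2.imread("frame.png")  # stand-in input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Sobel in the y direction responds strongly to horizontal edges,
# like the top and bottom of the rectangular targets.
sobel = cv2.Sobel(gray, cv2.CV_16S, 0, 1, ksize=3)
sobel = cv2.convertScaleAbs(sobel)
_, edges = cv2.threshold(sobel, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)

# A morphological close with a tall, skinny kernel joins the top edge of the
# target to the bottom edge, turning each target into one solid blob.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 21))
closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)
```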

The next step is to run a connected components algorithm and keep only the blobs that are roughly the correct size and aspect ratio. The contours that passed our filter are drawn in blue on the following image:
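A sketch of that filter in Python/OpenCV follows; the area and aspect-ratio limits here are made up, not our tuned values, which live in the C++ code:

```python
# Sketch of the size / aspect-ratio filter on connected blobs.
# 'closed' is a binary image like the one produced by the previous sketch.
import cv2

closed = cv2.imread("closed.png", cv2.IMREAD_GRAYSCALE)

# [-2] picks the contour list regardless of whether this OpenCV version
# returns two or three values from findContours.
contours = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]

candidates = []
for contour in contours:
    x, y, w, h = cv2.boundingRect(contour)
    aspect = w / float(h)
    area = cv2.contourArea(contour)
    # keep blobs that are roughly the right size and wider than they are tall
    if 500 < area < 20000 and 1.5 < aspect < 4.0:
        candidates.append(contour)
```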

From here we take each blob that has passed all the filters so far as a candidate target. We rotate each candidate to be oriented horizontally and resize it to a constant size. The low level system returns all of these normalized sub-images along with their locations. Two such images are shown below:

[Examples: a segmented target, and a false positive that looks like a light]
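The rotate-and-resize normalization mentioned above might look roughly like this. Again, this is a Python sketch with an illustrative output size; the real code is in the repo:

```python
# Sketch of normalizing one candidate blob: rotate the whole frame so the
# blob's minimum-area rectangle is horizontal, then crop and resize it.
# 'img' and 'contour' would come from the earlier steps; 64x32 is illustrative.
import cv2

def normalize_candidate(img, contour, out_size=(64, 32)):
    (cx, cy), (w, h), angle = cv2.minAreaRect(contour)
    if w < h:                # make sure the long side ends up horizontal
        w, h = h, w
        angle += 90.0
    # rotate the image about the blob centre so the blob ends up level
    rot = cv2.getRotationMatrix2D((cx, cy), angle, 1.0)
    rotated = cv2.warpAffine(img, rot, (img.shape[1], img.shape[0]))
    # crop the now-upright blob and scale it to a constant size
    patch = cv2.getRectSubPix(rotated, (int(w), int(h)), (cx, cy))
    return cv2.resize(patch, out_size)
```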

As seen above, the low level vision system is pretty easy to fool; that's why we have a second part to our algorithm!

The second part of the vision system sends the candidate targets through a Support Vector Machine (SVM), which gets the final say on whether each one really is a target or not. SVMs are a type of machine learning algorithm; the goal of machine learning is to teach the computer how to perform a task by showing it examples rather than giving it explicit instructions. So, in a separate step before running the vision system, we trained the SVM by showing it many examples of targets and "not-targets", allowing it to learn how to classify new inputs on its own. SVMs are implemented in OpenCV, and the training code is in the trainSVM.cpp file on GitHub (along with a few Python scripts for generating file lists).
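Here's a rough Python sketch of what training and using the SVM looks like with OpenCV's ml module. Our actual training code is the C++ trainSVM.cpp; the folder names, image size, and kernel choice below are illustrative:

```python
# Sketch of training and using the SVM classifier with OpenCV.
import glob

import cv2
import numpy as np

def load_samples(pattern, label):
    samples, labels = [], []
    for path in glob.glob(pattern):
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        img = cv2.resize(img, (64, 32))  # same fixed size as the low level pipeline
        samples.append(img.reshape(-1).astype(np.float32) / 255.0)
        labels.append(label)
    return samples, labels

pos, pos_labels = load_samples("targets/*.png", 1)       # hand-sorted targets
neg, neg_labels = load_samples("not_targets/*.png", 0)   # hand-sorted false alarms

train_data = np.vstack(pos + neg)
responses = np.array(pos_labels + neg_labels, dtype=np.int32)

svm = cv2.ml.SVM_create()
svm.setType(cv2.ml.SVM_C_SVC)
svm.setKernel(cv2.ml.SVM_LINEAR)
svm.train(train_data, cv2.ml.ROW_SAMPLE, responses)
svm.save("target_svm.yml")

# At run time each normalized candidate is flattened the same way and classified.
_, result = svm.predict(train_data[:1])
print("target" if int(result[0][0]) == 1 else "not a target")
```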

The neat part about our vision system is that, because we save the candidate targets while we are out on the field, we can add them to the pool of training data between matches, and our algorithm will actually get smarter and improve as we play more matches and collect more data!
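That save-as-you-go step is tiny; a sketch, with an illustrative folder name, of dumping every candidate to disk so it can be hand-sorted into the target / not-target folders later and used for retraining:

```python
# Sketch: write every candidate sub-image to disk during a match.
import os
import time

import cv2

def save_candidate(patch, out_dir="candidates"):
    os.makedirs(out_dir, exist_ok=True)
    # timestamped name keeps candidates from different frames from colliding
    filename = "%s/candidate_%d.png" % (out_dir, int(time.time() * 1000))
    cv2.imwrite(filename, patch)
```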

Here is a video of the system in action (using my cppVision framework)! At 12 seconds the operator turns the vision system on, and you can see the targets light up as the LEDs turn on too. A small green crosshair appears in the middle of the top target, and when the operator is aligned the fullscreen crosshair changes from red to green.

Queen's ECE Open House and 4th year project demos

It sure has been a busy week for my blog, but so much cool stuff happened this week!

This afternoon I went to the Queen's Electrical and Computer Engineering open house, where the 4th year projects were on display. I was very impressed by the quality of the projects this year; absolutely top-notch stuff!

It was really nice to see the projects this year because I knew and had taught a lot of the students who are graduating. I had also been advising several groups throughout the completion of their projects. What made me really happy was to see my "influence" on the graduating class: thanks to the Computer Vision class, many groups used OpenCV and other open source software in their projects as opposed to C# and MATLAB, and some groups even used Linux to develop their code instead of Windows! One project in particular switched from using C# and the Microsoft Kinect SDK for face tracking and blink detection to using OpenCV Haar cascades and Eigenfaces thanks to my ELEC 474 lab!

All my best wishes to ECE '13!