My First OpenCV Commit

Contributing to open source has long been on my to-do list (I actually get chirped a lot for my love of open source), and I recently made the leap and committed some code to OpenCV.

I'd long been on the lookout for something good to add to OpenCV, and then one day it hit me: I had a utility function for cv::PCA that I had been using a lot in my own work, and it would be a really nice addition to OpenCV. Having decided on what I was going to contribute, I started reading up on how to go about it, and admittedly I was pretty confused. What actually helped me figure it out was the generic GitHub tutorial. I am a bit embarrassed to say that I am still pretty amateurish with source control, but I'm getting better!

So I figured out the process, cloned myself a copy of the OpenCV repository, and got right to work. It was a pretty simple addition really: it gives you another option for how to create a PCA space. Previously your only option was to specify the exact dimension of the space; my addition allows the user to specify the percentage of variance the space should retain, and it then figures out how many principal components to keep, thus deciding the dimension of the space.

It works by computing the cumulative energy content for each eigenvector (the running sum of the eigenvalues, g(k) = λ1 + … + λk) and then using the ratio of a given eigenvector's cumulative energy over the total energy to determine how many component vectors to retain. You can read more about the specifics on the PCA Wikipedia page. Here is the source code:

    // compute the cumulative energy content for each eigenvector
    // (assuming double-precision eigenvalues, i.e. CV_64F)
    Mat g(eigenvalues.size(), CV_64F);

    for(int ig = 0; ig < g.rows; ig++)
    {
        g.at<double>(ig, 0) = 0;
        for(int im = 0; im <= ig; im++)
        {
            g.at<double>(ig, 0) += eigenvalues.at<double>(im, 0);
        }
    }

    // find the smallest L whose cumulative energy ratio exceeds
    // the requested retained variance
    int L;
    for(L = 0; L < eigenvalues.rows; L++)
    {
        double energy = g.at<double>(L, 0) / g.at<double>(g.rows - 1, 0);
        if(energy > retainedVariance)
            break;
    }

    // always keep at least 2 components
    L = std::max(2, L);
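
From the user's side, the change just shows up as another way to construct the PCA object. Here's a minimal sketch of the idea (the toy data matrix and the 0.95 are placeholders, and I'm going from memory on the exact overloads):

    #include <opencv2/core/core.hpp>

    int main()
    {
        // toy data matrix: one observation per row, filled with random values
        cv::Mat data(100, 50, CV_64F);
        cv::randu(data, cv::Scalar::all(0.0), cv::Scalar::all(1.0));

        // old way: pick the subspace dimension yourself (10 components here)
        cv::PCA pcaFixed(data, cv::Mat(), CV_PCA_DATA_AS_ROW, 10);

        // new way: keep however many components are needed to retain
        // 95% of the variance
        cv::PCA pcaVar(data, cv::Mat(), CV_PCA_DATA_AS_ROW, 0.95);

        return 0;
    }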

After coding my new feature, I wrote a nifty little sample program (you can find it at OpenCV-2.4.3/samples/cpp/pca.cpp) that adjusts the number of dimensions according to a slider bar and shows the effect on reconstructing a face!
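
The heart of that demo is just projecting a face into the PCA subspace and mapping it back out. The helper below is my own sketch of the idea, not the sample's actual structure (names are mine):

    #include <opencv2/core/core.hpp>

    // hypothetical helper: 'data' holds the training faces, one flattened
    // image per row, and 'face' is a single flattened face to reconstruct
    cv::Mat reconstructFace(const cv::Mat& data, const cv::Mat& face,
                            double retainedVariance)
    {
        // build the subspace, keeping the requested fraction of the variance
        cv::PCA pca(data, cv::Mat(), CV_PCA_DATA_AS_ROW, retainedVariance);

        // project into the subspace and back out; the less variance retained,
        // the blurrier the reconstruction looks
        cv::Mat coeffs = pca.project(face);
        return pca.backProject(coeffs);
    }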

Effect of retained variance in a PCA subspace on data reconstruction

I was all ready to go, so I pushed to my GitHub account and made my pull request to have the code merged. I received an email back the next day with some feedback – a few of my commit messages were junk; I hadn't realised that these messages would tag along with my code, oops! Git rebase to fix that issue. I also had not provided documentation or added tests for my new code – I didn't even know about the test module, and I didn't realise I could edit the documentation, silly me. I wrote the tests first, which was actually kind of fun and pretty straightforward, as I was able to modify the existing PCA test. The documentation was a bit more out there – I had never worked with Sphinx before, but luckily I was able to get by simply by looking at how things were already done.

Pull request #2 – this one looked much better, much more professional; I am pretty proud of it. I waited, and waited, and heard nothing; there were a few other pull requests sitting there too. I figured hearing nothing might be good – maybe there were no glaring errors and they were carefully reviewing my work. Finally, about two weeks later: pull request accepted! I contributed to OpenCV, yay! A bit later on, one of the devs who I am internet friends with told me to check the OpenCV meeting notes – I got a shout-out!

There I am – “a small yet useful extension to PCA algorithm that computes the number of components to retain a specified standard deviate has been integrated via github pull request.”

Also here is a link to the OpenCV page where my documentation can be seen:

http://docs.opencv.org/modules/core/doc/operations_on_arrays.html?highlight=pca#pca-pca

The Tough Mudder

This post will be a brief departure from my usual programming/robots/technology ramblings to talk about something else really cool that I did recently. Yesterday two friends and I ran and completed the Tough Mudder in Toronto (well, actually it was in Barrie, but they called it the Toronto Tough Mudder).

Backing up about three months: my friend posted a trailer for the Tough Mudder on my Facebook wall, and we decided we had to do it. We decided, but we didn't do much about it – we continued with our standard lifting gym routine, and aside from my intramural sports team I wasn't doing much running. It was almost as if we hadn't noticed that this thing was a half marathon through the mud – no big deal, right?

With only a month to go, I finally realised that I was going to need to do some running training. I was in decent shape at this point from the lifting, but I definitely didn't consider myself a runner. In fact, the last time I would have called myself a good runner was grade 6 – no joke; the filler years were mainly filled with mountain biking, skiing and school. But that was about to change, and by the end of my four-week training program I considered myself a runner again.

So I started run training, but I was determined to do it my way. People who know me know that I am a strong believer in the primal lifestyle, and while I don't want to debate that here, I simply want to share how I trained and the results. My running program consisted of 400 metre intervals, aiming for a time better than 1:40, which translates to faster than a 7 minute mile. After each interval I would rest for 1:30; each workout was only 10 intervals, and sometimes I would finish with a few 100 metre sprints. During this time I also consulted quite a few resources on how to run properly, notably Timothy Ferriss's The 4-Hour Body and some YouTube videos. Oh, and of course I was running in my Vibram FiveFingers shoes.

The Vibram FiveFingers: I had long been wearing them for lifting, and my day-to-day shoes were essentially barefoot shoes too, but I had never really run in them (I still used cleats for the sports I play). After my first week my calves were so tight it was hard-ish to walk – apparently this is common when people start barefoot running. I was determined to power through, which meant lots of stretching and foam rolling. With two weeks to go, I wasn't sure if I would be able to run the Mudder in my barefoot shoes. Around this time the 100 metre sprint was on at the London 2012 Olympics, and I was super excited to watch Usain Bolt do his thing. Watching his stride inspired me, and I also realised an important thing – barefoot running is essentially just proper running, re-branded. I mean, watch Usain Bolt run: no heel strikes there. Re-motivated, I trained hard for the next two weeks, and my calf tightness became less of a problem.

Before talking about the Mudder itself, I just want to reiterate the important details here – I trained for this long-distance event using high-intensity intervals, I followed my primal diet and did not carb-load for training or for the race, and I ran in Vibram FiveFingers shoes.

The Mudder itself was awesome – like, so awesome! 10 out of 10, would do again. One thing that really caught me off guard was the elevation change of the course – it took place on a ski hill, so there was some serious vert. The run went great; I was pretty fast on the flats, ahead of my team most of the time. The hills were nice because they were a real equalizer. The obstacles were a ton of fun too, and most of them weren't too hard. I really enjoyed one where you had to crawl through an underground tunnel – it reminded me of video games and air ducts. I will certainly pause and remember how tough that was next time I am running through the vents in Half-Life! The famed electric shock finale was also quite awesome: I took one right in the chest, which dropped me to the ground like a sack of potatoes, but I was up and running again in the blink of an eye. Crossing the finish line was amazing, and they immediately hand you a beer! Best beer ever.

My two teammates running through the electro-shock finale. I am somewhere to their right, unfortunately not in the picture.

We submitted our time and later found out that we finished in the top 5% and qualified for the World's Toughest Mudder! Notbad.jpg. I don't think we will be going, though.

Our trusty stopwatch after we crossed the finish line: 2 hours, 54 minutes.

The ARPool 2.0 Re-write

After our successful but not stress-free demonstration of ARPool at the Ontario Centres of Excellence Discovery show, I took it upon myself to make a rather large structural change to the ARPool program in an effort to avoid similar problems in the future. I wanted to change ARPool from an event-driven program to what I refer to as a state machine, a technique I learned through my involvement in FIRST Robotics.

If you read my last blog post on our experience at OCE, you'll know that one of our main problems with the system was difficulty understanding the order in which code was called, and our lack of control over changing that order. The ARPool system relied on several timer events, as well as signals and slots firing in an interrupt-like fashion, to get things done in the right order. If things are not executed in the right order, the system simply has no chance – for example, if the projector doesn't illuminate the table before we try to detect the pool balls, then the system won't be able to find them. We had many problems with the system getting out of sync, and the whole experience felt rather unstable even after the system was working.

Now don't get me wrong – it's not that I think event-driven programs are bad. They aren't, and in many applications an event-driven design is the only possible way to make something work; it's just that in the case of ARPool it doesn't make any sense. ARPool is a program that only needs to do one thing at a time, and everything needs to happen in the right order – so why bother with all the event-driven machinery? The final point is that, being a university lab, we don't have the resources to maintain a system that complicated, and many of the people who have worked and will work on the system don't have the background to work effectively on one. ARPool needed an overhaul.

Introducing the state machine. It is really quite a simple idea: there is a main loop, and inside this loop there is a single switch-case block. The switch is executed on a “state counter” variable, which for extra points should be made an enum with relevant state names. Inside each state (a case block) the program does its thing and then, if appropriate, advances the state counter to the next state, or otherwise leaves it as it is. Here is a simple example:

enum State
{
  DetectBalls,   // named so the states don't collide with function names
  TrackCue
};

State state = DetectBalls;

while(true)
{
  switch(state)
  {
    case DetectBalls:
      detectBalls();
      state = TrackCue;   // advance to the next state
      break;

    case TrackCue:
      // code
      break;

    // and so on…
  }
}

It's probably not the first thing you would think of, but after you try it out, it's a pretty nice way to do things. I picked up on this approach earlier this year while programming the cRIO device for FIRST Robotics. See, FIRST has these built-in safety functions around your robot class: if your robot's main function doesn't finish executing at least several times a second, it assumes that you've either lost communication or written an infinite loop. You're basically forced into this state machine idea if you want to do anything complicated.
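
To make that concrete, here's a made-up sketch (this is not the actual FIRST API – the function and helper names are mine): instead of blocking inside a loop, each call into the periodic function does one quick check and returns immediately, so the watchdog never trips.

// made-up sketch of a watchdog-friendly state machine: the framework
// calls periodicUpdate() many times per second and it must return quickly
enum AutoState
{
  DriveForward,
  Stop,
  Done
};

AutoState autoState = DriveForward;

void periodicUpdate(double elapsedSeconds)
{
  switch(autoState)
  {
    case DriveForward:
      // instead of sleeping for 2 seconds, poll the clock on each tick
      if(elapsedSeconds >= 2.0)
        autoState = Stop;
      break;

    case Stop:
      // stopMotors();  // hypothetical helper
      autoState = Done;
      break;

    case Done:
      // nothing left to do
      break;
  }
}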

Anyway, FIRST's API and safety functions are the topic of a different rant – how did all of this turn out for ARPool? Well, surprisingly well: it only took me about 2.5 hours of straight coding to completely pull out the Qt GUI, remove all the event-driven code, and re-write the main program to use the state machine concept. It compiled, the code was way cleaner and shorter, and I could finally see where, and in what order, my vision functions were called. All in all a great victory – however, it was time to return to my other projects and leave my shiny new ARPool program sitting there, waiting to be tested on the real system.

*edit* 09/01/2012 I finally got my shot to try the new ARPool code, and it actually worked! Every programmer has a few moments they are proud of, and this was definitely one of mine: re-structuring an entire program (like, major major stuff here), not being able to test it for months, and it just worked! Not every day gets to be like this.