Edit: Took some serious DIY hackery to get this baby up and running tonight. My plan was to move the SIM card from my old phone into my new Nexus 4, but at some point in the last 2 years micro SIMs became the thing. I wasn’t in the mood to give Telus (or any other phone company) more of my money for a micro SIM, and I wanted it to work NOW lol! Luckily I found a guide online on how to cut a full-sized SIM down to micro SIM size; needless to say, when I turned on my new phone and made a call I was very happy and relieved!
Someone made a .gif out of a section of the ARPool YouTube video and captioned it “No Skill Needed, Just Tech”. Being an avid internet user, I found this quite amusing and thought I would share it here.
2 weeks ago my supervisor asked if I would be interested in designing the 5th lab exercise for his 4th-year course, ELEC 474 Machine Vision, and I accepted. The task was very open ended: all he wanted was a lab centered around Eigenfaces that would be appropriate for the students.
Designing the lab was an interesting experience: on one monitor I was working on the program the students would eventually have to write, and on the other I had the LaTeX file of the lab handout. It was tough to figure out how long to make the lab and what the right difficulty would be. I decided to split it into 2 main components - in the pre-lab the students would prepare their face data and learn how to use cv::FileStorage, while the actual Eigenface implementation would be done in the lab period. I ended up coding both the pre-lab and the lab in their entirety and then removing the code from certain functions for the students to fill in.
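To give a flavour of the pre-lab, here is a minimal sketch of the kind of cv::FileStorage round trip the students had to do - save the face samples and labels to a file, then read them back. This is just an illustration; the file name "faces.yml", the node names, and the use of string labels are made up for the example, not taken from the handout.

#include <opencv2/core/core.hpp>
#include <string>
#include <vector>

using namespace cv;
using namespace std;

int main()
{
    Mat samples = Mat::zeros(2, 4, CV_32F);       // stand-in data: one flattened face per row
    vector<string> labels;
    labels.push_back("alice");
    labels.push_back("bob");

    // Write the samples and labels to a YAML file
    FileStorage out("faces.yml", FileStorage::WRITE);
    out << "samples" << samples;
    out << "labels" << "[";                       // write the labels as a sequence
    for (size_t i = 0; i < labels.size(); i++)
        out << labels[i];
    out << "]";
    out.release();

    // Read them back
    Mat loadedSamples;
    vector<string> loadedLabels;
    FileStorage in("faces.yml", FileStorage::READ);
    in["samples"] >> loadedSamples;
    FileNode n = in["labels"];
    for (FileNodeIterator it = n.begin(); it != n.end(); ++it)
        loadedLabels.push_back((string)*it);
    in.release();

    return 0;
}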
The lab itself went quite well! As part of the pre-lab they had to code a function that would combine their face database with another classmate's database, and of course this process could be repeated so they could all share data. This ended up working way better than I expected, and some students who coded the function generically were able to use it for all sorts of handy data manipulation, quickly generating the test cases we wanted to see during marking. I think the students particularly enjoyed visualizing the Eigenfaces, and some of them really got into trying the different ColorMaps available in OpenCV - I think Hot and Bone produced the coolest results!
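For anyone who wants to try it, the visualization step is roughly the following sketch - the 128x128 face size is an assumption of mine, and COLORMAP_HOT is just one option (swap in COLORMAP_BONE to compare):

#include <opencv2/core/core.hpp>
#include <opencv2/contrib/contrib.hpp>   // applyColorMap lives here in OpenCV 2.x

using namespace cv;

// row: one eigenvector from cv::PCA (1 x width*height, floating point)
// faceSize: the size the faces were resized to (128x128 is an assumption)
Mat visualizeEigenface(const Mat& row, Size faceSize = Size(128, 128))
{
    Mat eigenface = row.reshape(1, faceSize.height);             // back to a 2D image
    Mat scaled;
    normalize(eigenface, scaled, 0, 255, NORM_MINMAX, CV_8UC1);  // stretch to a displayable range
    Mat colored;
    applyColorMap(scaled, colored, COLORMAP_HOT);                // try COLORMAP_BONE too
    return colored;
}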
The one thing I might do differently is the effort marks I put in the grading scheme. I included them because I wanted the students to put some time into the “softer” parts of the lab, like choosing good and interesting faces to test on and labelling their data nicely. But in the end it just became tough to justify what grade a student was getting. In my teaching assistant positions I have noticed that when a grade is given for a demonstration with the student present, it’s a lot harder to give a less-than-perfect grade because they will debate it with you, but if it’s an assignment that is handed in, students seem to accept non-perfect grades without saying anything - I’m not sure why that is.
All in all it was a great experience that I would definitely do again. It was particularly interesting to see where the common pitfalls and misunderstandings were; it gave me some more insight into teaching. I enjoyed designing the lab - I like teaching others, especially when it is on a topic I like so much.
For anyone who is interested, here are the lab materials!
Pre-lab skeleton code

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/objdetect/objdetect.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>
#include <string>
#include <vector>

using namespace cv;
using namespace std;

Mat detectFace(const Mat& image, CascadeClassifier& faceDetector);
void resizeFace(Mat& face);
void addToDataSet(Mat& data, vector<string>& labels, Mat& newData, vector<string>& newLabels);

int main()
{
    // Set up face detector
    CascadeClassifier faceDetector;
    if (!faceDetector.load("lbpcascade_frontalface.xml"))
    {
        cerr << "ERROR: Could not load classifier cascade" << endl;
        return -1;
    }

    // fill these variables with your data set
    Mat samples;
    vector<string> labels;

    return 0;
}

Mat detectFace(const Mat& image, CascadeClassifier& faceDetector)
{
    vector<Rect> faces;
    faceDetector.detectMultiScale(image, faces, 1.1, 2, 0 | CV_HAAR_SCALE_IMAGE, Size(30, 30));

    if (faces.size() == 0)
    {
        cerr << "ERROR: No Faces found" << endl;
        return Mat();
    }
    if (faces.size() > 1)
    {
        cerr << "ERROR: Multiple Faces Found" << endl;
        return Mat();
    }

    //Mat detected = image.clone();
    //for (unsigned int i = 0; i < faces.size(); i++)
    //{
    //    rectangle(detected, faces[i].tl(), faces[i].br(), Scalar(255,0,0));
    //}
    //imshow("faces detected", detected);
    //waitKey();

    return image(faces[0]).clone();
}

void resizeFace(Mat& face)
{
    // code here
}

void addToDataSet(Mat& samples, vector<string>& labels, Mat& newSamples, vector<string>& newLabels)
{
    // code here
}
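In case it helps anyone following along, here is roughly what I had in mind for the two stubbed functions - a sketch only, not the solution handed out with the lab; the 128x128 face size and the string labels are my assumptions here:

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <string>
#include <vector>

using namespace cv;
using namespace std;

void resizeFace(Mat& face)
{
    // Bring every face to a common size so each one flattens to a
    // row vector of the same length before PCA (128x128 is an assumption).
    resize(face, face, Size(128, 128));
}

void addToDataSet(Mat& samples, vector<string>& labels, Mat& newSamples, vector<string>& newLabels)
{
    // Append the new samples (one flattened face per row) and their labels.
    if (samples.empty())
        samples = newSamples.clone();
    else
        samples.push_back(newSamples);  // requires matching column count and type

    labels.insert(labels.end(), newLabels.begin(), newLabels.end());
}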
Lab skeleton code
#include <opencv2/core/core.hpp>
#include <opencv2/contrib/contrib.hpp>
#include <opencv2/ml/ml.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>
#include <string>
#include <vector>

using namespace cv;
using namespace std;

void addToDataSet(Mat& data, vector<string>& labels, Mat& newData, vector<string>& newLabels);
Mat norm_0_255(Mat src);
string recognizeFace(Mat query, Mat samples, vector<string> labels);

int main()
{
    // Load your data and combine it with the data set of several of your peers using:
    // addToDataSet

    // Perform PCA
    // cv::PCA pca(....);

    // Visualize Mean
    Mat meanFace = pca.mean;
    // normalize and reshape mean
    imshow("meanFace", meanFace);
    waitKey();

    // Visualize Eigenfaces
    for (unsigned int i = 0; i < pca.eigenvectors.rows; i++)
    {
        Mat eigenface;
        eigenface = pca.eigenvectors.row(i).clone();
        // normalize and reshape eigenface
        //applyColorMap(eigenface, eigenface, COLORMAP_JET);
        imshow(format("eigenface_%d", i), eigenface);
        waitKey();
    }

    // Project all samples into the Eigenspace
    // code..

    // ID Faces
    // code..

    return 0;
}

void addToDataSet(Mat& samples, vector<string>& labels, Mat& newSamples, vector<string>& newLabels)
{
    // your code from the pre lab
}

Mat norm_0_255(Mat src)
{
    // Create and return normalized image
    // should work for 1 and 3 channel images
}

string recognizeFace(Mat query, Mat samples, vector<string> labels)
{
    // given a query sample find the training sample it is closest to and return the proper label
    // implement a nearest neighbor algorithm to achieve this
}
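And for the nearest-neighbour part, something along these lines is what I was after (again just a sketch, not the distributed solution). It assumes the query and the training samples have already been projected into the eigenspace - cv::PCA::project will happily take the whole sample matrix at once when the samples are laid out one per row - and that the labels are strings:

#include <opencv2/core/core.hpp>
#include <limits>
#include <string>
#include <vector>

using namespace cv;
using namespace std;

string recognizeFace(Mat query, Mat samples, vector<string> labels)
{
    // Find the projected training sample closest to the projected query
    // (plain Euclidean distance in the eigenspace) and return its label.
    int best = -1;
    double bestDist = numeric_limits<double>::max();
    for (int i = 0; i < samples.rows; i++)
    {
        double d = norm(query, samples.row(i), NORM_L2);
        if (d < bestDist)
        {
            bestDist = d;
            best = i;
        }
    }
    return best >= 0 ? labels[best] : string("unknown");
}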