Face detection

Single image face detection with OpenCV

Next, we’re going to touch on using OpenCV with the Raspberry Pi’s camera, giving our robot the gift of sight.

First, we want to install OpenCV (the Open Source Computer Vision Library), an open source computer vision and machine learning library. To start, you will need to get OpenCV onto your Raspberry Pi; use the following link: Installing OpenCV on the Raspberry Pi 3. Once everything is installed, the short check after the list below confirms the libraries import correctly.

You will need:

  1. A Raspberry Pi 3 loaded with OpenCV, NumPy, and Python
  2. A PiCamera and ribbon cable
  3. Wireless keyboard and mouse
  4. HDMI monitor
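
Before moving on, a quick sanity check can save some head scratching. This is just a minimal sketch, assuming OpenCV, NumPy, and picamera were installed as described above; it only imports the libraries and prints version information, so if it runs without errors you are ready for the detection script.

import cv2
import numpy
import picamera

#If these imports succeed, the libraries are in place
print("OpenCV version: " + cv2.__version__)
print("NumPy version: " + numpy.__version__)
print("picamera imported successfully")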

The code:

import io
import picamera
import cv2
import numpy

#Create a memory stream so the photo doesn't need to be saved to a file
stream = io.BytesIO()

#Get the picture (low resolution, so it should be quite fast)
#Here you can also specify other parameters (e.g. rotate the image)
with picamera.PiCamera() as camera:
    camera.resolution = (320, 240)
    camera.capture(stream, format='jpeg')

#Convert the picture into a numpy array
buff = numpy.frombuffer(stream.getvalue(), dtype=numpy.uint8)

#Now create an OpenCV image from the JPEG data
image = cv2.imdecode(buff, 1)

#Load a cascade file for detecting faces
face_cascade = cv2.CascadeClassifier('/usr/share/opencv/haarcascades/haarcascade_frontalface_alt.xml')

#Convert to grayscale
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

#Look for faces in the image using the loaded cascade file
#(1.1 is the scale factor, 5 is the minimum number of neighbours)
faces = face_cascade.detectMultiScale(gray, 1.1, 5)

print("Found " + str(len(faces)) + " face(s)")

#Draw a rectangle around every found face
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (255, 255, 0), 2)

#Save the result image
cv2.imwrite('result.jpg', image)

You should be all set, except for the line that sets the face_cascade variable. We're going to grab a new XML file, which you can find here: haarcascade_frontalface_alt.xml file

Also change the face_cascade line in the script above to point to the faces XML document I shared above. I put mine in the directory I've been working in; just make sure you link to wherever you put yours:

face_cascade = cv2.CascadeClassifier('/home/pi/haarcascade_frontalface_alt.xml')

Run this script and smile at the camera! The script will report either that it found some faces or that it found none. The resulting image is saved as result.jpg so you can see what the camera was seeing when it made the assessment. Here's an example from me:
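
If you'd like to view your own result directly on the Pi's HDMI monitor rather than opening the file by hand, here is a small sketch (assuming the result.jpg produced by the script above) that displays it using OpenCV's window functions:

import cv2

#Load the image the detection script saved
result = cv2.imread('result.jpg')

#Show it in a window until any key is pressed
cv2.imshow('Face detection result', result)
cv2.waitKey(0)
cv2.destroyAllWindows()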

So how does the detection actually work? Under the hood, image recognition relies on a "trained" classifier, built from a large set of example images of whatever we want to detect, just as with any other machine learning algorithm.

Here is a great explanation from a programmer named "sentdex":

“The way image recognition works is we first need to “train” a classifier, like we would with any machine learning algorithm. To do this, we generally need to compile a massive set of images of what we’re looking to detect. In the case of faces, we’d want to grab 1,000 images of faces. Then we note where the faces are in the images. Then we feed this information to the machine, saying “hey, here are all the face images, there are the faces. Got it?” From here, you can either be all set, or sometimes you will take another step and show the machine a bunch of images that have no faces at all, and you tell the machine “there are no faces here! See no faces!” At this point, training is done. You’re ready to show the machine a new image with a face. Hopefully, given it’s “memory” of what images with faces were like, and what the actual faces in the images were like, our algorithm will be able to detect the face.”

For many image recognition tasks, people have already built datasets, and often fully trained classifiers, that you can use instead of doing the training yourself. Face detection is very popular, so there are plenty of ready-made face datasets and cascade files.
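
In fact, OpenCV ships with several pre-trained Haar cascades besides the frontal face one used above. As a small sketch, assuming the cascade files live in the same /usr/share/opencv/haarcascades/ directory used earlier (the path may differ on your install), you could run the bundled eye cascade on the image we just produced:

import cv2

#Load an image with a face in it (here, the result from the earlier script)
image = cv2.imread('result.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

#Pre-trained eye cascade that ships with OpenCV
eye_cascade = cv2.CascadeClassifier('/usr/share/opencv/haarcascades/haarcascade_eye.xml')

#Same detection call as before, just with a different classifier
eyes = eye_cascade.detectMultiScale(gray, 1.1, 5)
print("Found " + str(len(eyes)) + " eye(s)")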

For the next step, we will move on to face tracking using the PiCamera.

The code is based on more generic code found here, written by Carlo Mascellani, and was modified to work more easily with the Raspberry Pi 3.