Many video cameras now include face detection capabilities. But what exactly is face detection? Our primer explains how it works and how it might help you, too.
The story of face detection technology begins back in 2005, when Nikon released the Coolpix 5900, a mid-range point-and-shoot still camera with what it called a Face Priority mode. The camera used built-in algorithms to examine a scene and identify human faces, then adjusted contrast, color balance and exposure to “properly” expose those faces. Other still camera manufacturers followed suit, and today even high-end DSLRs (digital single-lens reflex cameras) ship with the feature. Something this exciting wasn’t about to remain solely in the still photography world. By 2008 Sony, Panasonic and Sanyo had all incorporated face detection into their current batch of video cameras. But what is face detection: a gimmick or a useful tool? Let’s take a look.
Reasons for Face Detection
Our own eyes are automatically drawn to other human faces. When we’re walking down the street or glancing at images, our eyes leap to faces and our brains instantly attempt to determine not just whether we know this person, but also mood, attractiveness or whether the person may pose a threat. It all happens in fractions of a second, and most of the time we don’t even know we’re doing it. It makes sense, then, that when we watch video we also seek out faces – so, in recording video, we want to make those faces look their best.
Often in life, people and lighting are not as a director would place them – huge windows may backlight a bride and groom, bright sunlight may sink eyes into deep shadow or overhead fluorescents may throw a green hue onto someone’s skin. Ever-changing lighting and distance conditions have made auto-focus and auto-exposure indispensable in modern camcorders. Face detection is no different. It will identify a face, track it as it moves across the frame and then perform a series of adjustments, such as setting the focus point or tweaking brightness and contrast.
What Does It Do?
Most face detection algorithms recognize not just one, but multiple faces. Usually they consider the face closest to the center the most important or primary face; they notice others behind it or on the periphery, but focus and image adjustments will follow the primary face. If the primary face leaves the frame, the software will choose a new primary face. Many of the current crop of cameras will track five or more faces (if that many are in the frame).
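The “closest to center wins” heuristic described above is simple to sketch in code. The snippet below is a minimal illustration under our own assumptions, not any manufacturer’s actual algorithm; the face boxes and frame size are invented values.

```python
# Pick a "primary" face: the detected face whose bounding-box center
# is nearest the center of the frame (a simplified illustration, not
# any camera maker's real selection logic).
def primary_face(faces, frame_w, frame_h):
    """faces: list of (x, y, w, h) bounding boxes in pixels."""
    cx, cy = frame_w / 2, frame_h / 2

    def center_distance(box):
        x, y, w, h = box
        fx, fy = x + w / 2, y + h / 2
        return (fx - cx) ** 2 + (fy - cy) ** 2  # squared distance is enough

    return min(faces, key=center_distance)

# Hypothetical example: three faces detected in a 1920x1080 frame.
faces = [(100, 100, 200, 200), (850, 430, 220, 220), (1600, 50, 180, 180)]
print(primary_face(faces, 1920, 1080))  # → (850, 430, 220, 220), the centered face
```

If the primary face leaves the frame, re-running the same function over the remaining detections picks the new primary face, just as the article describes.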
Does This Technology Know Who I Am?
Though sometimes used interchangeably, facial recognition and facial detection are actually two different technologies. Your camcorder can pick a face out of a crowd (facial detection), but it doesn’t (yet) have the ability to search a database and identify the people in the frame (facial recognition); it knows only that there’s a face in the frame – it’s completely clueless as to whose face it is. Companies like Betaface (among others) have more sophisticated methods of identifying specific people. Building on similar detection techniques, their software takes more precise measurements – the distance between the eyes and mouth, the width of the mouth when open, plus cues such as hairstyle and whether the person is wearing glasses – and then compares those measurements to a database, looking for matches. The idea that your camcorder will someday do something like this on the fly is not farfetched. Social networking sites such as Facebook already let users manually tag photos with the names of the people in them and link those tags to a pool of other photos. The desire for automatic recognition (and, by extension, automatic tagging) is clearly there; now it’s merely a matter of the technology catching up.
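The “compare measurements to a database” step can be sketched as a nearest-neighbor match: find the stored person whose measurements are closest to the measured face, and accept the match only if it is close enough. This is a toy illustration – the feature values, names and threshold below are all invented, and real recognition systems use far richer feature sets.

```python
import math

# Toy face "recognition": compare a measured feature vector (say, eye
# spacing, eye-to-mouth distance, mouth width) against a database of
# known people. All numbers here are hypothetical.
def identify(measured, database, threshold=1.0):
    best_name, best_dist = None, float("inf")
    for name, features in database.items():
        dist = math.dist(measured, features)  # Euclidean distance
        if dist < best_dist:
            best_name, best_dist = name, dist
    # Only claim a match if the closest entry is within tolerance.
    return best_name if best_dist <= threshold else None

database = {
    "Alice": (6.2, 4.8, 3.1),   # made-up measurements in cm
    "Bob":   (6.9, 5.5, 3.6),
}
print(identify((6.3, 4.9, 3.0), database))  # → Alice (closest match)
print(identify((9.0, 9.0, 9.0), database))  # → None (nothing close enough)
```

The threshold is what separates “I recognize this person” from “this is a face, but a stranger” – without it, the system would confidently misname every unknown visitor.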
How Does It Work?
Most face-detection implementations, whether done in hardware or software, look for two eyes, a nose and a mouth, though some companies now claim their technology can detect faces in full or three-quarter profile. Once the software has identified a face, it highlights the face in the viewfinder, tracks it as it moves and adjusts the image accordingly – keeping the face in focus as the person moves, for example.
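Under the hood, detectors in the widely used Viola-Jones family find those eye/nose/mouth patterns as light-and-dark rectangle comparisons, made fast by an “integral image” (summed-area table) that gives the brightness of any rectangle in constant time. The sketch below shows only that one building block on a tiny made-up image – it is not a full detector.

```python
# Integral image: each entry ii[y][x] holds the sum of all pixels above
# and to the left, so any rectangle's total brightness costs four lookups.
# This is the core speed trick in Viola-Jones-style face detection.
def integral_image(img):
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = (img[y][x] + ii[y][x + 1]
                                + ii[y + 1][x] - ii[y][x])
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the w-by-h rectangle with top-left corner (x, y)."""
    return (ii[y + h][x + w] - ii[y][x + w]
            - ii[y + h][x] + ii[y][x])

# A Haar-like feature: brightness of the lower strip minus the darker
# upper strip (roughly how a detector scores "dark eyes above a cheek").
img = [
    [10, 10, 10, 10],     # dark "eye" strip (made-up pixel values)
    [10, 10, 10, 10],
    [200, 200, 200, 200], # bright "cheek" strip
    [200, 200, 200, 200],
]
ii = integral_image(img)
feature = rect_sum(ii, 0, 2, 4, 2) - rect_sum(ii, 0, 0, 4, 2)
print(feature)  # → 1520: a large contrast, the kind of score a detector looks for
```

A real detector slides thousands of such features over every window of the frame at many scales, which is why the constant-time rectangle sum matters so much.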
You may be wondering what happens when there are non-human faces in the frame. From what we’ve seen, all of these systems are designed to pick up on human faces first and foremost, though each manufacturer’s algorithm differs in the details. That said, animal-specific face detection algorithms do exist, mostly in scientific contexts – they’d only be truly useful if you’re on a research shoot, or if you happen to make pet videos for a living. But that’s your business.
Will It Work for Me?
So far, every camcorder we’ve seen that includes face detection also lets you turn the feature off. Why would you want to? Consider an artsy shot with the flowers in sharp focus and the subject holding them in soft focus – exactly the kind of image face detection would fight you on.
The only way to know whether face detection will help you in your production endeavors is to try it out. Never experiment on an important production: spend an afternoon in a test environment that simulates your normal shooting conditions, and see whether face detection is improving your images and letting you do your job with less effort.
Contributing Editor Kyle Cassidy is a visual artist who writes extensively about technology.