The Xbox Kinect sensor is far more than a mere camera. It maps three-dimensional space from its point of view in real time, creates textures to wrap around those real-time geometries and can recognize body parts and even facial movements. You can even use it to digitize objects by scanning them into 3D animation software or 3D plug-ins for After Effects like Video Copilot’s Element 3D.
The good news is that several 3D packages offer interfaces for the Kinect sensor. Poser, LightWave 3D and even the free DAZ Studio can be used in conjunction with Kinect. After Effects even has a free interface, made by AExperiments, that maps your mocap data (captured by something like Brekel Body Pro) to animation pins. And as for you YouTubers wanting to remain anonymous, Brekel Face Pro does face mocap that exports a 3D object in FBX format, complete with deformations, while it records your audio. Just do your talk, load the FBX into any 3D software, like Blender, and render. Bam! You’ve got a 3D mask talking with your voice.
The bad news is that if you want to use a Kinect reliably for mocap, you’ll need a Windows machine, and it should be as fast as possible to avoid dropped frames. There is also a bit of a learning curve to using a Kinect sensor, and some limitations.
Since this isn’t a dedicated mocap studio and the Kinect captures from a single point of view, you have to be careful not to move your arms or legs into an area the sensor can’t see, such as behind your body. Also, as is true with any mocap, the movements, although realistic, do have a margin of error. This means there will be some tweaking of the keyframes in the final product. In most cases, this is as simple as removing keys: they sit on every frame in a sequence, so removing outliers will smooth your motion considerably.
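The cleanup step above can be sketched in a few lines of plain Python. This is a minimal illustration, not tied to any package’s file format; it assumes you’ve exported one channel of mocap data as a value per frame, and `remove_outliers` is a hypothetical helper name:

```python
from statistics import median

def remove_outliers(keys, tolerance=5.0):
    """Drop keyframes that spike away from the local median;
    the animation software will interpolate across the gap."""
    cleaned = [keys[0]]  # always keep the first key
    for prev, cur, nxt in zip(keys, keys[1:], keys[2:]):
        # keep the key only if it sits near the median of its neighborhood
        if abs(cur - median([prev, cur, nxt])) <= tolerance:
            cleaned.append(cur)
    cleaned.append(keys[-1])  # always keep the last key
    return cleaned

# A rotation channel with a one-frame spike at index 3
channel = [0.0, 2.0, 4.0, 40.0, 8.0, 10.0]
print(remove_outliers(channel))  # the 40.0 spike is dropped
```

Most animation packages do something similar when you run a "reduce keys" or "simplify curve" command; the point is that because Kinect mocap keys every frame, deleting the spikes costs you almost nothing.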
Also, the quality of your mocap depends greatly on the software that processes it. For instance, even on a screamingly fast 8-core PC, Poser captures at about half of real time, so when you play back the animation, it runs twice as fast as it should. It’s a simple fix: just scale your keys. Other software, like LightWave 3D’s uber-expensive NevronMotion, may record your motion in a very jerky way with lots of outlier keyframes. Usually it’s best to perform your mocap in separate software, then import your mocap file into your animation software. The Brekel suite (Face and Body) has stood out as the best and most accurate for the money to use with Kinect.
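The half-speed fix above amounts to multiplying every key’s time by two. Here’s a minimal sketch, assuming keys exported as (frame, value) pairs; `scale_keys` is an illustrative name, not a real Poser command:

```python
def scale_keys(keys, factor):
    """Scale keyframe times by `factor`; values are untouched.
    A factor of 2.0 stretches a half-real-time capture back to speed."""
    return [(frame * factor, value) for frame, value in keys]

# Keys captured at half real-time play back twice as fast as performed
captured = [(0, 0.0), (1, 5.0), (2, 10.0), (3, 15.0)]
print(scale_keys(captured, 2.0))  # keys now land on frames 0, 2, 4, 6
```

In practice you’d do this with your package’s dope sheet or graph editor (select all keys and scale the time axis by 2), but the arithmetic is exactly this.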
Great, but How Do I Get Started?
This is where there’s also great news. There is an adapter that lets the Kinect One (a.k.a. Kinect v2) sensor plug into a USB 3.0 port. There are also drivers from Microsoft for the Kinect sensor that let you do everything from mocap to facial-recognition logins to your computer (and no, not using a picture; it must be you). With these readily available tools, getting it up and running is a snap.
If you already have a Kinect — or have a friend who does — the adapter is inexpensive. Also, there is a free version of Brekel available so you can play around with it and see if you like it. Another option is DAZ Studio — the 32-bit version has a mocap for Kinect interface and is also free.
Setting Up Your Space
If you’re doing full-body mocap, all you have to do is set your Kinect at about mid-chest level, just at the top of your abdominal area. Stand about 10 to 15 feet away from the device and begin your capture. The closer you are, the better the quality of your capture, since the color camera is essentially 1920 x 1080 resolution, but the sensor can capture from farther away if you’re hoping to capture a walk cycle.
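To sanity-check whether your room fits those distances, you can estimate how wide an area the sensor sees from its horizontal field of view. The 70-degree figure below is an approximation of the Kinect v2 depth camera’s horizontal FOV; treat it as an assumption and verify against your sensor’s specs:

```python
from math import tan, radians

def capture_width(distance_ft, fov_deg=70.0):
    """Approximate width of the area the sensor sees at a given distance.
    fov_deg=70 is an assumed horizontal field of view for the Kinect v2
    depth camera; basic trigonometry on half the viewing angle."""
    return 2 * distance_ft * tan(radians(fov_deg / 2))

for d in (10, 15):
    print(f"At {d} ft the capture area is roughly {capture_width(d):.1f} ft wide")
```

At 10 feet that works out to roughly 14 feet of horizontal coverage, which is why a full-body capture, arms outstretched, is comfortable at that range.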
You’ll also want to make sure there is plenty of light as well as a simple background. Although the Kinect can separate you from objects in your environment, it’s best to make sure the only thinking your software has to do is in regard to your movements, not in recognizing you versus a chair or bookshelf.
How to Move
It’s best to face the Kinect sensor. This will give it a full view of all your limbs. Don’t worry: it does capture depth as well as height and width. When you move, try not to move too quickly. Motion blur can confuse the sensor and lead to outlier keyframes. For instance, if you’re capturing a roundhouse kick, perform it as slowly as possible and then scale the keyframes down in your animation software.
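Scaling a slow performance back down is the mirror of the half-speed fix, with one wrinkle: since Kinect mocap keys every frame, compressing by a large factor means resampling between source frames. A hedged sketch in plain Python, assuming one value per frame for a single channel (`compress_keys` is an illustrative name, not a real tool):

```python
def compress_keys(values, factor):
    """Resample a per-frame channel so the motion plays back `factor`
    times faster, using linear interpolation between source frames."""
    n_out = int((len(values) - 1) / factor) + 1
    out = []
    for i in range(n_out):
        t = i * factor                      # position on the original timeline
        lo = int(t)                         # frame just before t
        hi = min(lo + 1, len(values) - 1)   # frame just after t
        frac = t - lo
        out.append(values[lo] * (1 - frac) + values[hi] * frac)
    return out

# A kick performed at quarter speed, then sped back up 4x
slow_take = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
print(compress_keys(slow_take, 4.0))  # → [0.0, 4.0, 8.0]
```

Again, the graph editor in your animation package does this for you when you scale keys in time; performing slowly just gives the sensor more clean samples to work from.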
Some software, such as Brekel, can capture multiple people simultaneously. This can be useful for fights or populating an area for an architectural fly-through. Remember, though, that more people means more burden on the software to interpolate the motion. It’s best to capture one person at a time if possible.
It’s an exciting time when someone can set up a home mocap studio for very low cost. There is a bit of a learning curve in both the processing of mocap data and being a Kinect mocap actor. But at this price point, can you afford not to implement this technology into your animation workflows?
Ty Audronis has been a TV/film professional for over 20 years. He’s well known for designing studios for major production houses and small shops alike, and for maximizing production quality on a budget.