Many work hours can be lost to footage in formats that render slowly or need to be transcoded, but fear not! Once you unlock the knowledge of codecs and containers, you’ll be able to continue your quest for great video.
Why Formats Matter
There are many reasons a video format matters. If you understand the format a camera shoots in, you can calculate how much storage space you’ll need for the footage you plan to shoot. To know whether editing or color correction software can handle a format natively or will have to transcode it, you need to know the format of your footage. And when a film festival or broadcaster asks for a specific delivery format, the better you understand it, the easier it will be to make your project look its best.
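To make that storage math concrete, here’s a quick back-of-the-envelope sketch in Python. The 24 Mb/s bitrate is just an illustrative figure; plug in whatever average bitrate your camera’s manual lists:

```python
def storage_gb(bitrate_mbps, minutes):
    """Rough storage estimate for footage at a given average bitrate.

    bitrate_mbps: the camera's average video bitrate in megabits per second
    minutes: total planned shooting time in minutes
    """
    megabits = bitrate_mbps * minutes * 60   # total megabits recorded
    return megabits / 8 / 1000               # megabits -> megabytes -> gigabytes

# Example: a camera recording at 24 Mb/s for a 90-minute shoot
print(round(storage_gb(24, 90), 1))  # -> 16.2 (GB, decimal units)
```

Audio, metadata and filesystem overhead add a little on top of that, so pad your estimate before you buy media cards.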
What Exactly is a Format?
By understanding a little about the codecs and containers used in popular formats, you will be able to make better choices for the work you do. When someone asks what format a video is in, they usually want to know what container and codec were used to make it, and possibly what standard it was encoded to. Unless, of course, the video is from the dark ages; then they usually want to know what kind of videotape it’s stored on and hope they can find something to play it on.
A codec is the method used to lay out the data of an audio or video file so that it can be used for playback, editing or conversion to other codecs (transcoding). Codecs organize media data, but that data is held within a container. There are many different types of audio and video codecs, and each has its own advantages.
A container, or wrapper, holds audio and video data together in a single file along with additional information. Containers have file extensions like .mov, .avi or .mp4. While some containers tend to hold media in only a particular codec, like the .mpg container used for MPEG files, others, like .mov, can hold data in a variety of audio and video codecs. The container also tells software like media players whether it holds both audio and video data, so they know to play both at once.
Containers often also hold metadata about the media in the file. That metadata can be as simple as the frame rate of the video, or it can record what camera and lens were used, what camera settings were used, where the footage was shot and information about the shot and the production. The metadata within a container can sometimes also tell you what standards the footage was produced in.
Not So Standard Standards
A car dealer may tell you that a car comes with a standard spare tire, but the standard to which that wheel was made may only match a few makes, models and years of cars and not match any of the regular wheels on any car. Sadly, video can be very similar. If someone tells you that a video is NTSC, they may only be referring to an NTSC-standard frame rate like 29.97fps. If a video is said to be Rec. 709, this refers to a specific set of standards for HDTV covering things like frame rate, color gamut and resolution, although even Rec. 709 supports multiple frame rates.
To make things more confusing, you have the Rec. 2020 standard for UHDTV (4K TV) and the DCI standard for 4K film. These standards have different resolutions and different aspect ratios. Rec. 2020 has an aspect ratio of 1.78:1 (16x9), where DCI supports 1.85:1 and 2.39:1 (roughly 17x9 and 21x9). So if you’re working on a project like a short film that needs copies on Blu-ray and will also play from a DCP (Digital Cinema Package) at a theater, be aware that the standards for those formats are very different.
Some formats have been created around common codecs but allow only certain variations in things like resolution and bitrate so they can more easily be used across hardware and software platforms. AVCHD and DivX are both formats built on MPEG-4 codecs, yet they are different standards.
Avoiding Compression Depression
Compressing a video file can cause it to lose a lot of its image and sound quality, and compressing the same file multiple times can make that loss much worse. Most online video hosts like Vimeo and YouTube will re-compress the video you upload, so you want to make sure you maintain as much quality as you can before it gets to them. Whenever possible, edit and master from your source footage, or transcode to uncompressed or lossless codecs, to maintain media quality.
An uncompressed codec stores media without compression, so no quality is lost, but the files are big. A lossless codec stores media with compression but without quality loss, offering minimal space savings. A lossy codec stores media with compression and quality loss: the higher the compression, the smaller the file and the greater the quality loss.
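To see why uncompressed files are so big, you can work out the raw data rate from the frame size alone. A small sketch, assuming 8-bit RGB (24 bits per pixel); actual cameras use various bit depths and chroma subsampling, so treat the numbers as ballpark figures:

```python
def uncompressed_mbps(width, height, fps, bits_per_pixel=24):
    """Raw data rate of uncompressed video in megabits per second.
    bits_per_pixel=24 assumes 8-bit RGB with no chroma subsampling."""
    return width * height * fps * bits_per_pixel / 1_000_000

rate = uncompressed_mbps(1920, 1080, 30)
print(round(rate))        # ~1493 Mb/s for uncompressed 1080p30
print(round(rate / 24))   # ~62x the size of a typical 24 Mb/s lossy camera file
```

That 60-fold gap is exactly the space that lossy codecs trade away quality to recover, and why lossless codecs, which can’t discard anything, save comparatively little.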
Most cameras and recorders use some type of compression, so you want to keep as much data as you can when you’re in post. If your final renders are made from your source footage, then the only loss you’ll have is from the compression of that render, if there is any. Note that if you’re rendering for archiving, compression is not recommended. If you use a lossy intermediate codec like ProRes 422 or Matrox MPEG-2, you’ll see a loss in image quality, but for the ease in workflow, and depending on the delivery format, it might be worth it. It’s best to test your workflow beforehand and decide before you edit, because being halfway through a project and seeing how much image quality you’ve lost to compression can be very depressing.
Compatibility with different codecs and formats across hardware and software can still be a huge challenge. You want to ensure that any codecs you plan to use, from acquisition in production through post and on to distribution and archiving, are compatible with your hardware and software before you start on your project. You can often see things like color shifts when moving images from one piece of software to another, since the two are probably not processing the image data the same way. On a large project, or one where consistency is critical, testing your workflow before you begin can help you avoid, or learn how to compensate for, problems like this.
The Academy of Motion Picture Arts and Sciences (they give out the Oscars) recently launched a standard image file format and workflow called ACES. It’s designed to eliminate shifts in color and other file compatibility issues between software from production through post and archiving. It’s already seeing support in software like Premiere Pro and DaVinci Resolve and use on films like “Chappie.” The Academy is also working on a hardware certification program for ACES. ACES is free and easy to use, and it’s the first cross-platform (film and video) standard for digital motion picture archiving.
Intermediate codecs or formats are used for post-production work in many situations; however, for the fastest results in post, you’ll want to avoid using intermediate formats and work from the source footage. There are times when using an intermediate is preferable or a must. If the hardware and/or software you’ll be using for your workflow doesn’t support the source footage, you’ll need to use an intermediate format. If you’re not keeping the source footage and have limited storage space for files, then an intermediate format may work better for you.
WAV files (.wav) could be called the de facto industry audio standard (there is no global audio standard; ACES only supports image files). They are compatible with almost all production and post software, and because audio files are small compared to video files, they are almost always uncompressed. Some cameras record in a WAV-compatible format, but it will be listed as PCM or Linear PCM (LPCM) because that is the codec used; the audio must be placed in a container like .avi or .mov for it to remain synced with the video being recorded. PCM audio encoding is also used in AIFF files. While there are a large number of audio codecs around, for most workflows WAV is going to be the best option for video work.
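The same data-rate math shows why uncompressed audio is no burden. A sketch, using common 48kHz, 24-bit stereo settings:

```python
def wav_mb_per_minute(sample_rate=48000, bit_depth=24, channels=2):
    """Size of uncompressed PCM/WAV audio in megabytes per minute."""
    bytes_per_second = sample_rate * (bit_depth // 8) * channels
    return bytes_per_second * 60 / 1_000_000

print(wav_mb_per_minute())  # -> 17.28 MB per minute -- tiny next to video
```

At roughly 17MB per minute, a feature film’s worth of uncompressed audio fits in the space a few minutes of compressed HD video would take, so there’s little reason to compress it.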
Common Codecs
While there are hundreds of audio and video codecs made for different uses, this is a list of common codecs and their typical applications.
H.264 (MPEG-4 AVC)
Commonly referred to as MPEG-4, H.264 uses lossy compression and is one of the most common video codecs in use today. The codec is widely supported and used in production, post and distribution of video. Many camcorders and DSLRs record in H.264. It’s the standard for Blu-ray disks as well as many web video hosts. H.264 compresses more efficiently than MPEG-2, typically delivering better video quality at the same bitrate.
MPEG-2 (H.262)
MPEG-2 is the standard for DVDs and was originally used for cable TV. It was used for HDV videotape and was popular for web video. MPEG-2 is still used by some current cameras, and it’s commonly used by editing software to render video previews. MPEG-2 is a lossy codec, but at lower levels of compression it can deliver high image quality.
H.265 (MPEG-H, HEVC)
A lossy codec and the follow up to H.264, H.265 offers better compression than its predecessor. While there isn’t much support for H.265 now, it won’t be long before it’s more widely used.
Flash Video (FLV)
Flash was once the most popular option for encoding online videos, but now the Adobe-developed lossy codec is mostly used for animations and games.
MJPEG (Motion JPEG)
MJPEG was used in the past for web video and some post work. The lossy codec isn’t as efficient as MPEG-2 or H.264 and is seldom used today. MJPEG is based on the JPEG compression used for still images.
JPEG 2000
A lossy compression scheme, JPEG 2000 is the follow-up to the JPEG format for stills. JPEG 2000 allows for very high quality image sequences, and it’s the compression used for Digital Cinema (DCP).
REDCODE
Red Digital Cinema developed its own variation of JPEG 2000 for its cinema cameras called REDCODE. It’s a low-loss, high image quality compression that is supported natively by most professional post software. REDCODE uses the .r3d file container.
Apple ProRes
Apple ProRes is a series of lossy codecs whose highest-quality settings are often described as visually lossless. Although the ProRes codecs were designed for intermediate work in post, they’re being used as acquisition formats by makers of cameras and recorders because of the popularity of the codecs with users as well as the widespread support of the codecs by software companies. The ProRes codecs replaced the older Apple Intermediate codec.
Avid DNxHD
Avid’s lossy intermediate codec, DNxHD, was designed for work with its software. Much like ProRes, hardware manufacturers are now using DNxHD in their products.
WMV (Windows Media Video)
Windows Media Video is typically used as a lossy codec and has never been widely supported except by Microsoft products. WMV is the preferred video codec for PowerPoint.
VP9
VP9 is a lossy codec designed by Google that’s used for YouTube and supported by many web browsers for HTML5 video. There’s talk of adding lossless options to the codec.
HuffYUV and Lagarith
HuffYUV and Lagarith are both free lossless video codecs that are often paired with an .avi wrapper. The two codecs are not as popular as they once were, possibly because, at a compression rate of around 3:1, they don’t save a lot of space compared to uncompressed video files.
Here are a few codecs that were once common but have been replaced or are no longer widely supported: Apple Intermediate, Apple Animation, MPEG-1, RealVideo (RealPlayer), Indeo and Cinepak.
Common Containers and Formats
There are dozens of digital video formats and containers out there; this is a list of common containers and formats and their usual uses.
CinemaDNG
Developed by Adobe and sometimes confused with Adobe DNG (for still cameras), CinemaDNG was designed to be a standardized image sequence format for feature film work. CinemaDNG supports uncompressed and compressed image files. Until the past few years, there was little support for CinemaDNG, even from Adobe. Now there are both hardware and software products that support CinemaDNG, but the files for HD, 2K and 4K tend to be very large.
ACES (Academy Color Encoding System)
ACES is a color management and image file interchange system that is free and growing in software support. ACES utilizes the OpenEXR file format developed by Industrial Light and Magic (ILM). The ACES OpenEXR files are uncompressed image sequences, but you don’t need to render those to work in ACES; you just choose the ACES profile that matches your work in your post software, import your footage and go. ACES is the only global archiving standard for digital video, so if you want your grandkids to be able to watch your work years from now, using the same archiving standard as The Academy (Oscars) is a good start to ensuring they can.
AVI
The AVI (.avi) container is supported by almost all post software. It can be used with possibly the largest number of audio and video codecs of any container, which makes it very versatile.
MOV
Apple developed the MOV (.mov) container, but it’s not limited to Apple codecs or hardware. You can work with MOV files on Linux and Windows as well as Apple platforms, with a large number of codec options.
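Because the extension names the container, not the codec, it can mislead you. Here’s a minimal sketch of how a tool can sniff the real container from a file’s opening bytes; it checks only the AVI and QuickTime/MP4 family signatures, where a real probe like ffprobe inspects far more:

```python
def sniff_container(path):
    """Guess a video file's container from its opening bytes.

    AVI files begin with 'RIFF' plus an 'AVI ' tag at byte 8; most
    QuickTime/MP4-family files carry an 'ftyp' box starting at byte 4.
    Older MOV files may start with other atoms, so this is only a sketch.
    """
    with open(path, 'rb') as f:
        head = f.read(12)
    if head[:4] == b'RIFF' and head[8:12] == b'AVI ':
        return 'AVI'
    if head[4:8] == b'ftyp':
        return 'QuickTime/MP4 family'
    return 'unknown'
```

If a clip won’t open, a check like this (or simply running ffprobe on the file) tells you whether the problem is the container itself or a codec inside it that your software lacks.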
AVCHD
Jointly developed by Sony and Panasonic as an HD recording format, AVCHD is typically used by consumer camcorders. The standard uses the H.264 video codec combined with support for compressed or uncompressed audio. Since the format is used by many cameras, software support is widespread. AVCHD typically uses the .mts and .m2ts file extensions.
AVC-Intra
Panasonic developed the AVC-Intra format for its professional camcorders. AVC-Intra uses intra-frame compression, meaning each image is compressed one frame at a time, as opposed to compression across multiple frames like AVCHD. AVC-Intra is supported by most professional post software and uses the MXF container.
MXF (Material Exchange Format)
MXF is a format designed to be a standard for file exchange of compressed video. While there is software support for MXF, it has never had the broad-reaching use of AVI or MOV.
XAVC and XAVC-S
These file formats were developed by Sony and utilize H.264 compression for recording HD and 4K video in cameras. Software support is growing for both formats; XAVC uses the MXF container, while XAVC-S records to MP4.
Putting it All Together
The value of a codec or container is how it fits into a workflow. The format doesn’t have to be a popular one to work well for you, but there will be more information available about working in the more common formats should you need help. Remember to check the specs of your hardware and software when putting together your workflow plans. By looking at the projects you want to work on, you can figure out which codecs and containers will be best for you.
Sidebar: Why Use an Image Sequence?
Image sequences can be used for rendering video; additionally, some animation software exports only to image sequences. Saving a video clip as a series of still images instead of frame-based video in a single file can have some advantages. If you’re rendering a sequence with a lot of heavy effects work to an image sequence and your render fails midway through, you can pick up where you left off, because the render will be good up until the last frame you completed. If you’re rendering to a video file format like AVI and the render is interrupted, the whole file is usually unusable.
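That resume-where-you-left-off behavior is easy to picture in code. A sketch, where render_frame is a hypothetical stand-in for your real per-frame render call:

```python
import os

def render_sequence(frame_count, out_dir, render_frame):
    """Render numbered still images, skipping frames that already exist,
    so an interrupted render can pick up where it left off."""
    os.makedirs(out_dir, exist_ok=True)
    for i in range(frame_count):
        path = os.path.join(out_dir, f"frame_{i:06d}.png")
        if os.path.exists(path):
            continue  # completed on a previous run; don't redo the work
        with open(path, 'wb') as f:
            f.write(render_frame(i))  # render_frame returns encoded image bytes
```

Run it again after a crash and only the missing frames get rendered; a single-file AVI render has no equivalent restart point.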
Image sequences are supported as import and export formats by many popular post-production software packages, including some used for still images, like Photoshop. You may even see better performance using an image sequence as opposed to a video file in some software, because the encoding of image sequences is simpler.
One of the biggest advantages of image sequences is in archiving. If a file in an archived image sequence is corrupt and can’t be repaired or opened by any available software, then all that’s lost is a single frame of video. By analyzing the frame before the lost frame and the one that follows it, tools found in many post programs can create a frame to go between the two; done properly, the result often looks like it always belonged. While these tools were designed to create frames for slow motion, they can be used for restoration as well. If a file containing frame-based video (as opposed to an image sequence) is corrupt, it can be very difficult, and sometimes impossible, to recover any of the footage.
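The interpolation idea can be shown with a toy example. Real tools use motion estimation rather than a plain average, but the principle of rebuilding a lost frame from its neighbors looks like this (frames here are just flat lists of pixel values):

```python
def rebuild_frame(prev_frame, next_frame):
    """Synthesize a missing frame by averaging the surrounding frames.
    A crude stand-in for the motion-compensated interpolation that
    slow-motion and restoration tools actually perform."""
    return [(a + b) // 2 for a, b in zip(prev_frame, next_frame)]

print(rebuild_frame([0, 100, 200], [10, 100, 100]))  # -> [5, 100, 150]
```

Where a pixel didn’t change between the neighboring frames, the rebuilt frame matches them exactly; where it did, the average lands in between, which is usually convincing for a single frame.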
There are also a few drawbacks to using image sequences, however. They don’t have audio, so you’ll have to deal with audio in its own file. Additionally, some software doesn’t perform well with compressed image sequences. Despite this, depending on the types of projects you have, image sequences may still fit in well with your workflows.
Odin Lindblom is an award-winning editor and cinematographer whose work includes film, commercials and corporate video.
A few decades back, when Videomaker was just starting out, shooting and editing got a little complicated with the introduction of VHS, VHS-C and Betamax. If you read a two-page article, you would discover the differences. When the digital age dawned, everything went out the window. Suddenly there was a bewildering array of video formats: .wmv, .asf, .rm, .mov, .mpeg. On top of that, many of these standards had their own substandards (MPEG-1, MPEG-2, etc.). How’s anyone supposed to keep it straight?
Containers and codecs
Possibly one of the most confusing things about video formats is the idea that there’s a “container” and a “codec.” It’s enough to make you yearn for the days when you just put a tape in the camera and pressed record. The plethora of video formats means that, whatever type of video production you’re doing, there’s a good way to make it happen. Twenty years ago, everybody was watching movies the same way: either on a screen via a projector or on a television set. Today many, many more options exist, and people are taking advantage of them all. From high-end 4K home theaters to video streaming on a cellphone, video is everywhere. By understanding the various formats, you can ensure that your video is viewed in the right way with the best quality.
An analogy of video formats
Figuring out exactly what containers and codecs are can be a little bewildering because it’s a very technical subject. Think of containers as types of publications. A publication might be a hardback book, a glossy magazine, a newspaper, a pamphlet or a gum wrapper. These all contain words, and potentially photographs or images, yet each works in a different way.
Think of the video format like the way you view text or images in a publication. You could print Tolstoy’s “War and Peace,” for example, on Dove candy wrappers, but it would take thousands of wrappers, and who would want to read it that way? In the same way, you can create your vacation footage in an uncompressed format, but the file becomes enormous; there is no way you could upload it to the web or send it in an email. Similarly, you might want your copy of “War and Peace” bound beautifully in a hardback book, while a takeout menu calls for color photos on heavier paper. Words with images appear in a comic book, a hardback book or a newspaper, but the images of a fashion magazine require heavyweight glossy paper to reproduce properly.
Every video application has a proper codec and container. Unfortunately, codecs and containers get improvements and updates regularly; what was a popular format a few years ago you may rarely see today. Additionally, some containers and some codecs are proprietary, which means there are occasionally licensing issues when pairing one type of container with another type of codec.
Keep in mind that when you compress video data, some data is lost in the process. Video compression works by looking for redundancies within a frame and from frame to frame. For example, one bit of blue sky is the same as another bit of blue sky, and the blue section carries through from each frame to the next. At high compression rates, the loss becomes obvious; at lower rates, it’s difficult to notice.
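That blue-sky redundancy is the heart of inter-frame compression. A very loose sketch: store a full first frame, then store only the pixels that changed:

```python
def delta_frame(prev, cur):
    """Record only the pixels that differ from the previous frame --
    a loose sketch of how inter-frame codecs exploit the fact that
    most of the picture repeats from frame to frame."""
    return {i: v for i, (p, v) in enumerate(zip(prev, cur)) if p != v}

def apply_delta(prev, delta):
    """Rebuild the current frame from the previous frame plus the delta."""
    cur = list(prev)
    for i, v in delta.items():
        cur[i] = v
    return cur

sky = [200, 200, 200, 200]         # a run of identical blue-sky pixels
next_sky = [200, 200, 190, 200]    # one pixel changed between frames
print(delta_frame(sky, next_sky))  # -> {2: 190}: only the change is stored
```

Real codecs add motion estimation and discard detail the eye won’t miss, which is where the loss comes from, but the store-only-the-differences idea is the same.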
Every video maker’s temptation is to use lossless formats to preserve all the original data, but that isn’t practical for uploading, sending or storing your files. It’s best to create multiple versions of your files for multiple uses: one format you will upload to your website, a different format or size to email to your clients, and perhaps a third saved to a hard drive for projection at an event.
Formats with lossy vs lossless compression
Edit and distribute in the highest quality
The highest quality video format is the one you originally captured. While digital files don’t degrade with age, every time they are converted, some data is lost. Converting the files straight from your camera, even into a high quality format, results in some loss of quality. It’s necessary to compress files in order to share them, so avoid re-compressing any more than you have to.
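You can see that generational loss with a toy “codec” that simply quantizes sample values. Real lossy codecs are far smarter, but the compounding error from repeated encodes is the same idea:

```python
def lossy_pass(samples, step):
    """Toy lossy encode: snap each sample down to a multiple of 'step'."""
    return [(s // step) * step for s in samples]

original = [3, 37, 142, 200]
gen1 = lossy_pass(original, 16)   # first compression
gen2 = lossy_pass(gen1, 10)       # re-compressed with different settings

err = lambda a, b: sum(abs(x - y) for x, y in zip(a, b))
print(err(original, gen1))  # -> 30: error after one generation
print(err(original, gen2))  # -> 42: worse after re-compression
```

Each generation rounds away detail the previous generation had already rounded, which is why you should always encode deliverables from the master, not from an earlier compressed copy.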
Keep your master files in the original video format. Edit from them and create versions at whatever sizes are necessary. Whenever possible, avoid converting from a file that has already been compressed.
You can rely on your editing application for a lot of the work of deciding how to compress video files. Most consumer editing software today has presets for various methods of distribution, such as email, YouTube or video display.
Let’s define a few specific video formats and the different containers. A video’s file extension usually refers to the container. A few containers almost always use the same codecs, while other containers are used with many different codecs.
Audio Video Interleave (.avi)
Microsoft developed and released AVI with Windows 3.1 way back. AVI files were once the workhorse of digital video. If I say “AVI is dead,” the comments section will clog with people still using it, so I’ll just say that its popularity has waned, though you will find lots of legacy AVI files all over the web. Short answer: don’t output video to it, but keep a player handy.
Advanced Systems Format (.asf)
ASF is another proprietary Microsoft container. It usually houses files compressed with Microsoft’s WMV codec; to make things confusing, the files are usually designated .wmv and not .asf. The ASF container has an advantage over other formats in that it can include Digital Rights Management (DRM, a form of copy protection). Microsoft designed this format for streaming video from media servers or over the Internet. Short answer, again: don’t output video to it, but keep a player handy.
QuickTime (.mov or .qt)
Apple developed QuickTime, and it supports a wide variety of codecs. It’s a proprietary format, though, and Apple decides what it supports. Like Microsoft’s .avi, QuickTime looked like it was about to fade into the sunset, until Apple released the Mavericks update, which quietly replaced anything inside a .mov container with h.264. In fact, both Nikon and Canon DSLRs output h.264 video wrapped in a .mov container. Short answer: sure, why not. Most people will be able to read .mov files for a while yet.
Advanced Video Coding, High Definition (AVCHD)
AVCHD is a very popular container for data compressed with h.264. It comes to us through a collaboration between Sony and Panasonic as a format for digital camcorders. It’s a file-based video format, meaning it’s meant to be stored and played back on storage devices like flash drives or SD cards. The format supports both standard definition and a variety of high definition variants.
This format is an extraordinarily robust container. It includes not just things like subtitles but also menu navigation and slideshows with audio. Short answer: yes.
MPEG-4 (.mp4, .m4v and others)
MPEG-4, which was based on the QuickTime file format, comes in a variety of extensions including .mp4, .m4a, .m4v, .f4v, .f4a, .m4b, .m4r and .f4b. Apple created the .m4v variant as an extension of MPEG-4; originally the format carried Apple DRM to keep files from playing on non-Apple devices. You would use it, among other places, when distributing content on iTunes. The formats are so similar that, in some instances where the DRM isn’t being used, you can simply change the file extension to .mp4 and play the file.
There are a number of codecs out there, and we have a good tutorial here. Two of the most common are:
Windows Media Video (.wmv)
Once the Internet became a primary delivery vehicle for things like video, people started trying to come up with ways to share video that wouldn’t take up a lot of bandwidth and disk space. One of the big advances was the idea of streaming video — where your computer downloads only a part of a video and begins to play while the download continues — this means you don’t have to wait two hours for a movie to download before you can start watching. Over the years the WMV format has grown to include support for high definition 720 and 1080 video. To make things complicated, files that end in .wmv are usually stored in an .asf container.
h.264 (MPEG-4/AVC)
Not only is the MPEG-2 compression codec also called h.262, you have to keep from confusing it with h.264, the codec used to compress Blu-ray disks as well as lots of web video. One of the very nice things about h.264 is that you can use it at very low and very high bitrates; it will send highly compressed, low resolution video across the web and then happily encode your high definition movie at super high bitrates for delivery to a high definition television. This is a very common codec for camcorders and digital video cameras, where its container is often AVCHD.
What’s the best video format?
While there isn’t one “best video format,” there are best video formats for particular jobs. Ask yourself a few questions about your intended audience: Will they be watching video streaming over the Internet? At what connection speeds? Do they have a DVD player or a Blu-ray player? Two other considerations are the longevity of the format and how widespread its adoption is.
For a number of years, a good bet for a forward-looking, high-quality, versatile video format has been h.264 (MPEG-4/AVC, or Advanced Video Coding). H.264 is supported by a number of important players including Microsoft, Apple and Adobe, though in early 2011 Google dropped support for h.264 from its Chrome browser, citing the desire to use only open-source (i.e. non-patented, royalty-free) standards. Microsoft swiftly made a Chrome extension which restored support. Google’s answer was to roll out VP9 (with VP10 to follow) as part of HTML5. Performance of the two codecs is very similar; adoption and usage will determine if there’s an eventual winner.
Switching between video formats
If you do transfer between formats, remember that re-compression causes degradation. If quality is of paramount importance, don’t delete the originals; archive them somewhere. Also note that when moving between containers, data streams like subtitles and chapter data may be lost if the new container doesn’t support them.
As viewing experiences and platforms evolve, video delivery will continue to evolve and change. You will see new codecs and containers that will deliver larger amounts of data more quickly and with additional data streams. There will never be a final format. Just keep your audience in mind and the type of video you want to deliver. There will always be some options that are superior to others. Good organizational skills and regular migrating to new formats will make sure that your video survives as technology changes.