Video formats explained

A few decades back, when Videomaker was just starting out, shooting and editing got a little complicated with the introduction of VHS, VHS-C and Betamax. Back then, a two-page article could cover the differences. When the digital age dawned, everything went out the window. Suddenly there was a bewildering array of video formats: .wmv, .asf, .rm, .mov, .mpeg. On top of that, many of these standards had their own sub-standards (MPEG-1, MPEG-2, etc.).

Many work hours can be lost to footage in formats that render slowly or need to be transcoded. But fear not! Once you unlock the knowledge of codecs and containers, you’ll be able to continue your quest toward great video.

Why formats matter

There’s a multitude of reasons why video format matters for video editing. If you understand the format a camera shoots in, you can calculate how much storage space you will need for the footage you plan to shoot. To know whether your editing or color correction software can handle your video natively or whether it will need to be transcoded, you have to know the format of the footage. And when a film festival or broadcaster asks for a specific delivery format, the better you understand it, the easier it will be to make your project look its best.
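As an example of the storage math, here’s a minimal back-of-the-envelope sketch in Python; the 100 Mb/s bitrate and four-hour shoot are purely illustrative numbers, not the specs of any particular camera or format.

# Rough storage estimate from a camera's recording bitrate.
# The numbers below are illustrative, not specs for any particular camera.
bitrate_mbps = 100          # camera records at roughly 100 megabits per second
shoot_hours = 4             # planned hours of footage

seconds = shoot_hours * 3600
total_megabits = bitrate_mbps * seconds
total_gigabytes = total_megabits / 8 / 1000   # megabits -> megabytes -> gigabytes

print(f"{shoot_hours} hours at {bitrate_mbps} Mb/s is roughly {total_gigabytes:.0f} GB")
# prints: 4 hours at 100 Mb/s is roughly 180 GB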

Twenty years ago, everybody was watching movies the same way: either on a screen via a projector or on a television set. Today, many more options exist, and people are taking advantage of them all. From high-end 4K home theaters to video streaming on a cellphone, video is everywhere. By understanding the various formats, you can ensure that your video is viewed the right way at the best quality. Formats also matter for cinematography, where the quality of the image that ends up on screen is critical.

What exactly is a format?

When someone asks what format a video is in, they often want to know what container and codec were used to make it — and possibly what standard it is encoded to — unless, of course, the video is from the dark ages. Then they usually want to know what kind of videotape it’s stored on and hope that they can find something that will still play it back.

Containers and codecs

Possibly one of the most confusing things about video formats is the idea that there’s both a container and a codec associated with every video file. It’s enough to make you yearn for the days when you could just put a tape in the camera and press record. On the other hand, the plethora of video formats means that whatever type of video production you’re doing, there’s a good way to make it happen.

Codecs

A codec is the method used to encode the data of an audio or video file in such a way that it can be used for playback, editing or conversion to other codecs (transcoding). Codecs organize the media data itself, but that data is held within a container. There are many different types of audio and video codecs, and they each have their own advantages.

Containers

A container, or wrapper, holds audio and video data together in a single file along with additional information. Containers have file extensions like .mov, .avi or .mp4. While some containers tend to hold media in only one particular codec, like the .mpg container used for MPEG files, others, like .mov, can hold data in a variety of audio and video codecs. The container also carries information about whether it holds both audio and video data, so things like media players know to play them together.

Containers often also hold metadata about the media in the file. That metadata can range from something as simple as the frame rate of the video to details about the camera and lens used to record the footage, the camera settings, where it was shot and other information about the shot and the production. The metadata within a container can sometimes also tell you what standards the footage was produced to.
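If you want to see the container, codecs and metadata of a file you already have, a free command-line tool like ffprobe (part of the FFmpeg project) will report them. Here’s a minimal Python sketch; it assumes ffprobe is installed and that a file named clip.mov exists, both of which are assumptions for illustration.

# Sketch: ask ffprobe to report the container and codecs of a media file.
# Assumes ffprobe (from FFmpeg) is installed and "clip.mov" exists on disk.
import json
import subprocess

result = subprocess.run(
    ["ffprobe", "-v", "error", "-print_format", "json",
     "-show_format", "-show_streams", "clip.mov"],
    capture_output=True, text=True, check=True)

info = json.loads(result.stdout)
print("Container:", info["format"]["format_name"])
for stream in info["streams"]:
    print(stream["codec_type"], "codec:", stream["codec_name"])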

An analogy for video formats

Figuring out exactly what containers and codecs are can be a little bewildering because it’s a very technical subject. Think of containers as a type of publication. It might be a hardback book, a glossy magazine, a newspaper, a pamphlet or a gum wrapper. These all contain words, and potentially photographs or images. Yet they each work in a different way.

Think of the codec as the way the text or images are put onto the page. You could print Tolstoy’s “War and Peace,” for example, on Dove candy wrappers. It would take thousands of wrappers, and who would want to read it that way? In the same way, you could save your vacation footage in an uncompressed format, but the file would be enormous. There would be no way to upload it to the web or send it in an email.

Similarly, you want your copy of “War and Peace” bound beautifully as a hardback book. If you are printing a takeout menu, on the other hand, you’ll want color photos on heavier paper. Words and images can appear in a comic book, a hardback book or a newspaper, but the images in a fashion magazine require heavy, glossy paper to reproduce properly.

Every video application has an appropriate codec and container. Fortunately or unfortunately, codecs and containers get improvements and updates regularly, so a format that was popular a few years ago may be rare today. Additionally, some containers and some codecs are proprietary, which means there are occasionally licensing issues when using one type of container with another type of codec.

Not-so-standard standards

A car dealer may tell you that a car comes with a standard spare tire, but the standard that wheel was made to may only match a few makes, models and years of cars, and it may not match any of the regular wheels on any car. Sadly, video can be very similar. If someone tells you that a video is in NTSC, they may only be referring to an NTSC-standard frame rate: 29.97 fps. If a video is said to be Rec. 709, that refers to a specific set of standards for HDTV covering things like frame rate, color gamut and resolution, although even Rec. 709 supports multiple frame rates.

To make things more confusing, there is the Rec. 2020 standard for UHDTV (4K TV) and the DCI standard for 4K digital cinema. These standards have different resolutions and different aspect ratios. Rec. 2020 has an aspect ratio of 1.78:1 (16 x 9), while DCI supports 1.85:1 and 2.39:1 (roughly 17 x 9 and 21 x 9). So if you’re working on a project like a short film that will need copies on Blu-ray and will also play from a DCP (Digital Cinema Package) at a theater, be aware that the standards for those two deliverables are very different.
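To see how different those frame sizes really are, here’s a short Python sketch that works out the aspect ratios; the resolutions listed are the commonly cited UHD and DCI 4K frame sizes.

# Compare common 4K delivery frame sizes and their aspect ratios.
frame_sizes = {
    "Rec. 2020 UHD (4K TV)": (3840, 2160),
    "DCI 4K full container": (4096, 2160),
    "DCI 4K flat (1.85:1)":  (3996, 2160),
    "DCI 4K scope (2.39:1)": (4096, 1716),
}

for name, (width, height) in frame_sizes.items():
    print(f"{name}: {width} x {height}, aspect ratio {width / height:.2f}:1")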

Some formats have been created around common codecs but only allow certain variations in things like resolution and bitrate so they can more easily be used across hardware and software platforms. AVCHD and DivX are both formats that use the H.264 (MPEG-4) codec, but they are different standards.

Compatibility

Compatibility between codecs, formats, hardware and software can still be a huge challenge. Before you start a project, make sure that every codec you plan to use, from acquisition in production through post and on to distribution and archiving, is compatible with your hardware and software. You can often see things like color shifts when moving an image from one piece of software to another, since the two probably don’t process the image data the same way. On a large project, or one where consistency is critical, testing your workflow before you begin can help you avoid problems like this or learn how to compensate for them.

Data loss

Keep in mind that when you compress video data, some data is lost in the process. Video compression works by looking for redundancies within a frame and carrying them over from frame to frame. For example, one bit of blue sky is the same as another bit of blue sky, so the blue section is carried through each frame. At high compression rates, this becomes obvious; with less compression, it’s difficult to notice.

An uncompressed codec stores media without compression, so no quality is lost, but the files are big. A lossless codec stores media with compression and no quality loss but only modest space savings. A lossy codec stores media with compression and some quality loss. With lossy compression, the higher the compression, the smaller the file and the greater the quality loss.
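To see why uncompressed files balloon so quickly, here’s a quick Python sketch that works out the raw data rate of 8-bit RGB 1080p video at 30 frames per second; the 10 Mb/s figure at the end is just an illustrative delivery bitrate for comparison, not a measurement of any specific codec.

# Raw data rate of uncompressed 8-bit RGB 1080p video at 30 fps.
width, height = 1920, 1080
bytes_per_pixel = 3                  # 8 bits each for red, green and blue
fps = 30

bytes_per_second = width * height * bytes_per_pixel * fps
megabytes_per_second = bytes_per_second / 1_000_000
gigabytes_per_hour = bytes_per_second * 3600 / 1_000_000_000

print(f"Uncompressed: {megabytes_per_second:.0f} MB/s, about {gigabytes_per_hour:.0f} GB per hour")
# An illustrative lossy delivery file at 10 Mb/s (1.25 MB/s) is roughly 150 times smaller.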

Managing data loss through the edit

Most cameras and recorders use some type of compression, so you want to keep as much data as you can once you’re in post. If your final renders are made from your source footage, then the only loss you’ll have is from the compression of that render, if there is any. Note that if you’re rendering for archiving, compression is not recommended.

Likewise, keep your master files in the original video format. The highest-quality version of your video is the one you originally captured. While digital files do not degrade over time, every time they are converted to a lossy format, some data is lost. Converting the files straight from your camera, even into a high-quality lossy format, results in some loss of quality.

It’s necessary to compress files in order to share them, but avoid re-compressing any more than you have to. Create whatever versions and sizes you need, but whenever possible, don’t make them from a file that’s already been compressed; go back to the source.

Image of woman with close up inset showing artifacts.

Intermediate formats

Intermediate codecs or formats are used for post-production work in many situations; however, for the fastest results in post-production, you’ll want to avoid using intermediate formats and work from the source footage. There are times when using an intermediate is preferable or a must. If the hardware and/or software you’ll be using for your workflow doesn’t support the source footage, you’ll need to use an intermediate format. If you’re not keeping the source footage and have limited storage space for files, then an intermediate format may work better for you.

Depending on the delivery format, using a lossy intermediate codec like ProRes 422 or Matrox MPEG-2 might be worth it for the ease of workflow, despite the loss in image quality. It’s best to test your workflow beforehand and decide before you edit. Being halfway through a project and realizing how much image quality you’ve lost to compression can be very depressing.
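As one illustration of how an intermediate gets made, the free FFmpeg tool can transcode a camera file into ProRes 422. The Python sketch below is just a thin wrapper around that command; it assumes ffmpeg is installed and that the source file is named source.mp4, and the settings shown are examples rather than recommendations.

# Sketch: transcode a camera file to a ProRes 422 intermediate with FFmpeg.
# Assumes ffmpeg is installed and "source.mp4" exists; settings are illustrative.
import subprocess

subprocess.run(
    ["ffmpeg", "-i", "source.mp4",
     "-c:v", "prores_ks", "-profile:v", "2",   # profile 2 = standard ProRes 422
     "-c:a", "pcm_s16le",                      # uncompressed PCM audio
     "intermediate.mov"],
    check=True)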

If you do transfer between formats, remember that re-compression causes degradation. If quality is of paramount importance, don’t delete the originals; archive them somewhere. Be sure to note that when moving between containers, data streams like subtitles and chapter data may be lost if the new container doesn’t support them.

Export according to your use

Every video producer’s temptation is to use lossless formats to preserve all the original data, but that isn’t practical for uploading, sending or storing your files. It’s best to create multiple versions for multiple uses. You might upload one file format to your website, choose a different format or size to email to your clients, then save the final product in a third video format to a hard drive for projection at an event.

You can rely on your editing application for a lot of the work in deciding how to compress video files. Most consumer editing software today will have presets for various methods of distribution — such as email, YouTube or video display.

Since compressing a video file can cause it to lose a lot of its image and sound quality, taking the same file and compressing it multiple times can make that loss much worse. Most online video hosts like Vimeo and YouTube will re-compress the video you upload, so you want to make sure that you maintain as much quality as you can before it gets to them. Whenever possible, you’ll want to edit and master from your source footage or transcode to uncompressed or lossless codecs to maintain media quality.

Editing software like Final Cut will give you lots of choices depending on how you want to use your video.

What’s the best video format?

While there isn’t one “best video format,” there are best video formats for particular jobs. Ask yourself a few questions about your intended audience: Will they be watching the video streamed over the internet? What kind of connection speeds will they have? How long will the format be around, and how widespread is its adoption?

Common containers

A video’s file extension usually refers to the container. A few containers are almost always paired with the same codec, while others are used with many different codecs.

There are dozens of digital video formats and containers out there; we’ve attempted to list the ones you’re most likely to encounter.

.mp4

This container format is one of the most commonly used formats today, especially when it comes to sharing content online. In fact, YouTube recommends uploading files in the .mp4 format for the best video quality. In addition to video and audio data, it can also be used to store things like subtitles and still images. It’s most commonly paired with H.264 or H.265.

.mov

The Apple-developed .mov file supports a wide variety of codecs. This container format is commonly used both in capture and export. In fact, both Nikon and Canon cameras output H.264 video wrapped in a .mov container.

Apple developed the MOV (.mov) container, but it’s not limited to Apple codecs or hardware. You can work with MOV files in Linux and Windows as well as Apple platforms with a large number of codec options.

.avi

Microsoft developed AVI and released it with Windows 3.1 back in 1992. AVI files were once a workhorse of digital video. If I say “AVI is dead,” the comments section will clog with people still using it, so I’ll just say that its popularity has waned, though you’ll still find lots of legacy AVI files all over the web. Short answer: don’t output video to it, but keep a player handy.

.asf

ASF is another proprietary Microsoft container. It usually houses files compressed with Microsoft’s WMV codec, though to make things confusing, the files are usually given a .wmv extension rather than .asf. The ASF container has an advantage over other formats in that it can include Digital Rights Management (DRM, a form of copy protection). Microsoft designed this format for streaming video from media servers or over the internet. Short answer: again, don’t output video to it, but keep a player handy.

AVCHD

Jointly developed by Sony and Panasonic as an HD recording format, AVCHD (Advanced Video Coding, High Definition) is typically utilized by consumer camcorders. The standard uses the H.264 video codec combined with support for compressed or uncompressed audio. Since the format is used by many cameras, software support is widespread. AVCHD typically uses the .mts and .m2ts file extensions.

AVCHD is a file-based video format, meaning it’s recorded to and played back from storage devices like flash memory or SD cards rather than tape. This format supports both standard definition and a variety of high-definition variants.

This format is an extraordinarily robust container. It includes not just things like subtitles but also menu navigation and slideshows with audio. Short answer: yes.

MXF

MXF (Material Exchange Format) is a container designed to be a standard for the file exchange of compressed video. While there is software support for MXF, it has never had the broad reach of AVI or MOV.

CinemaDNG

Developed by Adobe and sometimes confused with Adobe DNG (for still cameras), CinemaDNG was designed to be a standardized image-sequence format for feature film work. CinemaDNG supports both uncompressed and compressed image files. Until the past few years, there was little support for CinemaDNG, even from Adobe. Now there are both hardware and software products that support it, but the files for HD, 2K and 4K tend to be very large.

ACES

ACES (Academy Color Encoding System) is a free color management and image interchange system that is growing in software support. It utilizes the OpenEXR file format developed by Industrial Light and Magic (ILM). ACES OpenEXR files are uncompressed image sequences, but you don’t need to render to them to work in ACES; you just choose the ACES profile that matches your work in your post software, import your footage and go. ACES is the only global archiving standard for digital video, so if you want your grandkids to be able to watch your work years from now, using the same archiving standard as the Academy (the organization behind the Oscars) is a good start to ensuring they can.

There are many codecs to choose from, as you can see in this dropdown menu from Final Cut.

Common codecs

While there are hundreds of audio and video codecs around made for different uses, this is a list of common codecs and their typical applications.

H.264 (MPEG-4)

Commonly referred to simply as MPEG-4 (formally, it’s MPEG-4 Part 10, or AVC), H.264 uses lossy compression and is one of the most common video codecs in use today. The codec is widely supported and used in production, post-production and distribution of video. Many camcorders and DSLRs record in H.264, and it’s the standard for Blu-ray discs as well as many web video hosts. H.264 compresses more efficiently than MPEG-2, typically delivering better video quality at the same bitrate.

One of the very nice things about H.264 is that you can use it at very low and very high bitrates. H.264 will send highly compressed low-resolution video across the web and then happily encode your high-definition movie at super high bitrates for delivery to an HD television.

This codec is often used with .mp4 and .mov containers.
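For instance, a common way to get H.264 into an .mp4 container is FFmpeg’s libx264 encoder. The Python sketch below assumes ffmpeg (built with libx264) is installed and that a file named master.mov exists; the quality and preset values are typical starting points, not universal settings.

# Sketch: encode an H.264 .mp4 for delivery using FFmpeg's libx264 encoder.
# Assumes ffmpeg with libx264 is installed and "master.mov" exists.
import subprocess

subprocess.run(
    ["ffmpeg", "-i", "master.mov",
     "-c:v", "libx264", "-crf", "18", "-preset", "slow",  # lower CRF = higher quality
     "-c:a", "aac", "-b:a", "192k",
     "delivery.mp4"],
    check=True)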

H.265 (MPEG-H, HEVC)

A lossy codec and the follow-up to H.264, H.265 offers better compression than its predecessor. Support for H.265 is growing, and it’s quickly becoming widely used.

ProRes

Apple ProRes is a family of lossy codecs that range from lightweight proxy files up to very high-quality, visually lossless grades. Although the ProRes codecs were designed for intermediate work in post, they’re also being used as acquisition formats by makers of cameras and recorders because of the popularity of the codecs with users and their widespread support by software companies. The ProRes codecs replaced the older Apple Intermediate Codec.

DNxHD

Avid’s lossy intermediate codec, DNxHD, was designed for work with its software. Much like ProRes, hardware manufacturers now also use DNxHD in their products.

XAVC and XAVC-S

These recording formats were developed by Sony and utilize H.264 compression for recording HD and 4K video in cameras. Software support for them is growing. XAVC uses the MXF container, while the consumer-oriented XAVC-S uses MP4.

AVC-Intra

Panasonic developed the AVC-Intra format for its professional camcorders. AVC-Intra uses intra-frame compression, meaning each frame is compressed on its own rather than across multiple frames as with AVCHD. AVC-Intra is supported by most professional post software and uses the MXF container.

Flash

Flash was once the most popular option for encoding online videos, but now, the Adobe-developed lossy codec is mostly used for animations and games.

Windows Media Video (.wmv)

Once the internet became a primary delivery vehicle for video, people started trying to come up with ways to share it that wouldn’t take up a lot of bandwidth and disk space. One of the big advances was streaming video, where your computer downloads only part of a video and begins to play it while the download continues, so you don’t have to wait two hours for a movie to download before you can start watching. Microsoft designed Windows Media Video with exactly this kind of delivery in mind.

Over the years, the WMV format has grown to include support for high-definition 720 and 1080 video. To make things complicated, files that end in .wmv are usually stored in an ASF container.

Windows Media Video has never been widely supported outside of Microsoft products, but it is the preferred video codec for PowerPoint.

MPEG-2

MPEG-2 is the standard for DVDs and was originally used for cable TV. It was used for HDV videotape and was popular for web video. MPEG-2 is still used by some current cameras, and it’s commonly used by editing software to render video previews. MPEG-2 compression is lossy, but at lower levels of compression, it can deliver high image quality.

MJPEG (Motion JPEG)

MJPEG was used in the past for web video and some post work. The lossy codec isn’t as efficient as MPEG-2 or H.264 and is seldom used anymore. MJPEG is based on the JPEG compression used for still images.

JPEG 2000

A lossy compression, JPEG 2000 is the follow-up to the JPEG format for stills. The JPEG 2000 format allows for very high-quality image sequences, and it’s the compression used for Digital Cinema (DCP).

REDCODE

Red Digital Cinema developed its own variation of JPEG 2000 for its cinema cameras called REDCODE. It’s a low-loss, high-image-quality compression that is supported natively by most professional post software. REDCODE uses the .r3d file container.

Why use an image sequence?

Image sequences can be used for rendering video, and some animation software only exports to image sequences. Saving a video clip as a series of still images instead of as frames in a single video file can have some advantages. If you’re rendering a sequence with a lot of heavy effects work to an image sequence and the render fails midway through, you can pick up where you left off, because the render is good up to the last frame you completed. If you’re rendering to a video file format like AVI and the render is interrupted, the whole file is usually unusable.
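As a simple illustration, FFmpeg can render a clip out to a numbered image sequence and rebuild a video from one. The Python sketch below assumes ffmpeg is installed and that clip.mov exists; the file names and the 24 fps frame rate are placeholders.

# Sketch: render a clip to a numbered PNG sequence with FFmpeg, then reassemble it.
# Assumes ffmpeg is installed and "clip.mov" exists; names and frame rate are placeholders.
import subprocess

# Video -> image sequence (frame_000001.png, frame_000002.png, ...)
subprocess.run(["ffmpeg", "-i", "clip.mov", "frame_%06d.png"], check=True)

# Image sequence -> video, here wrapped back into a .mov at an assumed 24 fps
subprocess.run(["ffmpeg", "-framerate", "24", "-i", "frame_%06d.png",
                "-c:v", "prores_ks", "rebuilt.mov"], check=True)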

Image sequences are supported as import and export formats by many popular post-production software packages, including some used for still images, like Photoshop. You may even see better performance using an image sequence instead of a video file in some software, because the encoding of image sequences is simpler.

One of the biggest advantages of image sequences is in archiving. If a file in an archived image sequence is corrupt and can’t be repaired or opened by any available software, all that’s lost is a single frame of video. By analyzing the frame before the lost frame and the one after it, tools found in many post programs can create a frame to go between the two; if done well, the result is often convincing enough to look like it always belonged. While these tools were designed to create frames for slow motion, they can be used for restoration as well.

If a file containing frame-based video (as opposed to an image sequence) is corrupt, it can be very difficult and sometimes impossible to recover any of the footage.

There are also a few drawbacks to using image sequences, however. They don’t carry audio, so you’ll have to deal with audio in its own file. Additionally, some software doesn’t perform well with compressed image sequences. Still, depending on the types of projects you have, image sequences may fit in well with your workflows.

Audio codecs

WAV files (.wav) could be called the default industry audio standard. They are compatible with almost all production and post software, and they are typically uncompressed; because audio files are small compared to video files, there is little need to compress them. Some cameras record in a WAV-compatible format, but it will be listed as PCM or Linear PCM (LPCM), because that is the codec being used; the audio is placed in a container like .avi or .mov so that it stays synced with the video being recorded at the same time. PCM audio encoding is also used in AIFF files. While there are a large number of audio codecs around, for most workflows, WAV is going to be the best option for video work.
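As an example of how simple WAV handling is, the Python sketch below pulls the PCM audio out of a camera file into a standalone .wav using FFmpeg; it assumes ffmpeg is installed and that clip.mov exists.

# Sketch: extract the audio track from a camera file as an uncompressed WAV.
# Assumes ffmpeg is installed and "clip.mov" exists.
import subprocess

subprocess.run(
    ["ffmpeg", "-i", "clip.mov",
     "-vn",                      # drop the video stream
     "-c:a", "pcm_s16le",        # 16-bit PCM, standard WAV audio
     "audio.wav"],
    check=True)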

Putting it all together

The value of a codec or container is how it fits into a workflow. The format doesn’t have to be a popular one to work well for you, but there’ll be more information available about working in the more common formats should you need help. Remember to check the specs of your hardware and software when putting together your workflow plans. By looking at the projects you want to work on, you can figure out which codecs and containers will be best for you.

As viewing experiences and platforms evolve, video delivery will continue to change. New codecs and containers will arrive that deliver larger amounts of data more quickly and with additional data streams. There will never be a final format. Just keep your audience in mind and the type of video you want to deliver; there will always be some options that are superior to others. Good organizational skills and regular migration to newer formats will make sure that your video survives as technology changes.