The fundamentals of audio editing for video

Having a wide set of skills is a must in today’s production environment. One cannot simply expect to get by on a single main skill, no matter their proficiency. Augmenting your range of abilities with fundamentals like audio editing is key to securing and delivering projects.

Audio breaks down into two main components: recording and mixing. Good audio editing depends heavily on the source material, and without properly recorded audio the process becomes one of diminishing returns.


Timing and sync

This starts with your frames-per-second setting. The following frame rates are the most common film and television standards:

  • 24 for most cinematic work
  • 25 for PAL and European broadcast
  • 29.97 drop-frame for NTSC and American broadcast
  • 48, 50 and 60 for more exotic aesthetic choices, common for the action-cam footage synonymous with GoPros

Getting these wrong more or less invalidates your SMPTE timecode and makes any kind of sync impossible.
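As an illustration, non-drop timecode is just integer arithmetic on the frame count. A minimal sketch (the function name is mine, and drop-frame 29.97 needs extra frame-dropping logic not shown here):

```python
# Sketch: converting a frame count to non-drop SMPTE timecode (HH:MM:SS:FF).
# Handles integer rates like 24 or 25; 29.97 drop-frame is more involved.

def frames_to_timecode(total_frames: int, fps: int) -> str:
    """Convert an absolute frame count to HH:MM:SS:FF at an integer frame rate."""
    frames = total_frames % fps
    total_seconds = total_frames // fps
    seconds = total_seconds % 60
    minutes = (total_seconds // 60) % 60
    hours = total_seconds // 3600
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"

print(frames_to_timecode(86400, 24))  # exactly one hour at 24 fps -> 01:00:00:00
```

Run the same frame count through the wrong rate and you get a different timecode, which is exactly how sync drifts apart.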

Sample rates and bit depth

The sample rate determines the number of samples captured per second, while the bit depth determines the resolution and dynamic range of each sample.

A bit depth of 24-bit is standard for production and mixdown environments. Content is only dithered down to 16-bit if required for a particular medium.

The standard sample rate for video has long been 48 kHz, with 96 kHz and 192 kHz beginning to see more frequent use. While there is nothing wrong with using higher sample rates, they will consume more space, processing power and bandwidth.
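To put numbers on that cost, uncompressed PCM size is simply sample rate × bytes per sample × channels × duration. A quick sketch (illustrative helper, not any particular DAW's math):

```python
# Sketch: uncompressed PCM storage cost per minute for common settings.

def pcm_bytes_per_minute(sample_rate: int, bit_depth: int, channels: int) -> int:
    """Bytes of raw PCM audio per minute of recording."""
    return sample_rate * (bit_depth // 8) * channels * 60

mb = lambda b: b / 1_000_000

# Stereo 24-bit at 48 kHz vs. 192 kHz:
print(f"{mb(pcm_bytes_per_minute(48_000, 24, 2)):.1f} MB/min")   # 17.3 MB/min
print(f"{mb(pcm_bytes_per_minute(192_000, 24, 2)):.1f} MB/min")  # 69.1 MB/min
```

Quadrupling the sample rate quadruples the footprint, which adds up quickly across a multitrack session.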

Higher sample rates offer increased fidelity — a wider captured frequency range, while dynamic range comes from bit depth — and always leave the option of higher quality exports, much in the same way that having 4K raw footage leaves the door open for exporting to multiple resolutions, formats and remasters. In the meantime, you will have to downsample, because most exports are delivered at 48 kHz.

Most high-fidelity sound formats, like DTS-HD and Dolby TrueHD, typically carry 24-bit audio at 48 kHz or higher. Older media, such as DVD, deliver 16-bit PCM or compressed AC-3 (Dolby Digital) audio.


Gain staging

Staging your gain is key to ending up with levels that are easy to mix and getting the most out of your microphones. You typically want to see levels of -12 to -9 dBFS in digital recordings, with the latter being about as hot as you want to be at the input stage. This leaves plenty of headroom for mixing.

Setting it too high risks clipping, while setting it too low leaves a signal that is outside of the nominal range and will need further amplification during mixing, which, in turn, will introduce additional noise.
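Since dBFS is just a logarithmic measure of amplitude relative to full scale, you can check where a recording peaks in a few lines. A sketch assuming samples normalized to the -1.0 to 1.0 range:

```python
import math

# Sketch: peak level of normalized samples (-1.0..1.0) expressed in dBFS.

def peak_dbfs(samples):
    """Return the peak sample level in dB relative to full scale."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return float("-inf")  # digital silence
    return 20 * math.log10(peak)

# A peak at a quarter of full scale sits right around -12 dBFS,
# in the sweet spot described above:
print(round(peak_dbfs([0.1, -0.25, 0.2]), 1))  # -12.0
```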



Organization

Good project structure and file management make editing and mixing more of a process and less of an ordeal.

Group your tracks by type: dialogue, background noise, auxiliaries and potential effects. Digital audio workstations (DAWs) typically have folder and grouping structures that help with bulk edits and keeping regions together.

I tend to lump the initial batch of cuts and trims into this stage, removing a lot of clutter.

Levels and panning

Now that you have a working skeleton, you can start creating a working balance out of your current sounds. This sets your foundation for adding EQ, compression and other audio filters and effects.

Mutes, cuts, and fades

When in doubt, mute it! Start by stripping away silence and trailing sounds. Most DAWs have silence detecting functions that can automatically identify and strip silence from audio regions. Make sure the detection isn’t too aggressive or you might accidentally lose some quiet passages.
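A silence detector of this kind boils down to scanning for stretches where the level stays under a threshold. A toy per-sample sketch (real detectors work on short RMS windows; the threshold and minimum length here are illustrative):

```python
# Sketch: naive silence detection. Finds regions where the level stays under
# `threshold` for at least `min_len` samples. DAW detectors use windowed RMS;
# this per-sample version just shows the idea.

def find_silence(samples, threshold=0.01, min_len=3):
    """Return (start, end) index pairs of silent regions."""
    regions, start = [], None
    for i, s in enumerate(samples):
        if abs(s) < threshold:
            if start is None:
                start = i
        else:
            if start is not None and i - start >= min_len:
                regions.append((start, i))
            start = None
    if start is not None and len(samples) - start >= min_len:
        regions.append((start, len(samples)))
    return regions

print(find_silence([0.5, 0.0, 0.0, 0.0, 0.6, 0.0]))  # [(1, 4)]
```

Note how the trailing quiet sample is left alone because it is shorter than `min_len` — the same reason you keep detection from being too aggressive.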

With non-destructive editing in today’s DAWs, you never truly delete audio unless you delete the source files or convert regions to new audio files. This means that you can restore regions back to their original states even after trimming and resizing them.

Fades and crossfades are useful when muting or cutting is simply too aggressive and your edit requires a more subtle approach. Crossfades are fantastic at smoothing transitions between two edited audio regions.
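The idea behind a smooth crossfade can be sketched numerically. The equal-power (cosine) curves below keep perceived loudness roughly constant through the overlap, where a plain linear fade can dip in the middle (the helper name and equal-length regions are my assumptions):

```python
import math

# Sketch: equal-power crossfade between two overlapping regions of the same
# length. At every point the two gains satisfy g_out^2 + g_in^2 == 1, which
# keeps perceived loudness steady through the transition.

def equal_power_crossfade(out_region, in_region):
    """Mix a fading-out region into a fading-in region of equal length."""
    n = len(out_region)
    mixed = []
    for i in range(n):
        t = i / (n - 1)  # 0.0 -> 1.0 across the overlap
        gain_out = math.cos(t * math.pi / 2)
        gain_in = math.sin(t * math.pi / 2)
        mixed.append(out_region[i] * gain_out + in_region[i] * gain_in)
    return mixed
```

At the edges of the overlap only one region is audible; in between, both contribute without the combined power dropping.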

Noise reduction

Ridding yourself of sounds that don’t belong should always be high on your to-do list. Otherwise they will only get in the way, steal signal bandwidth and detract from the overall quality.

The most common culprit is the 50 or 60 Hz hum, usually caused by a poorly routed or grounded power source. Other frequent offenders are wind and background noise, and the two should be avoided or minimized as much as possible during recording through the use of windscreens, directional microphones and good microphone placement.

Waves offers both the X-Noise and X-Hum plug-ins. Each offers noise reduction functionality that can learn a noise profile on an audio track and then suppress it at a specified level.

Alternatively, you can configure a narrow band or notch filter to reduce a target frequency by sweeping through the frequency band and identifying the offending sound. Regardless of the approach, you will need to balance the need for noise reduction against cutting a sound’s fundamental frequencies.
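For the notch approach, the standard recipe is a biquad filter. The sketch below uses the well-known RBJ Audio EQ Cookbook notch coefficients to attenuate a 60 Hz hum — a from-scratch illustration, not any particular plug-in's implementation:

```python
import math

# Sketch: a biquad notch filter (RBJ Audio EQ Cookbook coefficients) aimed at
# mains hum. f0 is the frequency to cut; q controls how narrow the notch is.

def notch_coefficients(f0, fs, q):
    """Normalized biquad coefficients (b, a) for a notch at f0 Hz."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    cos_w0 = math.cos(w0)
    a0 = 1 + alpha
    b = [1 / a0, -2 * cos_w0 / a0, 1 / a0]
    a = [1.0, -2 * cos_w0 / a0, (1 - alpha) / a0]
    return b, a

def biquad_filter(samples, b, a):
    """Direct form I: y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]."""
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b[0] * x + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

# Notch out a 60 Hz hum from one second of 48 kHz audio:
b, a = notch_coefficients(60, 48_000, q=10)
hum = [math.sin(2 * math.pi * 60 * n / 48_000) for n in range(48_000)]
filtered = biquad_filter(hum, b, a)
# After the filter settles, the hum is attenuated to near silence.
```

A narrower notch (higher Q) cuts less of the neighboring spectrum, which is the trade-off against removing a sound's fundamental frequencies mentioned above.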

Our article on fixing messy audio goes into greater detail on muting and using noise reduction.


Equalization

The crisp clarity associated with a professional sounding production always starts here. Equalization will bring out the frequencies that make dialogue sound flattering, push back some of the low-end, and allow a good performance to shine.

A graphic equalizer is a powerful audio clean-up and sweetening tool in the right hands. To learn more about EQ, try adjusting different frequencies on a sound source and note how the sound changes.

EQ is also the most basic form of noise reduction; a 32-band equalizer in the right hands can be used to great effect. There are a variety of EQ exercises for engineers to practice with to hone their hearing and ability to identify sounds. Try sweeping through a source sound with an equalizer to learn where different sounds live.


Compression

Getting compression right means that your sound sits comfortably in its own place within the mix with consistent dynamics that still leave enough room to differentiate between soft and loud sections.

At their most basic, compressors operate using ratios, thresholds and output gains. The ratio determines the amount of compression; very high ratios will cause the compressor to function as a limiter, which can sound unnatural. The threshold sets where the compressor kicks in, and the output gain makes up for any loss of amplitude, allowing you to adjust the final output level.

Compressors also operate on attack and release times that allow you to dial in the compression speed. The rate, usually measured in milliseconds, depends on the sound source. A slow sound source will benefit from longer attack and release times, whereas fast transients will benefit from faster settings.
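The ratio/threshold/gain relationship can be shown as a static gain computer working in dB. A sketch with attack and release smoothing deliberately omitted and illustrative parameter values:

```python
# Sketch: the static gain computer of a compressor — threshold, ratio and
# make-up gain only. Real compressors smooth this curve with attack/release
# envelopes; this shows just the core level math in dB.

def compress_db(level_db, threshold_db=-20.0, ratio=4.0, makeup_db=0.0):
    """Map an input level (dBFS) to an output level through the compressor."""
    if level_db <= threshold_db:
        return level_db + makeup_db  # below threshold: untouched
    over = level_db - threshold_db
    return threshold_db + over / ratio + makeup_db

print(compress_db(-8.0))   # 12 dB over threshold -> only 3 dB over: -17.0
print(compress_db(-30.0))  # below threshold: passes through at -30.0
```

With a 4:1 ratio, every 4 dB of input above the threshold yields 1 dB of output; push the ratio toward 20:1 or higher and the curve flattens into limiting.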


Automation

The biggest piece of advice regarding automation in audio editing is not to start too early; otherwise, a lot of time gets spent rebalancing and rewriting automation. It really helps to have confidence in your mixes, levels and balances. Automation is the final touch that ties in all the elements and commits the dynamic flourishes you have picked out.

The standard automation modes are:

  • Read – plays back existing automation
  • Write – writes new automation continuously as the playhead moves
  • Latch – writes while you move a parameter, then keeps writing its last value
  • Touch – writes only while the parameter is being touched, then returns to the existing automation

I tend to only use write automation when I am manually inputting a live automation run. Otherwise, I use read mode and add nodes on the automation panel. A lot of automation consists of rudimentary level changes, but there are moments that call for a more hands-on, musical approach.
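Node-based read automation amounts to breakpoints with interpolation between them. A minimal sketch with linear segments (the function name and node layout are mine):

```python
# Sketch: read-mode automation as (time_seconds, value) breakpoints with
# linear interpolation between them — the way basic level rides are usually
# drawn on an automation lane.

def automation_value(nodes, t):
    """Interpolate a parameter value at time t from time-sorted nodes."""
    if t <= nodes[0][0]:
        return nodes[0][1]
    if t >= nodes[-1][0]:
        return nodes[-1][1]
    for (t0, v0), (t1, v1) in zip(nodes, nodes[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

# A level ride: hold -6 dB, then fade down to -60 dB between 10 s and 12 s.
nodes = [(0.0, -6.0), (10.0, -6.0), (12.0, -60.0)]
print(automation_value(nodes, 11.0))  # halfway through the fade: -33.0
```

Curved fades just swap the linear segment for a different interpolation shape; the breakpoint structure stays the same.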

For more examples and a walkthrough of the above, please see our video on audio editing.

Finishing Up

For the beginner, it will take time and experimentation to arrive at a satisfactory audio edit, but as you become familiar with your tools and their effects, the process will become faster and easier. Following these steps and listening carefully at every stage will give you a believable soundtrack that supports your visuals and your story.

Blag spends his time between web development, IT and audio. His background is, oddly enough, in the same things. Blag works at a software company and is a contributing editor at Videomaker, where he mainly focuses on, you guessed it, audio.
