The best way to explain reverb is by clearing up a common misconception: reverb is frequently confused with echo. Technically, reverb is the remainder of a sound once it has dispersed within an acoustic space.
A more intuitive definition can be illustrated through the following thought experiment. Picture yourself standing in a large room. Now clap your hands. Imagine the sound as it travels around the room and interacts with each surface. Reverb is the wash of sound that remains in the air after the sound source has ceased. It is the result of the sound’s reflection off surfaces, diffusion around them and absorption by them. Anyone who has mentioned a room’s ambience is referring to its reverb characteristics.
Echoes occur when a sound is directly reflected back to the source’s position. In pro audio, an echo is referred to as delay, since it is one of many types of delayed signal produced by outboard delay gear and plug-ins.
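The core behavior of a delay unit is simple enough to sketch in a few lines. The following is an illustration of a single-tap delay, not a model of any particular piece of gear; the delay length and gain are arbitrary example values.

```python
# Minimal single-tap delay: mix the dry signal with a delayed,
# attenuated copy of itself.
def delay_effect(samples, delay_samples, gain=0.5):
    out = list(samples) + [0.0] * delay_samples  # room for the echo tail
    for i, s in enumerate(samples):
        out[i + delay_samples] += s * gain       # add the delayed copy
    return out
```

Feeding the output back into the delay line, instead of tapping only the dry signal, would turn this single echo into a repeating one.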
A question to ask yourself when recording is whether you want reverb to be part of your source signal or added during post-production. Reverb can always be added, but it is extremely difficult to take away. A safe and flexible approach uses room microphones to record the ambient room sound while using close-proximity microphones on the sound sources. These signals can then be blended independently during mixing.
Close-proximity recordings are usually made with directional microphones, such as cardioid and shotgun patterns. This can vary by application; shotgun microphones, for instance, are not commonly used on instruments. Meanwhile, omnidirectional and cardioid microphones can both serve as room microphones. Always use your ears when making a microphone selection; in the end, these suggestions are just guidelines.[image:magazine_article:56690]
Acoustically Treating a Space
The sounds we hear are categorized as complex sounds because they are made up of multiple simultaneous frequencies. No all-inclusive solution exists for acoustically treating a space; instead, we rely on a combination of products and materials. For a material to absorb sound and reduce reflectivity, it needs both mass and density. Upon contacting a surface, a sound is partly absorbed, partly transmitted and partly reflected; the degree to which each occurs depends on the type of material. Acoustic treatment traps targeted frequencies in order to balance absorption and reflection. Studio spaces also have to suppress transmission to prevent leakage between rooms, because the last thing you want is the sound of your rowdy band mates in the next room bleeding into your pristine vocal track.
Temporarily treating an acoustic space requires identifying the offending frequencies and surfaces. A room analysis is done with a calibration microphone, frequency analysis software and playback equipment. A frequency analysis provides a precise reading of a room’s frequency distribution based on the microphone’s location. Bare glass, stone and concrete are highly reflective surfaces. However, concrete is also good at limiting transmission, even at low frequencies. Acoustic fabrics, such as curtains, are an effective means of reducing higher-frequency reflections. Acoustic panels can be used in the area surrounding the set, stage or source to trap and reduce localized sound levels and reflections. These panels are most effective at midrange frequencies of 500Hz to 4.5KHz. In tandem, these methods will increase a sound’s energy loss, reduce reflections and provide some absorption. The result should cut down reverb length and make for a smaller-, tighter-sounding room. Choosing the right product will depend on the desired frequency reduction, aesthetic needs and dimensions.
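At its core, the room analysis described above is a spectrum measurement. Here is a minimal sketch of that idea using a naive DFT; real analyzer software uses an FFT plus octave smoothing and a calibration curve, so treat this only as an illustration of finding the loudest frequency in a calibration-mic recording.

```python
import cmath
import math

def magnitude_spectrum(samples, sample_rate):
    """Naive DFT: (frequency in Hz, magnitude) for each bin up to Nyquist."""
    n = len(samples)
    spectrum = []
    for k in range(n // 2):
        coeff = sum(samples[i] * cmath.exp(-2j * math.pi * k * i / n)
                    for i in range(n))
        spectrum.append((k * sample_rate / n, abs(coeff) / n))
    return spectrum

def loudest_frequency(samples, sample_rate):
    """Report the 'offending' frequency: the bin carrying the most energy."""
    return max(magnitude_spectrum(samples, sample_rate), key=lambda p: p[1])[0]
```

Moving the microphone and re-running the measurement shows how the distribution shifts with listening position, which is why an analysis is always tied to the microphone’s location.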
Avoid reverb-rich environments
This should be a practical consideration at the start of a project. If work is expected to take place in reverberant environments, decide whether it is possible to limit dialogue recording to ADR/post and use the location sound only as a reference. If the set recording must be used, it is the audio engineer’s responsibility to use the correct tools to minimize unwanted sounds during capture and to avoid subsequent issues during post-production.
Shure SM 7
Often seen as the standard in broadcast microphones, the SM 7 also has a strong following amongst vocalists. The SM 7 is a dynamic cardioid microphone that offers a comparatively warm sound along with good off-axis rejection at frequencies above 5KHz. The SM 7 works best in close-mic setups.
Rode Lavalier
The Rode Lavalier is an omnidirectional microphone and an excellent option for capturing close-proximity dialogue. Setting the gain correctly will allow for good signal pickup while limiting background noise and reverb. Passages that occur in between the dialogue can easily be edited out or reduced with a noise gate.
Sennheiser K6
The K6 is one of my favorite microphones because of its rich feature set and great sound. The K6 is a modular base that accepts several different capsules: the ME66 is a short shotgun capsule, the ME67 a long shotgun capsule and the ME64 a cardioid capsule. The K6 runs on phantom power or a single AA battery. The capsules can handle high sound levels and a variety of sources, and the K6 can also be mounted on a video camera.
The iZotope RX4 Audio Repair Toolbox includes a Dereverb tool that analyzes a recording and provides adjustable parameters for reducing the audibility of reverb. At its core, the Dereverb plug-in uses EQ and a degree of compression to reduce the higher frequencies of 8KHz and above and the lower frequencies of 200Hz and below.[image:magazine_article:56691]
You can just as easily use stand-alone tools to achieve similar results. The majority of the human voice resides in the range between 500Hz and 4KHz. Using an equalizer (EQ) to favor this range is an effective place to begin your adjustments, since reverb and room noise reside in the higher and lower frequency ranges. A low-pass EQ at around 15KHz with a 12dB/octave slope is a safe starting point for reducing room reverb. Adding a high-pass EQ at approximately 150Hz with a 12dB/octave slope is a good starting point for reducing background room noise. Adjust these settings until you achieve the best results without negatively impacting the quality of the dialogue or target sound.
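As a concrete sketch of those two starting points: a 2nd-order filter has roughly a 12dB/octave slope, and the coefficients below follow the widely published Audio EQ Cookbook biquad formulas. The 15KHz and 150Hz cutoffs are just the starting values suggested above, not fixed recommendations.

```python
import math

def biquad_coeffs(kind, cutoff_hz, sample_rate, q=0.7071):
    """Audio EQ Cookbook coefficients for a low-pass or high-pass biquad."""
    w0 = 2 * math.pi * cutoff_hz / sample_rate
    alpha = math.sin(w0) / (2 * q)
    cosw0 = math.cos(w0)
    if kind == "lowpass":
        b0, b1, b2 = (1 - cosw0) / 2, 1 - cosw0, (1 - cosw0) / 2
    else:  # highpass
        b0, b1, b2 = (1 + cosw0) / 2, -(1 + cosw0), (1 + cosw0) / 2
    a0, a1, a2 = 1 + alpha, -2 * cosw0, 1 - alpha
    return (b0 / a0, b1 / a0, b2 / a0, a1 / a0, a2 / a0)

def biquad_filter(samples, coeffs):
    """Run the standard direct-form difference equation over the samples."""
    b0, b1, b2, a1, a2 = coeffs
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

def reduce_room_tone(samples, sample_rate=48000):
    """Starting-point EQ from the text: low-pass 15KHz, high-pass 150Hz."""
    lp = biquad_coeffs("lowpass", 15000, sample_rate)
    hp = biquad_coeffs("highpass", 150, sample_rate)
    return biquad_filter(biquad_filter(samples, lp), hp)
```

Running a low-frequency rumble through `reduce_room_tone` attenuates it heavily, while a 1KHz tone in the voice range passes almost untouched, which is exactly the shape the two filters are meant to carve.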
A multiband compressor/limiter can also provide a more precise dynamic reduction in those frequency ranges. The most important tool for making these techniques work is your hearing. The final result should reflect what sounds right; the meters and frequency readouts are there to support what you should already be hearing.
Automatic Dialogue Replacement (ADR) is, despite the name, a very manual process: the cast overdubs their dialogue, re-recording their lines to match the original delivery and lip movements. This process is easily managed when working with trained actors or voice talent, and it yields a high-quality recording that can be adapted to each scene. The recording takes place in an acoustically treated room and is free of any reverb.
This is really a last-ditch attempt to rid yourself of a menacing reverb footprint and, depending on the application, might not even be an option. Audio for video can be more forgiving, since the engineer is building a sonic landscape and can get away with more as long as the end product is convincing. Masking can be done with subtle uses of background noise and music, used in conjunction with the reverb reduction techniques already outlined.
Music is less forgiving, because any supplementary sounds must contribute to the overall tonal and melodic context of the piece. Good editing practices remove unwanted noise by muting or cutting sections of tracks that are not active at the time, e.g. cutting the silence from a vocal track while the singer is between lines or verses.
Reverb: Sound Stage
Our brains depend on the auditory cues provided by a sound’s behavior within a space. These cues let us perceive the size of a room and the distance and direction of the sounds we hear. The field that studies how our brains interpret sound is called psychoacoustics. While the term implies a certain degree of complexity, a practical understanding and application of psychoacoustics is common practice when mixing audio.
Let’s cover some examples of how reverb can be used to create a richer and more meaningful environment.
Reverb effects in music are used to thicken sounds and to give them a sense of space or shared proximity. There is no limit to where reverb can be applied, though it is most popular on vocals, guitars and drums.
In film, reverb can reinforce a scene’s setting. For example, a scene that takes place in a cavernous space should hardly sound like it is unfolding in a padded room. Reverb is useful for giving scale to both on- and off-screen elements.
Reverb can be applied in mono, stereo and surround configurations. The ability to construct a convincing stereo image is one of the keys to propelling your mixes to higher levels of quality and realism. I use realism loosely here, since audio for video is a huge work of fiction; verisimilitude might be a more accurate term. The final audio soundtrack is far removed from the original source material, and for good reason; it would otherwise sound flat and uninspired. Film audio is not known for a grounded sense of realism, but rather for making a scene feel more believable, no matter how outlandish the plot.
Blag Ivanov has a B.A. in Recording Arts from California State University Chico. He is a freelance media and web professional.