There are many different things you can do to affect your audio; one of the most impactful and transformative is to process it by adding an effect. Adding signal processing is a place to be creative. It's a place to play, to experiment and to open up the possibilities of your audio. Signal processing comes in many different flavors, from phasers and flangers that sound out of this world, to delays and echoes that open up space and change the auditory perspective. We're going to take a look at the concepts of signal processing and the many different signal processors there are to choose from, along with their attributes and uses. What is audio processing anyway? Simply speaking, it's the intentional alteration of auditory signals, or sound.

After countless hours of careful acquisition and editing, you go to play your finished work on a TV and part of your audio is gone. What happened? Audio phase is one of the most misunderstood topics in the audio world. This may be because it's a complicated issue and the math behind it is something only a physics geek could love. Regardless, audio phase and, more importantly, phase cancellation are important to every video producer who cares about the sound on their projects. Phase cancellation can easily ruin the dialog or voice-over in your video. Have you ever been in a restaurant or store, listening to your favorite song, only to wonder where the vocals went? This is a classic example of phase cancellation. How does it happen? The simple answer is that someone, somewhere hooked things up wrong.

Let me explain. Acoustic sounds -- like your voice or an instrument -- compress the air around them. This creates invisible waves, or ripples, in the air. These compressed waves reach your ear, and the eardrum translates them into electrical signals for your brain to decode. Now, let's substitute a microphone for your ear. The mic's diaphragm moves along with the air vibrations and converts that movement into a small electrical signal for you to record. A single signal going directly into a camera or mixer is no big deal, but recording in stereo is potentially problematic. Here's what happens: balanced mic and line cables carry two versions of your audio signal -- one in correct phase and the other 180 degrees out of phase. These are sometimes known as the positive and the negative. At the end of the signal chain, the two versions are combined into one accurate, noise-free channel of audio. Now add a second channel recording the same sound source, but through a homemade cable in which the positive and negative were accidentally swapped. If you combine these two channels and one is out of phase with the other, one will cancel out the other. Written as a math equation -- don't worry, the math is pretty easy -- it's 1 plus negative 1 equals zero.

This was more of an issue when low-budget or guerrilla cinematographers commonly made their own XLR cables, so it's less common now. The out-of-phase problem can still arise, however, when an XLR cable is connected via an adapter to a camera with a 1/8-inch mini stereo connector: the out-of-phase signals are separated into distinct channels and never put back in phase in the camera. Here's the big problem: if you're listening to phase-cancelled audio on stereo speakers, you may not notice. You can still hear everything -- although it may not be as sharp and clear as normal. Within most audio editing software, you will find the ability to flip phase.
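To see that "1 plus negative 1 equals zero" idea in action, here is a minimal sketch in Python with NumPy (my own illustration, assuming those tools are available) that sums a signal with a polarity-inverted copy of itself:

```python
import numpy as np

# One second of a 440 Hz sine wave at a 48 kHz sample rate.
sample_rate = 48000
t = np.arange(sample_rate) / sample_rate
in_phase = np.sin(2 * np.pi * 440 * t)

# "Flipping the phase" (polarity inversion) is just multiplying by -1;
# for a sine wave, that is the same as shifting it 180 degrees.
out_of_phase = -in_phase

# Combine the two channels: 1 plus negative 1 equals zero, sample by sample.
mixed = in_phase + out_of_phase
print(np.max(np.abs(mixed)))  # 0.0 -- the signal cancels completely
```

In a real recording the cancellation is rarely this total, because the two channels are never perfectly identical, but the principle is exactly the same.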
The easiest way to determine whether you are out of phase is to flip the phase of the sound and listen for a positive change. If there isn't one -- if flipping takes away the presence of the signal -- then it was in the correct phase to begin with and you have some other issue at hand.

Phasing -- an effect created with a phaser, not to be confused with the concept of phase -- works on the principle of passing a signal through evenly spaced notch filters. It differs from flanging in that it is not a delay-based effect. A phaser sounds very distinct and is sometimes used to make something sound like it's from space or to sound psychedelic; phasers were used often in late-'60s and '70s psychedelic music. Now that I've explained what a phaser is, let's take a listen to one.

Another effect is the flanger. A flanger works with delay times that are constantly varied. This has the effect of combing through the frequencies, creating an uneven alignment of a sound wave's peaks and valleys and altering the frequency response. Thus, not only does the delay timing change, but the effect also provides a degree of equalization. So let's take a listen to a flanger.

The chorusing effect is achieved when two identical but slightly out-of-tune signals are combined, with a delay time below 15ms. This is an established method of adding depth and a sense of movement by blending a slightly detuned and delayed copy with the source material. Let's take a listen to chorus.

Below a delay time of about 35ms, a delayed copy blends into the original as a form of doubling; any more than that and the repeats are interpreted as discrete echoes. Delay in itself is great at creating a sense of space and adding size and character to a performance. As with all effects, apply good taste so that your mixes do not become muddy and devoid of definition. To start hearing discrete delays that do not become a form of doubling, chorusing or flanging, delay the signal by more than 35ms. When bussed out, you can create a nice blend of wet and dry signal, and this is a great way of giving vocals and guitars a greater sense of space. I am a fan of bussing signals out to stereo delays and creating a wider stereo image that adds extra flavor and character to the original signal. This method can be used with or without reverb; remember to use your ears and good taste.

By contrast, echoes are delays with a longer delay time. Delay times exceed 35 to 40ms, and some of the signal is sent through again; this is referred to as feedback. Echoes occur when a sound is reflected directly back to the source's position. Now, let's listen to a delay, then an echo, to hear the differences.
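Before moving on to reverb, here is a minimal sketch of these delay and feedback ideas, again in Python with NumPy. The function and parameter names are my own for illustration, not those of any particular plug-in:

```python
import numpy as np

def feedback_delay(dry, sample_rate, delay_ms=350.0, feedback=0.4, mix=0.35):
    """Single-tap delay line with feedback."""
    delay_samples = int(sample_rate * delay_ms / 1000.0)
    buf = np.zeros(delay_samples)  # circular delay line
    wet = np.zeros(len(dry))
    pos = 0
    for i, x in enumerate(dry):
        delayed = buf[pos]          # the signal from delay_ms ago
        wet[i] = delayed
        # Feedback: send some of the delayed signal through again,
        # producing a decaying train of repeats.
        buf[pos] = x + feedback * delayed
        pos = (pos + 1) % delay_samples
    # Blend wet and dry so the original keeps its presence.
    return (1.0 - mix) * dry + mix * wet
```

With delay_ms above roughly 35 to 40, each feedback pass is heard as a discrete echo; drop it below 35 and the repeats fuse with the source into doubling, and slowly modulating such short delays is the territory of flanging and chorusing.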
Reverb is the remainder of a sound once it has dispersed within an acoustic space. Picture yourself standing in a large room. Now clap your hands. Imagine the sound as it travels around the room and interacts with each surface. Reverb is the gasp of sound that remains in the air after the sound source has ceased. It is the result of the sound's reflection off surfaces, diffusion around surfaces and absorption by surfaces. Anyone who has mentioned a room's ambience is referring to its reverb characteristics. The granddaddy of all audio effects, reverberation adds a sense of space to your sounds. Reverb simulates an audio environment using small bits of computer programming called "algorithms." Reverbs are based on three major types of algorithms: plate, room and hall.

The plate reverb is named for the analog electro-mechanical device it is based on. Back in the old days, springs suspended a large plate of thin metal. A transducer attached to the plate pumped in audio signals, making the plate reverberate. A pickup device captured the resulting reverb, and that signal was mixed into recordings to simulate an acoustic space. Plate reverbs (real or virtual) are renowned for their flattering sound on vocals, percussion and drums. Room and hall algorithms simulate the spaces they're named for. Room sounds are more confined, representing everything from a shower stall to a classroom. Hall reverbs emulate the acoustic properties of performance venues, from a small room all the way up to Carnegie Hall.

Reverb plug-ins have several adjustable settings. First is the room size or decay time, which determines how long the reverberated sound takes to die out. The second setting, called "brightness" or "damping," determines how much high-frequency content your reverb allows; the bigger the room, the less high-frequency content. Another common setting is pre-delay, which determines how long the original sound takes to hit the virtual walls and will radically change the character of your space. Lastly, the diffusion adjustment controls whether your reverb is smooth or full of echoes. Take a listen to some reverb, and how it affects sound. (A toy sketch at the end of this section shows how these settings interact.)

The evolution of delay from analog to digital has truly allowed it to come into its own, embracing most of its early qualities while moving beyond the limitations of tape-based analog circuits and early digital technology. Early delays progressed from large echo chambers, to tape systems with multiple playback heads, to analog circuits that suffered from limited memory and signal degradation, and on to early digital delays, which in some ways had similar problems due to the high cost of memory and lower sample rates. The digital revolution of the '80s and '90s quickly made those issues a thing of the past and paved the way for today's digital delays to do whatever we ask of them.

Signal processing is not an all-or-nothing game. There's a middle ground, and it's a best practice to use it. Here's the concept: when you process a signal, the processed signal is called wet; by contrast, the unaffected signal is called dry. To retain the presence of the original signal, you mix the two together, which gives you control over how much of the wet signal you hear. Now, let's take a listen to an example of wet and dry.

Adding any processing to your audio can expand its potential, but it can easily be overdone. Be deliberate when applying post processing; sometimes a little goes a long way. But feel free to experiment -- just pull it back when you're done playing around. Enjoy the creative process, and then bring it back for the betterment of the story. Hopefully, the changes you make will support the on-screen action and take it to another level.
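To close, here is the toy sketch promised above: a single damped feedback comb filter in Python with NumPy. This is my own simplified illustration of the decay, damping, pre-delay and wet/dry ideas, not the algorithm any particular plug-in uses; real room, hall and plate algorithms combine many such filters with all-pass stages, which is where the diffusion control comes in.

```python
import numpy as np

def toy_reverb(dry, sample_rate, decay_time=2.0, damping=0.3,
               pre_delay_ms=20.0, mix=0.25):
    """One damped feedback comb filter -- a basic building block of the
    comb/all-pass networks that reverb algorithms are built from."""
    # Pre-delay: the time the sound takes to reach the virtual walls.
    pre_delay = int(sample_rate * pre_delay_ms / 1000.0)
    delayed_in = np.concatenate([np.zeros(pre_delay), dry])[:len(dry)]
    loop = int(sample_rate * 0.05)  # a 50 ms recirculating loop
    # Pick the feedback gain so the loop fades by 60 dB over decay_time
    # seconds (the "room size"/decay setting).
    feedback = 10.0 ** (-3.0 * (loop / sample_rate) / decay_time)
    buf = np.zeros(loop)
    wet = np.zeros(len(dry))
    lowpass = 0.0
    pos = 0
    for i, x in enumerate(delayed_in):
        out = buf[pos]
        # Damping: a one-pole low-pass inside the loop absorbs highs on
        # each pass, as soft surfaces do in a real room.
        lowpass = (1.0 - damping) * out + damping * lowpass
        buf[pos] = x + feedback * lowpass
        pos = (pos + 1) % loop
        wet[i] = out
    # Wet/dry blend: keep the original's presence, add the space.
    return (1.0 - mix) * dry + mix * wet
```

The mix argument is the wet/dry blend described above: 0.0 is fully dry, 1.0 is fully wet, and the useful middle ground is usually closer to the dry end.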