That title alone may be enough to scare off some readers. If you’ve gotten this far, please stay with me. I
know the technical side of video production isn’t fun for most. It can actually seem like a nuisance when
you think about it.
But understanding some of the basic mechanics and technology behind this wonderful passion can only
help increase the quality of your work. For starters, the knowledge may help you operate your camera better.
More to the point, it will come in handy during the editing process. The ability to "dial in" your video
signal during editing is extremely important for maintaining a consistent look throughout your video.
Furthermore, you can often make up (at least in part) for equipment problems or poor shooting in the field.
Basically, calibration equipment and know-how give the editor control.
I’ll try to keep the info comprehensible for even the most non-technical among us. As for the more
technically inclined, bear with me; you could probably use a little refresher course anyway. If you feel it’s
getting hairy, hold on. I promise the trip will be worth it.
CCDs
The signal is the most important aspect of video production. Without it you have nothing. Something has to
carry the information from outside of the lens to the TV screen. But how does it work?
Modern video cameras and camcorders use a charge-coupled device to create the video signal. Better known
as a CCD, this image sensor is a solid-state semiconductor that converts incoming light into video
information.
When light strikes a CCD, it’s actually hitting a layer of light-sensitive silicon. This layer separates the
incoming light into a precise pattern of pixels; the higher the number of pixels on the chip, the better the
resolution of the final video image. Some consumer camcorders boast CCDs with nearly 500,000 pixels!
Each pixel is responsible for reproducing a small part of the entire video picture. From the CCD, the
electrical charge generated by the light moves into a storage layer on the chip. Finally, this stored information
transfers frame by frame, line by line to the tape and/or to the monitor as the video signal.
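If it helps to picture this, here’s a toy sketch in Python of that light-to-charge-to-signal handoff. The numbers and the SENSITIVITY factor are made up for illustration; a real CCD shifts analog charge, not digits.

    # A toy model of CCD readout: light becomes charge, and the stored
    # charge leaves the chip one line at a time as the signal.
    light = [
        [0.1, 0.8, 0.9, 0.2],   # incoming light intensity per pixel
        [0.3, 0.7, 0.6, 0.1],
        [0.0, 0.4, 0.5, 0.0],
        [0.2, 0.9, 0.8, 0.3],
    ]

    SENSITIVITY = 255  # hypothetical conversion factor, light -> charge

    # Step 1: each pixel converts its incoming light into an electrical charge.
    storage = [[intensity * SENSITIVITY for intensity in row] for row in light]

    # Step 2: the stored information transfers out line by line.
    for line_number, line in enumerate(storage, start=1):
        print(f"line {line_number}:", line)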
Scanning A View
To view the electronically recorded image, a monitor works in reverse of the above process. Instead of
turning light into electric signals, a monitor turns electric signals into light.
Here’s how it works. An electron beam scans the monitor’s tube. This beam flashes on and off in varying
degrees of intensity, reproducing the multitude of pixels captured in the recording process. A phosphor
material coating the back of the screen glows to form an image when the beam makes a pass.
The way the scanning works is quite curious. We know that within the current video system there are 30
frames of video information per second, and each of these frames consists of 525 horizontal lines of data. The
beam in the monitor starts scanning the tube with the first line of information. When line one is complete, the
beam shuts off, returns to the starting side and scanning continues with line three, not two. Then it shuts off
again and returns to scan line five. This pattern continues until the beam scans all of the odd lines. This
half of the picture is called a field.
When it completes this part of the cycle, the scanning starts again at line two, and proceeds to cover all the
even lines. The beam sweeps across the screen a total of 525 times every 1/30th of a second, far faster than
the eye can follow.
Why not have the beam simply scan each line in succession instead of skipping every other line? Because
the phosphor surface of a monitor glows for only a short time after the electron beam hits it. If the beam
scanned continually from top to bottom, not skipping anything, the top of the screen would darken by the
time the beam returned. To keep part of the picture from going black, the scan lines interlace, assuring
constant brightness across the entire image.
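Here’s a small Python sketch of that scan order, built from the numbers above (525 lines per frame, 30 frames per second). It also works out how many line scans happen each second.

    LINES_PER_FRAME = 525
    FRAMES_PER_SECOND = 30

    # Field 1: the beam scans the odd lines (1, 3, 5, ...).
    odd_field = list(range(1, LINES_PER_FRAME + 1, 2))
    # Field 2: it returns to the top and fills in the even lines.
    even_field = list(range(2, LINES_PER_FRAME + 1, 2))

    scan_order = odd_field + even_field
    assert len(scan_order) == LINES_PER_FRAME  # 525 passes per 1/30th second

    print(scan_order[:5], "...", scan_order[-3:])
    # Derived from the article's numbers: 525 x 30 = 15,750 lines per second.
    print("lines per second:", LINES_PER_FRAME * FRAMES_PER_SECOND)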
Sync
Sync is the part of the video signal that makes sure everything happens when it’s supposed to. Without sync,
the different parts of the video signal don’t know when to start or finish relaying their information to the
screen. You video editors out there know what happens when a tape’s sync gets corrupted–video chaos.
Every camera has part of its circuitry devoted to generating sync pulses. These pulses (called internal
sync) become part of the signal that outputs to tape or directly to the monitor.
You can break this sync information into two categories: horizontal (which controls the timing of the lines
in an image) and vertical (which keeps the picture framed).
While many videomakers disregard sync until post-production, it may become a problem if you’re
working with a multiple-camera setup. For proper on-site switching in this situation, all cameras must scan at
the same synchronization rate. Furthermore, they must start each frame at the exact same instant.
There are two ways to do this. You can use an external sync generator to time all the cameras, or you can
use the signal of one camera to regulate the signal of a second. In this process, known as genlock, the second
camera recognizes sync pulses from the first camera and creates a synchronous signal to match.
Waveform Monitors
Looking more closely at a video signal requires the use of monitoring equipment, specifically the waveform
monitor.
A waveform monitor screen shows an electronic display ranging from 100 units at the top down to -40 at
the bottom. This incremented scale measures luminance, or signal brightness, in IREs. (An IRE is a
unit developed by and named for the Institute of Radio Engineers.)
Measuring the highest and lowest luminance points is the most basic use of the waveform monitor. These
points are known as reference white and reference black. Reference white is the brightest point of a video
signal; reference black is the color you see between commercials on the TV screen–not completely devoid of
light, but dark enough to appear black to the eye.
One of the most common uses of a waveform monitor is checking white balance. White balancing lets the
camera operator adjust the relative intensity of the red, green and blue channels. This allows the camera to
produce an accurate white signal under predetermined lighting conditions; it tells the camera what white
should look like under existing lighting. Once the camera "knows" this information, it is able to properly
reproduce all other colors.
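The arithmetic behind white balancing is simple enough to sketch in Python. The RGB readings below are invented, and a real camera does this in its processing circuitry rather than in code; the idea is just to sample what the camera sees on a white card, then boost each channel until all three come out equal.

    # Sketch of the white balance idea: compute a gain for each channel
    # so that a white reference reads the same in red, green and blue.
    # (Assumes no channel reads zero.)
    def white_balance_gains(r, g, b):
        """Return per-channel gains that equalize a white reference."""
        reference = max(r, g, b)
        return reference / r, reference / g, reference / b

    # Under warm indoor light, a white card might read strong in red:
    gains = white_balance_gains(240.0, 200.0, 160.0)
    print("gains (R, G, B):", gains)

    # Applying those gains to the same patch yields equal channels:
    balanced = [gain * c for gain, c in zip(gains, (240.0, 200.0, 160.0))]
    print("balanced white:", balanced)  # [240.0, 240.0, 240.0]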
Another important element in dealing with the brightness of an image is the pedestal. Pedestal, or
reference black, controls the black levels of a video signal. Every image in the signal results from variations
in shades of gray, and pedestal controls the deepest blacks that the signal will reproduce. Reference black is
set at 7.5 IRE; the area below this reading is reserved for other parts of the signal, which control the scanning process.
Reference black also controls the contrast of the picture. If you set this level too low, the dark areas of the
picture will be too dark, producing an image with stark contrast. When reference black is set too high, the
contrast between the dark and bright areas will be insufficient. The resulting image looks dull and washed
out.
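To make the pedestal idea concrete, here’s a simple Python sketch. It assumes a straight-line mapping from gray level to IRE (a simplification of the real signal), with reference black at 7.5 IRE and reference white at 100.

    # Map a gray level (0.0 = darkest, 1.0 = brightest) onto the IRE
    # scale, with the pedestal setting the floor of the range.
    def gray_to_ire(gray, pedestal=7.5, white=100.0):
        return pedestal + gray * (white - pedestal)

    for gray in (0.0, 0.25, 0.5, 1.0):
        print(f"gray {gray:4.2f} -> {gray_to_ire(gray):6.2f} IRE")

    # Raising the pedestal shrinks the span between black and white,
    # which is why a too-high setting makes the picture look washed out.
    print("mid-gray with pedestal at 20:", gray_to_ire(0.5, pedestal=20.0))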
Color Signals
Luminance and chrominance: if you’ve ever been in a professional video post-production studio, you’ve
heard those words floating around. What the tech guys are talking about are the principal components of the
color television signal.
Luminance refers to the black and white or "brightness" information present in a video signal.
Chrominance provides the color info and is made up of two further components, hue and saturation.
Hue describes the color itself, while saturation details the amount or intensity of the color. For example,
a deep royal blue and a light baby blue have the same hue: blue. They differ only in their saturation of
that color.
Video cameras create a color image by working with the additive primary colors of light–red, green, and
blue. After the light enters a camera, it breaks down into these color components in one of three ways.
A prism block is the most sophisticated and expensive way for a camera to generate a color signal. To
simplify the process: light captured by the lens hits a prism, which splits it into red, green and blue (RGB).
Each of these colors then goes to its own separate CCD. A color encoder takes the pure RGB signals and
recombines them along with the luminance information, making a full-color picture possible. Since each
color goes to its own CCD, prism block 3-chip cameras produce a very high quality video image.
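The encoder’s recipe for brightness is worth seeing in miniature. In the NTSC system, luminance is a weighted sum of the three channels, with green counting most because our eyes are most sensitive to it. Here’s a quick Python sketch:

    # Standard NTSC weighting for deriving luminance (Y) from RGB.
    def luminance(r, g, b):
        """Combine RGB into the luminance (brightness) component."""
        return 0.299 * r + 0.587 * g + 0.114 * b

    print(luminance(1.0, 1.0, 1.0))  # pure white -> 1.0
    print(luminance(0.0, 1.0, 0.0))  # pure green -> 0.587
    print(luminance(0.0, 0.0, 1.0))  # pure blue  -> 0.114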
A similar method of splitting light uses dichroic mirrors, which reflect some colors and allow others to
pass through. The process is similar to the prism block, only mirrors take the place of the prism. Though it
also incorporates three image sensors, a dichroic system produces a less sharp image than a prism block
does. This is mainly due to a loss of light in the mirrors themselves.
Stripe filters capture color information on a single CCD. This method places thin stripes of red, green and
blue filter material in front of the CCD. Light enters the camera, hits the stripes, and divides into its
component colors. The single chip in this system produces all three channels of chrominance in
addition to the luminance information. While this is easily the least sophisticated of the color-generating
systems, its lower cost, weight and complexity make the single-chip unit the most popular choice in the
consumer camera market.
Vectors and Color
With all this color information floating around, it’s important to keep things synchronized. That’s where color
burst comes in.
Color burst is a special control pulse. This sync information is what ensures that all three color signals start
at the right time at the beginning of each line of video information. You can see the pulse on the video
waveform monitor. By checking the waveform reading, you can discern if color is present in your video
signal. You can also see how healthy your video’s color information is.
Checking the color burst signal will alert you that color is present. What it doesn’t tell you is which colors
are present. For that, you need a vectorscope.
The vectorscope screen identifies the three primary colors discussed earlier (red, green and blue) and their
complements (cyan, magenta and yellow). By reading the scope, you can easily determine which colors are
present in the signal, and in what amounts. The rotational position of the bright spots on the screen indicates
the shade or tint of the colors: their hues.
Vectorscopes are vital when monitoring color signals in a multi-camera setup. They help match the color
quality of each camera so their output is similar. A number of factors, including the length of camera cables,
can alter the hues of each signal.
In a single-camera situation, the vectorscope assists with proper white balancing. By adjusting the
appropriate controls on the camera, a video operator can bring all the colors into balance. As you make the
adjustments, the bright dots on the screen rotate until each moves to its proper position. The distance between
the center of the display and each of the bright dots represents the saturation of that color. The farther from
the center, the greater the saturation. If dots sit close to the center, color saturation is low.
On the face of the vectorscope lies a series of squares. These boxes indicate the proper position of the
bright dots in an ideal set-up. To reproduce proper colors in your video signal, the bright dots on the screen
should fall into the center of these boxes.
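If you’re curious where those dots come from, here’s a rough Python sketch. It plots colors the way a vectorscope does, using the two color-difference signals derived from RGB: the angle of each point gives the hue, and its distance from center gives the saturation. (This version skips the scaling factors and burst-referenced phase a real scope applies, so the angles are approximate.)

    import math

    def vectorscope_point(r, g, b):
        y = 0.299 * r + 0.587 * g + 0.114 * b  # luminance
        b_y, r_y = b - y, r - y                # color-difference signals
        hue = math.degrees(math.atan2(r_y, b_y)) % 360
        saturation = math.hypot(b_y, r_y)
        return hue, saturation

    # Fully saturated primaries land far from center; gray sits at it.
    for name, rgb in [("red", (1, 0, 0)), ("green", (0, 1, 0)),
                      ("blue", (0, 0, 1)), ("gray", (0.5, 0.5, 0.5))]:
        hue, sat = vectorscope_point(*rgb)
        print(f"{name:5s} hue {hue:6.1f} deg, saturation {sat:.2f}")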
Phew! You made it through relatively unscathed. Now that wasn’t too bad, was it? The study of video
signal measurement and manipulation is not a page-turning topic by any standard. It is, however, useful
knowledge that contributes to a greater understanding of what happens when you flick the power switch.
And whether you use this knowledge on location or in the editing suite, it can only help you to produce better
video.
Which is, of course, why we’re here.