Video is a perfect example – it uses processes developed for sound recording, motion picture and still photography, and television, and combines them to form a new technology. But as with any technology, there’s no magic involved in how a camcorder “sees” light, just engineering and science.
The Nerve Center Is the CCD Sensor
Ever since film photography was invented in the 19th century, a camera’s lens has focused light onto the light-sensitive silver halide salts of photographic film, whether the end result is still or motion pictures. With video, the camcorder’s lens focuses the image onto an electronic sensor called the CCD, short for charge-coupled device. The CCD contains hundreds of thousands of light-sensitive diodes, a.k.a. photosites, each of which measures the intensity of the light it receives and converts that measurement into an electrical charge: the stronger the light hitting a photosite, the stronger the charge.
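For the programming-minded, that measurement step can be pictured as a simple mapping. This is a toy sketch, not any real sensor’s firmware: the function name, the normalized intensity scale and the 8-bit depth are all illustrative assumptions.

```python
# Toy model: a photosite's light reading accumulates as charge,
# which is then quantized into a digital brightness value.

def photosite_to_digital(light_intensity, full_well=1.0, bit_depth=8):
    """Map a normalized light intensity (0.0-1.0) to a digital value."""
    charge = min(light_intensity, full_well)   # a photosite saturates at its "full well"
    levels = 2 ** bit_depth - 1                # 255 levels for 8 bits
    return round(charge / full_well * levels)

print(photosite_to_digital(0.0))   # no light -> 0
print(photosite_to_digital(2.0))   # overexposed -> clips at 255
```

The clipping in the first line is why blown-out highlights lose all detail: once the charge hits the maximum, every brighter intensity records the same value.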
From Grayscale to Living Color
The readings from the photosites ultimately become the pixels of a video image, but there’s an intermediate step, because CCDs don’t record color. They only detect and record the intensity of the light hitting them, capturing a grayscale image. In a single-sensor camcorder, a color filter array sits over the photosites, so each one records its intensity through an individual red, green or blue filter; the camcorder then reconstructs the full-color image from that mosaic of primary-color readings.
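Here’s a simplified illustration of that reconstruction, often called demosaicing. The RGGB filter layout and the nearest-neighbor fill strategy are assumptions for demonstration only; real camcorders use more sophisticated interpolation.

```python
# Each photosite records one grayscale value through a red, green or
# blue filter; full color is rebuilt by borrowing the nearest sample
# of each missing primary.

RGGB = [["R", "G"],
        ["G", "B"]]

def filter_color(row, col):
    """Which primary a given photosite records under an RGGB mosaic."""
    return RGGB[row % 2][col % 2]

def demosaic_nearest(mosaic):
    """Rebuild (R, G, B) pixels from a grayscale mosaic of raw readings."""
    h, w = len(mosaic), len(mosaic[0])
    image = []
    for r in range(h):
        rowpix = []
        for c in range(w):
            pixel = {}
            for color in ("R", "G", "B"):
                # Closest photosite (by grid distance) that saw this primary.
                _, value = min(
                    ((abs(r - rr) + abs(c - cc), mosaic[rr][cc])
                     for rr in range(h) for cc in range(w)
                     if filter_color(rr, cc) == color),
                    key=lambda t: t[0],
                )
                pixel[color] = value
            rowpix.append((pixel["R"], pixel["G"], pixel["B"]))
        image.append(rowpix)
    return image

raw = [[200, 120],
       [130,  40]]
print(demosaic_nearest(raw)[0][0])  # -> (200, 120, 40)
```

Notice that the top-left pixel keeps its own red reading but borrows green and blue from neighboring photosites, which is why single-sensor cameras can show slight color softness on fine detail.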
In the case of higher-end camcorders with multiple CCDs, each CCD is dedicated to capturing a single element of the color spectrum. They receive light via a prism inside the camera. The prism takes the light entering the camera’s lens, splits it into red, green and blue, and sends each color to its matching CCD, where that color’s intensity is encoded. The need for three individual CCDs, one for each primary color of light, is one of the reasons why three-CCD pro cameras are more expensive than consumer-grade CMOS cameras. CMOS (complementary metal-oxide semiconductor) chips can integrate the camcorder’s image sensor and its processing circuitry on a single piece of silicon.
Regardless of whether you have a small CMOS camcorder with a single image sensor and relatively unsophisticated CPU, or a 3CCD pro unit with a separate and complex high-end processor, these chips do a whole lot of work that used to be done by many mechanical parts and a lot of chemicals in film camera days. But first, these chips need to interact with the camcorder’s lens.
Through A Glass, But Not Too Darkly
Of course, not all aspects of how a camcorder sees light involve electronics; the physics of the camcorder’s lens also plays a significant role. More than any other aspect of videography, how a lens operates is a carryover from the first days of still and movie photography.
The size of the iris varies inversely with the amount of light it’s receiving: lots of light narrows the camera’s iris, while less light opens it up. In consumer camcorders, this is all handled automatically by light-sensing circuitry inside the camcorder. Better camcorders also allow these settings to be adjusted manually.
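The automatic version of this is, at heart, a feedback loop. The following is a hypothetical sketch of that idea; the function, constants and linear light model are invented for illustration and don’t come from any real camcorder’s circuitry.

```python
# Toy auto-iris: nudge the aperture until the measured brightness
# settles near a target exposure level.

def auto_iris(scene_light, aperture=0.5, target=0.5, gain=0.5, steps=50):
    """Return the aperture (0=closed, 1=fully open) the loop settles on."""
    for _ in range(steps):
        measured = scene_light * aperture      # light reaching the sensor
        error = target - measured              # too dark (+) or too bright (-)
        aperture = min(1.0, max(0.0, aperture + gain * error))
    return aperture

# Bright scene: the iris closes down. Dim scene: it opens up.
print(round(auto_iris(scene_light=2.0), 2))   # -> 0.25
print(round(auto_iris(scene_light=0.6), 2))   # -> 0.83
```

The clamp to the 0–1 range mirrors the physical limit mentioned above: once the iris is wide open, the loop can do nothing more for a dark scene.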
Of course, if the image is too dark to begin with, there simply won’t be enough light entering the lens to record a decent image, regardless of how wide-open the iris is. This can result in electronic noise being visible in a recorded image. Hence the need for a fair amount of sunlight or, when inside, video lights, for a smooth, evenly-lit image.
Similarly, since iris settings also affect depth of field, videographers and their predecessors, the original movie cinematographers, have long manipulated the size of the camera’s iris to control how much of the shot is in focus. For example, beginning most famously with 1941’s Citizen Kane, cinematographers would often combine plenty of light with a dramatically closed-down iris to produce deep-focus images.
The Shutter Also Impacts How Light Is Recorded
If you’ve ever aimed a CCD camera directly into a bright video light, you’ve seen what happens. Unlike with a film camera, where you’ll get a halo effect, the video you record will often show glaring vertical streaks, a CCD artifact known as smear. (However, any streaking you’ll see with a CCD-based camera will pale in comparison to the severe lags and streaks that used to be commonplace with the tube-based cameras of yesteryear.)
On the other hand, most CMOS cameras use a rolling electronic shutter to capture an image sequentially in thin rows from top to bottom in the course of a single frame. The rolling shutter can generate a different type of visual distortion. Panning too quickly with a CMOS camera will often result in skew, which is a distortion of vertical lines in an image.
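A toy simulation makes the skew easy to picture. Everything here is illustrative: a single vertical line rendered as text, with each row shifted to mimic the later readout time of lower rows during a pan.

```python
# Rolling-shutter sketch: each row is read out slightly later, so a
# camera panning sideways shifts lower rows further, slanting a
# vertical line into a diagonal.

def rolling_shutter(scene_column, height=5, width=9, pan_per_row=1):
    """Render a vertical line as a rolling shutter would during a pan."""
    frame = []
    for row in range(height):
        # By the time this row is read, the pan has moved the scene over.
        shifted = (scene_column + row * pan_per_row) % width
        frame.append("".join("#" if c == shifted else "." for c in range(width)))
    return frame

for line in rolling_shutter(scene_column=2):
    print(line)
# The line that should be vertical comes out slanted:
# ..#......
# ...#.....
# ....#....
# .....#...
# ......#..
```

Setting `pan_per_row` to 0 models a global shutter, which reads every row at the same instant and keeps the line straight.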
So What Happens on the Other End?
These days, camcorders use a variety of storage media. But whether it’s DV or HDV tape, a hard drive or even flash media, they’re all recording a digital version of the intensity of the light entering the lens.
At the other end of the process, your computer or digital TV takes that information and reverses everything described above, brightening specific areas of the image it’s playing back based on the intensity values that were recorded.
Fortunately, all of this is far easier in practice than in description, allowing us to focus on our overall productions rather than the tiny technological details. A computer involves myriad processes occurring microsecond by microsecond, which we take for granted (at least, until it crashes). Similarly, the complex process of translating light into digital images happens at near-instantaneous speeds inside a device that, at its smallest, fits in the palm of your hand.
Perhaps what I said at the beginning was incorrect, and maybe it is magic. Or at the very least, certainly indistinguishable from it.
Edward B. Driscoll Jr. is a freelance journalist covering home theater and the media.