Imaging sensors, the heart of any camera, have evolved quite a bit in recent years.
With CMOS becoming a major force, it’s worth taking a fresh look at the technology powering the great video we shoot every day.
The most interesting question is this: Why does a smaller HDV or AVCHD camera, with, say, a 1/6″ chip packing over a million pixels, behave differently, for better and worse, than a camera with a 2/3″ chip that is only standard-definition, carrying about 320,000 pixels?
In a small, high-definition-resolution chip, the pixels are smaller and packed more closely together to fit onto the chip, leading to a lower signal-to-noise ratio than a standard-definition chip would have achieved. The result is that the bigger, standard-definition-resolution chip is more sensitive to light and has less visible noise in the image it reproduces.
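To put that trade-off in rough numbers: video SNR is usually quoted in decibels, as 20 × log10(signal / noise). The short Python sketch below uses invented pixel readings purely to illustrate the principle that a larger pixel, collecting more light against similar noise, scores higher; none of the values are measurements from a real sensor.

```python
import math

# SNR in decibels: 20 * log10(signal / noise). The readings below are
# made up for illustration, not measured from any real camera chip.
def snr_db(signal, noise):
    return 20 * math.log10(signal / noise)

# A bigger pixel collects more light (signal) for similar read noise,
# so its ratio, and therefore its dB figure, comes out higher.
print(round(snr_db(1000, 10), 1))  # big pixel: 40.0 dB
print(round(snr_db(250, 10), 1))   # small pixel, less signal: 28.0 dB
```

The same formula appears again in the SNR sidebar entry: every doubling of signal for the same noise adds about 6 dB.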
Not all sensor sizes are native to the final aspect ratio they produce. The Canon XL H1 has three 1440×1080-pixel sensors, the Sony HVR-Z1 has three 960×1080-pixel native sensors and the Panasonic AG-HVX200 has three 960×540-pixel sensors, yet all three are capable of reproducing 1080i (1920×1080-pixel) video. Each camcorder achieves this through a combination of technologies and algorithms, including pixel shifting and pixel interpolation.
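The interpolation half of that idea can be sketched in a few lines of Python. This is plain linear interpolation, stretching one 960-sample scanline to 1920 samples; it illustrates only the principle, not any manufacturer’s actual pixel-shift or upscaling algorithm.

```python
def upscale_line(line, out_width):
    """Linearly interpolate a list of samples up to out_width samples."""
    in_width = len(line)
    out = []
    for x in range(out_width):
        # map this output position back into the input line
        src = x * (in_width - 1) / (out_width - 1)
        i = int(src)
        frac = src - i
        nxt = line[min(i + 1, in_width - 1)]
        # blend the two nearest input samples
        out.append(line[i] * (1 - frac) + nxt * frac)
    return out

scanline = [float(v) for v in range(960)]   # stand-in luma samples
hd_line = upscale_line(scanline, 1920)
print(len(hd_line))              # 1920
print(hd_line[0], hd_line[-1])   # endpoints preserved: 0.0 959.0
```

Real camcorders combine this kind of resampling with pixel shifting across the three chips to recover extra detail, rather than simply stretching one channel.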
The Importance of the Lens
The lens plays an important part in how the sensor performs; a lens matched to the sensor’s size and resolution helps keep the image intact. The resolution of a lens refers to its quality: how finely it can render detail and how cleanly light passes through it. Smaller image sensors need a higher-resolution lens with a wider field of view, because the image must be squeezed onto a smaller area (the image sensor). Smaller chips also allow for smaller lenses, which are more affordable and keep the cost of the camcorder down.
With a bigger sensor, the camera needs a larger lens to cover it at the same pixel resolution, which can add to the bottom line of a purchase or rental, even if the lens resolution doesn’t need to be as high as a small-sensor camera’s. Note, too, that a telephoto lens actually has to be longer on these cameras for the same field of view; as stated above, a smaller-sensor camera needs a wider lens to achieve a similar field of view.
One Chip Versus Three
The number of chips in a camera is another factor when it comes to overall image quality. With single-sensor systems, color and sensitivity to light are handled by one chip versus the tasks being distributed over three chips in other systems. Three-image-sensor systems have the advantage of reproducing color more accurately.
Many see the one-chip system as being inferior to three-chip camcorders, mostly due to the single chip’s pixels being able to “see” only one color (red, green or blue) at a time. Color filters are placed over the single chip in a mosaic pattern, known as a Color Filter Array; the most common is the Bayer Pattern.
To reconstruct a full-color RGB image from a Bayer Pattern sensor, complex interpolation (demosaicing) algorithms are needed to avoid glaring color artifacts on edges. As processing power advances, these adaptive algorithms are increasingly practical inside single-chip cameras.
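The simplest form of that interpolation can be sketched in Python. This is plain neighbor-averaging (bilinear-style) demosaicing on an RGGB Bayer mosaic, shown only to make the idea concrete; shipping cameras use far more sophisticated adaptive algorithms precisely because this naive approach produces color fringing on edges.

```python
# Each photosite records only one color; the missing two are estimated
# by averaging nearby photosites of that color. Illustration only.

def bayer_color(y, x):
    """Color recorded at (y, x) in an RGGB mosaic."""
    if y % 2 == 0:
        return 'R' if x % 2 == 0 else 'G'
    return 'G' if x % 2 == 0 else 'B'

def demosaic(mosaic):
    h, w = len(mosaic), len(mosaic[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            rgb = {}
            for c in 'RGB':
                # average every sample of color c in the 3x3 window
                vals = [mosaic[j][i]
                        for j in range(max(0, y - 1), min(h, y + 2))
                        for i in range(max(0, x - 1), min(w, x + 2))
                        if bayer_color(j, i) == c]
                rgb[c] = sum(vals) / len(vals)
            row.append((rgb['R'], rgb['G'], rgb['B']))
        out.append(row)
    return out

# A flat gray scene: every photosite reads 100, so the reconstructed
# image should come out neutral gray everywhere.
flat = [[100.0] * 4 for _ in range(4)]
print(demosaic(flat)[1][1])  # (100.0, 100.0, 100.0)
```

On a flat field this works perfectly; the trouble, and the need for adaptive algorithms, starts at sharp edges, where averaging across the edge mixes colors that don’t belong together.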
However, a single chip can be larger than the individual chips in a three-chip system, which means it can offer both higher resolution and larger pixels, yielding a higher signal-to-noise ratio and a cleaner image. This consideration becomes important for “ultra-high-definition” Digital Cinema cameras, such as the Dalsa Origin, Arri D-20, Vision Research Phantom and RED, all of which use large single-chip sensors.
With image sensor technology making incremental improvements every year, both CCDs and CMOS sensors will continue to get better while becoming even more affordable, which will lead to higher resolution, low-cost camcorders.
Heath McKnight is a filmmaker and writer. He recently co-wrote HDV: What You NEED to Know, Volume 2, from VASST. He would like to thank Graeme Nattress for research help with this article.
Side Bar: Definitions
CCD: Charge-Coupled Device, used to detect light, but not color, in video and digital still cameras. On its own, the sensor detects only a black-and-white image. To achieve full color, filters are used on the chips: either the light is split evenly among three CCDs (red, green and blue, using dichroic filters in a prism) or colored filters are placed over each pixel of a single CCD.
CMOS: Complementary Metal-Oxide-Semiconductor. You may be familiar with it from digital still cameras. This more affordable chip consumes less power and contains electronics that convert the analog information from the pixels to a digital form. This conversion must be performed off-chip in CCD sensors.
SNR: Signal-to-Noise Ratio. The higher the SNR, the cleaner the image; the lower the SNR, the more noise. Smaller pixels, relatively speaking, have a lower SNR than bigger pixels.
DIFFRACTION: This occurs as light travels through an aperture, and gets worse as the aperture is stopped down (closed) to cope with bright light. Diffraction actually decreases the resolution of the image; to avoid it, control the light going into the lens with neutral density filters, or with filters, diffusion, etc. on the lights themselves, and keep the aperture as open as possible with smaller-chip cameras.
BAYER PATTERN: Named after inventor Dr. Bryce Bayer, it is used with one-chip camera systems to allow the chip to “see” color. It’s broken down as RGBG – 50% green, 25% blue and 25% red.
CCD vs. CMOS: CCDs have traditionally provided a “cleaner” image via a higher SNR than CMOS, but recent and ongoing developments in CMOS technology have closed this gap, especially in the case of large, high-resolution, high-frame-rate chips. CMOS also has the advantage of far less vertical smear, an artifact common when a very bright object, such as a light, is in the shot; it shows up as vertical streaks or “points” coming off the bright spot.
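The diffraction entry above can be made concrete with some rough arithmetic. The diameter of the diffraction blur spot (the Airy disk) is approximately 2.44 × wavelength × f-number. The pixel pitch used below is an assumed, illustrative figure for a small HD chip, not a spec from any particular camera; the point is simply that by f/5.6 the blur spot already spans several small pixels, which is why ND filters and an open aperture matter on small-chip camcorders.

```python
# Airy disk diameter ~= 2.44 * wavelength * f-number.
# Both figures below are illustrative assumptions, not camera specs.
wavelength_um = 0.55    # green light, roughly 550 nm
pixel_pitch_um = 2.0    # assumed pitch for a small HD sensor

for f_number in (2.8, 5.6, 11):
    airy_um = 2.44 * wavelength_um * f_number
    # once the blur spot spans multiple pixels, stopping down costs detail
    spans = airy_um / pixel_pitch_um
    print(f"f/{f_number}: Airy disk ~{airy_um:.1f} um, ~{spans:.1f} pixels wide")
```

Running the numbers shows the blur spot growing from about 3.8 µm at f/2.8 to nearly 15 µm at f/11, many times wider than a small sensor’s pixels.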