It tames bright lights.

Captures a candle’s glow.

Freezes the fastest motion.

Slices reality into hundreds of thousands of tiny pieces.


It’s a part of your camcorder that receives little attention, yet may be the most technologically advanced component in the case.

Meet your image sensor.

The image sensor is your camcorder’s eye. Its job is to convert light into electrical impulses. Like the human retina, it relies on complex optics to focus an image on its surface. Unlike the human eye, which has no need for further design changes, the CCD sensor has experienced incredible advances in the past few years.

Chips in the Making

CCDs are manufactured using the same techniques that generate integrated circuits for computers and other electronic devices.

A large silicon wafer is precisely layered with different types of semiconductor material; each layer reacts in a controlled way to those surrounding it. The final layer is photosensitive, changing its electrical properties in response to light. In combination with the semiconductor material below, this silicon layer creates and holds the electrical charge that later becomes the image.

The silicon wafer is exposed to a photographic etching process that divides the surface into dozens of discrete sensors, each less than an inch across. At the same time, each sensor is etched with thousands of microscopic pixels, storage registers and support circuits. When the process is complete, the individual sensors are cut from the wafer and quality control verification begins.

CCD manufacturing requires an environment cleaner than one used for brain surgery. Even the tiniest microscopic particle can destroy an individual pixel; larger contaminants spell doom for the whole sensor. Sony’s newest CCD plant is completely sealed from outside contaminants, using only robots for maximum efficiency and cleanliness. The only humans involved in the process monitor the manufacturing from a control facility several miles away. The resulting chips are of such high quality that some industrial manufacturers use the CCDs for their pro-grade cameras.

Sensors are separated by grade, with the highest quality units sold for medical, military and robotic uses. CCDs with an extremely high percentage of imperfections are discarded or sold as surplus. Those sensors with a moderate level of contaminants make their way into camcorders and video cameras.

It’s not uncommon for a CCD to feature defective pixels. Sometimes groups of substandard pixels can be seen as a constant, dark speck in the viewfinder and resulting images.

Contaminants can affect a sensor in other ways as well, primarily when they fall on non-pixel circuits. Excessive video noise or streaking in one area of the image is often the result of a contaminated sensor.

Rest assured: the manufacturer expects the CCD in your camcorder to deliver acceptable performance.

Pixel Perfect

When manufacturing is complete, every CCD sensor consists of hundreds of thousands of pixels, each functioning as an individual light receptor. Sensor pixels are not unlike microscopic versions of the photodiodes that turn on outdoor security lights at night. The operating principle is the same: the electrical properties of each pixel change in response to the light striking it.

Pixels are arranged on the sensor in a rectangular grid, several hundred strong in each direction. Like a football field full of cameras aimed skyward, each pixel is responsible for one tiny portion of the final image. By measuring the charge of the individual pixels in order and combining them in the camera’s electronics, the image can be reconstructed.
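The readout described above can be sketched in a few lines of Python. This is a simplified model, not actual camcorder circuitry: it just shows how charge readings taken in a fixed order can be reassembled into a rectangular grid.

```python
# Simplified model of CCD image reconstruction: pixel charges are
# read out in row order and reassembled into a 2-D image.
# (Illustrative only -- a real CCD shifts analog charge, not numbers.)

def reconstruct(charges, width, height):
    """Rebuild a 2-D image from a flat, row-ordered list of pixel charges."""
    assert len(charges) == width * height
    return [charges[row * width:(row + 1) * width] for row in range(height)]

# A tiny hypothetical 3x2 "sensor": six charge readings in readout order.
flat = [10, 20, 30, 40, 50, 60]
image = reconstruct(flat, width=3, height=2)
print(image)  # [[10, 20, 30], [40, 50, 60]]
```

A real sensor does the same thing several hundred pixels in each direction, sixty times per second.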

The amount of charge produced in each pixel is related to the intensity of the light and the length of exposure. How efficiently a given amount of light exposure is converted into an electrical signal is called the chip’s sensitivity. This specification directly affects the camcorder’s low-light capabilities, as well as its video noise specs and high-speed shutter performance.

The resolution of a camcorder’s pickup assembly is determined in large part by the number of pixels on the sensor itself. In camcorder specs, two different pixel counts are often given: total and active. The total number of pixels is what its name implies: all the photodiodes present on the chip, whether contributing to the final image or not.

The second spec, active pixels, is a more meaningful number. This is the number of individual picture elements actually creating the image. This figure is usually slightly lower than the total number, due to a small overlap of pixels beyond the active sensing area. This overlap allows manufacturers more leeway in positioning the sensor relative to the camcorder optics.

Today’s high-resolution sensors offer well over 400,000 pixels for camera resolutions approaching 500 lines. When compared to first-generation designs with under 200,000 pixels, modern image sensors show some of the most dramatic evolution of any camcorder technology.

But these advances in sensor technology haven’t come easy.

Modern Problems

At the same time pixel counts have increased, sensors have grown progressively smaller to accommodate shrinking camcorder designs.

The relationship between image sensor area and lens size is proportional; as smaller sensors are developed, lens assemblies can shrink and still deliver the same focal lengths. More than the space demands of the sensor itself, it is the drive for smaller lenses that forces the creation of smaller sensors.

With this emphasis on tiny high-resolution chips comes a serious design challenge. The active surface area of each individual pixel, where the light is actually detected, is diminishing rapidly. Already the sensing area of each pixel is a mere fraction of the designs of yesteryear.

Two major factors determine a chip’s sensitivity: the construction of the chip and support electronics, and the over-all surface area devoted to each pixel. As the latter has decreased, the former has advanced dramatically.

The evolution from 2/3-inch to 1/2-inch CCDs was made possible by refinements in conventional chip manufacturing techniques. The latest sensor design, a 1/3-inch CCD used in many compact 8mm and VHS-C camcorders, has forced manufacturers to pursue some truly innovative solutions to the problem of shrinking sensors. And the problem is acute: 1/3-inch chips have only half the active area of previous 1/2-inch designs. Boosting resolution from 270,000 pixels to 400,000 drops the area of each pixel another 30 percent.
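The arithmetic behind that last figure is worth seeing. Using the article's nominal pixel counts, a quick sketch shows where the roughly 30 percent drop comes from:

```python
# Per-pixel area shrinks when more pixels share the same chip area.
# Using the figures above: raising the count from 270,000 pixels to
# 400,000 on the same sensor shrinks each pixel's share of the area.

pixel_shrink = 1 - 270_000 / 400_000  # fraction of area lost per pixel
print(f"pixel area drops another {pixel_shrink:.0%}")  # -> 32%
```

Combine that with the halved active area of the 1/3-inch chip, and each pixel on a modern high-resolution sensor collects only about a third of the light of its 1/2-inch, 270,000-pixel predecessor.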

Part of the solution lies in positioning tiny microlenses over each pixel. Layered onto the surface of the sensor itself, these lenses increase the effective size of the pixel by gathering incident light. The individual microlens isn’t responsible for focusing the entire image on each pixel, so optical purity isn’t a significant issue.

More efficient use of the available chip area also aids in boosting sensitivity. In CCD designs, a percentage of the chip’s surface is used for functions other than sensing light. The chip’s sensing area can be maximized by placing some of these circuits below the pixels. A special low-noise amplifier located on the sensor further boosts sensitivity.

These advances mean the tiny 1/3 inch CCD is not only equal to its larger predecessors; in many areas it is superior. Video noise, dynamic range, sensitivity and color accuracy are actually better in some 1/3-inch designs. The result is smaller, lighter camcorders with stunning image quality.

Simulated Shutter

Still and motion picture cameras feature mechanical shutters that help control the amount of light striking the film. Blades located between the lens and film snap open for a set time, allowing light to pass.

The term “shutter” is applied to camcorders, though they don’t actually possess the mechanical assembly found in their film cousins. Instead, the electronic process of reading the sensor’s image simulates the action of a mechanical shutter.

The sensor’s pixels are constantly storing up a charge in response to the light striking them. During normal shooting, the pixels are allowed 1/60th of a second to accumulate a voltage. At the end of each cycle, the charges are transferred from the pixels to a transfer gate located vertically next to each row of pixels. The charges then move bucket-brigade style to a horizontal transfer gate located on the edge of the chip. Finally, the electrical charges are measured, amplified and routed to the recording electronics.

The residual charge is then drained from each sensor, and the cycle starts again. Though no mechanical assembly is alternately blocking and passing light 60 times per second, a “shutter speed” of 1/60th of a second is simulated.
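The shutter cycle above can be modeled in a few lines. This is a toy linear model with made-up intensity numbers, not real sensor physics: fields are read out every 1/60th of a second, but a higher "shutter speed" simply means charge is purged mid-cycle so only the final slice of the field accumulates.

```python
# Toy model of the electronic shutter cycle: a field is read out every
# 1/60 s, but at high shutter speeds the pixel charge is purged
# mid-cycle, leaving only a short accumulation window.
# (Illustrative linear model; intensity units are arbitrary.)

FIELD_S = 1 / 60  # duration of one video field

def field_charge(intensity, shutter_s):
    """Charge read out at the end of a field for a given shutter speed."""
    exposure = min(shutter_s, FIELD_S)  # the purge leaves only this window
    return intensity * exposure

print(field_charge(1000, 1 / 60))      # full-field exposure
print(field_charge(1000, 1 / 10_000))  # high-speed shutter: far less charge
```

The tape still receives sixty fields per second either way; only the accumulation window changes.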

If the subject moves significantly while pixel charges are accumulating, the result is a smeared image. Shooting fast action at normal shutter speed guarantees blurry motion.

If the sensor is allowed less time to accumulate a charge, it captures a shorter piece of the action. This is the principle behind the high-speed shutter, a feature that forces the sensor to complete its image gathering in as little as 1/10,000 of a second.

Whether recording at a time-freezing 1/4,000 of a second or a dreamy 1/8th of a second, a video field is recorded onto tape every 1/60th of a second. At higher shutter speeds, the pixel charges are purged mid-cycle and allowed to reaccumulate before the image is recorded.

At the highest shutter speed, unwanted charges are purged a mere 1/10,000 of a second before the end of the record cycle. The pixels begin building up a voltage again, and the charges are read and recorded to tape an instant later.

This explains why higher shutter speeds demand more light and larger apertures: pixels have less time to build up a suitable charge. The relationship is proportional: the higher the shutter speed, the greater the light required. The fastest speed offers the chip less than one percent of the charging time available with the standard 1/60th of a second, potentially turning a well-lit scene into a low-light situation.
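The "less than one percent" figure follows directly from the ratio of the two shutter speeds:

```python
# Fraction of the normal (1/60 s) charging time available at the
# fastest (1/10,000 s) shutter speed -- the ratio behind the
# "less than one percent" figure.

fraction = (1 / 10_000) / (1 / 60)
print(f"{fraction:.1%}")  # -> 0.6%
```

To keep the recorded charge the same, the light reaching the chip would have to increase by the inverse of that ratio, which is why high-speed shutters are best reserved for bright scenes.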

The same principle applies to shutter speeds slower than 1/60th of a second. Used for low-light shooting or special effects, these shutter speeds give the chip extra time to accumulate a charge. This is accomplished by holding the CCD’s output in a buffer while the next image accumulates on the chip. The stored image is recorded to tape repeatedly until the extended exposure cycle is finished and the new image is read from the chip. Image blur increases dramatically.

CCD flexibility is quite impressive when viewed in this light. Today’s sensors are capable of gathering candlelight images in 1/8th of a second, or freezing an arrow in flight.

No wonder science has used the CCD in a variety of research imaging roles, from nuclear reactor cores to deep-space probes.

Easy As One

A camcorder image sensor, for all its technological sophistication, is inherently color blind. The tiny photocells that make up the sensor respond only to the amount of light reaching them, not the color of that light.

For a camcorder to become sensitive to color, the light must be worked on before it reaches the sensor.

All colors visible to the human eye can be broken down into combinations of three primary colors: red, green and blue. If we route each of these colors to a unique set of pixels or even different sensors, we can detect both the amount and color of light entering the lens. One, two and three sensor designs use different methods to achieve this color separation, each with a different level of color accuracy and resolution.

Most consumer camcorders use a single CCD to capture both image luminance-brightness-and color information. Color is gleaned from the otherwise monochrome chip through a special filter over the front of the sensor. Made of a repeating pattern of primary colors and their complements, this mosaic color filter allows only certain colors of light to reach certain pixels. By electronically comparing the pixels’ output to the arrangement of the filter, a color signal is produced.
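A minimal sketch of the idea, assuming a simple hypothetical repeating stripe pattern rather than any real manufacturer's mosaic: each pixel records only brightness, but because the filter pattern over the sensor is known, every reading can be attributed to a color channel.

```python
# Toy mosaic-filter decode: the chip itself records only brightness,
# but the known filter pattern tells us which color each pixel saw.
# Hypothetical 1-D stripe pattern; real mosaics are 2-D and use
# primaries and their complements.

pattern = ["R", "G", "B", "G"]  # repeating filter stripes (assumed)
readings = [40, 90, 25, 88, 42, 91, 27, 86]  # raw monochrome samples

def decode(readings, pattern):
    """Group raw sensor readings by the filter color over each pixel."""
    channels = {"R": [], "G": [], "B": []}
    for i, value in enumerate(readings):
        channels[pattern[i % len(pattern)]].append(value)
    return channels

print(decode(readings, pattern))
# {'R': [40, 42], 'G': [90, 88, 91, 86], 'B': [25, 27]}
```

Each channel ends up sampled more sparsely than the full pixel grid, which is exactly why the mosaic filter costs resolution and why filling in the missing values demands the extensive processing described below.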

Early sensor designs required precise positioning of an external color filter on top of the chip during manufacturing. Filter registration was crucial; even the slightest error could result in an unusable color signal and decreased sensitivity. Newer CCDs form the color filter into the chip itself, ensuring accurate placement and nearly doubling sensitivity.

The advantages of a single sensor are numerous. Fewer components make for smaller, lighter camcorders, and decrease manufacturing complexity. A single sensor and simple optics lower production costs. Excellent low-light sensitivity results from the lack of additional mirrors or prisms between the chip and the lens.

But the single-chip camera is not without its compromises.

The mosaic color filter, whether attached on top or built into the image sensor, somewhat reduces the chip’s sensitivity and resolution. Decoding color information from the color filter requires extensive processing, creating more artifacts and bleed in the color signal. Highly saturated colors are particularly difficult for single-chip designs.


A better solution divides image sensing duties between two chips. Currently, there are but two two-chip camcorders available to the consumer. Each uses a dramatically different method to produce an image.

The Panasonic Super-VHS AG-460 has been a favorite of semi-pro and industrial videomakers for years, and until recently was the only dual-sensor camcorder on the market. The Panasonic system devotes one chip to luminance information, and one to chrominance. A traditional mosaic color filter is used on the color sensing chip, while the monochrome luminance chip responds to all colors of light equally. These two signals are then routed to the record electronics.

This method delivers improved sharpness and resolution over most one-chip designs, thanks to the absence of a color filter on the luminance sensor. The inherent drawbacks of the mosaic color filter, including color bleed and reduced sensitivity and resolution, still affect the chrominance signal. Low-light sensitivity suffers slightly with this scheme, due in part to the additional optics the light must pass through.

Minolta’s first two-chip camcorder, the Hi8 Master Pro 8-918, splits light into green and red/blue components through a specialized beamsplitter that routes green information to one sensor and red/blue to another.

Due to the way the human eye responds to light, much of the brightness information in a video image is gleaned from the green portion of the signal. Routing green light to its own unfiltered sensor increases sharpness and resolution in the luminance signal. At the same time, the camcorder’s ability to reproduce subtle or deeply saturated greens in an image is enhanced.
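The standard luminance weighting used in video, the Rec. 601 luma formula, makes the point concretely: green dominates the brightness signal.

```python
# Rec. 601 luma weights: green contributes nearly 60 percent of
# perceived brightness, which is why routing green to its own
# unfiltered sensor sharpens the luminance signal.

def luma(r, g, b):
    """Approximate perceived brightness from R, G, B (Rec. 601 weights)."""
    return 0.299 * r + 0.587 * g + 0.114 * b

print(luma(1.0, 0.0, 0.0))  # red alone:   0.299
print(luma(0.0, 1.0, 0.0))  # green alone: 0.587
print(luma(0.0, 0.0, 1.0))  # blue alone:  0.114
```

With green carrying well over half the brightness information, dedicating a full-resolution, unfiltered sensor to it buys the biggest possible gain in luminance detail.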

Optically splitting out green light allows a much simpler filter to glean the remaining colors with the other sensor. Consisting of alternating vertical stripes of red and blue, this filter introduces far less color distortion and artifacts than a traditional color filter. Resolution and low-light sensitivity are also less affected.

Though the beamsplitter used in this design transmits close to 100 percent of the light entering it, there is still some loss of sensitivity. This translates into slightly poorer low-light performance than some single-chip designs. Also, the extra optical components and sensors make it unlikely two-chip camcorders will ever be as small as their one-chip cousins.


The most accurate but complex scheme is to split the light into its three components and send each to a different sensor.

This is usually accomplished using an optical prism or dichroic mirrors. A prism bends each frequency of light by a different amount, sending each primary color in a unique direction. Dichroic mirrors utilize a special kind of glass that reflects one color while letting others pass by.

By positioning a prism or combination of dichroic and regular mirrors in front of the three sensors, incoming light can be divided and redirected. Splitting the frequency spectrum often distorts the image, so most three-chip designs feature additional lenses and mirrors to correct these problems.

Alignment of prisms, mirrors and sensors is extremely critical for proper color registration, making construction of three-chip cameras more exacting and time-consuming. If one sensor or mirror is off by even a fraction of a degree, the corresponding color and brightness information won’t line up with the other signals. The result is an unacceptable image.

Low-light sensitivity is compromised slightly with three-chip cameras, due to the inevitable loss experienced each time light passes through a prism or mirror. Fewer elements of greater optical purity help three-chip sensitivity, as do premium quality image sensors.

In the all-important area of image quality, three-chip designs reign supreme. Devoting one high-resolution sensor to each primary color yields the purest reproduction, delivering a color signal with full resolution. No color filter obscures the face of the sensors; color bleed and smearing within the chip is eliminated. Most professional three-chip designs use top quality CCDs, explaining in part the higher price tag of these units.

Sensing the Future

There’s no mistaking the trend toward increased sophistication and performance in every area of camcorder manufacturing, especially in sensors and multi-sensor designs.

In the same way that two-chip camcorders have begun to impact the consumer market, three-chip technology will eventually be available at dramatically lower prices. The first three-chip camera marketed to consumers was released this September by Sony in Japan; more are sure to follow.

The current trend in sensors will also continue: tomorrow’s chips will be smaller, higher in resolution and higher in performance than those of today. New manufacturing techniques and materials will make 1/4-inch CCDs a reality while further improving performance. Smaller optics will result in camcorders even smaller than those available today.

The same technologies driving small chip development will benefit larger sensors as well. Microlenses and improved low-noise amplifiers already bring increased performance to 2/3- inch sensors in professional three-chip designs. Similar technology would allow consumer camcorders with larger chips to enjoy increased resolution and lower noise, at the same time boosting sensitivity.

High definition television has pushed large-chip development to a new level, resulting in high-resolution CCDs with millions of active pixels. This is the pinnacle of sensor development, producing an image that approaches film resolution.

The benefit to videomakers everywhere is clear. Each development in sensor design eventually results in better images for the masses. Whether these advances occur in the consumer realm or trickle down from HDTV research, camcorders like yours will eventually reap the rewards. They already have.

Loren Alldrin is a Videomaker technical editor.
