Digital video is a dream come true–a magnet for creative spirits. It may look like a side-show attraction right now, but soon it will be hogging the spotlight. It often takes 50 years or more for a hot technology to trickle down to the salivating, frugal masses. Digital video is only a decade old, but already it has turned the video world inside-out.
With so much money invested in analog equipment, the big guys are proceeding with caution into the digital realm. Meanwhile, we consumers are racing past them, even if the technology gives us all whiplash. The technology is changing rapidly and it’s hard to keep up–but it pays if you do.
So here’s a helpful primer on digital video basics. We’ll look at the pros (many) and the cons (a few). After a quick refresher on the amazing technology behind it all, we’ll check out some new developments in digital videotape. And we’ll also explore a venue for you to display your digital masterpieces to a global audience.
On and Off
What is it that makes digital video so special? Why is it that digital video formats look so much better than their analog counterparts, and can be copied a dozen times with little generation loss?
We can sum up the magic in two words: "on" and "off." Analog video is a continuous signal. But so is noise. This is why noise signals can easily infiltrate and corrupt analog signals. Digital video encodes the analog signal into a series of pulses ("ons" and "offs," or ones and zeros) represented by binary numbers. These pulses are very simple and very different from noise signals, and are therefore much less susceptible to noise. This is why digital video has a much higher signal-to-noise ratio (S/N) than its analog cousins. It’s also what makes it possible to make near-perfect dubs of a digital video.
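To see why a thresholded pulse shrugs off noise, here’s a toy Python sketch. The 0.4-volt noise level and 0.5-volt decision threshold are illustrative values, not real video specs:

```python
import random

def transmit(bits, noise_amplitude):
    """Send each bit as a 0 V or 1 V pulse, corrupted by random noise."""
    return [b + random.uniform(-noise_amplitude, noise_amplitude) for b in bits]

def recover(voltages):
    """A digital receiver asks only: is the pulse closer to "off" or "on"?"""
    return [1 if v > 0.5 else 0 for v in voltages]

bits = [1, 0, 1, 1, 0, 0, 1, 0]
noisy = transmit(bits, noise_amplitude=0.4)  # plenty of noise...
assert recover(noisy) == bits                # ...but every bit survives
```

An analog receiver has to preserve the whole noisy voltage; the digital one only has to tell "on" from "off," so any noise smaller than the decision margin simply vanishes.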
There isn’t a single blasted reason why digital is superior to analog. There are dozens. Here are the top three:
No generation loss.
By far the most miraculous aspect of digital video is that it makes near-perfect copies. Analog video lets noise leak into the signal from every corner. One of the most basic and unavoidable sources is thermal noise, created by the constant vibration of the very atoms and molecules of the materials used in video equipment. Another is interference, where video equipment intercepts noise signals traveling through the air and through wires. We can reduce these noise signals to a heroic degree in analog video, but noise still builds up as you make copies, since each copy adds its own noise on top of the last.
Digital copies, on the other hand, are very close to perfect–really. You wouldn’t accept computers if they dropped numbers from your spreadsheet, and digital video plays by the same rules.
Nonlinear Editing
Since a digital image can be manipulated by a computer, and because a computer has random access to the data on its drives, nonlinear editing was an idea begging to be invented from the start of the computer age.
Instead of laboriously sequencing clips by fast-forwarding and rewinding a cranky video deck, as linear editing requires, nonlinear systems let you instantly access any digital clips you’ve recorded to your hard drive. With computer software, you can combine multiple video streams with chroma-keys, magical morphing effects, amazing dissolves and thousands of special effects–and do it in any order you like. If you don’t like the way a certain scene looks, you can use the Undo command to change it in an instant, just as you would in a word processor.
To get your opus into the computer, you need a video capture card. It converts the analog video signal into a digital data stream and routes it to your hard drive. The nice part about computerizing your video is the array of nonlinear editing software available. Adobe Premiere is a popular choice, giving you solid frame-by-frame editing, titles and dozens of transition effects. Depending on your computer, creating and mixing these titles and effects (digital compositing) can be slower than analog. But for the price–and the digital purity–it’s still a bargain.
Nonlinear editing equipment isn’t exactly cheap, but it compares favorably with professional analog units, which cost tens of thousands of dollars. However, turnkey nonlinear editors like the DraCo Casablanca are beginning to appear in the marketplace. If you own a powerful multimedia computer, you might already have the most expensive part of a digital editing machine. Fortunately for videographers, good A/V hard drives are now available at the sizes and speeds necessary for video work. Computer power seems to double every 18 months, even as the price drops. It’s a wonderful world.
Higher Resolution
A good S-VHS camcorder has up to 400 lines of horizontal resolution. That means you can squeeze 400 dots (called pixels by computer types) onto each horizontal scan line of a video image. The new DV format can reproduce up to 500 lines. Not only is that better than Hi8 or S-VHS, it’s better than laserdisc; it’s comparable to Betacam SP.
Of course, there are a few downsides to digital video at the time of this writing.
I hate to bring it up with all the fun we’ve been having, but this stuff isn’t cheap. To really experience nonlinear editing and the other fun aspects of digital video, you need a computer or a turnkey nonlinear editor. And the more muscle, the better. For computer-based systems, you need a fast CPU, fast hard drives, plenty of RAM and a video capture card.
You can skip the video capture card and the computer and move straight into the digital videotape realm–if you can get your hands on a digital camcorder and VCR. Right now, even the cheapest DV camcorders are pretty expensive (over $2,500), and a DV VCR is over $4,000. But prices will go down in the near future as competition kicks in.
It’s unbelievable how much data it takes to represent an image on the screen. A 640x480 screen with full color, for example, sucks up nearly a million bytes (921,600, to be exact)! Multiply that by video’s 30 frames per second and you get about 27 million bytes every second. That’s a lot of data to pipe through a computer.
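The back-of-envelope arithmetic, sketched in Python:

```python
width, height = 640, 480
bytes_per_pixel = 3       # 8 bits each for red, green and blue
frames_per_second = 30

frame_bytes = width * height * bytes_per_pixel
stream_bytes = frame_bytes * frames_per_second

print(frame_bytes)   # 921600 bytes -- nearly a megabyte per frame
print(stream_bytes)  # 27648000 bytes -- about 27 million per second
```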
Fortunately, it is possible to compress digital video down to a fraction of its original size, store it on tape or hard disk and then decompress it upon playback. The devices or software packages that perform these complementary functions–COmpression and DECompression–are called codecs.
There are three concepts that codecs use to compress data: space, time and probability. It sounds a bit esoteric, but it’s really pretty simple.
Codecs based on space compare information within the space of each frame (intraframe) to eliminate redundancies. For example, in a frame that shows a landscape and lots of blue sky, many of the pixels that represent the blue sky are identical. Instead of recording each identical pixel, the codec can represent the color and location of these pixels collectively with a single code, thereby reducing the data required to represent that frame.
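The simplest form of this idea is run-length encoding, sketched below in Python. The "sky" and "ground" labels stand in for real pixel values, and real intraframe codecs like DV’s use more sophisticated transforms, but the redundancy-squeezing principle is the same:

```python
def run_length_encode(pixels):
    """Collapse runs of identical pixels into (value, count) pairs."""
    encoded = []
    for p in pixels:
        if encoded and encoded[-1][0] == p:
            encoded[-1] = (p, encoded[-1][1] + 1)  # extend the current run
        else:
            encoded.append((p, 1))                 # start a new run
    return encoded

# One scan line: a long stretch of identical "sky" pixels, then some "ground"
scan_line = ["sky"] * 12 + ["ground"] * 4
print(run_length_encode(scan_line))  # [('sky', 12), ('ground', 4)]
```

Sixteen pixels collapse into two codes; the flatter the image, the bigger the savings.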
Codecs based on time compare information from one frame to the next (interframe) to eliminate redundancies. For example, if there are several frames that show a train traveling across a landscape, the pixels that represent the landscape remain the same from frame to frame. The codec can represent those pixels collectively with very few codes, thereby reducing the data required to represent the frames.
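A minimal Python sketch of the idea: record only the pixels that changed between frames. The function names here are hypothetical, and real interframe codecs like MPEG work on blocks of pixels with motion compensation, but the principle is the same:

```python
def frame_delta(previous, current):
    """Record only the pixels that changed since the last frame."""
    return {i: pix for i, (old, pix) in enumerate(zip(previous, current))
            if old != pix}

def apply_delta(previous, delta):
    """Rebuild the current frame from the previous one plus the changes."""
    return [delta.get(i, pix) for i, pix in enumerate(previous)]

frame1 = ["land"] * 8
frame2 = ["land"] * 3 + ["train"] * 2 + ["land"] * 3  # the train rolls in
delta = frame_delta(frame1, frame2)
assert apply_delta(frame1, delta) == frame2
assert len(delta) == 2  # only two of eight pixels needed re-encoding
```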
In addition to intraframe and interframe compression, many codecs also use probability: they estimate how often the codes produced by the spatial and temporal stages will appear, and assign the shortest symbols to the most common codes, squeezing the data down even further.
Some codecs, like the DV codec, use intraframe compression to achieve broadcast-quality video at a compression ratio of 5:1. Other codecs, like MPEG-1, use both intraframe and interframe compression to achieve compression ratios of up to 100:1. But video compressed this much is only useful for multimedia applications like CD-ROM, where the size, frame rate and quality of the images are not up to normal video standards.
Got Time?
Every time manufacturers stun us with new and improved technologies, we tend to drool at the possibilities while simultaneously moaning about the learning curve.
If you haven’t had the pleasure of nonlinear editing, you may be shocked by how many technical issues you need to master. You need to know about A/V hard drives, how to optimize your computer, software editing, compression schemes and a bunch more. It can be quite overwhelming for a person who just wants to edit some video.
Luckily, manufacturers are starting to release some reasonably priced turnkey nonlinear systems–systems that work right out of the box without any installation necessary. Systems like the DraCo Casablanca and ItWorks’ Digital Video Director 20 video editing bundle have made things a whole lot easier for beginning nonlinear editors.
And what about compatibility? Are we looking at another Betamax debacle? Actually, the manufacturers were stung by this nasty episode as much as the poor consumers–and they intend to avoid a repeat performance.
On the computer side, all the major computer makers are striving to provide the performance and capability needed to support compatible digital video standards. Even Internet providers are gearing up for the inevitable onslaught.
On the digital videotape side, all the big players have agreed to the DV standard and so far, things look pretty good. Some of the cables are not compatible, but as a whole, the companies are aiming for a critical mass of consumers that can only appear with standards.
Won’t the standards change? Well, already there are a number of broadcast digital camcorders with different specs, owing to the debate over futuristic digital delivery technologies. But hey, what’s new? Even in the analog world, standards change all the time. Played any 8-track tapes lately?
So where is that enormous global audience I mentioned earlier? The answer is: movies on the Internet. Although the quality and size are pretty crummy, these are just growth pangs. This is a technology worth watching.
Currently, you can send a postcard-sized movie over the ‘Net at 28 kilobaud. As compression improves and the ‘Net gets greater bandwidth, video applications are quickly improving on the new medium. With a software plug-in to your Netscape or Internet Explorer browser, you can drop those wedding videos into your home page for relatives around the world to see.
Already, companies are making digital video a part of their Web sites, and there is a demand for videographers who understand the possibilities and limitations of this new delivery medium.
Hopefully, this article has provided you with a basic understanding of digital video, nonlinear editing, DV camcorders and new markets. And, although you should be watching your wallet, digital video is coming and it’s time to prepare yourself. So get out there and start exploring!
Scott Anderson is the author of an animation program and a book about digital special effects.
Each pixel on a TV screen is actually a combination of three colored phosphors: red, green and blue. The RGB system does a great job of creating most of the colors of the rainbow, and it is the basis of both video and computer imaging.
To digitize a signal, you sample the voltage of the waveform a few million times per second, assigning each sample a number corresponding to the strength of the voltage. With all of these voltage measurements, you can later reconstitute the signal as an analog wave that the TV will understand. There are a lot of ways to sample a video stream, but they are all very similar.
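The sample-and-assign-a-number process can be sketched in a few lines of Python. The sample rate and number of levels here are toy values chosen for illustration, not real video specs:

```python
import math

def digitize(signal, sample_rate, levels):
    """Sample an analog waveform and quantize each sample to a binary number."""
    samples = []
    for n in range(sample_rate):
        t = n / sample_rate
        voltage = signal(t)                             # measure the voltage...
        code = round((voltage + 1) / 2 * (levels - 1))  # ...and assign it a number
        samples.append(code)
    return samples

def wave(t):
    """A hypothetical one-volt sine wave standing in for a video signal."""
    return math.sin(2 * math.pi * t)

# Sixteen samples over one cycle, with 8-bit precision (256 levels)
codes = digitize(wave, sample_rate=16, levels=256)
```

Each entry in `codes` is an integer from 0 to 255 that can be stored, copied and transmitted without picking up noise along the way.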
Component Video (YUV)
Research has shown that, to a human eye, the brightness (or luminance) part of the picture is more important than the color information. This means that you can reduce the color information in an RGB video signal without sacrificing the perceived quality of the picture. But since luminance is not a separate part of the RGB signal, video engineers had to develop a new analog encoding method. It’s called component video, or YUV. Y stands for luminance, U is the blue signal minus Y, and V is the red signal minus Y. U and V together constitute the chrominance, or color, parts of the signal.
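In code, the separation looks something like this Python sketch. The luminance weights are the standard ITU-R BT.601 coefficients; real YUV also scales the U and V differences, which is omitted here for clarity:

```python
def rgb_to_yuv(r, g, b):
    """Split an RGB pixel (values 0.0 to 1.0) into luminance and color difference."""
    y = 0.299 * r + 0.587 * g + 0.114 * b  # luminance; green counts most
    u = b - y                              # blue color difference
    v = r - y                              # red color difference
    return y, u, v

# Pure white is all brightness and no color: U and V both come out zero
y, u, v = rgb_to_yuv(1.0, 1.0, 1.0)
```

Green gets the biggest weight because the eye is most sensitive to it, which is exactly the kind of perceptual shortcut the YUV system is built on.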
The upshot of this separation is that when you convert component analog video to digital form, you can sample the color components of the signal at one-half to one-quarter the rate of the luminance part.
The relative amount of sampling allotted to each component of the YUV signal is represented by a three-way ratio, like 4:2:2 (the sampling ratio for professional digital video). These numbers correspond to the Y, U and V components, and indicate that the chrominance (U and V) components are sampled at half the rate of the luminance (Y) component. The sampling ratio for the consumer DV format is 4:1:1: the chrominance components are sampled at one-fourth the rate of the luminance component.
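A back-of-envelope Python sketch of the savings, assuming one byte per sample and a group of four pixels:

```python
# Samples needed to store a group of four pixels, one byte per sample
full_color   = 4 + 4 + 4  # 4:4:4 -- every pixel carries full chrominance
professional = 4 + 2 + 2  # 4:2:2 -- chrominance sampled at half rate
consumer_dv  = 4 + 1 + 1  # 4:1:1 -- chrominance sampled at quarter rate

print(professional / full_color)  # two-thirds of the full-color data
print(consumer_dv / full_color)   # half of the full-color data
```

Because the eye barely notices the missing color samples, that’s a third to a half of the data gone before any real compression even begins.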
If you combine the U and V components of an analog video signal into a single chrominance channel, you get S-video. You can take it another step and cram all three of the YUV components into one–a system commonly referred to as composite video. That’s the approach taken by most consumer video gear, through the omnipresent yellow RCA-style connector.
Combining these signals makes video equipment simpler and cheaper, but it doesn’t do much for the video itself. The more the signals are combined, the worse the quality. Sad to say, composite video is how analog broadcast (NTSC-standard) video reaches your home, be it by satellite, through cable lines or over the airwaves. And though composite video is at the heart of broadcasting, it doesn’t really make the grade in the editing room.
Fortunately, the new DV format keeps all of the components separate and pristine–so long as you’re in the digital realm. Once it sends the signal out over a standard RCA cable, however, you’re back in the realm of composite, analog video.
One final note about component and composite video: often, you’ll see consumer-level video digitizers advertised with 4:2:2 sampling as a feature, but these same cards will have only composite or Y/C input connectors. In this situation, the composite or Y/C signal must be reconstituted into its component (YUV) parts before the 4:2:2 sampling can take place (see figure 1). This process does not maintain the same quality as one that keeps the components separate throughout the whole procedure, and it’s one of the things that separates under-$1000 video capture cards from their more expensive brethren.
- Component Video
- A type of analog video that separates the brightness (luminance) information from the color (chrominance) information in pure RGB video by converting it into three parts, one for luminance (Y) and two for chrominance (U and V).
- Composite Video
- A type of analog video that combines the luminance and chrominance information into a single signal. This is carried by the most common type of video connector found on consumer gear, usually represented by a small yellow RCA-style plug.
- FireWire (IEEE 1394)
- The IEEE 1394 Serial Bus, which is currently present only on a select few DV camcorders. It carries digital video information from one piece of equipment to another.
- S-Video (Y/C)
- A type of analog video that reduces the three components of YUV video into two parts by combining the two color components (U and V) into one chrominance signal.
- S/N Ratio
- Signal-to-noise ratio. The ratio of signal strength to the strength of the electronic noise that accompanies it. One of the most important specifications in the video world, S/N is where digital video really shines.
- Sampling
- The process of measuring the voltage of an analog signal in very small, very precise increments, then assigning each measurement a binary number for digital storage.
For other definitions and clarifications, see the Video Digitizer Buyer’s Guide in this issue.