Computer Editing: 5 Phases of Editing

Computer-based video editing seems cursed with enough tools for a Sears hardware department and a learning curve steeper than Everest. To cope with this complexity, most of us plod through half a tutorial, read random bits of a manual, futz with a few simple projects, and basically fool around, while our expertise grows like moss on a tree trunk, though not quite that fast.

To speed things up, we need to cut our way out of the trees, climb that hill over there, and look down on the whole forest. From this perspective we can see that the tangled thickets of post production in fact have a design, an evident pattern of operations. Post production falls into five major phases: organizing, assembling, enhancing, synthesizing and archiving. By understanding this work flow, we can flatten the learning curve and get a grip on the whole post-production process.

As we do this, keep two major footnotes in mind:

  •  For simplicity, we’ll pretend that you complete each phase in order, before progressing to the next one. In real-world editing, you may be working in several phases at once.



  •  Though the five phases of post production are more obvious in hard drive-based editing, they’re relevant to tape-based linear cutting as well.

    With the fine print out of the way, let’s start with phase one: organizing.


    In an earlier sermon from this pulpit, we’ve harangued you on the importance of organizing your material before you start editing (see the November 1999 issue or read the article online), so let’s just review the material in fast-forward.

    Classify each and every piece of video and/or audio material.

  •  Identify it with a slate if it doesn’t already have one. Slate with codes (like "27A3," meaning sequence 27, shot A, take 3). Avoid descriptive slates ("helnngcu") because they won’t make sense a week later.

  •  Log it in a shot database (typically part of editing software packages) with a description. A really big project might need fields for slate (27A3), angle (CU), content (Helen reacts to news), quality (ng) and notes (bad framing; 1st half OK). A short program might get away with a single field (27A3 CU Helen NG).

  •  Locate it by adding tape roll number, in/out-point (time code address or control track time), and file name (as stored on your hard drive).

  •  Stash it. For larger projects, establish a separate bin or folder for each sequence, plus bins for music, effects and other wild audio components. (Of course, this last step doesn’t apply to linear editing.)

    How compulsive should you be about organizing? In direct proportion to the complexity of your project. If you blew half a roll of tape on a family picnic, hey, go ahead and wing it. But if you’re staring at ten 60-minute cassettes on a feature intended for Sundance, you’d better classify every blessed shot.
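If you want a feel for what a shot database boils down to, here is a minimal sketch in Python. The field names (slate, angle, quality and so on) follow the examples above but are otherwise illustrative assumptions, not the fields of any particular editing package:

```python
# A shot log as a simple list of records. Field names are illustrative,
# modeled on the slate/angle/content/quality/notes scheme described above.
shot_log = [
    {"slate": "27A3", "angle": "CU", "content": "Helen reacts to news",
     "quality": "ng", "notes": "bad framing; 1st half OK",
     "tape": 4, "in": "00:12:31:02", "out": "00:12:44:18"},
    {"slate": "27B1", "angle": "MS", "content": "Helen turns away",
     "quality": "ok", "notes": "",
     "tape": 4, "in": "00:13:02:10", "out": "00:13:15:00"},
]

def usable_takes(log):
    """Return the slates of all shots not marked 'ng' (no good)."""
    return [shot["slate"] for shot in log if shot["quality"] != "ng"]

print(usable_takes(shot_log))  # ['27B1']
```

Even a log this crude lets you answer "which takes are worth transferring?" without shuttling through every cassette.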


    With your building blocks labeled and stowed, the next step is to assemble them into a program. Every single piece of your video must be selected, sequenced, timed and transferred to the master you’re constructing.

    In digital post especially, selecting shots is often a batch operation. It’s easier to pick and mark all the shots you want transferred, and then let your software do the job for you.

    Typically, sequencing, timing and transferring go together. Few editors will stick the unclipped shots in order on a timeline and then trim them all to length. Most people choose a shot, spec its in-point to match the previous shot, and lay it in. Personally, I like to leave the out-point open until I have the following shot roughed in. Then I experiment until I find the perfect cut point and set out-point 1 and in-point 2 at the same time.

    In linear cutting, transferring is a discrete step because you have to determine in- and out-points before copying the source shot to the assembly tape.


    Enhancing means touching up shots by improving, conforming, or downright redesigning them. I recommend that you do this after assembling the show. Otherwise, you might get so hung up on minute shot adjustments that you lose sight of the overall design.

    Improving video usually involves exposure, contrast or color balance. There’s not much you can do with grainy gain-up originals (though you can tweak the color saturation a skosh) but contrast is definitely tunable, especially with underexposure. Often those inky shadows do have details lurking in them. If your software permits, use a tone curve adjustment to boost dark areas without brightening highlights.

    Even when a shot’s quality is already good, it may not match the shots around it because it was recorded at a different time – you know: sunny one day and cloudy the next. That’s where conforming comes in. (In film, it’s called color timing.) Working shot by shot, you ensure that everything in a sequence looks the same.

    Redesigning means changing the basic character of a shot: turning straight color into a sunset glow or pure black and white, applying slo-mo or strobe effects, adding all kinds of digital filters, or flopping shots for screen direction.

    If you’re making a commercial or music video, you may stylize the color across the whole program to create a unique special effect.

    And let’s not forget audio. You improve it by optimizing volume and equalization, conform it by matching level and perspective from shot to shot, and redesign it by altering speed, pitch and presence. Inherently more plastic than video, audio can be molded into just about anything you want.


    Audio is also more susceptible to synthesizing: stacking multiple layers of material and blending them into a single program strand. In editing, it’s common to have two tracks each of dialogue, effects, background and music, and double that number for stereo. Mixing audio to synthesize a final sound track is one of the most creative and satisfying jobs in the post production process.

    With digital post, the same flexibility has come to the visual tracks as well as audio. Multi-layer visuals include transitions, composites, superimpositions, multiple screens, graphics and titles.

    All transitions (except for fades to/from a color) involve multiple images. Extend the middle of a dissolve and you have a superimposition, with two half-strength images sharing the entire screen. Place picture in picture and you get multiple screens (or carve the frame up into as many images as you like). Super a transparent bar or square and you’re bringing graphics into play; and with text, you’ve got titles working as well. Finally, you can chromakey a new foreground into a background, or vice-versa; and with a good plug-in matting program you can combine all the elements seamlessly.
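The arithmetic behind a dissolve is simple weighted averaging of the two images. Here is a sketch using plain lists of pixel values (a real editor does this across full frames, every frame of the transition):

```python
# Crossfade arithmetic behind a dissolve, sketched on flat lists of
# pixel values rather than real video frames.
def dissolve(frame_a, frame_b, t):
    """Blend two frames; t runs from 0.0 (all A) to 1.0 (all B).

    At t = 0.5 this is a superimposition: two half-strength images
    sharing the screen, exactly as described above.
    """
    return [(1 - t) * a + t * b for a, b in zip(frame_a, frame_b)]

outgoing = [200, 100, 50]   # sample pixel values from the outgoing shot
incoming = [0, 100, 150]    # sample pixel values from the incoming shot
print(dissolve(outgoing, incoming, 0.5))  # [100.0, 100.0, 100.0]
```

Ramp `t` from 0 to 1 over a second or so and you have a dissolve; freeze it at 0.5 and you have a super.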

    If this sounds like quite a list, consider a single moment that you might see on TV any time: as Mr. Announcer intones off screen, "Up next: today’s weather on News at Six," the following things happen:

  •  A cube tumbles on screen to end as a 1/4-size window.

  •  The window shows the weather person in front of a chromakeyed map.

  •  A "smoked glass" rectangle fades on to fill most of the screen.

  •  A color bar sweeps in across the bottom of the screen.

  •  The numeral 6 pulsates in a square to the left of the bar.

  •  "Channel 6 Weather" fades up on the color bar.

  •  The whole schmeer fades to black for a commercial break.

    Today, these multi-part syntheses are so ho-hum that we have to enumerate their parts to remind ourselves of how complex they really are.


    In linear editing, storing the finished program is automatic because the assembly tape is the archive (such as it is). In digital post, however, you have only a virtual video until you’ve output it in final form; and that final form may involve multiple decisions about recording methods, media, protocols and display standards.

    First decision: analog or digital? Digital storage wins hands-down if the only issue is permanence through perfect duplication. But for a short-term program that will need many copies (like a video press release), an analog master may offer easier mass duplication.

    If you do go analog, what’s your display standard? NTSC, PAL or SECAM? And don’t forget that each standard has its subsets for different countries. If digital, which codec (compressor/decompressor): DV, MPEG, MJPEG and so on? And again, which flavor? If you’re streaming to the Web, you also have to cope with screen size and frame rate.

    Finally, what’s your storage medium? Until recently, some kind of tape was the only answer. But with DVD-RAM burners down to $300, maybe you want to choose a disc-based medium with its advantages of capacity, compactness and random access. The moral here is that the archiving process that was once automatic now requires a number of thoughtfully considered decisions.

    Organizing, assembling, enhancing, synthesizing, archiving. Okay, fine; but what are these abstractions really good for? Basically, they take a complex and shapeless process and impose a kind of order on it – an order that helps you keep it all straight. Obviously, no one is going to think, "Having duly organized and enhanced, I will now proceed to phase four: synthesize."

    But as you edit, you can operate more confidently if you have an intuitive grasp of the organization that underlies the post-production process.

  • The Videomaker Editors are dedicated to bringing you the information you need to produce and share better video.