No, this is not about Star Trek (though the Enterprise will do a fly-by a bit further on). Instead, this
column explains the rules and regulations of a different universe–one we look at through the window of a
monitor or movie screen. A universe, you’ll soon discover, that operates quite differently from our
own.

Understanding how the video universe works is as important as knowing how your lens functions or
how to set white balance, because if you understand video space, you can make it work for you.

Incidentally, everything here applies equally to film and video. Whatever their other differences, these
two movie media follow exactly the same cosmic rules. But since we’re all videomakers here in this galaxy,
we’ll call it video space.

We’ll start by looking at how space really works in video.

Real Space and Video Space

To see how video space is different, consider some things about real-world space:

Real space has three dimensions–width, height, and depth.

Real space extends endlessly beyond the limit of your vision, and it contains whatever happens to be
there, quite independently of your preferences.

Real space has a fixed scale; a meter is always one meter long and a mile is always one mile.

Real space has an equally fixed geography. If a grocery store is one mile north of your house, it’s always
one mile distant and its position is always to the north of your home (assuming neither the store nor your house moves).

None of these things is true about video space!

First of all, video has only two dimensions: width and height. There’s no true depth in video,
which plays on an essentially flat screen. Instead, depth is an illusion that the videomaker and the camcorder create. By manipulating this "synthetic" third dimension, videomakers can control the size and
relationships of objects in the video universe.

Because video has only two dimensions, its directions are those of the screen on which it appears: up
and down, and, more importantly, left and right. In this universe, the real-world compass has no
meaning.

For instance, when a character in a movie flies from New York to London, we always see the aircraft
moving from screen-left to screen-right. Why? In the real world, this would be true only if we
watched the plane from a position south of it. If we watched from the north, we’d see the plane flying right
to left; and if we were looking from the west, the plane would be moving away from us, as you can see in
figure 1. The apparent direction of the plane would depend on the observer’s point of view, but regardless
of viewpoint, it would still be flying from west to east.

On the screen, the plane flies left to right exclusively, because on a conventional map, New York is on
the left and London is on the right. If you showed the plane going right to left, the audience would think it
was heading for Los Angeles. In fact, the actual plane you videotaped could be flying in any old direction,
as long as you showed it pointed left to right. In short, screen direction has no relationship to real
directions.

About Screen Direction

Screen direction does, however, have a powerful relationship with the borders of the picture.

When two actors are having a conversation, you’ll notice that, usually, one is looking more toward the
right edge of the frame and the other is looking left. Though the camera position may change from shot to
shot, the looks remain consistent: one right, one left.

But if the director moves the camera to the opposite side of an imaginary line between the two actors,
their looks reverse direction. The right-looker’s now looking left, and vice versa, even though in real-world
geography, the two actors have not budged a single inch. You can see the effect in figure 2.

Who cares whether someone’s look direction changes? The audience, that’s who, because they’re
subconsciously locating the actors in the two dimensions of the video screen, not the three dimensions of
the real world. When an actor’s screen direction switches, the viewer perceives a distracting jump.

In action sequences, too, screen geography is more important than real-world geography. If a person is
going somewhere, the director tries to keep him moving consistently screen-left to screen-right or screen-right to screen-left.

Suppose, for instance, your actor walks one block south along a sidewalk, turns east for one block, then
turns and walks north up the next street. In the real world, the actor reverses direction completely by
walking first south and then eventually north. But the screen direction always remains left to right. Check
this out in figure 3.

Screen direction delivers important cues to the audience. Suppose you’re cutting back and forth between
two moving people:

If they’re both moving in the same screen direction, then the audience concludes that B is following A
or, at least, headed for the same destination.

If they’re moving in consistently opposite screen directions, the audience senses that they’re moving at
the same time, but bound for different destinations.

If A’s screen direction keeps changing while B’s remains constant, the audience feels that A’s doing a
variety of different things, but B’s pursuing one continuous action.

Again, all these movements are with respect to the screen borders, not the geography of the actual
world.

And here is where the Starship Enterprise makes its brief guest appearance, because if we gauge
movement in relation to the frame, then you can make a fixed object appear to move by moving the frame
(camera) instead. For the original TV series, the model of the Starship Enterprise simply hung before a
blank backing while the camera moved around it. It was really not the starship but the camera that boldly
went where none had gone before.

Both Tiny and Immense

We said that in the real world, space extends beyond your ability to see, and you have no say about
what that space may contain.

In the video universe, the exact opposite is true, because the borders of the video screen do far more
than provide a frame of reference for movement. They actually define the frontiers of the entire video
universe. And they do this by proclaiming two opposites at once: the video universe is both as limited as
the screen itself and as limitless as the videomaker says it is. You could call these contrary definitions
immediate and virtual.

The immediate rim of the video universe is the screen border; nothing exists outside the frame. In the
real world, the sailing ship is just a piece of carved railing with a blue sky drop behind it, Alan Ladd is
standing on a box to kiss the leading lady because he’s so short, and a bored studio crew of 20 are spying
on their passionate embrace.

But however real the edge of the set, the box on the floor, the crew and the equipment may be in the real
world, they don’t exist in the video world because they’re all outside the magic frame. It’s this principle–the
frame defines the world–that makes moviemaking possible.

On the other hand, it’s the principle of the virtual movie world that makes the process convincing. That
principle states that if you imply or suggest something beyond the edge of the screen, then it too
exists, even if we can’t see it.

How does it work? A wind machine ruffles the lady’s curls, a piece of fabric waved in front of a
spotlight simulates sunlight through a billowing sail, and the camera rocks gently from side to side. Result:
a tacky little rail and blue backing become a man-o'-war at sea–all because of what the audience infers
about the briny world beyond the edge of the image.

A Matter of Scale

We said that real space has a fixed scale. In other words, everything remains the same size. But this is
not true in the video universe. Here, things can shrink and grow like Alice.

Several model spaceships played the role of the Enterprise, ranging from little more than a foot in length
to well over ten feet. On screen, they all look like the same object because in this world, there’s no fixed
scale.

You can also vary scale by changing the camera’s height, the distance from camera to object, or both.
That’s how TV commercials entice children into buying Humongous Killer Masters of the Cosmos
dolls that turn out to be three inches high. It’s also how makers of cheap science fiction movies turn
lizards into hilariously phony dinosaurs.
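
A rough rule of thumb explains why this works (the numbers below are invented purely for illustration): for a given lens, an object's size on the screen is proportional to its real size divided by its distance from the camera.

\[
\text{screen size} \;\propto\; \frac{\text{real size}}{\text{distance from camera}},
\qquad
\frac{3\ \text{in}}{6\ \text{in}} \;=\; \frac{72\ \text{in}}{144\ \text{in}}
\]

So a three-inch doll six inches from the lens fills the frame exactly as a six-foot hero does from twelve feet away. The flat screen, having no depth of its own, can't tell the two apart.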

You can even change scale in a single object by forcing perspective. I recall the porch of a sound stage
bungalow that was only ten feet wide. The end nearest the camera, where the actors appeared through the
front door, was full-size, but the other end was not even half normal height. Since the lines of the roof, the
railings, the door, the windows and the floor all converged toward the miniature far end, the effect was that
of parallel lines receding into the distance. In the real world, the effect looked silly; but in the two-
dimensional world on the screen, the bungalow appeared to be 30 feet wide. To see how this works, see
figure 4.

Where Is Everything?

The last major rule of screen space is that there is no fixed geography. This might be the most useful
rule of all because it allows you to do all sorts of impossible tricks.

First, there are no fixed distances. A wide-angle lens can stretch the space in a house trailer until it looks
like a mansion. This trick is done in studios all the time to make small sets look bigger.

Telephoto lenses have the opposite effect, squeezing the depth out of the picture to make distant things
look closer. There’s a famous shot of a train bearing down on a car stuck on the tracks. The train comes so close that it seems certain to destroy the car, yet at the last minute it switches to a parallel track and roars harmlessly
by.

How could it possibly do that when it seemed only three feet away from the car? Because it was actually
30 feet back, with the hidden switch in between. The telephoto lens did the trick.
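
The same rule of thumb shows why those 30 feet vanish (again, the distances here are invented for illustration). What the flat screen registers is the ratio of distances, not the distances themselves. If the telephoto camera sits, say, 300 feet from the car, the train waiting 30 feet behind it appears at about

\[
\frac{300\ \text{ft}}{330\ \text{ft}} \;\approx\; 0.91
\]

of the size it would have if it stood right at the car. That's a difference of less than ten percent, which the eye reads as "practically touching." Move the camera in close with a wide lens and that same 30 feet would look like a yawning gap.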

My students exploited this effect for a class exercise. They shot footage of two young men engaged in a
furious fist fight. The effect was dismayingly convincing until they moved the camera 90 degrees to reveal
that the two combatants were standing three feet apart and harmlessly punching the air. From the side,
however, the telephoto lens "removed" that three feet and placed the sluggers toe to toe. To see how they
did this, see figure 5.

You can also change screen space by combining different shots. For another class exercise, my students
shot three men chasing a fugitive along a third-story balcony with the ground and street visible through the
railings. In the second shot, the camera, recording the fugitive’s point of view, rushed right up to the rail at
the balcony’s end, again emphasizing the ground far below.

Next, we see the man vault the railing and fall out of frame. Hold the now-empty shot a beat, then cut to
a high angle of the lawn three stories below. Another beat, and then the man tumbles into the frame, rolls,
rises, and looks upward. We cut to his point of view: a worm’s eye angle of his frustrated pursuers at the
railing fifty feet above. When viewed, this edited sequence drew gasps from the audience, all of whom
knew the "real-world" geography of that classroom building.

The trick? Instead of vaulting the third-floor railing, the actor really used a ground-level cinder-block
wall. It was high enough so that the videomaker could frame empty blue sky beyond it but low enough so
that the actor could drop harmlessly on the other side. The camera person placed the top of the wall just
barely below the frame line, so you couldn’t see that this was not really a railing.

That out-of-place wall brings up another way to control video space: by juxtaposing two different
locations to make them one. Doors are a common way to use this trick. An actor opens an outside door on
location, goes through it, and then closes a sound stage door behind him. Though the two shots may be
made miles apart in space (and weeks apart in time), cutting them together brings the spaces together and
turns two different doors into one.

In this way, the fine British actor Derek Jacobi was able to play a character in Los Angeles without ever leaving London. In the film Dead Again, the Jacobi character visits the Los Angeles apartment of
our hero, Kenneth Branagh. Though we see location shots of the apartment exterior, Jacoby is never in
them. He appears only as he comes through the front door into the apartment interior–an interior built on a
British sound stage.

In addition to altering space by cutting together shots of distant points, we can actually put different
places on the screen in a single shot by combining two separate images. In films, this means process or
matte shots; in video, it means chromakey.

To combine two physical places, simply shoot actors in front of a chromakey background color and then
electronically matte them into any other footage you choose.
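
If you're wondering what "electronically matte" actually amounts to, here's a bare-bones sketch of the idea in Python with NumPy. It's a toy nearest-color key, not the circuitry inside a real switcher or the algorithm in any particular editing program, and the color and threshold values are placeholders:

    import numpy as np

    def chroma_key(foreground, background, key_color=(0, 255, 0), threshold=100):
        """Swap in the background wherever the foreground is close to the key color.

        foreground, background: HxWx3 uint8 images of the same size.
        key_color: the backing color to knock out (pure green here).
        threshold: how near a pixel must be to the key color to count as backing.
        """
        fg = foreground.astype(np.int16)  # avoid uint8 wrap-around when subtracting
        key = np.array(key_color, dtype=np.int16)
        # How far each pixel is from the key color
        distance = np.linalg.norm(fg - key, axis=-1)
        # The matte: True wherever the backing shows through
        matte = distance < threshold
        composite = foreground.copy()
        composite[matte] = background[matte]  # reveal the other image there
        return composite

Feed it one frame of your actor in front of the green backing and one frame of the scene you want behind them, and every pixel near enough to pure green is replaced by the corresponding pixel of the other image. Real keyers soften the matte's edges and clean up green spill, but the principle is exactly this: the backing color tells the electronics which of two pictures to show at each point on the screen.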

By combining objects of very different sizes, you can also use chromakey to alter scale. In heartfelt
homage to Plan 9 From Outer Space, my students built a cheesy flying saucer out of two eight-
inch pie pans and hung it by a thread before our chromakey backing. Then they keyed their saucer into
footage of the campus. Watching the result on a monitor, they were able to pan, tilt, and zoom with the pie
plates to make their saucer appear 100 feet across as it soared and hovered over suitably terrified
students.

The Four Big Ideas

So that’s our mini-tour through the four big ideas of movie space. These principles are as important to
the video universe as Newton’s laws of motion are to the real one. Just remember:

  • In the video world, there’s no depth, only width and height.

  • In the video world, anything real that’s outside the frame does not exist; but at the same time, anything
    merely suggested does exist.

  • In the video world, objects have no fixed scale; they’re whatever size the videomaker makes them.

  • In the video world, objects have no fixed position; they’re wherever the videomaker puts them.

Master these four big ideas and you can do almost anything in that magic world inside the screen.

Good shooting!
