What is interlacing?

Picture yourself home sick from school or work. For the first time in months, you flip on the TV in the middle of the day. You'd forgotten how rough daytime television is. All those game shows and soap operas look terrible, don't they?

Behind every disappointing TV show hides a historic pillar of broadcasting: interlacing. There’s a reason your favorite movies are so much more exciting to watch.

What is interlacing?

A transmitter that does the heavy lifting for us.

In the early days of broadcast media, engineers had a whole new problem to solve: finding the most economical way to deliver the same picture to millions of homes nationwide.

The industry's forerunner, theatrical film exhibition, used physical, progressive images rather than interlaced video: a reel of discrete frames on celluloid. Distributing broadcast media the same way was impractical, since it would have meant shipping an identical set of physical media to every family in the country, the opposite of what a broadcast medium is meant to do.

Transmitting only half of each frame at a time lightens the load. It also doubles what is called the vertical refresh rate of the video stream without compromising the apparent resolution. Otherwise, those producing the signal would have had to either drastically lower the resolution of their offering or broadcast a much bigger, heavier signal to begin with.
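
As a rough sketch of that saving, here is a back-of-the-envelope comparison in Python, using the NTSC-style numbers discussed later in this article; the exact accounting of blanking lines is ignored.

```python
# Back-of-the-envelope comparison of scan lines transmitted per second,
# using NTSC-style figures (525 lines per frame, a 60 Hz flash rate).
LINES_PER_FRAME = 525
FLASHES_PER_SECOND = 60

progressive = FLASHES_PER_SECOND * LINES_PER_FRAME        # a full frame every flash
interlaced = FLASHES_PER_SECOND * (LINES_PER_FRAME / 2)   # one field (half a frame) every flash

print(progressive)  # 31500 lines per second
print(interlaced)   # 15750.0 lines per second -- same flash rate, half the data
```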

How does interlacing work?

You can see the difference between this progressive frame and this interlaced frame caught between two moments.

Think of it like this: with progressive video, each displayed frame covers exactly one moment of the source sequence. An interlaced video frame does not. Instead, an interlaced frame is made of two half-frames, or fields; forgive us for mincing words, but the difference is profound.

The first field of any given frame pairs with the second field of the frame shown just before it. The second field of that frame pairs with the first field of the frame that comes immediately after. Each pair of fields, taken together, carries the full value of one original image.

Each interlaced frame, on its own, contains half of each of two consecutive frames from the original progressive source material. Persistence of vision marries these two offset half-images in our eyes, producing video quality that holds up while using far less signal bandwidth.
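
To make that pairing concrete, here is a minimal NumPy sketch; the function name progressive_to_interlaced and the toy constant-valued frames are our own illustration, not part of any broadcast standard. It weaves each pair of consecutive progressive frames into one interlaced frame, taking one field from each.

```python
import numpy as np

def progressive_to_interlaced(frames, top_field_first=True):
    """Toy illustration: weave each pair of consecutive progressive frames
    into one interlaced frame, so every output frame carries half of two
    different moments in time."""
    interlaced = []
    for earlier, later in zip(frames[0::2], frames[1::2]):
        woven = np.empty_like(earlier)
        if top_field_first:
            woven[0::2] = earlier[0::2]  # field 1: even lines from the earlier frame
            woven[1::2] = later[1::2]    # field 2: odd lines from the later frame
        else:
            woven[1::2] = earlier[1::2]
            woven[0::2] = later[0::2]
        interlaced.append(woven)
    return interlaced

# Four 6x8 "frames" whose pixel values are simply their frame index,
# so the line-by-line mixing is visible in the output.
frames = [np.full((6, 8), i, dtype=np.uint8) for i in range(4)]
print(progressive_to_interlaced(frames)[0])  # lines alternate between 0 and 1
```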

What are interlaced scan lines?


Field one and field two of a traditionally broadcast interlaced image.

Signal bandwidth strictly describes the media as it is conveyed; think of it as matching the size of the load to the width of the tunnel it has to pass through.

A film camera, or one recording to magnetic DV tape, naturally produces full, continuous frames one after another. To prepare that image for transit, each broadcast frame has to be broken down into smaller, simpler pieces that are easier to convert into an analog signal. Sending each original frame in its entirety would have been logistically impossible under the circumstances of the time.

Their solution: horizontal scan lines. Each horizontal scan line of the image was transmitted to a receiver, where the picture was reconstructed on the other end.

The NTSC standard divides each frame into 525 horizontal scan lines, with 262.5 belonging to each field. The field order determines whether the even or odd lines come first; usually, the even field is the first to be drawn at the signal's destination. Each field is drawn sequentially, from top to bottom.
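
As a sketch of how that division looks on a digitized picture, here is a small NumPy example; split_fields is just an illustrative helper, and the 480-line frame stands in for the visible portion of an NTSC picture.

```python
import numpy as np

# Broadcast totals, including blanking lines that never reach the screen.
LINES_PER_FRAME = 525
LINES_PER_FIELD = LINES_PER_FRAME / 2  # 262.5 lines per field

def split_fields(frame, even_field_first=True):
    """Separate a frame into its two fields by scan-line parity.
    Each field holds half the vertical resolution of the full frame."""
    even = frame[0::2]  # lines 0, 2, 4, ...
    odd = frame[1::2]   # lines 1, 3, 5, ...
    return (even, odd) if even_field_first else (odd, even)

frame = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)
first, second = split_fields(frame)
print(first.shape, second.shape)  # each field is (240, 640)
```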

When transmitting a progressive video signal, the same thing happens. The only difference is that every horizontal scan line belongs to a single continuous field, and that one field makes up the entire image.

Vertical refresh rate

One thing is true in a general sense: bandwidth doesn't come cheap. Transmitting data requires proportionately more resources as the amount of data grows and the physical reach of your transmission expands. Interlacing is one way to ease that burden while still delivering a broadcast picture big enough to be worth watching.

Flicker has plagued engineers since the inception of the industry. Many factors contribute to this aspect of the viewer experience, including the effective frame rate of the video and even the ambient lighting in the room where the viewer is watching.

The quality of the video signal is, of course, where the biggest difference is made. A flicker-free picture typically requires between forty and sixty large-area flashes of light per second; each flash occurs whenever a new image replaces the one that preceded it on the screen.

The vertical refresh rate describes how many of these changes occur over a given period of time. These changes trigger the phi phenomenon, the perceptual effect on which interlaced video relies.

As mentioned earlier, television's earliest days were constrained by the technology of the time. To stay within the limit of what could realistically be broadcast under those rudimentary conditions, television engineers had to find a way to refresh the picture more often without increasing the number of pictures sent over the air.

Fields per second and frames per second

Each field in the signal is interleaved with the one that follows it. The two are displayed in tandem but remain entirely separate in a technical sense; they are never rendered together into one picture before being shown. Our eyes still perceive the extra large-area flashes, however, even though the number of full pictures per second stays the same.

The engineers leading this effort understood that at least four hundred scan lines of resolution per frame were needed to produce a readable picture. In North America, NTSC is the only type of analog video signal our infrastructure supports at full scale, owing to the way our electricity is generated (at a frequency of 60 Hz) as opposed to most of the rest of the world (at 50 Hz).

Physically, the field rate of the signal is tied to the frequency of the alternating current powering the equipment that conveys it. This is where NTSC and PAL get their characteristic frame rates.

With that constraint in mind, an interlaced US signal transmitted at 60 Hz yields an effective frame rate of approximately 29.97 frames per second once received. By contrast, an interlaced PAL signal is perceived by the viewer at 25 fps.
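
The arithmetic behind those figures is straightforward; a quick sketch follows, where 60000/1001 is the slightly-below-60 field rate adopted for NTSC color, which is why the result reads "approximately 29.97" rather than an even 30.

```python
# Each displayed frame is woven from two fields, so the effective frame
# rate is half the field rate dictated by the local mains frequency.
ntsc_field_rate = 60000 / 1001   # ~59.94 fields per second (NTSC color)
pal_field_rate = 50.0            # 50 fields per second

print(round(ntsc_field_rate / 2, 2))  # 29.97 frames per second
print(pal_field_rate / 2)             # 25.0 frames per second
```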

The difference between fields per second and frames per second comes down to how those extra large-area flashes of light differ from the "real" divisions in time that separate each video frame at the moment of capture. As a result, the eye stays engaged with a video stream that appears far more dynamic than it actually is.

Although the true "resolution" of each image displayed on screen is exactly half that of the original, the loss will not unduly affect the audience under the right circumstances. Thanks to persistence of vision, the show goes on without missing a beat.

Common challenges associated with interlaced video

A side-by-side comparison of interlacing under several circumstances.

Visible scan lines are a hallmark of old-fashioned DV camcorders and archival material from the early days of mass media. These artifacts appear when interlaced footage has been manipulated after distribution, or in footage that has naturally degraded to some extent. The same can happen when digitally rendering video with certain forms of compression.

This can cause an unpleasant "combing" effect, leaving on-screen objects visually trapped between two adjacent positions. The effect is generally far more noticeable when the video is examined frame by frame. Objects moving quickly through the frame are the most likely to pick up artifacts like this, especially when the moving object contrasts sharply with the background behind it.

Deinterlacing video to restore it to its original progressive state can also produce these artifacts. One common cause is that the deinterlacing process assumed a field order that did not match the original signal's.
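
To illustrate why field order matters, here is a minimal NumPy sketch of the naive "weave" reconstruction; weave_deinterlace is our own illustrative helper, not a named API. If the top_field_first flag does not match how the source was encoded, the two fields land on the wrong scan lines and moving edges show the comb-like artifacts described above.

```python
import numpy as np

def weave_deinterlace(field_a, field_b, top_field_first=True):
    """Naive "weave": re-interleave two fields into one full frame.
    Assuming the wrong field order swaps the fields in time, which is
    what produces combing on anything that moved between them."""
    height = field_a.shape[0] + field_b.shape[0]
    frame = np.empty((height, field_a.shape[1]), dtype=field_a.dtype)
    if top_field_first:
        frame[0::2] = field_a
        frame[1::2] = field_b
    else:
        frame[0::2] = field_b
        frame[1::2] = field_a
    return frame

# Example: weave two 240-line fields back into a 480-line frame.
top = np.zeros((240, 640), dtype=np.uint8)
bottom = np.full((240, 640), 255, dtype=np.uint8)
print(weave_deinterlace(top, bottom).shape)  # (480, 640)
```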

When cutting corners is the right call

Interlacing is one of those inspiring stories of victory over the tyranny of nature's iron rules. When the laws of physics tell you to take it slow, it takes a special kind of change-maker to push their show through the pipeline anyway. And, damn it, did they ever do it.

So rarely in life are we allowed to take advantage of shortcuts like this one. The many modern applications of interlacing are a testament to the staying power of truly lateral thinking in any industry.

