Dreams Per Second: A History of the Frame Rate

On cinema’s ability to conjure reality at a paradoxical remove, from Notebook Issue 7.
Bilge Ebiri

This piece was originally published in Issue 7 of Notebook magazine as part of a broader exploration of the unfilmable. The magazine is available via direct subscription or in select stores around the world.

FLIP AND FLICKER

As a child, one of the only pleasures of having a big, thick school textbook was the opportunity to use one corner of it as a canvas. The pages were plentiful and dense enough that you could draw individual images on them and then flip through the book to create the illusion of movement. The marvel of watching my stick figures awkwardly walking and flying and jumping across the pages of my ersatz flipbook was sometimes as enchanting as watching actual movies in a theater. In part, this was because I had some rudimentary control over these images: I could turn the book upside down so that a flight would become a fall; I could flip the pages a bit more gradually and get slow motion, extending the blank pauses in between the frames. It was all so basic, so simple, and yet it gave me space to dream.

This was a common bored-in-class activity for kids of my analog generation. Maybe it still is, for some. Moving images are so ubiquitous now that perhaps we don’t dwell much on what causes them anymore, the ocular phenomenon known as “persistence of vision”: Our eyes are quickly shown an image, then another, then another, in rapid succession, until we eventually perceive movement. Some call it a flaw in human perception—a flashed image remains in our mind for a split second after it’s gone—and maybe it is. But upon this flaw an entire art form was born, and took over the world, relaying visions to us on filmstrips running through projectors at 24 frames per second.

If we hold up an individual celluloid strip in the air, we can see the frames in question—each ever-so-slightly different. But we’re not seeing the full picture. The projector itself doesn’t just show us one frame and then another; via the shutters built into the mechanism, it also gives us additional frames of darkness in between. These frames mask the movement of the film as it is being pulled via sprockets through the mechanism. They also mitigate the flickering effect this process creates. There’s a fascinating paradox at play here: The history of cinema technology has been a push–pull between the flicker created by motion pictures and attempts to hide that flicker. “Out of the two hours you spend in a movie theater, you spend one of them in the dark,” the legendary filmmaker Chris Marker once said. “It’s this nocturnal portion that stays with us, that fixes our memory of a film in a different way than the same film seen on television or a monitor.”
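The arithmetic behind that hidden flicker is simple enough to sketch. Here is a minimal illustration, mine rather than the article's, of how a multi-blade shutter multiplies the flicker frequency, on the assumption that each blade interrupts the light beam once per frame:

```python
# Sketch (not from the article): how a projector's multi-blade shutter
# raises the perceived flicker frequency above the film's frame rate.

def flicker_hz(fps: float, shutter_blades: int) -> float:
    """Each blade interrupts the light once per frame, so the screen
    flashes fps * blades times per second."""
    return fps * shutter_blades

# A 24 fps projector with a two-blade shutter flickers at 48 Hz;
# a three-blade shutter pushes it to 72 Hz, comfortably above the
# rate at which most viewers stop consciously perceiving flicker.
print(flicker_hz(24, 2))  # 48.0
print(flicker_hz(24, 3))  # 72.0
```

This is why the flicker is felt rather than seen: the extra dark intervals push the flashing above the eye's fusion threshold while the film itself still advances only 24 times a second.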

In the early, hand-cranked days of the silents, film moved through both cameras and projectors at varying speeds. As camera technology advanced, 16 fps became an unofficial standard, primarily because it provided smooth enough motion at the lowest (and thus cheapest) possible speed. In 1899, Edward Turner and his financier Frederick Marshall Lee patented the Lee–Turner color process, which required the film to run at 48 fps instead of 16, to accommodate the three-color rotary filter that operated alongside the shutter. The process proved unstable, however—the projector required three different lenses to be lined up to convey the colors, which made for unsteady images—and was eventually simplified to create Kinemacolor, which only had two colors on its filter and thus required 32 fps instead of 48. That too was imperfect—two colors couldn’t replicate the full spectrum, and the equipment was unwieldy and expensive—but Kinemacolor did manage to produce hundreds of pictures over its seven-year existence from 1908 to 1915.

Though Alan Crosland’s The Jazz Singer (1927) gets credited as the first “talkie,” it was preceded as a sound feature by his Don Juan (1926), alongside dozens of shorts. When sound arrived on the scene, 24 fps famously became the industry standard. The sound systems that dominated the market, Warner Brothers’ Vitaphone and Fox’s Movietone, ran at 24 fps because engineers had determined that it was the lowest (and, again, cheapest) frame rate possible without adversely affecting the quality of the sound. (Other experimental sound film systems had used lower frame rates: Radio Corporation of America [RCA] had been developing a 22 fps technology, while an earlier effort, Phonofilm, had run at 20 fps.)

Thus, 24 fps remained fixed for most of cinema history—so much so that it entered the lexicon as a poetic term for the medium itself: “The truth, 24 times a second,” in the words of Jean-Luc Godard. But what was it about the flicker and that imperceptible gravity of film movement that gave cinema such splendor? If the arts of the twentieth century represented a break with tradition and sought new ways of seeing, film was the ultimate modernist art, in both content and form. Watching it, we were watching technology at work: The images onscreen represented both an illusion and its inner workings laid bare. For many viewers, the distant whirr of the projector in the back of the auditorium added to the experience, as if they were all part of the machine.

Schematic diagram for Arnulf Rainer (Peter Kubelka, 1960).

BREAKING THE FRAME

There may be a scientific explanation for that weirdly mesmerizing effect that flicker has on viewers, rooted in a neurocognitive phenomenon called entrainment. The term generally refers to two independent rhythmic processes falling into sync; in this case, it describes the way the human mind can synchronize with repeated external stimuli. Not unlike the rapid beats in a piece of dance music (or, for that matter, the drums that accompany an army marching to war), the flickering image fires our neurons and puts us in something of a trance state. It hypnotizes us, at least a little bit. When we talk about movies as waking dreams, we’re not being all that hyperbolic.

Maybe that’s why the avant-garde was so fascinated by the physical and mechanical properties of film, and with frame rates and flicker specifically. Jonas Mekas turned his home movies into expansive, rapid-fire poems by under-cranking a 16mm Bolex camera, which resulted in a wash of faces, places, and events echoing the mad rush of time itself. Robert Breer’s experimental animations played with abstract forms, letting shapes transform in split-second flashes as they glided across the screen—thereby combining the smooth, “conventional” effects of animation with the jagged rupture caused by sharp changes in individual frames.

Peter Kubelka’s Arnulf Rainer (1960), considered by many to be the first “flicker film,” put together solid black and white frames in accelerating and decelerating succession, creating waves of strobe-like effects. The seven-minute film starts in delightfully aggravating fashion (the repetitive white noise on the soundtrack doesn’t help—or helps immeasurably, depending on your point of view) before lulling us into a strange reverie, and then presents longer passages of black or white, prompting us to start anticipating when the strobing will return. Tony Conrad’s The Flicker (1966) also alternates between solid black and white frames with increasing frequency; watching it, we might feel like we’re stuck inside a projector. That movie opens with a handwritten warning, absolving the filmmakers and the exhibitors of “all liability for mental or physical injury” that might be caused by the screening. The warning is playful, but Conrad was genuinely worried that the strobing effect might cause epileptic seizures in some people. The avant-garde looked to change not just our perception but also how we thought about perception. As Paul Sharits put it when describing his 1966 film Ray Gun Virus, made up of black, white, and colored frames that flicker at differing rhythms: “Goal: the temporary assassination of the viewer’s normative consciousness.”

An assassination, perhaps, but it’s also an exaltation. What comes through in so many of these experimental works is a sense of returning to something fundamental, organic, natural, and mystical, via the extreme manipulation of technology—machines used for almost shamanic processes. For Sharits’s N:O:T:H:I:N:G (1968), in which color frames are intercut with occasional flashes of a chair and a picture of a light bulb, the director based the film’s rhythms—alternating between rapid and slower flickers toward a placid center, then accelerating back out—on “the Tibetan Mandala of the Five Dhyani Buddhas.” To make Mothlight (1963), Stan Brakhage pasted moth wings, leaves, grass, and petals onto strips of editing tape, and then printed them onto film. The resulting film displays rapidly changing abstract fields of organic material, a startling combination of nature and machine. Brakhage said he had been moved by the sight of moths flying directly, maybe even cavalierly, into the flame of a candle. He saw his work as giving them new life—in essence, reanimating them. But the film also drew an evocative (and impressionistic) parallel with the artist’s own futile role in a structured, transactional, even parasitic world.

Brainstorm (Douglas Trumbull, 1983).

THE BLEEDING LINES OF VIDEO AND FILM

Cinema is a medium forever caught between art and technology, between dreamers and engineers. And just as experimental artists embraced the unique properties of the film frame and how it moved, many in the mainstream weren’t happy with the flicker. In 1986, special effects visionary Douglas Trumbull (2001: A Space Odyssey [Stanley Kubrick, 1968], Close Encounters of the Third Kind [Steven Spielberg, 1977]) demonstrated a high frame rate (HFR) format called Showscan, which he’d developed after deciding that 35mm projected film was too fuzzy and too rough—that it needed to be clearer, smoother, more realistic. (Trumbull settled on 60 fps shot on 65mm, after studies showed that audience members’ emotional responses were more heightened at that speed and size.) He had intended to demonstrate Showscan in specific scenes of his 1983 thriller Brainstorm, about a company that records people’s sensory experiences and then sells them to others, but the studio decided that the logistics and the cost of projecting a wide-release film at differing speeds were impractical.

Other filmmakers have tried to develop similar methods, but there was, and remains, a fundamental problem with HFR technologies: Even when they’re shown on celluloid, the images look like video, which evokes all sorts of things that are decidedly not cinematic—like news reports, soap operas, sports broadcasts, and home movies of birthday parties.

The two formats bled into one another for decades. Experiments with shooting video for cinematic projection had been a regular occurrence in the world of the avant-garde, and there had even been some mainstream attempts in the 1980s, with Michelangelo Antonioni’s The Mystery of Oberwald (1980) and Peter Del Monte’s Julia and Julia (1987), the latter the first feature narrative to be shot in then-nascent high-definition (HD) video. In all these cases, however, the footage was transferred to 35mm and projected at the traditional speed for theatrical distribution. (While film has a frame rate, video has a “refresh rate,” denoted in hertz—and its rate has always been higher than film’s. Today’s HDTVs generally run at 60 Hz or 120 Hz, though the technology does keep advancing.) Even when these video images were seen in 35mm, a “video-ness” remained palpable: a strangely cheapened, phony quality in the action and the performances. When video was used to create ostensibly cinematic images and scenes, something felt off to viewers—as if there were a disconnect between medium and content.

The use of video for mainstream narrative features really took off in the late 1990s. Video cameras with higher resolutions had made such efforts more viable, as evidenced by the initial buzz around the Danish Dogme 95 movement (which took the Cannes Film Festival by storm in 1998, with Thomas Vinterberg’s The Celebration and Lars von Trier’s The Idiots) as well as Eduardo Sánchez and Daniel Myrick’s box-office hit The Blair Witch Project in 1999, a horror movie shot documentary-style with prosumer cameras. These films embraced the unique formal properties of video. They manipulated the vernacular of reality, those associations that had previously prevented video from ever feeling cinematic. Some of these movies were shot handheld. Some utilized surveillance-camera aesthetics. Some played off the style and textures of reality television. Some even left in glitches that broke the fourth wall. (At one point in The Celebration, one of the actors hits the camera; Vinterberg kept it in, because, well, he could.) There was art here, and it was an art that felt organic to video.

These stylistic transformations—shaky cameras, rough aesthetics, quick editing—seemed to become moot as the years wore on, however, because eventually HD video took over almost completely from 35mm. (The process had actually begun as early as 1999, with the release of George Lucas’s first Star Wars prequel, The Phantom Menace, partially shot on HD cameras but released in 35mm. Over the course of the next two decades, video would become the industry standard in both production and exhibition, with around 98 percent of the world’s theater screens having converted to digital by the end of 2017.) No longer did filmmakers have to recalibrate their styles to accommodate the pixelated agita of video.

Ironically, one of the innovations that made HD cameras finally viable for mainstream widespread use was the development of progressive scan technology, which allowed video to capture footage that could replicate motion at 24 fps. Although video’s traditional frame rate was higher, the rate had to effectively go back down to the age-old motion-picture standard in order to create images that audiences would accept. In other words, the cameras had to bring back an imperceptible bit of that old flicker.
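One consequence of that return to 24 fps is that a display can present it with a perfectly even cadence only when its refresh rate is an exact multiple of the frame rate. A small sketch of that check (the helper is hypothetical, not from the article):

```python
# Sketch: a display shows 24p material without uneven cadence only when
# its refresh rate is an integer multiple of the frame rate.

def repeats_per_frame(refresh_hz: int, fps: int):
    """Return how many refreshes each frame occupies if the cadence
    divides evenly, or None if it cannot."""
    return refresh_hz // fps if refresh_hz % fps == 0 else None

print(repeats_per_frame(120, 24))  # 5: each frame shown 5 times, evenly
print(repeats_per_frame(60, 24))   # None: needs an uneven 3:2 cadence
```

This is partly why 120 Hz panels are friendlier to film material than 60 Hz ones: 24 divides 120 evenly, so the old flicker’s rhythm survives intact.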

Billy Lynn’s Long Halftime Walk (Ang Lee, 2016).

THE BATTLE OVER MOTION

In recent years, other filmmakers have pushed against the supremacy of 24 fps, but newfangled attempts at HFR filmmaking continue to crash and burn. Peter Jackson shot his Hobbit trilogy (2012–14) at 48 fps, and while the movies were financially successful, the format didn’t catch on. (Most theaters weren’t equipped to screen in HFR, and those who did see the films in Jackson’s preferred version, particularly critics, didn’t like the look. “At first … I thought I was watching a video game: pellucid pictures of indistinct creatures,” wrote Richard Corliss of Time magazine. “After a while my eyes adjusted, as to a new pair of glasses, but it was still like watching a very expensively mounted live TV show on the world’s largest home TV screen.”) Ang Lee shot both his home-front war drama Billy Lynn’s Long Halftime Walk (2016) and the Will Smith action vehicle Gemini Man (2019) in 120 fps, convinced that the technology would transform cinema forever. The films looked dreadful. “The effect is like watching a Jason Bourne dinner-theater production in the grip of a migraine,” wrote A. O. Scott in The New York Times. “The performances feel slow and deliberate, and the hyper-clarity of the images undermines realism rather than enhancing it.”

The truth is that HFR does create a more realistic and detailed image—and therein lies its downfall. What we see onscreen is so realistic, in fact, that film acting, a specific style of performance that developed over more than a century, suddenly starts to look fake. So realistic, in fact, that, suddenly, an explosion loses its grandeur and looks more like someone set off a glorified firecracker. So detailed that, suddenly, we start to notice things we’re not supposed to notice. Cinema isn’t just the art of pointing a camera and shooting; cinema is the art of directing our attention, and with high frame rates, our attention dissipates in ways it’s not supposed to. Is it possible that we could have been conditioned to accept a different, higher frame rate as inherently cinematic if those Western Electric engineers in the 1920s had settled on 26 fps, or 36 fps, as the industry standard? Perhaps—maybe even probably. But what made cinema such an uncanny dream was precisely the vital edge it walked, between the obvious flicker and the subconscious one. Regardless, we have the language that we have.

James Cameron, another occasional proponent of HFR, decided against a uniform frame rate for his sequel Avatar: The Way of Water (2022), opting instead to release the picture at a variable rate, with certain scenes projected in HFR and others in regular 24 fps. Cameron clearly understands something fundamental that Lee and Jackson didn’t. In The Way of Water, HFR was generally used for heavy VFX shots in action scenes where clarity of movement was of paramount importance, while 24 fps was used for performance-driven scenes. I still preferred the entire film in 24 fps (which was how most theaters screened it), but the judicious use of HFR was far more tolerable; sometimes it was barely noticeable, which, after all, is the main intention of HFR.

There is an existential question at the heart of this, one the avant-garde would certainly appreciate as well: Should we know we’re watching a movie when we’re watching a movie? The advocates of HFR would probably answer no, but ironically enough their preferred format exposes the artifice even more than traditional frame rates ever could. In the end, maybe it all comes down not to the things we can see, but the things we can’t—those black frames that give us the invisible flicker of movies, and that slightly slowed-down, otherworldly quality of cinematic movement. A few years ago, David Niles, an engineer and pioneer in the development of HDTV, told me that he had once tested different frame rates on viewers, showing them the same footage in various formats. “We would take a scene between a couple of actors,” he said, “shoot it at 60 frames per second, or even 30 frames, and then shoot it at 24 and put it in front of audiences to see how they interpreted it. With 24 frames, people liked the actors better—they felt the performances were better. In reality, it was exactly the same thing.” Niles speculated that 24 fps conjured an “intellectual distance” between the audience and what they were watching. “The viewer imagines more,” he said.

His words speak to a truth that becomes clear when one looks at the history of moving pictures—at how the cinematic form has captured the imagination of visionary artists, humble storytellers, and all the mad types in between. When we watch a movie, maybe we don’t need to be inside the image, fully immersed in a hyperreal universe where everything seems no different from the world outside. Maybe it’s our slight removal from the screen that really gives us all that room to dream.

