Since the mid-1980s I’ve been designing, developing, and using real-time digital cinema systems that let me navigate digital and analog materials I’ve collected. My collection technique is that of a cinematic diary. I then load the materials into my system and record the process of examining them.
The Simultaneous Opposites Engine is the fourth such system I’ve programmed. I load two 720×1280 video clips into the system, which begins marching through both clips, jumping a couple of seconds ahead and then behind between each frame’s display. The video’s single-frame audio is also played back, in sync. The time span of each jump creates something of a temporal depth of field.
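As I understand the traversal described above, the engine alternates between a point a couple of seconds ahead of and a point behind a center position that itself marches forward one frame at a time. The actual engine is built in Max/MSP/Jitter; the following is only a minimal Python sketch of that index pattern, with all names and parameter values hypothetical:

```python
def jump_frame_traversal(n_frames, jump_frames, steps):
    """Yield frame indices that alternate ahead of / behind a moving center.

    n_frames    -- total frames in the clip
    jump_frames -- jump span in frames (e.g. 2 seconds * 30 fps = 60)
    steps       -- how many display frames to produce
    """
    indices = []
    center = jump_frames  # start far enough in that a backward jump is valid
    for step in range(steps):
        if step % 2 == 0:
            idx = center + jump_frames  # jump ahead
        else:
            idx = center - jump_frames  # jump behind
        indices.append(max(0, min(idx, n_frames - 1)))  # clamp to clip bounds
        center += 1  # the center marches forward one frame per display
    return indices

# Example: a 30 fps clip with 2-second jumps
seq = jump_frame_traversal(n_frames=3000, jump_frames=60, steps=6)
```

Here `seq` comes out as `[120, 1, 122, 3, 124, 5]`: two interleaved single-frame marches, one ahead of the center and one behind it. Playing each frame's audio slice at these indices would produce the "skipping" soundtrack discussed later in the thread.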
As the engine moves through the clips, I am able to deviate from, manipulate, and modulate both audio and video using three interfaces: a MIDI guitar, a MIDI foot pedal board, and the standard computer keyboard and mouse. Most of the interface programming I’ve done is to let me examine the visual and aural experience as it appears, repeating moments of interest, varying them to bring out their unique attributes, and jumping out of them once the interesting moments have been teased out.
At the moment I have about fifty short (from a couple of minutes to maybe 15 minutes) such traversals. They were all produced in real time, with no later editing or audio mixing. Starting with the first traversal, each clip reflects the state of the engine at the time it was made. The Simultaneous Opposites Engine is a work continually in progress, and as I extend and revise its functionality, the experience of its output changes.
The threads that led to my development of this particular engine are manifold.
In 1972 I created a film titled “Simultaneous Opposites”, which I scripted for single-framing, using a film camera panning across a protractor-like tool I made for the purpose. The film has up to six pans happening concurrently, all meshed together like a zipper. A video of it may be seen on my portfolio site: www.robertedgar.com.
Three years later, I combined footage shot in a similar manner with diary footage in a film called Intersticies. Working at Synapse in Syracuse, NY, I transferred the footage to video and mixed it electronically. An online version of this—containing electronic audio I composed at the same time—is also available at www.robertedgar.com.
Since Intersticies, I set that strategy aside, first focusing on Memory Theatres (Memory Theatre One, 1985, Apple //e and GraForth), and then creating real-time montage systems combining video and personal computers (Living Cinema, 1987, Targa board, video disc and Microsoft C; and SAND, or How Computers Dream of Truth in Cinema, 1994, Amiga, video board, MIDI guitar, AmigaVision and AREXX).
After a hiatus during which I concentrated on music, I built a performance system for real-time examinations of music that I’d written. This led to The Duchamp Examinations, 2006, MIDI guitar, VSampler 3 software, and Boomerang loop pedal.
At that time, it occurred to me that I could program a system for examining my own diary footage in real time, using the jump-frame technique I’d used in Simultaneous Opposites. However, instead of painstakingly single-framing it in one position, I could jump through the footage itself (both the visual and the audio), one frame at a time. After delaying a year while I created two new digital memory theatres, I have been developing the Simultaneous Opposites Engine since 2008. It is an ongoing work, and its development can be traced in the many short videos I’ve created with it since that time. The pieces I’ve submitted to MUBI exemplify its development.
The SOE is programmed in Max/MSP/Jitter, a wonderful digital media authoring environment maintained and distributed by Cycling ’74 (www.cycling74.com) of San Francisco.
Well, I like the general concept from what I understand of it (by the way, if you were to “pitch” this concept to a studio/distributor/marketer/etc., how would you describe it summarily? This may make it a little easier to grasp for the rest of us), but what irked me about a lot of the clips you posted on Mubi is that there’s a ‘skipping’ sound in almost all of them. Perhaps this was intentional (if so, then I would ask, “Why is it necessary?”), perhaps not, but it makes many of them nearly unwatchable (or forces one to mute the sound completely). So, my main question is, “What are you trying to accomplish with this approach?”
Perhaps I don’t understand all the intricacies of this, but just looking at it as an ‘outsider,’ I would suggest that perhaps some polishing/smoothing of the images/sounds would make a vast improvement. Again, perhaps you were going for a naturalistic thing here, but naturalism isn’t (contrary to popular belief) always the best route. You say there was “no later video editing or sound mixing” but is this wise? I’m sure I don’t understand what you’re trying to do, or at least, I hope I don’t.
I create and employ software engines to examine mediated artifacts forged at my zone of proximal development.
Dunno if that is helpful. But that’s what I do.
There is a cinematic tradition that comes through Ken Jacobs, Paul Sharits, Ernie Gehr, etc., and is seen again in YouTube video poops, that provides elements not unlike those in Simultaneous Opposites. This isn’t a narrative cinema, and for the most part not a painterly cinema, nor a music-video cinema.
For Simultaneous Opposites, the soundtrack is generated the same way the visuals are generated: playing back one frame at a time. The “skipping” sound is the sound of the single frame audio playing back. It is more than intentional, it is sculptural—it comes from the structure of the piece.
My aesthetic is unlike most other cinema aesthetics (though its sources are explicitly and consciously from Eisenstein’s montage formulations), as it is not crafted toward a specific shape and feel. These are the results of navigating through footage that I shot previously, using the spiraling, single-frame navigation I’ve programmed in the S.O. Engine. The results provide their own surface and syntax. While those results can be annoying, that is not at all my aim. My purpose is to produce a unique object for sensory contemplation.
One more note on this approach.
It seems to me that cinema is a lot like one’s thoughts… you kind of hold it in front of your eyes, like a model for what is mirrored in your mind. They are similar, a kind of balance, through the iris, some type of structural commons. One thing about the standard cinematic experience, though, is that while you’re in control of your thoughts (to varying degrees), the screen is independent, pre-programmed, and does not rearrange itself with your thoughts.
The Simultaneous Opposites Engine is optimized for a balanced malleability: as you see and/or hear, you notice, you have a synaesthetic experience, you react on the guitar and modify what is on the screen and audio, you anticipate what will happen, something happens and you notice; these two spaces imitate, respond to, and affect each other, through you. And while this is happening, you are suspended in an apperceptual state—when you’re dialed in.
I’ve really enjoyed what I have seen so far, and while I am completely ignorant of the methods you are using to create the work, I think the sound, as it is, is an integral part of the experience, and I have no idea why someone would suggest “polishing/smoothing the images/sounds” for easier viewing. I found the work to be hypnotic and a nice bit of visual/audio re-programming. The idea of re-processing, re-imagining past video memories live into a sort of meditation on their basic elements is fantastic. I will be watching more.
Thanks Tony. I think that if someone can’t find that meditative focus with these videos, they can be extremely irritating. Same video, different people.