Dobrian’s early interactive compositions

As computer processing speed increased in the 1980s and 1990s, it became clear that a computer program could compose and synthesize sound instantaneously, “in real time”, meaning “without noticeable time delay”. Composing in real time is essentially what people often mean by the word “improvisation”: a person (or computer) makes up music and performs it at the same time. However, improvisation between two or more musicians involves the important components of listening to the music others are making, reacting in real time to what others do, and interacting with them. Can computer improvisation incorporate these apparently very human traits of listening, reacting, and interacting?

Artificial intelligence is defined by some as having the quality or appearance of intelligent behavior, of successfully imitating human intelligence even if lacking the awareness and understanding that humans have. Similarly, computer perception, cognition, and reaction/interaction can be successfully emulated to the point where we might say the computer exhibits the qualities of those abilities, even if we know it is not actually “hearing” or “interacting” the same way humans do. For this sort of emulation of interaction, we might say that the computer exhibits “interactivity”, the quality of interacting, even if its way of doing so is different from that of a human.

In the 1990s I began to focus less on music in which the computer composed and performed/synthesized its own composition, and more on music that involved some sort of interactivity between the computer and a human performer in real time. In order for the computer to seem interactive, its program has to include some rudimentary form of perception and cognition of the musical events produced by the human, and it must also have some degree of autonomy and unpredictability. (If these factors were absent, the computer would act in a wholly deterministic way, and could not be said to be truly interactive.)

In the composition There’s Just One Thing You Need To Know (1998) for Disklavier, synthesizer, and interactive computer system, the notes played by a human pianist are transmitted via MIDI to a computer program that responds and contributes musically in various ways.
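
To give a concrete sense of the mechanics, here is a minimal Python sketch of this kind of MIDI note handling, using the mido library; the piece itself was realized in Max, and the response logic shown is purely illustrative, not the piece’s actual behavior.

    import mido

    # Illustrative only: the actual piece was implemented in Max.
    # Read note messages from the Disklavier and answer each one
    # with a simple synthesizer response.

    def respond(note):
        # Hypothetical response: echo the note a major third higher.
        return min(127, note + 4)

    with mido.open_input() as piano_in, mido.open_output() as synth_out:
        for msg in piano_in:
            if msg.type in ("note_on", "note_off"):
                synth_out.send(msg.copy(note=respond(msg.note)))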

The piece is composed as a “mini-concerto” for the Disklavier piano, which is accompanied by a Korg Wavestation A/D synthesizer controlled by Max software interacting automatically with the performer in real time. The conceptual and musical theme of the piece is “reflection”. The music composed for the pianist requires — almost without exception — symmetrical hand movement; one hand mirrors precisely the position of the other, albeit often with some delay or other rhythmic modification. Even the music played by the computer is at any given moment symmetrical around a pitch axis. The human and computer performers also act as “mirror images” or “alter egos”, playing inverted versions of each other’s musical material, playing interlocking piano passages in which they share a single musical gesture, and reinforcing ideas presented by the other.
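
The pitch inversion underlying this mirror symmetry is simple arithmetic: a pitch n semitones above the axis is reflected to n semitones below it. A brief Python sketch (not the actual Max implementation):

    def invert(pitches, axis):
        # Reflect each MIDI pitch around the axis pitch: an interval of
        # n semitones above the axis maps to n semitones below it.
        return [2 * axis - p for p in pitches]

    # Example: a short gesture mirrored around middle C (MIDI 60).
    gesture = [60, 62, 64, 67]        # C, D, E, G
    print(invert(gesture, axis=60))   # [60, 58, 56, 53] -> C, Bb, Ab, F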

The computer plays three different roles in the piece. In one role, it is a quasi-intelligent accompanist, providing instantaneous synthesizer response to the notes played by the pianist. In a second role, the computer acts as an extension of the pianist’s body—an extra pair of virtual hands—playing the piano at the same time as the pianist, making it possible for the piano to do more than a single pianist could achieve. In a third role, the computer is an improviser, answering the pianist’s notes with musical ideas of its own composed on the spot in response to its perceptions of the pianist’s music.
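
One way to picture the program’s overall structure is as a dispatcher that routes each incoming note to whichever role is active at that point in the piece. The following Python sketch is hypothetical; the role divisions come from the description above, but the response intervals and random choices are invented for illustration.

    import random

    def accompany(note):
        # Role 1: immediate synthesizer response to the played note.
        return [note - 12]                 # e.g., double an octave below

    def extra_hands(note):
        # Role 2: extra notes sent back to the Disklavier itself,
        # beyond what one pianist's two hands could reach.
        return [note + 12, note + 19]      # octave and twelfth above

    def improvise(note):
        # Role 3: an answer of the program's own invention.
        return [note + random.choice([-3, 2, 5, 7])]

    ROLES = {"accompanist": accompany,
             "extra_hands": extra_hands,
             "improviser": improvise}

    def respond(note, role):
        return ROLES[role](note)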

The title comes from a statement made by composer Morton Feldman. “There’s just one thing you need to know to write music for piano. You’ve got a left hand, and you’ve got a right hand. [gleefully] That’s ‘counterpoint’!”


Another work composed shortly thereafter is Microepiphanies: A Digital Opera (2000), an hour-long multimedia theatrical and musical performance in which the music, sound, lights, and projections are all controlled by computer in response to the actions of the live performers onstage, with no offstage technicians running the show.

The performance is conceived as a satire of the tropes and clichés commonly found in performances of so-called interactive music at the time. The performers describe their activities to the audience (sometimes deceitfully) as they perform music with various unusual technological devices. Because the technology occasionally seems to malfunction, or functions mysteriously, it’s difficult for the audience to know the true extent to which the computer system is actually interactive. And because it’s made clear that there are in fact no offstage technicians aiding the performance, the apparent musical and interactive sophistication of the computer seems at times magical.


A couple of years later I had the opportunity to spend a year living in Seoul, Korea. I was interested to see how interactive computer music could be used in the context of traditional Korean music, not simply borrowing the sounds of Korean music as a sort of musical exoticism, but seeking a true symbiosis between two apparently disparate musical worlds: the traditional music of an Asian nation and the modern technological music practiced in the West.

This project required that I study traditional Korean classical music as seriously as I could, in order to be properly knowledgeable about and respectful of that music as I composed the music and designed software for interaction with a live musician. Because I had previously done a series of interactive pieces involving the flute, I chose to work with the Korean bamboo flute known as the daegeum. I was helped by Serin Hong, at that time a student of traditional Korean music at Chugye University specializing in daegeum performance; Serin would play the music I composed for the instrument, and would give me criticism on any passages that were not sufficiently idiomatic or playable.

I eventually wrote a complex interactive computer program and composed a thirteen-minute piece titled Mannam (“Encounter”) (2003) for daegeum and interactive computer system, which was premiered by Serin Hong at the 2003 Seoul International Computer Music Festival.

The computer is programmed to capture expressive information (pitch and volume fluctuations) from the live daegeum performance, using pitch, loudness, and timbre data to shape its sound synthesis and real-time processing. The computer modifies the sound of the daegeum in real time, stores and reconfigures excerpts of the played music, and provides harmonic accompaniment in “intelligent” response to the daegeum notes. The daegeum music is composed in an idiomatic style, and leaves the performer considerable opportunity for rubato, ornamentation, and even occasional reordering of phrases, in order to respond to the computer’s performance, which is different every time the piece is played.
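
As a rough illustration of what continuous amplitude tracking involves, one common building block is a one-pole envelope follower, whose smoothed output can then be mapped onto a synthesis or processing parameter. This Python sketch assumes numpy and is a generic simplification, not the strategies actually used (those are described in the article cited below).

    import numpy as np

    def envelope(signal, sr, attack=0.01, release=0.1):
        # One-pole envelope follower: smooth the rectified signal with
        # separate attack and release time constants (in seconds).
        signal = np.abs(np.asarray(signal, dtype=float))
        a = np.exp(-1.0 / (attack * sr))
        r = np.exp(-1.0 / (release * sr))
        env = np.zeros_like(signal)
        level = 0.0
        for i, x in enumerate(signal):
            coeff = a if x > level else r
            level = coeff * level + (1.0 - coeff) * x
            env[i] = level
        return env

    # The envelope can then drive a parameter in real time, e.g. mapping
    # louder playing to denser processing (the mapping is illustrative).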

The techniques I developed for tracking the performer’s nuances are described in detail in “Strategies for Continuous Pitch and Amplitude Tracking in Realtime Interactive Improvisation Software”, an article I wrote for the 2004 Sound and Music Computing conference in Paris.
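
For readers unfamiliar with the underlying idea, a continuous pitch estimate can be derived frame by frame from the autocorrelation of the signal, picking the lag with the strongest self-similarity within the instrument’s range. The sketch below, assuming numpy, is a generic simplification, not the strategies described in the article.

    import numpy as np

    def estimate_pitch(frame, sr, fmin=100.0, fmax=1000.0):
        # Rough autocorrelation pitch estimate for a single audio frame;
        # a generic simplification, not the article's actual method.
        frame = np.asarray(frame, dtype=float)
        frame = frame - frame.mean()
        corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
        lo, hi = int(sr / fmax), int(sr / fmin)
        lag = lo + int(np.argmax(corr[lo:hi]))
        return sr / lag  # estimated fundamental frequency in Hz

    # e.g., estimate_pitch(one_frame_of_2048_samples, sr=44100)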