Timeline

Here is a timeline of some of the significant technological innovations and musical compositions discussed in class.

1787 — The composition of Musikalisches Würfelspiel, attributed to Wolfgang Amadeus Mozart and considered one of the earliest examples of algorithmic composition; a roll of the dice was used to choose each measure of music from among a collection of pre-composed possibilities, and after enough dice rolls a full composition could be assembled from the chosen measures. Quadrillions of possible compositions could theoretically be generated by this system.
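A minimal sketch of the selection scheme in C, using the commonly cited figures of 16 measures with 11 possibilities each (the sums of two dice); the lookup of actual pre-composed measures is left as a comment, and the arithmetic confirms the “quadrillions” claim:

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define MEASURES 16
#define CHOICES  11   /* sums of two dice: 2 through 12 */

int main(void) {
    srand((unsigned)time(NULL));
    printf("measure choices:");
    for (int m = 0; m < MEASURES; m++) {
        int dice = (rand() % 6 + 1) + (rand() % 6 + 1);   /* roll two dice */
        printf(" %d", dice);   /* in the game, this indexes a table of pre-composed measures */
    }
    printf("\n");

    double total = 1.0;                  /* 11^16: about 4.6e16 distinct selections, i.e. */
    for (int m = 0; m < MEASURES; m++)   /* tens of quadrillions (though dice sums are not */
        total *= CHOICES;                /* uniform, so not all selections are equally likely) */
    printf("possible compositions: %.3g\n", total);
    return 0;
}
```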

1877 — Invention of the phonograph by Thomas Alva Edison, a device that recorded sound by inscribing an indentation into a tinfoil sheet on a cylinder such that the variations in the indentation were analogous to the amplitude of the sound; the sound of the stylus being dragged lightly on that surface at a later time could be amplified to audible level, allowing the previously inscribed sound to be heard back.

1886 — Invention of the graphophone by Alexander Graham Bell, which differed from the phonograph in that it inscribed the sound horizontally on wax-covered cardboard cylinders, making the recordings better-sounding and more durable.

1887 — Emile Berliner patented the gramophone, which etched the sound horizontally in a spiral on a wax-coated disc. The disc would prove to be the preferred recording format.

1896 — Invention of the telharmonium, generally considered the first electronic instrument, by Thaddeus Cahill. His idea was for the sound of the telharmonium to be transmitted via telephone lines to homes and places of business, which would license the music service for a fee.

1899 — Maple Leaf Rag by Scott Joplin, which was recorded by the composer on a player piano roll in the early 20th century; the player piano was a significant music storage/retrieval invention of the turn of the 20th century that enjoyed popularity in the first quarter of that century.

1920 — Invention of the theremin (a.k.a. thereminvox) by Leon Theremin, an electronic musical instrument that could be performed (could be controlled in its pitch and its volume) without the performer physically touching the instrument.

1924 — Ballet Mécanique by George Antheil was a large composition for pianos, player pianos, percussion instruments, sirens, and airplane propellers; it was intended to accompany a film of the same name by Fernand Léger.

1928 — The theremin was patented in the U.S. It would later be the first product manufactured by synthesizer pioneer Robert Moog, and would still later be used as a special melodic effect in compositions such as Good Vibrations by the Beach Boys and the theme for the TV show Star Trek.

1932 — The first commercially available electric guitar was produced by Adolph Rickenbacker and George Beauchamp. Guitars were amplified to compete in loudness with other band instruments. Because the vibration of the guitar’s strings was transduced into an electrical signal, the sound could easily be altered and distorted to produce an extremely wide range of timbres, which rock musicians exploited (a sketch of one such alteration follows below).
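As a sketch of why an electrical signal is so easy to alter: a soft-clipping waveshaper, one common way to distort a transduced guitar signal. The gain values here are illustrative, not taken from any particular amplifier:

```c
#include <math.h>
#include <stdio.h>

/* soft-clipping waveshaper: compresses peaks and adds harmonics */
float soft_clip(float sample, float gain) {
    return tanhf(gain * sample);
}

int main(void) {
    for (float x = -1.0f; x <= 1.0f; x += 0.5f)
        printf("in %5.2f -> clean %5.2f  driven %5.2f\n",
               x, soft_clip(x, 1.0f), soft_clip(x, 10.0f));
    return 0;
}
```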

1935 — The Gibson guitar company announced its production of the ES-150 “Electric Spanish” archtop electric guitar. Gibson remains one of the foremost manufacturers of electric guitars, the maker of several famous models popular among rock guitarists, including the Les Paul, the ES-335, the SG, and the Flying V.

1935 — The Magnetophon tape recorder was developed by AEG Telefunken in Germany, featuring lightweight tape (instead of the heavy, dangerous metal tape used in some prior devices) and a ring-shaped magnetic head. Although not commercially available for about another decade, this was the fundamental design of later models.

1948 — The Ampex company in America, with financial backing from radio star Bing Crosby, produced a commercially available tape recorder, the Model 200. The availability of tape recording made it much easier to record large amounts of sound material for radio broadcasts and records.

1948 — Etude aux chemins de fer by Pierre Schaeffer was one of the first examples of musique concrète, music composed entirely of recorded non-instrumental sounds. He recorded railroad sounds on discs, and developed techniques of looping, editing, and mixing to compose music with those sounds.

1950 — Alan Turing described a definition of artificial intelligence in his article “Computing Machinery and Intelligence”.

1956 — Louis and Bebe Barron produced a soundtrack score for the film Forbidden Planet that consisted entirely of electronic sounds they generated with their own homemade circuitry. The musicians’ union convinced MGM not to bill their work as music, so they were credited with “electronic tonalities”.

1956 — German composer Karlheinz Stockhausen composed Gesang der Jünglinge for electronic and concrete sounds.

1956 — Lejaren Hiller and Leonard Isaacson programmed the Illiac I computer at the University of Illinois to compose the Illiac Suite for string quartet, the earliest example of algorithmic music composition carried out by a computer.

1958 — Edgard Varèse composed Poème électronique for electronic and concrete sounds, which was composed to be played out of multiple speakers inside the Philips Pavilion at the 1958 World’s Fair in Brussels, an innovative building design credited to Le Corbusier but largely designed by architect/engineer/composer Iannis Xenakis.

1958 — Luciano Berio composed Thema (Omaggio a Joyce), a tape composition in which the initial “theme” is a reading by Cathy Berberian of text written by James Joyce, and the recording is used as the source material for the remainder of the composition, made by editing and mixing Berberian’s taped voice.

1950s — Early experimentation in the musical application of computers included attempts at algorithmic composition by Hiller and Isaacson (resulting in the Illiac Suite in 1956) and work on audio and voice synthesis at Bell Labs, resulting in Max Mathews’s MUSIC programming language, the precursor to many subsequent similar music programming languages (known collectively as Music N languages).

1950s — American expatriate composer Conlon Nancarrow composed the majority of his Studies for Player Piano during this decade, including Study No. 21, also known as Canon X, a composition in which two melodic lines constantly change tempo in opposite ways. Nancarrow punched the notes of his compositions into player piano rolls by hand. The mechanical means of performing the music permitted him to explore musical ideas involving very complex rhythm and tempo relationships that are practically impossible for human performers.

1961 — Max Mathews and others at Bell Labs synthesized a singing voice and piano accompaniment in a completely digitally produced rendition of the song Daisy Bell (A Bicycle Built for Two).

1964 — Understanding Media, a critique of media and technology by Marshall McLuhan.

1968 — Revolution 9, a musique concrète composition by The Beatles (credited to Lennon-McCartney, but composed primarily by John Lennon with the assistance of Yoko Ono and George Harrison), demonstrating the artists’ interest in avant garde contemporary art and music. It was remarkable for a popular rock group of their stature to include such music on a rock album.

1968 — Switched-On Bach was an album of compositions by Baroque-period German composer Johann Sebastian Bach performed (with overdubbing) on a Moog modular synthesizer by Wendy Carlos. The album popularized the sound of electronic music, reached the Billboard Top 40, and won three Grammy awards.

1968 — Composer Steve Reich noticed that when a suspended microphone swung past a loudspeaker it momentarily produced a feedback tone. This inspired him to make Pendulum Music, a piece in which several suspended microphones are swung pendulum-like over loudspeakers to produce feedback tones periodically. The tones happened at different periodicities based on the rate at which each microphone swung, creating an unpredictable rhythmic counterpoint. This type of piece was consistent with the conceptual art of the 1960s, in which the idea behind the creation of the artwork was considered more important than, or even considered to be, the artwork itself. It’s also an example of process music, in which a process is enacted and allowed to play out, and the result of that process is the composition. It led to other process pieces by Reich (and soon by others, too) such as his tape loop pieces It’s Gonna Rain and Come Out, and instrumental process pieces such as Piano Phase.
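The process lends itself to simulation. This sketch, with made-up swing periods, prints the moments when each microphone would pass its loudspeaker, showing how slightly different periods yield a constantly shifting composite rhythm:

```c
#include <stdio.h>

int main(void) {
    /* hypothetical swing periods, in seconds, for three microphones */
    double periods[] = {1.9, 2.1, 2.35};
    for (int i = 0; i < 3; i++) {
        printf("mic %d feedback at:", i + 1);
        for (double t = 0.0; t <= 20.0; t += periods[i])   /* first 20 seconds */
            printf(" %.2f", t);
        printf("\n");
    }
    return 0;
}
```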

1960s — Robert Moog developed the voltage-controlled modular synthesizer, made famous by Wendy Carlos’s album Switched-On Bach, and used by various experimental composers and popular musicians. The synthesizer consisted of a cabinet filled with diverse sound-generating and sound-processing modules that could be interconnected with patch cords in any way the user desired. A significant feature was the ability to use oscillators not only as sound signals but also as control signals to modulate the frequency of other oscillators, the gain of amplifiers, and the cutoff frequency of filters.
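A digital analogy of that voltage-control principle (a sketch, not a model of Moog’s circuitry): one oscillator’s output acts as a control signal for another’s frequency, the way a patch cord carrying a control voltage would:

```c
#include <math.h>
#include <stdio.h>

#define SR 44100.0
#define TWO_PI 6.283185307179586

int main(void) {
    double carrier = 440.0;   /* audible oscillator, in Hz */
    double lfo_freq = 6.0;    /* slow oscillator used as a control signal */
    double depth = 20.0;      /* frequency deviation, in Hz */
    double phase = 0.0;
    for (int i = 0; i < 2000; i++) {
        double t = i / SR;
        double control = sin(TWO_PI * lfo_freq * t);   /* the "control voltage" */
        double freq = carrier + depth * control;       /* modulated frequency */
        phase += TWO_PI * freq / SR;                   /* phase accumulator */
        double sample = sin(phase);                    /* the audible signal */
        if (i % 500 == 0)
            printf("t = %.4f s  freq = %.2f Hz  sample = %+.3f\n", t, freq, sample);
    }
    return 0;
}
```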

1972 — The end of the tune From the Beginning by Emerson, Lake & Palmer includes a Moog modular synthesizer solo. The Moog modular was included in studio recordings of several rock bands of that period. However, the later Minimoog synthesizer proved more popular for live performances for various reasons, especially its relative simplicity and compactness.

1976 — The Minimoog synthesizer included a pitchbend wheel and a modulation wheel next to the keyboard for additional expressive control. Few players mastered the Minimoog (and its pitchbend wheel) as thoroughly as Chick Corea, who used it in the jazz-rock group Return to Forever. The tune Duel of the Jester and the Tyrant includes a good example of Corea’s prowess on the Minimoog.

1977 — Producer-composer Giorgio Moroder produced the disco hit I Feel Love by Donna Summer, in which the instrumental accompaniment is completely electronic.

1978 — Giorgio Moroder is well known for composing the score for the film Midnight Express; its Chase theme typifies the driving, rhythmically periodic electronic sequences heard throughout the score. Moroder used a wide range of synthesizers (Moog, Minimoog, ARP, etc.) and other electronic keyboards.

1978 — The German band Kraftwerk composed synthesizer music that seemed to comment on the mechanization and dehumanization of modern technological society. Their song The Robots, composed with synthesizers such as the Minimoog, overtly sings of cyberbeings, but may in fact be a commentary on class disparities, evoking a dehumanized working class.

1980 — John Searle published “Minds, Brains, and Programs”, an article disputing Alan Turing’s definition of intelligence.

early 1980s — The MIDI protocol for communication between digital instruments was established by music manufacturers.
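As a concrete illustration of what the protocol transmits, a MIDI note-on message is a status byte followed by two data bytes:

```c
#include <stdio.h>

int main(void) {
    /* note-on: status byte (0x90 = note-on, channel 1) plus two data bytes */
    unsigned char note_on[3] = { 0x90, 60, 100 };   /* middle C at velocity 100 */
    printf("%02X %02X %02X\n", note_on[0], note_on[1], note_on[2]);
    return 0;
}
```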

1980s — David Cope began to develop his Experiments in Musical Intelligence (EMI) software, which composed music convincingly in the style of famous composers.

1985 — Degueudoudeloupe is an algorithmic composition by Christopher Dobrian composed and synthesized by computer (programmed by the composer) exploring computer decision making in metric modulations, and using continually changing tuning systems.

1989 — The Vanity of Words by Roger Reynolds uses computer algorithms to edit and process the voice of baritone Philip Larson reciting text by Milan Kundera, resulting in a digital form of musique concrète, in concept not unlike the 1958 composition Thema (Omaggio a Joyce) by Luciano Berio.

1991 — Entropy, an algorithmic composition for computer-controlled piano and computer graphics by Christopher Dobrian, exploring the use of changing statistical probabilities as a way of organizing a musical composition.

1994 — Textorias for computer-edited guitar by Arthur Kampelas demonstrates how intricate digital editing can be used to organize a highly gestural modern-day musique concrète composition.

1995 — Barrage No. 4, a composition for computer-edited electric guitar sounds by John Stevens, a digital form of musique concrète; the guitar sounds are very noisy and of radically changing pitch, defying classification as traditional instrumental sound.

1997 — Rag (After Joplin) is a piano rag composed by David Cope’s artificially intelligent EMI software emulating the musical style of famous ragtime composer Scott Joplin. Cope’s software draws on databases of compositions by famous composers, and reorders moments of those compositions, resulting in new works that are remarkably similar stylistically to the composer’s actual works.

1998 — There’s Just One Thing You Need To Know is a computer music composition by Christopher Dobrian for computerized piano (Yamaha Disklavier), synthesizer, and interactive computer system in which the computer responds to the performed piano music with synthesizer accompaniment and even algorithmic improvisations of its own.

2000 — Microepiphanies: A Digital Opera is a full-length theatrical music and multimedia performance by Christopher Dobrian and Douglas-Scott Goheen in which the music, sound, lights, and projections are all controlled by computer in response to the actions of the live performers onstage, without there being any offstage technicians running the show.

2003 — Mannam is a composition by Christopher Dobrian for daegeum (Korean traditional bamboo flute) and interactive computer system, featuring interactive processing of the flute’s sound as well as synthesized and algorithmically-arranged accompaniments.

2005 — Data.Microhelix by Ryoji Ikeda used digital “glitches” exclusively as the fundamental musical material, an example of the so-called “aesthetics of failure”.

2011 — Eigenspace by Mari Kimura employed the “augmented violin” gesture-following system developed at IRCAM to control the type of audio processing that would be applied to her violin sound in real time.

2013 — Modus 01 by Danny Sanchez used sounds triggered by piano to make an interactive combination of piano and glitch.

Recent computer music technologies and aesthetics


The “New Aesthetic”

Ferruccio Laviani (Fratelli Boffi)
Good Vibrations furniture designs
Pixelated sculpture
– The pixelated animals of Shawn Smith
– Digital Orca by Douglas Coupland

Glitch

“The Aesthetics of Failure” (or just more “new aesthetic”?)

Ryoji Ikeda
Data.Microhelix
.mzik
The Transfinite
Danny Sanchez
Modus 01

Gesture following

IRCAM IMTR
Gesture Follower
Mari Kimura (IRCAM)
Augmented violin
Eigenspace

Sergi Jordà (MTG)
Reactable

Christopher Dobrian
MCM
Gestural

Kinect, Wii, etc.

Robotic musicmaking

LEMUR
JazzBot
Byeong Sam Jeon
Telematic Drum Circle

Laptop orchestras

PLOrk
performance video
SLOrk
television news feature

Telematic Performance

JackTrip
Dessen, Dresser, et al.
Byeong Sam Jeon

Live Coding

ChucK
Reactable

Dobrian’s early interactive compositions

As computer processing speed increased in the 1980s and 1990s, it became clear that a computer program could compose and synthesize sound instantaneously “in real time”, meaning “without noticeable time delay”. Composing in real time is essentially what people often mean when they use the word “improvisation”: a person (or computer) makes up music and performs it at the same time. However, improvisation between two or more musicians involves the important components of listening to the music others are making, reacting in real time to what others do, and interacting with them. Can computer improvisation incorporate these apparently very human traits of listening, reacting, and interacting?

Artificial intelligence is defined by some as having the quality or appearance of intelligent behavior, of successfully imitating human intelligence even if lacking the awareness and understanding that humans have. Similarly, computer perception, cognition, and reaction/interaction can be successfully emulated to the point where we might say the computer exhibits the qualities of those abilities, even if we know it is not actually “hearing” or “interacting” the same way humans do. For this sort of emulation of interaction, we might say that the computer exhibits “interactivity”, the quality of interacting, even if its way of doing so is different from that of a human.

In the 1990s I began to focus less on music in which the computer composed and performed/synthesized its own composition, and more on music that involved some sort of interactivity between the computer and a human performer in real time. In order for the computer to seem to be interactive, its program has to include some rudimentary form of perception and cognition of the musical events produced by the human, and it must also have some degree of autonomy and unpredictability. (If these factors are absent, the computer would be acting in a wholly deterministic way, and could not be said to be in any way truly interactive.)
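A toy sketch of those minimum requirements, not my actual program: a hypothetical respond() function “perceives” an incoming pitch, makes a decision about it, and retains autonomy and unpredictability through weighted chance:

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* hypothetical handler called for each note the human plays */
int respond(int pitch) {
    if (rand() % 4 == 0)              /* autonomy: sometimes choose silence */
        return -1;
    int interval = rand() % 13 - 6;   /* unpredictability: -6..+6 semitones */
    return pitch + interval;          /* "cognition": respond near the heard pitch */
}

int main(void) {
    srand((unsigned)time(NULL));
    int heard[] = {60, 64, 67};       /* "perception": notes arriving from the pianist */
    for (int i = 0; i < 3; i++) {
        int out = respond(heard[i]);
        if (out >= 0) printf("heard %d -> play %d\n", heard[i], out);
        else          printf("heard %d -> rest\n", heard[i]);
    }
    return 0;
}
```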

In the composition There’s Just One Thing You Need To Know (1998) for Disklavier, synthesizer, and interactive computer system, the notes played by a human pianist are transmitted via MIDI to a computer program that responds and contributes musically in various ways.

The piece is composed as a “mini-concerto” for the Disklavier piano, which is accompanied by a Korg Wavestation A/D synthesizer controlled by Max software interacting automatically with the performer in real time. The conceptual and musical theme of the piece is “reflection”. The music composed for the pianist requires — almost without exception — symmetrical hand movement; one hand mirrors precisely the position of the other hand, albeit often with some delay or other rhythmic modification. Even the music played by the computer is at any given moment symmetrical around a given pitch axis. The human and computer performers also act as “mirror images” or “alter egos”, playing inverted versions of the other’s musical material, playing interlocking piano passages in which they share a single musical gesture, and reinforcing ideas presented by the other.
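The symmetry around a pitch axis is simple arithmetic. A sketch of the idea (not the code of the piece):

```c
#include <stdio.h>

/* reflect a pitch to the same distance on the other side of an axis pitch */
int mirror(int pitch, int axis) {
    return 2 * axis - pitch;
}

int main(void) {
    /* axis 60 = middle C: the E above it (64) reflects to the A-flat below it (56) */
    printf("%d\n", mirror(64, 60));
    return 0;
}
```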

The computer plays three different roles in the piece. In one role, it is a quasi-intelligent accompanist, providing instantaneous synthesizer response to the notes played by the pianist. In a second role, the computer acts as an extension of the pianist’s body—an extra pair of virtual hands—playing the piano at the same time as the pianist, making it possible for the piano to do more than a single pianist could achieve. In a third role, the computer is an improviser, answering the pianist’s notes with musical ideas of its own composed on the spot in response to its perceptions of the pianist’s music.

The title comes from a statement made by composer Morton Feldman. “There’s just one thing you need to know to write music for piano. You’ve got a left hand, and you’ve got a right hand. [gleefully] That’s ‘counterpoint’!”


Another work composed shortly thereafter is Microepiphanies: A Digital Opera (2000), an hour-long multimedia theatrical and musical performance in which the music, sound, lights, and projections are all controlled by computer in response to the actions of the live performers onstage, without there being any offstage technicians running the show.

The performance is conceived as a satire of the tropes and cliches that were commonly found in performances of so-called interactive music of the time. The performers describe their activities to the audience (sometimes deceitfully) as they perform music with various unusual technological devices. Because the technology occasionally seems to malfunction, or functions mysteriously, it’s difficult for the audience to know the true extent to which the computer system is actually interactive. Because it’s made clear that there are in fact no offstage technicians aiding the performance, the apparent musical and interactive sophistication of the computer seems at times magical.


A couple years later I had the opportunity to spend a year living in Seoul, Korea. I was interested to see in what ways interactive computer music could be used in the context of traditional Korean music, not simply to use the sounds of Korean music as a sort of musical exoticism, but rather as a way to find a true symbiosis between two apparently disparate musical worlds, the traditional music of an Asian nation and the modern technological music being practiced in the West.

This project required that I study traditional Korean classical music as seriously as I could, in order to be properly knowledgeable and respectful of that music as I composed the music and designed software for interaction with a live musician. Because I had previously done a series of interactive pieces involving the flute, I chose to work with the Korean bamboo flute known as the daegeum. I was helped by Serin Hong, who was at that time a student of traditional Korean music at Chugye University, specializing in daegeum performance; Serin would play the music I composed for the instrument, and would give me criticism on any passages that were not sufficiently idiomatic or playable.

I eventually wrote a complex interactive computer program and composed a thirteen-minute piece titled Mannam (“Encounter”) (2003) for daegeum and interactive computer system, which was premiered by Serin Hong in the 2003 Seoul International Computer Music Festival.

The computer program captures the expressive information (pitch and volume fluctuations) from the live daegeum performance, using pitch, loudness, and timbre data to shape the computer’s sound synthesis and realtime processing. The computer modifies the sound of the daegeum in real time, stores and reconfigures excerpts of the played music, and provides harmonic accompaniment in “intelligent” response to the daegeum notes. The daegeum music is composed in idiomatic style, and leaves the performer considerable opportunity for rubato, ornamentation, and even occasional reordering of phrases, in order to respond to the computer’s performance, which is different every time the piece is played.
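A sketch of the kind of continuous loudness tracking involved (the real techniques are detailed in the article cited below): a simple one-pole RMS envelope follower, with illustrative values:

```c
#include <math.h>
#include <stdio.h>

/* one-pole smoothed RMS estimate; 'state' persists from call to call */
float envelope(float sample, float *state, float smoothing) {
    *state = smoothing * (*state) + (1.0f - smoothing) * sample * sample;
    return sqrtf(*state);
}

int main(void) {
    float state = 0.0f;
    float input[] = {0.0f, 0.5f, -0.8f, 0.9f, 0.1f, 0.0f};   /* a few samples */
    for (int i = 0; i < 6; i++)
        printf("%.3f\n", envelope(input[i], &state, 0.9f));  /* rises and falls smoothly */
    return 0;
}
```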

The techniques I developed for tracking the performer’s nuances are described in detail in “Strategies for Continuous Pitch and Amplitude Tracking in Realtime Interactive Improvisation Software”, an article I wrote for the 2004 Sound and Music Computing conference in Paris.

Dobrian’s early algorithmic compositions

When I started learning about computer music in 1984, I became interested in the challenge of trying to program the computer to compose music. My first piece of computer music was an experiment in algorithmic composition focusing on automated composition of metric modulations. Since a metric modulation is essentially a mathematical operation, it seemed reasonable that a computer could calculate and perform such tempo changes more readily than humans. So I set out to write a computer program that would compose and synthesize music that included metric modulations.

The problem that I immediately confronted, though, was how to establish a clear sense of tempo and beat for the listener in a way that would be musically interesting. It’s one thing to use standard Western notation to compose metric modulations on paper; when there’s no paper or notation involved, and no human performers to add inflection to the music, it’s another thing entirely to figure out how to make the beat evident to the listener so that the metric modulation will be clearly audible.

I was obliged to answer the question, usually taken for granted by musicians, “What are the factors that give us a sense of beat?” The most obvious one is that things happen periodically at a regular rate somewhere roughly in the range of walking/running speed, say from 48 to 144 beats per minute. But music consists of more than a single constant rhythm; it contains a wide variety of rhythms. So how do we derive the beat? We use other factors that exhibit regularity such as dynamic accent (loudness), timbral accent (choice of instrument, brightness), melodic contour (a repeating shape), harmonic rhythm (implied chord changes), and harmonically-related beat divisions (triplets, sixteenths, quintuplets, etc.), and we take all of those into consideration simultaneously to figure out rates of repetition that might be considered the beat. (There are also stylistic considerations; we may be familiar with a body of works that all belong to the same style or genre, so that we have established cultural cues about where the beat is considered to be in that type of music.)

So I wrote a program in C that composed phrases of music, and that made probabilistic decisions along the way about how to modulate to a new harmonically-related tempo using beat divisions of the same speed in each tempo. It used factors of melodic contour, dynamic accent, and timbral accent to provide a sense of periodicity. (It incidentally also used continuously changing pitch ranges and continuously changing tuning systems, so that the music has a unique inharmonic character.) The program produced a type of musical score for each phrase of music, a time-tagged list of notes that would be used in the Cmusic sound synthesis language to synthesize the notes on a VAX 11/780 (Unix) computer and store the results in a sound file. (The sampling rate was a mere 16 kHz, but that was barely adequate for the sounds being synthesized.)
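A sketch of the general approach, with illustrative values and output format rather than my actual code: each phrase becomes a time-tagged note list of the sort a Music-N-style language such as Cmusic could synthesize, and an occasional probabilistic tempo change stands in for a metric modulation:

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    srand((unsigned)time(NULL));
    double now = 0.0, tempo = 120.0;              /* beats per minute */
    for (int i = 0; i < 16; i++) {
        double beat = 60.0 / tempo;               /* seconds per beat */
        double freq = 200.0 + rand() % 800;       /* arbitrary pitch choice in Hz */
        /* one time-tagged note statement: onset, duration, frequency */
        printf("note %.3f %.3f %.1f;\n", now, beat, freq);
        now += beat;
        if (rand() % 8 == 0)      /* probabilistic "metric modulation": a beat */
            tempo *= 1.5;         /* division of the old tempo becomes the new beat */
    }
    return 0;
}
```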

The composition was titled Degueudoudeloupe (1985), a nonsense word I thought was evocative of the kinds of sounds and rhythms in the piece. The sounds in the piece are string-like, drum-like, and bell-like sounds synthesized using two methods: Karplus-Strong synthesis and Frequency Modulation synthesis.
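Karplus-Strong synthesis, the string-like method named above, is remarkably compact: a delay line filled with noise, repeatedly averaged, decays into a pitched tone. A minimal sketch:

```c
#include <stdio.h>
#include <stdlib.h>

#define SR   44100   /* sampling rate */
#define FREQ 220     /* desired pitch in Hz */

int main(void) {
    int N = SR / FREQ;                    /* delay-line length sets the pitch */
    float line[SR / FREQ];
    for (int i = 0; i < N; i++)           /* fill the line with noise: the "pluck" */
        line[i] = (rand() / (float)RAND_MAX) * 2.0f - 1.0f;
    for (int i = 0; i < SR; i++) {        /* one second of sound */
        float out = line[i % N];
        /* average adjacent samples: a lowpass that makes the tone decay */
        line[i % N] = 0.5f * (line[i % N] + line[(i + 1) % N]);
        printf("%f\n", out);              /* raw sample values */
    }
    return 0;
}
```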


In another composition several years later I again used probabilistic computer decision making to compose time-tagged lists of notes, this time intended to be transmitted as MIDI note data to a Yamaha Disklavier computer-controlled piano. The program was written in the C language, and the MIDI data that the program produced was saved as a MIDI file on a Macintosh Plus computer and transmitted to the Disklavier.

The piece is titled Entropy (1991), and was largely inspired by the following passage from The Open Work by Umberto Eco:

Consider the chaotic effect (resulting from a sudden imposition of uniformity) of a strong wind on the innumerable grains of sand that compose a beach: amid this confusion, the action of a human foot on the surface of the beach constitutes a complex interaction of events that leads to the statistically very improbable configuration of a footprint. The organization of events that has produced this configuration, this form, is only temporary: the footprint will soon be swept away by the wind. In other words, a deviation from the general entropy curve (consisting of a decrease in entropy and the establishment of improbable order) will generally tend to be reabsorbed into the universal curve of increasing entropy. And yet, for a moment, the elemental chaos of this system has made room for the appearance of an order…

The composition was conceived as a vehicle with which to explore ideas of information theory and stochasticism in an artistic way. It explores the perception of randomness and order (entropy and negentropy) in musical structure, and demonstrates the use of stochasticism not only as a model for the distribution of sounds in time, but also as a method of variation of a harmonic “order”.

The notes of this piece were all chosen by a computer algorithm written by the composer. The algorithm takes as its input a description of some beginning and ending characteristics of a musical phrase, and outputs the note information necessary to realize a continuous transformation from the beginning to the ending state. Such a transformation can take place over any period of time desired by the composer (in this piece anywhere from 3 to 90 seconds). The input description is stated in terms of relative probabilities of different musical occurrences, thus allowing the composer to describe music which ranges between totally predictable (negentropic) and totally unpredictable (entropic), and which can transform gradually or suddenly from one to the other.
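A toy version of that transformation, with hypothetical probability tables and a ten-step transition: interpolate between a “beginning” and an “ending” distribution and sample from the blend, so the output drifts from near-certain order toward equiprobable randomness:

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define EVENTS 4

int choose(const double *probs) {   /* weighted random choice */
    double r = rand() / (double)RAND_MAX, sum = 0.0;
    for (int i = 0; i < EVENTS; i++) {
        sum += probs[i];
        if (r < sum) return i;
    }
    return EVENTS - 1;
}

int main(void) {
    srand((unsigned)time(NULL));
    /* ordered (one event nearly certain) vs. disordered (all equally likely) */
    double begin[EVENTS] = {0.85, 0.05, 0.05, 0.05};
    double end[EVENTS]   = {0.25, 0.25, 0.25, 0.25};
    for (int step = 0; step <= 10; step++) {
        double mix = step / 10.0, blend[EVENTS];
        for (int i = 0; i < EVENTS; i++)
            blend[i] = (1.0 - mix) * begin[i] + mix * end[i];
        printf("%d ", choose(blend));   /* entropy rises as mix approaches 1 */
    }
    printf("\n");
    return 0;
}
```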

Artificial intelligence and algorithmic composition

The development of computers, and devices that have computers embedded within them, has encouraged the exploration of machines that perform human actions. This exploration includes software that performs intellectual tasks, such as playing chess and composing music, and software-hardware systems that control machines robotically to perform physical tasks.

In the early years of modern computer development, mathematician and computer scientist Alan Turing developed ideas about computational algorithms and artificial intelligence. He hypothesized about the fundamental definition and nature of intelligence in his 1950 article “Computing Machinery and Intelligence”. In that article he proposed what eventually came to be known as the Turing test, which, if passed by a computer, would qualify that machine as exhibiting intelligence. His premise can be paraphrased as implying that the appearance of human intelligence, if it is indistinguishable from real human behavior, is equivalent to real intelligence, because we recognize intelligence only by witnessing its manifestation. (This assertion was interestingly disputed by John Searle in his 1980 article “Minds, Brains, and Programs”.)

How can musical intelligence, as manifested in music composition, be emulated by a computer? To the extent that musical composition is a rational act, one can describe the methodology employed, and perhaps can even define it in terms of a logical series of steps, an algorithm.

An early example of a music composition algorithm is a composition usually attributed to Wolfgang Amadeus Mozart, the Musikalisches Würfelspiel (musical dice game), which describes a method for generating a unique piece in the form of a waltz. You can read the score, and you can hear a computer-generated realization of the music. This is actually a method for quasi-randomly choosing appropriate measures of music from amongst a large database of possibilities composed by a human. Thus, the algorithm is for making selections from among human-composed excerpts—composing a formal structure using human-composed content—not for actually generating notes.

In the 1950s two professors at the University of Illinois, Lejaren Hiller and Leonard Isaacson, wrote a program that implemented simple rules of tonal counterpoint to compose music. They demonstrated their experiments in a composition called the Illiac Suite, named after the Illiac I computer for which they wrote the program. The information output of the computer program was transcribed by hand into musical notation, to be played by a (human-performed) string quartet.

Another important figure in the study of algorithmic music composition is David Cope, a professor from the University of California, Santa Cruz. An instrumental composer in the 1970s, he turned his attention to writing computer programs for algorithmic composition in the 1980s. He has focused mostly on programs that compose music “in the style of” famous classical composers. His methods bear some resemblance to the musical dice game of Mozart, insofar as he uses databases of musical information from the actual compositions of famous composers, and his algorithm recombines fragmented ideas from those compositions. As did Hiller and Isaacson for the Illiac Suite, he transcribes the output of the program into standard musical notation so that it can be played by human performers. Eventually he applied his software to his own previously composed music to generate more music “in the style of” Cope, and thus produced many more original compositions of his own. He has published several books about his work, which he collectively calls Experiments in Musical Intelligence (which is also the title of his first book on the subject). You can hear the results of his EMI program on his page of musical examples.
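A toy sketch in the spirit of recombination, though certainly not Cope’s actual method: fragments from a small hypothetical database, each reduced to its boundary pitches, are chained wherever their edges connect:

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

typedef struct { int first, last; const char *name; } Fragment;

int main(void) {
    srand((unsigned)time(NULL));
    /* hypothetical database: fragments reduced to their boundary pitches */
    Fragment db[] = {
        {60, 64, "A"}, {64, 67, "B"}, {67, 60, "C"}, {64, 62, "D"}, {62, 60, "E"}
    };
    int n = 5, current = 60;
    for (int i = 0; i < 8; i++) {              /* chain up to eight fragments */
        int candidates[5], c = 0;
        for (int j = 0; j < n; j++)            /* find fragments that connect */
            if (db[j].first == current) candidates[c++] = j;
        if (c == 0) break;
        int pick = candidates[rand() % c];     /* choose among valid continuations */
        printf("%s ", db[pick].name);
        current = db[pick].last;
    }
    printf("\n");
    return 0;
}
```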

Music Technology in the Industrial Age

Technology has always been an important part of the development of music. Musical instruments are perfect examples of technology developed to extend human capabilities. Presumably vocal music and percussion music must have existed before instruments were developed; how else could the idea of making instruments for music have occurred? Instruments were developed to extend music beyond what we were capable of producing with just our own voices or the sounds of everyday objects being struck.

The development of keyboard instruments (virginal, clavichord, harpsichord, fortepiano, piano) provides a good case study in how instrumental art and craft were driven by musical imperatives, and how music itself was affected by instrument development.

In the industrial revolution, the most transformative technology for music was the phonograph, invented by Thomas Edison in 1877 and quickly improved upon by Alexander Graham Bell, who developed the idea of engraving the sound signal horizontally on wax cylinders instead of vertically as on Edison’s tinfoil sheets. Bell’s device, patented in 1886, was dubbed the graphophone. At the same time, Emile Berliner was developing a method of engraving the signal horizontally on wax discs, patenting a device called the gramophone in 1887.

Another technology for music reproduction being developed at the same time was the player piano. Many inventors were experimenting with various technologies for a self-playing piano. The idea that eventually proved most viable was a pneumatic system inside the piano in which air from a foot-powered bellows passed through holes in a moving scroll of paper, activating valves for each key of the piano that moved a push rod to push the piano action, playing a note. The player piano became a viable commercial product in the first decade of the twentieth century, and reached the height of its popularity as an entertainment device in the next two decades, but radically diminished in popularity after the economic crash of 1929. Its fading popularity may also be attributed to the rise of radio sales in the 1920s.

The player piano figured in the work of at least two American experimental composers in the twentieth century who composed specifically for that instrument. In 1924 George Antheil composed an extraordinary work titled Ballet mécanique, intended to accompany an experimental film of the same name by the French artist Fernand Léger; it called for an ensemble of instruments that included four player piano parts, two human-performed piano parts, a siren, seven electric bells, and three airplane propellers. In the 1950s and ’60s Conlon Nancarrow composed over forty works, which he titled Studies, for player piano. Rather than record the music by playing on a roll-punching piano, he punched the paper rolls by hand, which enabled him to realize music of extraordinary complexity, with multiple simultaneous tempos and sometimes superhuman speed.

Thaddeus Cahill patented the Telharmonium, one of the first electronic instruments, in 1896. It was remarkable for its size and complexity, and for its ability to transmit its sound over telephone wires. It gained considerable attention and support from venture capitalists interested in marketing music on demand via the telephone. The idea eventually proved unsuccessful commercially for various reasons, not the least of which was the problem of crosstalk interference from and with phone conversations. The instrument was so large and unwieldy, and consumed so much energy, that it was abandoned and eventually disassembled.

An instrument of much more enduring interest was the theremin, invented by Russian physicist Leon Theremin in 1920, and patented in the U.S. in 1928. It was remarkable because it operated on the principle of capacitance between the performer and the instrument, so that the sound was produced without touching the instrument. It created a pure and rather eerie pitched tone, which could be varied in pitch (over a range of several octaves) and volume based on the distance of the performer’s hands from two antennae. The instrument has retained considerable interest and popularity over the past century, and has been mastered by several virtuosic performers, most famously Clara Rockmore.
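The link between hand capacitance and audible pitch is the heterodyne principle: the tone we hear is the difference between a fixed radio-frequency oscillator and one detuned by the capacitance of the player’s hand. A sketch with illustrative numbers:

```c
#include <stdio.h>

int main(void) {
    double fixed_osc = 170000.0;   /* Hz: fixed radio-frequency oscillator */
    /* as the hand approaches the antenna, added capacitance detunes the
       second oscillator; the audible pitch is the difference frequency */
    for (double detune = 0.0; detune <= 2000.0; detune += 500.0) {
        double variable_osc = fixed_osc + detune;
        printf("detune %6.0f Hz -> audible %6.0f Hz\n",
               detune, variable_osc - fixed_osc);
    }
    return 0;
}
```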