Computer Music Center: Six Decades of Creativity and Technology

By Eve Glasberg
May 16, 2018

In an industrial warren of rooms brimming with instruments and electronic equipment, a student sits hunched over a computer screen, composing a piece of music with the aid of machine learning. In a studio down the hall, another pokes her head in and out of five differently sized boxes, each housing an immersive sound environment. Nearby, a student wearing dark, oversize goggles is building an interactive virtual reality iPhone app that creates audio worlds.

These are some of the innovative projects under way at Columbia’s Computer Music Center, which operates at the intersection of musical expression and technological development.

“Our primary mission is to create music, but technology has changed the landscape,” said Brad Garton, director of the center and of undergraduate studies in the Department of Music. “As a result, our definition of music has expanded.”

Founded in 1958, and originally known as the Columbia-Princeton Electronic Music Center, it is the oldest center for electro-acoustic music in the United States. In its earliest days, there were four tape studios for electronic composition, along with the custom-built RCA Mark II, the first programmable sound synthesizer.

The historic machine, installed in 1957, took up an entire 10-by-20-foot wall; its nine massive racks of knobs, panels and vacuum tubes resembled a Louise Nevelson wall sculpture. It still stands at the center in Prentis Hall, despite repeated requests from the Smithsonian Institution to acquire it. Over the years, computers superseded the original electronic equipment, a shift reflected in the center’s name change in 1996.

Equal parts composer, engineer and programmer, Garton arrived at Columbia in 1987 after what he called a “failed career as a pharmacist.” He studied pharmacy at Purdue University before switching gears and earning a doctorate in computer music composition from Princeton.

His approach to teaching is grounded in a focus on fundamentals, such as coding and hardware, while encouraging students to think broadly. “I’ll say, ‘What would you like to do?’ and someone might say, ‘I want to make a trombone sound like geese flying,’” Garton said with a laugh. “So we’ll deconstruct how the sound is made.”

He uses the same approach in his collaborations across the University. Colleagues from the computer science and electrical engineering departments, and even the Columbia University Irving Medical Center, come to the Computer Music Center to work together on music-related projects.

Some of these projects use RTcmix, a computer music language Garton wrote two decades ago that has found many different applications, one of which is data sonification: users take data from other domains and map it to audio parameters, much like data visualization except that the data is heard rather than seen.

“This is an interesting research tool because there are certain relationships where you can hear better than you can see,” said Garton. “For example, anything that has an exponential series, you can’t see that on a graph well because things tend to get either close together quickly or wide apart, but you can hear it because exponential series in sound is just octaves on a keyboard.”
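RTcmix scores have their own syntax, so as a language-neutral illustration, here is a minimal Python sketch of the mapping Garton describes: each value in a positive data series becomes a sine tone whose frequency is proportional to the value, so every doubling in the data is heard as one octave. The function name sonify and all of its parameters are illustrative, not part of RTcmix.

```python
import math
import struct
import wave

SR = 44100  # audio sample rate in Hz

def sonify(values, out_path="sonified.wav", note_dur=0.3, base_freq=110.0):
    """Render a positive-valued data series as a sequence of sine tones.

    Frequency is proportional to the data value, so every doubling in
    the data is heard as exactly one octave; as Garton notes, an
    exponential series that is hard to read on a linear graph is easy
    to hear.
    """
    ref = min(values)
    samples = []
    for v in values:
        freq = base_freq * v / ref          # data value -> pitch
        n = int(SR * note_dur)
        for i in range(n):
            env = 1.0 - i / n               # decay envelope to avoid clicks
            samples.append(0.4 * env * math.sin(2 * math.pi * freq * i / SR))
    with wave.open(out_path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)                   # 16-bit PCM
        w.setframerate(SR)
        w.writeframes(b"".join(struct.pack("<h", int(s * 32767))
                               for s in samples))

# An exponential series: eight tones, each exactly one octave above the last.
sonify([2 ** k for k in range(8)])
```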

One of Garton’s collaborators who employs this programming language is Ben Holtzman, a professor at Lamont-Doherty Earth Observatory and a leading expert on earthquakes, who develops methods for the auditory representation of earthquake data in his seismic sound lab. Holtzman teaches a class on data sonification at the Computer Music Center, which he and Garton will expand thanks to a grant they received in 2017 from Columbia’s Data Science Institute. Every October, Holtzman and his team put on a show called the Seismodome at the Hayden Planetarium in the American Museum of Natural History, in which visitors feel as if they’re inside the earth, experiencing an earthquake through both sound and sight.
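The Seismodome renderings are far more elaborate, but the core move behind turning seismograms into sound, known as audification, is simple enough to sketch: play the recording back fast enough that sub-audible ground motion rises into the hearing range. The version below assumes a raw seismogram already loaded as a list of samples; the function name audify and its parameters are illustrative, not Holtzman's actual code.

```python
import math
import struct
import wave

def audify(trace, field_rate_hz=40.0, speedup=1000, out_path="quake.wav"):
    """Time-compress a seismogram into the audible range ("audification").

    Seismometers record ground motion at tens of samples per second,
    mostly below ~10 Hz, far under human hearing. Writing the same
    samples out at (field rate x speedup) shifts everything upward: at
    a speedup of 1000, a 0.5 Hz surface wave plays back at 500 Hz, and
    hours of shaking compress into seconds.
    """
    peak = max(abs(s) for s in trace) or 1.0   # normalize to full scale
    pcm = b"".join(struct.pack("<h", int(32767 * s / peak)) for s in trace)
    with wave.open(out_path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)                      # 16-bit PCM
        w.setframerate(int(field_rate_hz * speedup))
        w.writeframes(pcm)

# Stand-in trace: two minutes of a decaying 1 Hz wobble sampled at 40 Hz.
# (A real run would load actual seismometer data instead.)
trace = [math.exp(-i / (40.0 * 30)) * math.sin(2 * math.pi * i / 40.0)
         for i in range(40 * 120)]
audify(trace)
```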

Another partner is Dave Sulzer, a neurophysiologist at the Medical Center who is also a musician and composer, and who has been working with Garton since 2008 on the Brainwave Music Project. Sulzer, whose main research focus is the chemical transmission of brain signals and the neuroscience of neurological and psychiatric disorders, had heard about a study that used electroencephalography (EEG), a technique that measures electrical activity in the brain, to record the brain waves of drummers playing together. The longer the drummers jammed, the more their brain waves began to sync. Why not see whether musicians could use their own brain waves to make new music together?

Using software that Garton wrote, they developed a way to generate music from brainwave data. Garton and Sulzer have given many performances and presentations over the years, and they teach a class on the technology at the Computer Music Center. The Brainwave Music Project recently released its first CD, featuring pieces with titles such as “Serotonin,” “Dopamine” and “Amygdala.” “The pieces sound like an avant-garde jazz orchestra with a strange coherence to the music,” said Garton.
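The article doesn’t publish Garton’s actual software, but the general pipeline can be sketched, assuming a single pre-recorded EEG channel: estimate band power with a short-time FFT, then map the bands to note choices. The mapping below (alpha power picks the pitch, beta power sets the loudness) and every name in it are illustrative inventions, not the Brainwave Music Project’s real code.

```python
import numpy as np

FS = 256  # assumed EEG sampling rate in Hz
SCALE = [60, 62, 63, 65, 67, 68, 70, 72]  # C minor scale, MIDI note numbers

def band_power(window, lo, hi):
    """Mean spectral power of one EEG window in the band [lo, hi) Hz."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / FS)
    mask = (freqs >= lo) & (freqs < hi)
    return spectrum[mask].mean()

def eeg_to_notes(eeg, window_sec=1.0):
    """Turn a raw EEG channel into a stream of (midi_note, velocity) events.

    One illustrative mapping: alpha power (8-12 Hz) picks the scale
    degree, beta power (13-30 Hz) sets the loudness. Each one-second
    window of signal becomes one note.
    """
    n = int(FS * window_sec)
    events = []
    for start in range(0, len(eeg) - n, n):
        w = eeg[start:start + n]
        alpha = band_power(w, 8, 12)
        beta = band_power(w, 13, 30)
        ratio = alpha / (alpha + beta + 1e-12)          # 0..1
        degree = min(int(ratio * len(SCALE)), len(SCALE) - 1)
        velocity = int(min(beta / (alpha + beta + 1e-12), 1.0) * 127)
        events.append((SCALE[degree], velocity))
    return events

# Synthetic stand-in signal: 10 s of noisy 10 Hz "alpha" activity.
t = np.arange(0, 10, 1.0 / FS)
fake_eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(len(t))
print(eeg_to_notes(fake_eeg)[:4])
```

In a live setting the same mapping would run over a sliding window of the incoming EEG stream, with the note events sent to a synthesizer rather than printed.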

The Computer Music Center and the music department now offer a two-year MFA degree in Sound Art in conjunction with the School of the Arts. “We noticed that some of our composition students weren’t just writing pieces for string quartets or concertos. They were making installations, sculptures that had sonic characteristics, and they were engaging deeply with sound as a medium,” said Garton. “The new interdisciplinary program grew out of this need to serve these students. And it speaks to our particular strength—combining technology with creativity.”