TRIBECA, MANHATTAN/LONG BEACH, CA: No, your speakers aren’t completely messed up – that’s Lou Reed’s Metal Machine Music. Entrancing to some, physically repulsive to others, the record has been guaranteed, since its 1975 release, to make sense to just one head on the planet: Lou Reed’s.
But what if you could be Lou Reed for a 64-minute live performance of his maniacal masterpiece? Not stuck out somewhere in the audience, but up there onstage, and not to the NYC rock pioneer’s right or left, but inside his ears? It turns out, now you can do just that.
No, this doesn’t depend on a body occupancy wormhole a la Being John Malkovich. Instead, now through April 15, California State University at Long Beach’s University Art Museum is presenting an audio installation of Reed’s Metal Machine Trio. The show provides audience members with an accurate sonic replica of Reed’s experience onstage during an April 2009 performance of the composition, with special guest John Zorn alongside, at NYC’s Gramercy Theatre.
Those who make the pilgrimage to CSULB get the chance to experience “Metal Machine Trio: The Creation of the Universe” exactly as it was first presented on vinyl, and again that night in 2009, with the four 16-minute segments of the piece running in a continuous loop for the duration of the show.
So why did Reed, and the multinational engineering firm Arup, go to such lengths to present the performance of an album called “unlistenable”, “ear-wrecking electronic sludge”, and ranked #4 on Q magazine’s “Fifty Worst Albums of All Time” list? Because Metal Machine Music has also been hailed as a landmark album, one that made industrial music, noise rock, and modern sound art possible.
Wooing Lou Reed
Making the 3D recording and subsequently reproducing it in a public installation was a serious test for Arup’s acoustic consultants, who applied the company’s extremely deep resources – including a 3D SoundLab in the heart of its TriBeCa offices – to develop a unique method for Metal Machine immersion.
The project began when Arup’s Music, Arts and Multimedia Consultant Mike Skinner approached Reed and invited him to visit the Arup SoundLab, a facility designed by Arup’s acousticians to recreate the acoustics of any physical (or imagined) space using 3D acoustic modeling techniques. It also uses a technique to capture the acoustic characteristics of existing spaces in 3D, which can then be used to recreate those spaces within the SoundLab.
“I knew Lou had been interested in 3D sound for many years,” Skinner recalls, “and I thought he would be interested in hearing what Arup had done, and also discussing my thoughts for where it could go. I was hoping that he would be sufficiently impressed that he might want to work on a project together.
“In the subsequent discussion between Lou, me, and Raj Patel, Arup’s Principal of Acoustics/Theatre Consulting Leader Americas, Lou expressed how he had never heard a live recording of a performance that sounded the way it does to him on stage. So the idea was born to amalgamate these techniques and capture the event from the performer’s perspective in his upcoming shows for the Metal Machine Trio.”
Once the live show coordinates came into focus – a quick two-night stint at the intimate but roomy Gramercy Theatre on 23rd Street – Skinner and Patel started strategizing how they would record, mix, and ultimately present this noteworthy content.
“We already knew that there were two ways we could present the content to listeners,” Patel says. “Either binaural, over headphones, or over a sound system capable of reproduction in full 3D ambisonics, the system used in the SoundLab.
“The first step was to carefully consider every aspect of the performance – the players, instruments, the room acoustics, and the recording equipment that would be needed. Lou met with Arup’s team at the venue and together we talked through the main elements of the performance to develop a recording concept and approach.
“We also discussed specific microphones and other equipment that Lou had used in the past and felt had produced successful sonic outcomes. Over a couple of weeks, we discussed the ways and methods to capture the performers and the room, using a range of techniques that would provide us sufficient audio to create a 3D sonic landscape back in the lab.”
3D Audio – What Makes it Different
Not surprisingly, recording for 3D audio is a different beast than it is for the traditional two-speaker playback. “Stereo recordings use panning techniques to make sounds move between the left and right channel,” explains Patel. “In good listening conditions, you can also play with level to make sounds appear to move forward or backward in the mix. When you listen to stereo recordings over headphones, when a sound moves from left to right, it does so inside your head.”
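The panning Patel describes can be sketched in a few lines. Below is a minimal illustration of a constant-power stereo pan law – a standard mixing technique, not Arup’s specific processing – where the function name and per-sample framing are purely illustrative:

```python
import math

def pan_stereo(sample: float, pan: float) -> tuple[float, float]:
    """Constant-power stereo pan.

    pan = -1.0 is hard left, 0.0 is center, +1.0 is hard right.
    Left/right gains follow cos/sin of the pan angle, so the total
    acoustic power (L^2 + R^2) stays constant as the sound moves.
    """
    angle = (pan + 1.0) * math.pi / 4.0  # map [-1, 1] onto [0, pi/2]
    return sample * math.cos(angle), sample * math.sin(angle)

# A centered sound splits equally between channels (~0.707 each);
# hard left sends everything to the left channel.
center_l, center_r = pan_stereo(1.0, 0.0)
hard_l, hard_r = pan_stereo(1.0, -1.0)
```

Note how everything happens on a single left-right axis: the pan value can only slide a sound between two loudspeakers, which is exactly the limitation Patel goes on to describe.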
As Patel points out, the movie business developed techniques to make sounds move around the listener and enhance the experience. For example, 5.1 uses three front channels – left, center, and right – as the primary channels, and pans to the two rear channels (left and right) to give a sense of movement.
“All of this happens in the horizontal plane and within the field/boundary of the loudspeakers,” Patel says. “So it is almost impossible to give a realistic sense of sound moving above or below you, or to provide a sense of depth – e.g. sound coming from 100 ft away, passing you, and then going off into the distance. And it only really works in idealized listening conditions – which most people do not have access to.”
According to Patel, the limitations of these systems in reproducing sound in 3D have been well known for many years. The fundamental mathematics for capturing sound in complete 3D, known as Ambisonics (additional information on the field can be found at Ambisonics.net), was developed in the 1970s, but only in the last 15 years or so has the processing power been available to start making use of it.
“Essentially the technique involves having three overlapping figure-of-eight microphones, individually capturing sound in the X [front-to-back], Y [side-to-side] and Z [floor-to-ceiling] axes at the listening location, and a W channel capturing the omnidirectional response,” says Patel. “The captured signal type is called B-Format. With the appropriate decoding you can re-create for a listener, either within an appropriate loudspeaker array or on headphones, the true 3D sound as it would be experienced at that location.”
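As a rough illustration of the W/X/Y/Z channels Patel describes, here is a minimal per-sample sketch of the traditional first-order B-format encoding equations, plus a simple virtual-microphone decode. The function names and the cardioid decode are illustrative assumptions, not Arup’s actual processing chain:

```python
import math

def encode_b_format(sample: float, azimuth_deg: float,
                    elevation_deg: float) -> tuple[float, float, float, float]:
    """Encode a mono sample arriving from a given direction into
    first-order B-format (W, X, Y, Z).

    W is the omnidirectional channel (conventionally scaled by 1/sqrt(2));
    X, Y, Z are the figure-of-eight responses on the front-back,
    side-to-side, and floor-to-ceiling axes.
    """
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    w = sample / math.sqrt(2.0)
    x = sample * math.cos(az) * math.cos(el)  # front-back
    y = sample * math.sin(az) * math.cos(el)  # side-to-side
    z = sample * math.sin(el)                 # floor-to-ceiling
    return w, x, y, z

def decode_to_speaker(w: float, x: float, y: float, z: float,
                      spk_az_deg: float, spk_el_deg: float) -> float:
    """Sample the B-format field with a virtual cardioid microphone
    aimed at one loudspeaker's direction (a basic decoding strategy)."""
    az, el = math.radians(spk_az_deg), math.radians(spk_el_deg)
    return 0.5 * (math.sqrt(2.0) * w
                  + x * math.cos(az) * math.cos(el)
                  + y * math.sin(az) * math.cos(el)
                  + z * math.sin(el))

# A sound encoded straight ahead comes out at full level from a front
# speaker and at zero from a speaker directly behind the listener.
w, x, y, z = encode_b_format(1.0, 0.0, 0.0)
front = decode_to_speaker(w, x, y, z, 0.0, 0.0)
rear = decode_to_speaker(w, x, y, z, 180.0, 0.0)
```

Because the Z channel carries true height information, this representation can place sounds above or below the listener – something the horizontal-plane panning described earlier cannot do.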
Inside the SoundLab
Skinner, Patel and their colleagues are aided in their understanding of multichannel audio by an extremely powerful tool at Arup’s NYC headquarters, the aforementioned SoundLab. What appears to the casual observer to be simply a square room with acoustic paneling transforms into a transporting – but highly scientific – sonic sanctuary once the battery of speakers inside start sounding out.
The room was designed not to enable the reproduction of Lou Reed noise-rock concerts, but to aid Arup in its day-to-day acoustics consulting work on the development and design of new buildings, or the refurbishing of existing ones. “In working with clients, design teams, musicians, etc., for many years we had to explain aspects of sound in words and persuade our clients to change aspects of the building to meet our acoustic goals,” Patel notes. “This can be an uphill process, because we work with architects, who are very visually inclined, and they find it hard to make changes based on our expert advice alone.”