Musical instruments are products of engineering, each one an expression of the level of technological sophistication achieved by the era and culture in which it was conceived. Many are also works of craftsmanship, of artistry, even works of art in their own right. Over time, an instrument can become a record of the musician's technique and physical interaction, as wear patterns emerge from playing.
Of course, the instrument is the implement of music making, producing the sound heard by the listener, and that’s vitally important as well! As recording engineers, it is our job to faithfully capture the sound of that instrument, in a manner that is as true to the source as possible. But let’s not forget that we’re recording a musician as well. We use technology in the recording process, and that technology is also an expression of the level of sophistication of our times.
We must be careful as audio engineers not to lose ourselves in the gear. We must not lose sight of the musicality, both inherent in and necessary to the final outcome. Many of us who find ourselves involved in sound engineering have arrived here as part of pursuing our own musical expression. Many engineers also play an instrument. Many others do not, possibly because attempts to learn an instrument led to frustration or dissatisfaction. Learning an instrument to any level of competence is a daunting challenge, never mind achieving high proficiency. Studio engineering might be seen as an alternative, as a way to stay involved in music without having to spend eight hours every day playing scales.
But hold on there! Engineering requires a similar level of practice and perseverance. The best engineers tend to approach recording much the way the musicians they record approach their craft: practice, repetition, discipline, critique from others. And it's no coincidence that many of the habits used in operating the gear follow the same techniques used by musicians. For example, counting bars and beats in order to pull off a punch-in on the eighth notes, so as not to screw up the downbeat. The point is, if we try to reduce engineering to bar graphs and button pushing, we may be shortchanging ourselves. To embrace the musicality that lies just beyond the technology is to get into the musician's head, and to see the performance from their perspective.
In addition to making more compelling recordings, an engineer who is able to do this will become far more valuable to the musicians! After all, musicians who play traditional instruments don’t generally see those instruments as products of technology. They see their axe as their axe! Way back when it was first introduced, the piano (piano-forte, for the purists) was a high-tech instrument. It was the workstation of its age. But any perception of the piano as a complex machine melts away the moment someone sits down and plays a beautiful piece of music. Even someone who bangs out beats on an MPC will eventually stop seeing it as a computer with buttons and menus. They’ll start to see it as another voice, one that they can use to express themselves. Perhaps we should start seeing our microphones and recorders as part of a process that expresses and communicates.
Numerous experiences I've had as an engineer at Omega Recording Studios have emphasized the importance of this perspective. About a year ago, Tim Dolbear gave a presentation at Omega on behalf of the Samplitude and Sequoia software systems. It was a great demo, and he showed off some really neat features in the software. He also talked about some of his experiences as a recording engineer. The part that stuck with me was when he mentioned taking drumming lessons for about a year or so. The thing was, he revealed that he wasn't taking those lessons in order to become a drummer. He was taking them in order to do a better job recording the instrument. He didn't mention whether he ever got proficient at playing the drums, but I suspect that wasn't the point. The point was gaining the drummer's perspective on the instrument; perhaps just sitting and listening to other drummers play wasn't as effective. I had always looked at taking music lessons with the idea that the sole goal was to play that instrument in a band or something. If I wasn't going to be able to do that (as I was convinced by my own early experiences with lessons), what was the point? Tim's discussion widened my point of view about the other things that music instruction could accomplish.
A couple of months ago, we had a jobs conference at Omega for students and graduates of the Omega Studios' School. It was an opportunity for our students to meet and talk with professionals and employers. Marc Oliver came over from Silver Spring Studios to be a part of the discussion. Marc's work at Silver Spring is mostly dialog editing and audio production for television. Not surprisingly, though, he has some background in playing music. When talking about what he looks for when hiring new engineers, he mentioned seeking out people who play an instrument or have done music production work. Wait a second, for dialog work? Yup. Marc said that people who understand rhythm and phrasing in music generally do a better job following rhythm and phrasing in dialog editing. If musicians find a voice through their music, perhaps we can find musicality in voice!
More recently, one of the graduates from Omega's Sound Reinforcement for Live Performance Program (Live Sound) asked me about mixing guitars at live shows. He brought up the common problem of rock guitars drowning out the vocals at a gig. Every live sound engineer has faced this one, myself included. When I first got into mixing live shows, I took the point of view that some guitarists (most?) just play too loud, probably because of their egos. I spent a fair amount of time trying to diplomatically persuade guitarists to play at a lower level, and couldn't understand why they ignored my advice. Thankfully, a guitarist finally set me straight on my presumptions. He pointed out that the rock guitarist who plays an electric through a combo amp learns that he can achieve a certain quality of crunch by overdriving the speaker. It's a sound they just don't get through pedals, and since it only occurs when the amp is cranked, they can't achieve it when the volume knob is turned down. Once I saw the problem from the musician's perspective, I understood that my solution (just turn down) was no solution at all. That doesn't mean there isn't a conflict to be resolved, just that more effort is needed to find a better solution.
By the way, if you’re looking for some quick live sound tricks to deal with this problem, here’s what I told my student:
If you're mic'ing the amp, that amp no longer needs to be pointed directly at the front row of the audience. It can be pointed to the side, or even to the back. On big tours, they'll go so far as to put the amp underneath the stage. I've even seen "coffins" built for the purpose: the combo amp, the mic, and some sound-absorption material go inside, and the whole thing is sealed up. This allows the amp to be driven much hotter, and provides the crunch. I should quickly point out that if you try any of these options, you've got to be ready to provide the guitarist with a really good monitor mix to make up for the sound they are no longer getting directly off the amp. It won't work in every situation (it turns out that some guitarists do have big egos, and do actually play too loud), but the important thing is that it is a solution arrived at not by ignoring or writing off the musician's perspective, but by embracing it.
I've seen many musicians adopt the mantle of engineer in order to make their own recordings. I've even taught a bunch here at the Omega Studios' School. Many are able to make great recordings (and therefore feel a diminished need to hire an engineer, if you are getting my drift . . . ). I've also seen some very talented musicians struggle with the recording process, as well as the technology. Hey, it's very left-brain, whereas musical ability and creativity tend to be very right-brain. But they struggle along, and if you ask them why they don't just get a techie type to handle the gearhead part, they often say things like "I've tried that, but none of the engineers I've worked with understand MY SOUND!" That is often what it comes down to for the musician. They're dying to put down the software manual and just PLAY, but if they don't hear in the results those magical and unique aspects of the sound that they've labored to perfect, they take the D.I.Y. path.
If we want to make the case for ourselves that as engineers, we play a useful, productive, and appealing role in the process of capturing that magic, we’ve got to hear what the musician hears.
We’ve got to get inside the musician’s head and see their perspective. How to do that? Here’s a suggestion: even if you aren’t a musician, or don’t aspire to be, pick up an instrument. Play it. Be bad at it if that’s how it goes, but don’t put it down until you understand its sound a little better. Even if you don’t end up a virtuoso, you’ll be better off for it. You’ll become a better engineer. And if you start to gain an appreciation of what it took for the musicians at your session to get good at what they do? That’s the point. Once you’ve experienced that, you’ll want to put the same kind of diligence and discipline into your engineering. As the director of The Omega Studios’ School is fond of saying at every orientation, “If you’re going to be involved in the music industry, it can’t hurt you to learn a little about music!” You might be pleasantly surprised where this advice takes you!