Q&A: Jack Joseph Puig, Grammy-winning Music Engineer, Producer
Jack Joseph Puig, a Grammy-winning producer, has worked with acts as diverse as U2, Faith Hill and Mary J. Blige. Puig’s latest project with Waves Audio focuses on the ability of consumer electronics devices to faithfully reproduce recorded music.
As director of creative innovation for the professional audio company Waves Audio, Puig is working to bridge the gap between the pro audio world and consumer electronics. He recently chatted with CE Pro about handling artists, recording their music and the collaborative effort among Waves Audio, Dell and Skullcandy to improve consumer audio devices.
How does the music creation process start; does the artist or group walk in with song ideas/concepts and walk through them with the producer?
The process starts in a lot of ways only because it is predicated on art. Art is not objective, it is subjective. Sometimes, there is a clear demo created prior to the recording transpiring. That demo is what a demo is … it is a demonstration. With the insane amount of digital tools that are now available, people’s ability to make a demo as a good sketch of what a song should be like is fairly high. Anyone can go to a store and go back to any kind of space and create some impressive stuff. Those demos may be a basis to start with and they can record on top of them to create a finalized product. Sometimes people will come in with a guitar or piano and sing, and create it on the spot with no schematics.
The creative process starts right there. Sometimes they only have a piece of an idea. Maybe they only have a verse or chorus, and they extrapolate what they have and create something remarkable. That can start from an instrument or an idea. I had an artist once come in who couldn’t think of what to do … it was Susanna Hoffs [a member of the Bangles]. We sat down and had a few cups of coffee. She said she had been on the phone with her brother and that he is the king of tragedy, and I said there’s a song. We talked for 35 minutes and I took notes. We recorded a song later that day called ‘King of Tragedy.’
When I was working with the Rolling Stones, Mick [Jagger] came in with a demo he created in his kitchen. The demo was fantastic. It was called ‘Streets of Love’ and it sounded like Mick when he was 17. It was amazing … it made me want to cry. When I heard the Stones’ version it didn’t have the same level of emotion. What ended up happening is that I took certain elements of the track Mick made in his kitchen and fused them with the Stones’ version.
How much of their own gear (amps, guitars, drums) do artists bring into the studio and how much equipment do you have on hand to provide tonal variety?
I have a fairly substantial collection of equipment, but I’ve picked oddball pieces that can augment whatever the artist has. I haven’t collected Fender amps or Les Paul guitars.
Tools of the trade are important to the person creating. So, you have to be careful with what you give them. There is a skill set with that. You have to look at what they have and figure out if they can use it.
An emerging artist may show up with one guitar, one amp or you may have an artist that has made six successful records and they show up with a semi full of equipment. The tools have to inspire them.
Do you use a handful of micing techniques that are based on the room characteristics and specific qualities of the guitar amp, drum kit, etc.?
I could do a whole interview on drum recording, and I could do the same thing on guitar or bass. There is a lot of technique that goes into recording those instruments. If you look at sound and simplify it into three frequency bands, lows, mids and highs, the midrange is the most important band. The midrange is the only band represented in all listening scenarios. It’s the one thing you can hear in the car, at the airport or on an expensive audio system. The midrange is where the heart and soul of music live. It’s where you feel the real emotion that someone is trying to convey with an instrument or voice, and when micing an instrument you have to take care to capture a focused part of that frequency band.
With a guitar amp, you can move the mic around the speaker and create muddy lows, and as you go toward the center of the speaker you can get too many highs. This applies to any instrument. You have to be sure to listen to the instrument in the room and interpret the heart of the matter. One of the most important things to keep in mind is that as humans we have 30 senses and we distill them down to five. In the audio world we use one sense. There’s nothing visually appealing coming from the speaker. We are trying to interpret what the feeling of the song is and how to get that to come out of the speakers. When someone listens, they get the feeling of what the artist is trying to convey, and a lot of that lives in the midrange.
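Puig’s three-band picture of sound can be illustrated by measuring how a signal’s energy splits across lows, mids and highs. The sketch below uses a naive DFT and is purely illustrative; the band edges (250 Hz and 4 kHz) are round numbers chosen for the example, not figures Puig gives.

```python
import cmath
import math

def band_energies(samples, sample_rate=8000,
                  low_cut=250.0, high_cut=4000.0):
    """Split a signal's spectral energy into low / mid / high bands.

    Naive O(n^2) DFT, fine for short illustrative signals. The band
    edges are hypothetical round numbers, not studio standards.
    """
    n = len(samples)
    bands = {"low": 0.0, "mid": 0.0, "high": 0.0}
    # Skip k=0 (DC); only positive frequencies up to Nyquist.
    for k in range(1, n // 2):
        freq = k * sample_rate / n
        coeff = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
        energy = abs(coeff) ** 2
        if freq < low_cut:
            bands["low"] += energy
        elif freq < high_cut:
            bands["mid"] += energy
        else:
            bands["high"] += energy
    return bands
```

For example, a 1 kHz test tone lands almost entirely in the mid band, while a 100 Hz tone lands in the low band, which is the sense in which a mix can be checked for where its energy sits.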
Are simulation software solutions for guitars becoming more popular? Has the sound quality of these products improved over the past few years?
The digital revolution, the renaissance we are in now, has only gotten better with time. With chipsets becoming faster and people understanding modeling technologies better, the ability to copy the emotion and feeling of certain amps, guitars, etc. has improved. Putting technical excellence aside, it’s more of a creative choice. If you have a guitar that is squeaky and raw, it evokes one type of feeling or emotion. If you have a guitar that is perfect, no buzz, no hum, it’s focused, it’s compressed … it gives you a different feeling.
People would argue that some records that are raw have a longer shelf life. And there are people who argue that records recorded with software have a certain shelf life. The consumer, whether it’s a prosumer, a child or an educated ear, feels the emotion when it’s real. In some sense, the high-brow simulations of things come off as karaoke. They don’t feel real, they feel like an imitation. Sometimes you want the record to feel real, other times you don’t. It all comes back to the same thing: the emotion.
How many guitar tracks, keyboard, vocal, drums will a song typically contain?
Anywhere from 60 to 90, and that’s because you multi-layer instruments playing the same parts. The pop sound is a processed sound. Pop records are the most difficult to make. It takes an educated mind to take something and manipulate it. It takes experience to know how to grab the ear and take listeners through the journey. Before you know it, 3:45 has gone by. You need lots of guitars, pianos, cymbals, multiple background vocal tracks. It’s a plethora of instruments that entertain … that’s a pop record.
What are the tools you use to blend these tracks together; can you explain what compression, EQ and reverb are and how you use them?
Compression, there are different kinds. Compression can be used to make something loud. That means the average (RMS) level is increased and the peak level is decreased. This is a large part of what people don’t like about modern recordings. Dynamics are a large part of what moves you through music. If the loudness is on 10 all the time, then there’s no journey. There’s another kind of compression, and this is my favorite tool. This type of compression has an attack and a release. The attack is when it grabs the audio and manipulates it. The release is how long it holds onto the manipulation and how quickly it lets the audio go back to its original state.
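The attack-and-release behavior Puig describes can be sketched as a simple feed-forward compressor: an envelope follower that reacts quickly when the level rises (attack) and lets go slowly when it falls (release), then turns the level above a threshold into gain reduction. This is a minimal toy model, not any specific hardware or Waves processor; the function name and all parameter values (threshold, ratio, time constants) are assumptions chosen for illustration.

```python
import math

def compress(samples, threshold_db=-20.0, ratio=4.0,
             attack_ms=5.0, release_ms=50.0, sample_rate=44100):
    """Toy feed-forward compressor with attack/release smoothing.

    All parameter names and defaults are illustrative, not taken
    from any real compressor.
    """
    # One-pole smoothing coefficients derived from time constants.
    attack_coef = math.exp(-1.0 / (sample_rate * attack_ms / 1000.0))
    release_coef = math.exp(-1.0 / (sample_rate * release_ms / 1000.0))
    env_db = -120.0  # running level estimate in dB
    out = []
    for x in samples:
        level_db = 20.0 * math.log10(max(abs(x), 1e-9))
        if level_db > env_db:
            # Attack: envelope rises quickly to "grab" the audio.
            env_db = attack_coef * env_db + (1 - attack_coef) * level_db
        else:
            # Release: envelope falls slowly, letting the audio
            # return to its original state.
            env_db = release_coef * env_db + (1 - release_coef) * level_db
        # Gain reduction only above the threshold, scaled by ratio.
        over = max(env_db - threshold_db, 0.0)
        gain_db = -over * (1.0 - 1.0 / ratio)
        out.append(x * 10.0 ** (gain_db / 20.0))
    return out
```

With these settings, a sustained loud signal is pulled down toward the threshold, while quiet material below the threshold passes through untouched, which is why overusing this kind of processing flattens the dynamics Puig says carry the journey.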