Tag Archives: Listening

Does It Have To Be This Loud?

A love-letter to patrons of live music.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.

Dear Live-Music Patron,

Occasionally, you have a question to ask an audio human. That question isn’t often posed to me personally, although, in aggregate, the query is probably made multiple times in every city on every night. The question isn’t always direct, and it can morph into different forms – some of which are statements:

“I can’t hear anything except the drums.”

“The guitar on the right is hurting my ears.”

“It’s hard to talk to my girlfriend/ boyfriend in here.”

“Can you keep everything the same, but turn the mains down?”

“Can you make it so the mic doesn’t make that screech again?”

And so on.

Whenever the conversation goes this way, there’s a singular question lying at the heart of the matter:

“Does it have to be this loud?”

There are a number of things I want to say to you regarding that question, but the most important bit has to come first. It’s the one thing that I want you to realize above everything else.

You’re asking a question that is 100% legitimate.

You may have asked it in one way or another, only to be brushed off. You may have had an exasperated expression pointed your way. You may have been given a brusque “Yes” in response. You may have encountered shrugging, swearing, eye-rolling, sneering, or any number of other responses that were rude, unhelpful, or downright mean.

But that doesn’t mean that your question is wrong or stupid. You’re right to ask it. It’s one of the minor tragedies in this business that production people and music players talk amongst themselves so much, and yet almost never have a real conversation with you. Another minor tragedy is that those of us who run the shows are usually not in a position to have a nuanced discussion with you when it would actually be helpful.

It’s hard to explain why it’s so loud when it’s so loud that you have to ask if “it has to be this loud.”

So, I want to try to answer your question. I can’t speak to every individual circumstance, but I can talk about some general cases.

Sometimes No

I am convinced that, at some time in their career, every audio tech has made a show unnecessarily loud. I’ve certainly done it.

As “music people,” we get excited about sonic experiences as an end in themselves. We’re known for endlessly chasing after tiny improvements in some minuscule slice of the audible spectrum. We can spend hours debating the best way to make the bass (“kick”) drum sound like a device capable of extinguishing all multicellular life on the planet. The sheer number of words dedicated to the construction of “massive” rock and roll guitar noises is stunning. The amount of equipment and trickery that can be dedicated to, say, getting a bass guitar to sound “just so” might boggle your mind.

It’s entirely possible for us to become so enraptured in making a show – or even just a small portion of a show – sound a certain way that we don’t realize how much level we’re shoveling into the equation. We get the drums cookin’, and then we realize that the guitars are a little low, and then the bass comes up to balance that out, and then the vocals are buried, so we crank up the vocals and WHAT? I CAN’T HEAR YOU!

It does happen. Sometimes it’s accidental, and sometimes it’s deliberate. Some techs just don’t feel like a rock show is a rock show until they “feel” a certain amount of sound pressure level.

In these cases, when the audio human’s mix choices are the overwhelming factor in a show being too loud, the PA really should be pulled back. It doesn’t have to be that loud. The problem and the solution are simple creatures.

But Sometimes Yes

The thing with live audio is that the problems and the solutions are often not so simple as what I just got into. It’s very possible, especially in a small room, for the sound craftsperson’s decisions to NOT be the overwhelming factor in determining the volume of a gig. I – and others like me – have spent lots of time in situations where we’ve had to deal with an unfortunate consequence of the laws of physics:

The loudest thing in the room is as quiet as we can possibly be, and quite often, a balanced mix requires something else to be much louder than that thing.

If the instrumentalists (drums, bass, guitars, etc.) are blasting away at 110 dB without any help from the sound system, then the vocals will have to be in that same neighborhood in order to compete. It’s a conundrum of either being too loud with a flat-out awful mix, or too loud with a mix that’s basically okay. In a case like that, an audio human just has to get on the gas and wait to go home. Someone’s going to be mad at us, and it might as well not be the folks who are into the music.

There’s another overarching situation, though, and that’s the toughest one to talk about. It’s a difficult subject because it has to do with subjectivity and incompatible expectations. What I’m getting at is when some folks want background music, and the show is not – and cannot be – presented as such.

There ARE bands that specialize in playing “dinner” music. They’re great at performing inoffensive selections that provide a bed for conversation at a comfortable volume. What I hope can be understood is that this is indeed a specialization. It’s a carefully cultivated, specific skill that is not universally pursued by musicians. It’s not universally pursued because it’s not universally applicable.

Throughout human history, a great many musical events, musical instruments, and musical artisans have had a singular purpose: To be noticed, front and center. For thousands of years, humans have used instruments like drums and horns as acoustic “force multipliers” – sonic levers, if you will. We have used them to call to each other over long distances, or send signals in the midst of battle. Fanfares have been sounded at the arrivals of kings. On a parallel track, most musicians that I know do not simply play to be involved in the activity of playing. They play so as to be listened to.

Put all that together, and what you have is a presentation of art that simply is not meant to be talked over. In the cases where it’s meant to coexist with a rambunctious audience, it’s even more meant to not be talked over. From the mindset of the players to the technology in use, the experience is designed specifically to stand out from the background. It can’t be reduced to a low rumble. That isn’t what it is. There’s no reason that it has to be painfully loud, but there are many good reasons why a conversation in close proximity might not be practical.


Does it have to be this loud?


What Just Changed?

If an acoustical environment changes significantly, you may start to have mysterious problems.


Just recently, I was working a show for Citizen Hypocrisy. I did my usual prep, which includes tamping down the “hotspots” in vocal mics – gain before feedback being important, and all that.

Everything seemed copacetic, even the mic for the drumkit vocal. There was clarity, decent volume, and no ringing. The band got in the room and set up their gear. This time around, an experiment was conducted: A psychedelic background video was set to play in a loop as a backdrop. We eventually did a quick, “just in time” soundcheck, and we were off.

As things kicked into gear, I noticed something: I was getting high-frequency feedback from one of the mics. It wasn’t running away by any means, but it was audible and annoying. I thought to myself, “We did just give DJ’s microphone a push in monitor world. I probably need to pull the top end back a bit.” I did just that…but the feedback continued. I started soloing things into my headphones.

“Huh, guitar-Gary’s mic seems okay. DJ’s mic is picking up the feedback, but it doesn’t seem to be actually part of that loop. That leaves…*unsolos DJ’s mic, solos drum-Gary’s mic* Well, THERE’S the problem.”

The mic for vocals at the drumkit? It was squeaking like a pissed-off mouse. I hammered the offending frequency with a notch filter, and that was it.

But why hadn’t I noticed the problem when I was getting things baselined for the night? Gary hadn’t changed the orientation of the mic so that it was pointing at the drumfill, and neither the input gain nor the send gains had changed, so why had the problem cropped up?

The answer: Between my setup and actually getting the show moving in earnest, we had changed the venue’s acoustical characteristics, especially as they concerned the offending microphone. We had deployed the screen behind the drums.

Rolling With The Changes

Citizen Hypocrisy was playing at my regular gig. Under conditions where we are NOT running video, the upstage wall is a mass of acoustical wedge foam. For most purposes, high-mid and high frequency content is soaked up by the treatment, never to be heard again. However, when we are running video, the screen drops in front of the foam. For high frequencies, the screen is basically a giant, relatively efficient reflector. My initial monitor-EQ solution was built for the situation where the upstage wall was an absorber. When the screen came down, that solution was partially invalidated. Luckily, what had to be addressed was merely the effective gain of a very narrow frequency range. We hadn’t run into a “showstopper bug,” but we had still encountered a problem.
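If you’re curious what “hammering the offending frequency with a notch filter” looks like under the hood, here’s a minimal sketch in Python. It uses the widely published RBJ “cookbook” biquad recipe, with made-up numbers (a 3.2 kHz ring, a 48 kHz sample rate, a narrow Q of 30) – the actual frequency and filter that night were whatever the console provided.

```python
import math

def notch_coeffs(f0, fs, q):
    """RBJ-cookbook notch biquad coefficients, normalized so a0 == 1."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    a0 = 1 + alpha
    b = (1 / a0, -2 * math.cos(w0) / a0, 1 / a0)
    a = (1.0, -2 * math.cos(w0) / a0, (1 - alpha) / a0)
    return b, a

def biquad(samples, b, a):
    """Run a Direct Form I biquad over a list of samples."""
    out = []
    x1 = x2 = y1 = y2 = 0.0
    for x in samples:
        y = b[0] * x + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

# Hypothetical numbers: a 3.2 kHz ring at a 48 kHz sample rate, Q of 30.
b, a = notch_coeffs(3200.0, 48000.0, 30.0)

# A sine at the offending frequency is driven to almost nothing once the
# filter settles (the first few milliseconds are transient).
ring = [math.sin(2 * math.pi * 3200 * n / 48000) for n in range(4800)]
out = biquad(ring, b, a)
print(max(abs(s) for s in out[2400:]) < 0.1)  # True
```

The point of the narrow Q is exactly what the story describes: only the effective gain of a very small frequency range gets touched, and everything else passes through essentially unchanged.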

The upshot of all this is:

Any change to a show’s acoustical environment, whether by way of surface absorption, diffusion, and reflectance, or by way of changes in atmospheric conditions, can invalidate a mix solution to some degree.

Now, you don’t have to panic. My feeling is that we sometimes overstate the level of vigilance required with regard to acoustical changes at a show. You just have to keep listening, and keep your brain turned on. If the acoustical environment changes, and you hear something you don’t like, then try to do something about it. If you don’t hear anything you don’t like, there’s no reason to find something to do.

For instance, at my regular gig, putting more people into the room is almost always an automatic improvement. I don’t have to change much (if anything at all), because the added absorption makes the mix sound better.

On the reverse side, I once ran a summer show for Puddlestone where I suddenly had a “feedback monster” where one hadn’t existed for a couple of hours. The feedback problem coincided with the air conditioning finally getting a real handle on the temperature in the room. My guess is that some sort of acoustical refraction was occurring, where it was actually hotter near the floor where all the humans were. For the first couple of hours, some amount of sound was bending up and away from us. When the AC really took hold, it might have been that the refraction “flattened out” enough to get a significant amount of energy back into the mics. (My explanation could also be totally incorrect, but it seems plausible.) Obviously, I had to make a modification in accordance with the problem, which I did.
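To put rough numbers on that guess: the speed of sound rises with air temperature, and wavefronts bend away from the faster (warmer) layer toward the slower (cooler) one. Here’s a back-of-the-envelope Python sketch using the standard dry-air approximation; the 20° and 30° Celsius figures are purely illustrative, not measurements from that show.

```python
import math

def speed_of_sound(temp_c):
    """Approximate speed of sound in dry air (m/s) at a given temperature in Celsius."""
    return 331.3 * math.sqrt(1 + temp_c / 273.15)

# Illustrative temperatures only: a warm floor layer vs. cooler air above it.
print(round(speed_of_sound(30.0), 1))  # ~349.0 m/s near the warm floor
print(round(speed_of_sound(20.0), 1))  # ~343.2 m/s in the cooler air above
print(round(speed_of_sound(30.0) - speed_of_sound(20.0), 1))  # ~5.8 m/s difference
```

Even a few meters per second of difference across a vertical gradient is enough to curve wavefronts noticeably over the length of a room, which is all the “bending up and away” story requires.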

In all cases, if things were working before, and suddenly are no longer working as well, a good question to ask yourself is: “What changed between when the mix solution was correct, and now, when it seems incorrect?” It’s science! You identify the variable(s) that got tweaked, and then manage the variables under your control in order to bring things back into equilibrium. If you have to re-solve your mix equation, then that’s what you do.

And then you go back to enjoying the show.

Until something else changes.

Loud Doesn’t Create Excitement

A guest post for Schwilly Family Musicians.


The folks in the audience have to be “amped up” about your songs before the privilege of volume is granted.

The full article is here.

Loud Thoughts

“Loud” is a subjective sort of business.


The concept of “loud” is really amorphous, especially when you consider just how important it is to live shows. A show that’s too loud for a given situation will quickly turn into a mess, in one way or another. Getting a desired signal “loud enough” in a certain monitor mix may be key to a great performance.

And yet…”loud” is subjective. Perceived changes in level are highly personalized. People tolerate quite a bit of level when listening to music they like, and tolerate almost no volume at all when hearing something that they hate. One hundred decibels SPL might be a lot of fun when it’s thumping bass, but it can also be downright abrasive when it’s happening at 2500 Hz.

Twice As Loud

Take a look at that heading. Do you realize that nobody actually, truly knows what “twice as loud” means?

People might think they know. You’ll hear statements like “people generally think 6 dB is about twice as loud,” but then later someone else will say, “people perceive a 10 dB difference to be twice as loud.” There’s a range of perception, and it’s pretty sloppy when you actually do the math involved.

What I mean is this. The decibel describes a ratio of powers. (You can convert other things, like voltage and pressure, into power equivalents.) Twice the power is 3 dB, period. It’s a mathematical definition that the industry has embraced for decades. It’s an objective, quantitative measurement of a ratio. Now, think about the range of perception that I presented just now. It’s a little eyebrow-raising when you realize that the range for perceiving “twice as loud” is anywhere from 4X to 10X the power of the original signal. If a 1000 watt PA system at full tilt is the baseline, then there are listeners who would consider the output to be doubled at 4000 watts…and other folks who wouldn’t say it was twice as loud until a 10 kW system was tickling its clip lights!

It’s because of this uncertainty that I try (and encourage others to seriously consider) communicating in terms of decibels. Especially in the context of dialing up a PA or monitor rig to everybody’s satisfaction, it helps greatly if some sort of quantitative and objective reference point is used. Yes, statements like “I need the guitar to be twice as loud,” or “I think the mix needs 10% more of the backup singers” ARE quantitative – but they aren’t objective. Do you need 3 dB more guitar? Six decibels? Ten? Do you want only 0.4 dB more of the backup singers? (Because that’s what [10 log 1.1] works out to.) Communicating in decibels is far less arbitrary.

(The irony of using a qualitative phrase like “far less” in the context of advocating for objective quantification is not lost on me, by the way.)
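If you want to sanity-check those figures yourself, the conversion is a one-liner. Here’s a quick Python sketch of the power-ratio-to-decibel formula that the numbers above come from:

```python
import math

def db(power_ratio):
    """Convert a power ratio to decibels: dB = 10 * log10(ratio)."""
    return 10 * math.log10(power_ratio)

print(round(db(2), 2))    # doubling the power: 3.01 dB
print(round(db(4), 2))    # 4x power: 6.02 dB ("twice as loud," to some ears)
print(round(db(10), 2))   # 10x power: 10.0 dB ("twice as loud," to other ears)
print(round(db(1.1), 2))  # "10% more": a mere 0.41 dB
```

Notice how the perceptual range of “twice as loud” (roughly 6 to 10 dB) maps onto wildly different amounts of actual power, which is exactly why the decibel is the less arbitrary vocabulary.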

The Meter Is Only Partially Valid As An Argument

Even if nobody actually knows what “twice as loud” means, one thing that people do know is when they feel a show is too loud.

For those of us who embrace measurement and objectivity, there’s a tendency that we have. When we hear a subjective statement, we get this urge to fire up a meter and figure out if that statement is true. I’m all for this behavior in many scenarios. Challenging unsubstantiated hoo-ha is, I think, one of the areas of pro-audio that still has some “frontier” left in it. My opinion is that more claims need to be challenged with the question, “Where’s your data?”

But when it comes to the topic of “loud,” especially the problem of “too loud,” whipping out an SPL meter and trying to argue on the basis of objectivity is only narrowly appropriate. In the case of a show that feels too loud for someone, the meter can help you calibrate their perception of loud to an actual number that you can use. You can then decide if trying to achieve a substantially lower reading is feasible or desirable. If a full-on rock band is playing in a room, making 100 dBC at FOH without the PA even contributing, and one person thinks they only ought to be at 85 dBC…that one person is probably out of luck. The laws of physics are really unlikely to let you fulfill that expectation. At the same time, you have to realize that your meter reading (which might suggest that the PA is only contributing three more decibels to the show) is irrelevant to that person’s perception.
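That parenthetical about the PA “only contributing three more decibels” comes straight from how uncorrelated sources add. A sketch, with made-up round numbers:

```python
import math

def combine_spl(*levels_db):
    """Sum uncorrelated sources: convert SPLs to relative powers, add, convert back."""
    total_power = sum(10 ** (level / 10) for level in levels_db)
    return 10 * math.log10(total_power)

band = 100.0  # hypothetical: the band's acoustic level at FOH with the PA muted
print(round(combine_spl(band, 100.0), 1))  # PA matching the band: 103.0 dB total
print(round(combine_spl(band, 94.0), 1))   # PA running 6 dB under: 101.0 dB total
```

In other words, a PA exactly as loud as the band only buys you 3 dB overall – and if the result still feels too loud to somebody, that arithmetic won’t change their mind.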

If something is too loud for someone, the numbers from your meter have limited value. They can help you form a justifying argument for why the show level is where it is, but they’re not a valid argument all by themselves.

It’s Not Actually About The Best Sound

What we really want is the best possible show at the lowest practical gain.


As it happens, there’s a bit of a trilogy forming around my last article – the one about gain vs. stability. In discussions like this, the opening statement tends to be abstract. The “abstractness” is nice in a way, because it doesn’t restrict the application too much. If the concept is purified sufficiently, it should be usable in any applicable context.

At the same time, it’s nice to be able to make the abstract idea more practical. That is, the next step after stating the concept is to talk about ways in which it applies.

In live audio, gain is both a blessing and a curse. We often need gain to get mic-level signals up to line-level. We sometimes need gain to correct for “ensemble imbalances” that the band hasn’t yet fixed. We sometimes need gain to make a quiet act audible against a noisy background. Of course, the more gain we add, the more we destabilize the PA system, and the louder the show gets. The day-to-day challenge is to find the overall gain which lets us get the job done while maintaining acceptable system stability and sound pressure.

If this is the overall task, then there’s a precept which I think can be derived from it. It might only be derivable indirectly, depending on your point of view. Nevertheless:

Live sound is NOT actually about getting the best sound, insofar as “the best sound” is divorced from other considerations. Rather, the goal of live sound is to get the best possible holistic SHOW, at the lowest practical gain.

Fixing Everything Is A Bad Idea

The issue with a phrase like “the best sound” is that it morphs into different meanings for different people. For instance, at this stage in my career, I have basically taken the label saying “The Best Sound” and stuck it firmly on the metaphorical box containing the sound that gets the best show. For that reason alone, the semantics can be a little difficult. That’s why I made the distinction above – the distinction that “the best sound” or “the coolest sound” or “the best sound quality” is sometimes thought of without regard to the show as a whole.

This kind of compartmentalized thinking can be found both in concert audio veterans and greenhorns. My gut feeling is that the veterans who still section off their thinking are the ones who never had their notions challenged when they were new enough.

…and I think it’s quite common among new audio humans to think that the best sound creates the best show. That is, if we get an awesome drum sound, and a killer guitar tone, and a thundering bass timbre, and a “studio ready” vocal reproduction, we will then have a great show.

The problem with this line of thinking is that it tends to create situations where a tech is trying to “fix” almost everything about the band. The audio rig is used as a tool to change the sound of the group into a processed and massaged version of themselves – a larger than life interpretation. The problem with turning a band into a “bigger than real” version of itself is that doing so can easily require the FOH PA to outrun the acoustical output of the band AND monitor world by 10 dB or more. Especially in a small-venue context, this can mean lots and lots of gain, coupled with a great deal of SPL. The PA system may be perched on the edge of feedback for the duration of the show, and it may even tip over into uncontrolled ringing on occasion. Further, the show can easily be so loud that the audience is chased off.

To be blunt, your “super secret” snare-drum mojo is worthless if nobody wants to be in the same room with it. (If you follow me.)

Removed from other factors, the PA does sound great…but with the other factors being considered, that “great” sound is creating a terrible show.


The correction for trying to fix everything is to reinforce only what actually needs help. This approach obeys the “lowest practical gain” rule. PA system gain is applied only to the sources that are being acoustically swamped, and only in enough quantity that those sources stop being swamped.

In a sense, you might say that there’s a certain amount of total gain (and total resultant volume) that you can have that is within an acceptable “window.” When you’ve used up your allotted amount of gain and volume, you need to stop there.

At first, the selection of what gets gain applied is not very fine-grained. For newer operators and/ or simplified PA systems, the choice tends to be “reproduce most of the source or none of it.” You might have, say, one guitar that’s in the PA, plus a vocal that’s cranked up, and some kick drum, and that’s all. Since the broadband content of the source is getting reproduced by the PA, adding any particular source into the equation chews up your total allowable gain in a fairly big hurry. This limits the correction (if actually necessary) that the PA system can apply to the total acoustical solution.

The above, by the way, is a big reason why it’s so very important for bands to actually sound like a band without any help from the PA system. That does NOT mean “so loud that the PA is unnecessary,” but rather that everything is audible in the proper proportions.


As an operator learns more and gains more flexible equipment, they can be more selective about what gets a piece of the gain allotment. For instance, let’s consider a situation where one guitar sound is not complementing another. The overall volumes are basically correct, but the guitar tones mask each other…or are masked by something else on stage. An experienced and well-equipped audio human might throw away everything in one guitar’s sound, except for a relatively narrow area that is “out of the way” of the other guitar. The audio human then introduces just enough of that band-limited sound into the PA to change the acoustical “solution” for the appropriate guitar. The stage volume of that guitar rig is still producing the lion’s share of the SPL in the room. The PA is just using that SPL as a foundation for a limited correction, instead of trying to run right past the total onstage SPL. The operator is using granular control to get a better show (where the guitars each have their own space) while adding as little gain and SPL to the experience as possible.

If soloed up, the guitar sound in the PA is terrible, but the use of minimal gain creates a total acoustical solution that is pleasing.

Of course, the holistic experience still needs to be considered. It’s entirely possible to be in a situation that’s so loud that an “on all the time” addition of even band-limited reinforcement is too much. It might be that the band-limited channel should only be added into the PA during a solo. This keeps the total gain of the show as low as is practicable, again, because of granularity. The positive gain is restricted in the frequency domain AND the time domain – as little as possible is added to the signal, and that addition is made as rarely as possible.

An interesting, and perhaps ironic consequence of granularity is that you can put more sources into the PA and apply more correction without breaking your gain/ volume budget. Selective reproduction of narrow frequency ranges can mean that many more channels end up in the PA. The highly selective reproduction lets you tweak the sound of a source without having to mask all of it. You might not be able to turn a given source into the best sound of that type, but granular control just might let you get the best sound practical for that source at that show. (Again, this is where the semantics can get a little weird.)

Especially for the small-venue audio human, the academic version of “the best sound” might not mean the best show. This also goes for the performers. As much as “holy grail” instrument tones can be appreciated, they often involve so much volume that they wreck the holistic experience. Especially when getting a certain sound requires driving a system hard – or “driving” an audience hard – the best show is probably not being delivered. The amount of signal being thrown around needs to be reduced.

Because we want the best possible show at the lowest practical gain.

“Shine On You Crazy Diamond:” The Best Soundcheck Song EVER

Everything takes its place at an unhurried pace.


Back in the days when I got to work with Floyd Show, I always preferred it when the night would start with “Shine On You Crazy Diamond.” Even when we’d had time to do extensive work on the show’s audio during the day, the luxury of “easing in” to the actual performance was something I savored.

Sure, nothing quite compares with the rush of, say, having the first set be “Dark Side Of The Moon.” The intro plays, building to a feverish peak, and then “shwOOM!” The guitars take you into “Breathe.” It’s really cool when it works, but there’s always that nagging fear in the back of your mind: “What if something doesn’t quite happen correctly?” Nothing short of a catastrophic failure will actually stop the show, so a problem means that the impact of the show-open is wrecked…AND you’re going to have to fix things in a big hurry.


“Shine On You Crazy Diamond” is, in my opinion, THE template for a “soundcheck” song. Soundcheck songs are fantastic tools to have handy, because (let’s face it), the small-venue world is full of scenarios where the only option is grab-n-go. Combining an actual first song with a soundcheck lets you keep the show moving, maximizing your play time and audience engagement while getting things sorted out on deck. Not all soundcheck tunes are created equal, though, so learning the lessons available from “Shine On” is a Very Good Idea™ when it comes time to craft your own “multitasker’s minuet.”

Take Your Time

Because soundcheck songs naturally happen at the top of a set, the instinct is to start off with a bang. This is unhelpful. A fast tune means that the time available for an audio-human to catch, analyze, act on, and re-evaluate any particular problem is hugely compressed. Several musical phrases can go by while a tech tries to get sorted out during a lively song. The more phrases that go by without a fix, the more “wrong” the show seems to sound. A fast song, then, tends to push the show opening towards sounding bad. (You don’t want your show-open to sound bad.)

“Shine On,” of course, answers this issue in exactly the right way. It’s a leisurely piece – downright dreamy, actually – which means that the person managing the PA noises doesn’t have to rush around. They can focus, listen, and act deliberately. If something is wrong, it’s entirely possible to get a handle on the issue within a couple of musical phrases. Even very sticky problems can usually be wrangled by the end of the song, which allows the show to continue smoothly without stopping.

If you want to come out of the gate like a racecar, you need a proper soundcheck. If you’re going to do things on the fly, please fly slowly.

Everything Has Its Own Space

Another excellent feature of “Shine On You Crazy Diamond” is that it spends a long time being a series of solos. You get to hear the keys, then a guitar chimes in up front, then the other guitar plays for a bit, then you get some more keys, and then everything fires together with the drums. After that, you get some uncluttered drums along with another guitar solo, and then some vocals that happen over some subdued backing from the band. Next, you get a chance to hear the vocals against the higher-intensity version of the band, and finally, you get some saxophone over both gentle and more “wound-up” backgrounds.

Everything has a time (and quite a lot of it, due to the song being slow) where it is the front-and-center element. For an audio-human, this is tremendous. It gives a very clear indication of whether or not the basic level of the part is in a reasonable place, and it also still manages to say a lot about whether the part’s tonality is going to work in context. Further, this kind of song structure allows us to get as close as possible to a “check everything individually” situation without actually having that option available. The audio human gets time to think about each instrument separately, even though other parts are still playing in the background.

The antithesis of this is the soundcheck song where everything starts playing at once, usually with everybody trying to be louder than everybody else. The tech ends up losing precious time while trying to simply make sense of the howling vortex of noise that just hit them in the face. With nothing “presorted,” the only option is to struggle to pick things out either by luck or by force.

Again, if you want to start at a full roar, you should do that at the shows where you have the opportunity to get the roar figured out in advance. If you don’t have time to take turns getting sorted before the show, then you have to use the show to do that.

Waste Nothing

Some folks treat their soundcheck song as a bit of worthless rubbish. They toss it out to the audience as though it has no value, seemingly in the hopes that the showgoers will ignore it. It’s as though the band is saying “this isn’t real, so don’t pay attention yet.”

But it IS real, and the audience IS paying attention. A soundcheck tune is part of the actual show, and should NOT be a throwaway. It should be a “first-class” song that’s done as well as is possible.

Of course, because it is a soundcheck song, it probably shouldn’t be the tune that relies most on everything going perfectly. Songs used to get around production issues are tools, and you have to use the correct tool for any given job.

“Shine On” is a real song. It’s a very important part of Pink Floyd’s catalog, and was crafted with care. Floyd Show never (when I worked with them) played the tune with the idea of taking a mulligan afterwards, which is also what I would expect from the actual Pink Floyd. If the show was opened with “Shine On You Crazy Diamond,” the show was OPENED. We were not casually testing anything; we were going for it, even as remaining technical issues got sorted out.

You should care about your soundcheck song. It’s a real part of your show, a part that should be intentionally crafted to meet a specific need: Connecting with your audience while a mix comes together.


Trust your ears – but verify.

It might be that I just don’t want to remember who it was, but a famous engineer once became rather peeved. His occasion to be irritated arose when a forum participant had the temerity to load one of the famous engineer’s tracks into a DAW and look at the waveform. The forum participant (not me) was actually rather complimentary, saying that the track LOOKED very compressed, but didn’t SOUND crushed at all.

This ignited a mini-rant from the famous guy, where he pointedly claimed that the sound was all that mattered, and he wasn’t interested in criticism from “engin-eyes.” (You know, because audio humans are supposed to be “engine-ears.”)

Granted, the famous engineer hadn’t flown into anything that would pass as a “vicious, violent rage,” but the relative ferocity of his response was a bit stunning to me. I was also rather put off by his apparent philosophy that the craft of audio has no need of being informed by senses other than hearing.

Now, let’s be fair. The famous engineer in question is known for a reason. He’s had a much more monetarily successful career than I have. He’s done excellent work, and is probably still continuing to do excellent work at the very moment of this writing. He’s entitled to his opinions and philosophies.

But I am also entitled to mine, and in regards to this topic, here’s what I think:

The idea that an audio professional must rely solely upon their sense of hearing when performing their craft is, quite simply, a bogus “purity standard.” It gets in the way of people’s best work being done, and is therefore an inappropriate restriction in an environment that DEMANDS that the best work be done.

Ears Are Truthful. Brains Are Liars.

Your hearing mechanism, insofar as it works properly, is entirely trustworthy. A sound pressure wave enters your ear, bounces your tympanic membrane around, and ultimately causes some cilia deep in your ear to fire electrical signals down your auditory nerve. To the extent that I understand it all, this process is functionally deterministic – for any given input, you will get the same output until the system changes. Ears are dispassionate detectors of aural events.

The problem with ears is that they are hooked up to a computer (your brain) which can perform very sophisticated pattern matching and pattern synthesis.

That’s actually incredibly neat. It’s why you can hear a conversation in a noisy room. Your brain receives all the sound, performs realtime, high-fidelity pattern matching, tries to figure out what events correlate only to your conversation, and then passes only those events to the language center. Everything else is labeled “noise,” and left unprocessed. On the synthesis side, this remarkable ability is one reason why you can enjoy a song, even against noise or compression artifacts. You can remember enough of the hi-fi version to mentally reconstruct what’s missing, based on the pattern suggested by the input received. Your emotional connection to the tune is triggered, and it matters very little that the particular playback doesn’t sound all that great.

As I said, all that is incredibly neat.

But it’s not necessarily deterministic, because it doesn’t have to be. Your brain’s pattern matching and synthesis operations don’t have to be perfect, or 100% objective, or 100% consistent. They just have to be good enough to get by. In the end, what this means is that your brain’s interpretation of the signals sent by your ears can easily be false. Whether that falsehood is great or minor is a whole other issue, very personalized, and beyond the scope of this article.

Hearing What You See

It’s very interesting to consider what occurs when your hearing correlates with your other senses. Vision, for instance.

As an example, I’ll recall an “archetype” story from Pro Sound Web’s LAB: A system tech for a large-scale show works to fulfill the requests of the band’s live-audio engineer. The band engineer has asked that the digital console be externally “clocked” to a high-quality time reference. (In a digital system, the time reference or “wordclock” is what determines exactly when a sample is supposed to occur. A more consistent timing reference should result in more accurate audio.) The system tech dutifully connects a cable from the wordclock generator to the console. The band engineer gets some audio flowing through the system, and remarks at how much better the rig sounds now that the change had been made.

The system tech, being diplomatic, keeps quiet about the fact that the console has not yet been switched over from its internal reference. The external clock was merely attached. The console wasn’t listening to it yet. The band engineer expected to hear something different, and so his brain synthesized it for him.

(Again, this is an “archetype” story. It’s not a description of a singular event, but an overview of the functional nature of multiple events that have occurred.)

When your other senses correlate with your hearing, they influence it. When the correlation involves something subjective, such as “this cable will make everything sound better,” your brain will attempt to fulfill your expectations – especially when no “disproving” input is presented.

But what if the correlating input is objective? What then?


What I mean by “an objective, correlated input” is an unambiguously labeled measurement of an event, presented in the abstract. A waveform in a DAW (like I mentioned in the intro) fits this description. The timescale, “zero point,” and maximum levels are clearly identifiable. The waveform is a depiction of audio events over time, in a visual medium. It’s abstract.

In the same way, audio analyzers of various types can act as objective, correlated inputs. To the extent that their accuracy allows, they show the relative intensities of audio frequencies on an unambiguous scale. They’re also abstract. An analyzer depicts sonic information in a visual way.
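To make that “abstract, visual depiction” idea concrete, here’s a minimal sketch of the arithmetic at the heart of an analyzer: projecting a signal onto a test frequency and reporting the magnitude. The numbers are made up, and real analyzers add FFTs, windowing, and averaging on top of this, but the core idea is the same.

```python
import math

def bin_magnitude(signal, sample_rate, freq):
    """Single-bin DFT: how strongly `signal` contains a component at `freq` (Hz)."""
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * freq * i / sample_rate) for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * i / sample_rate) for i, s in enumerate(signal))
    return 2.0 * math.sqrt(re * re + im * im) / n

# 0.1 seconds of a 1 kHz sine at a 48 kHz sample rate.
sample_rate = 48_000
tone = [math.sin(2 * math.pi * 1_000 * i / sample_rate) for i in range(4_800)]

print(bin_magnitude(tone, sample_rate, 1_000))  # ~1.0: full-scale energy at 1 kHz
print(bin_magnitude(tone, sample_rate, 2_000))  # ~0.0: essentially nothing at 2 kHz
```

The output is an unambiguous number on an unambiguous scale, which is exactly what makes it useful as a calibration reference for your ears.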

When used alongside your ears, these objective measurements cause a very powerful effect: They calibrate your hearing. They allow you to attach objective, numerical information to your brain’s perception of the output from your ears.

And this makes it harder for your brain to lie to you. Not impossible, but harder.

Using measurement to confirm or deny what you think you hear is critical to doing your best work. Yes, audio-humans are involved in art, and yes, art has subjective results. However, all art is created in a universe governed by the laws of physics. The physical processes involved are objective, even if our usage of the processes is influenced by taste and preference. Measurement tools help us to better understand how our subjective decisions intersect with the objective universe, and to me, that’s really important.

If you’re wondering if this is a bit of a personal “apologetic,” you’re correct. If there’s anything I’m not, it’s a “sound ninja.” There are audio-humans who can hear a tiny bit of ringing in a system, and can instantly pinpoint that ring with 1/3rd octave accuracy – just by ear. I am not that guy. I’m very slowly getting better, but my brain lies to me like the guy who “hired” me right out of school to be an engineer for his record label. (It’s a doozy of a story…when I’m all fired up and can remember the best details, anyway.) This being the case, I will gladly correlate ANY sense with my hearing if it helps me create a better show. I will use objective analysis of audio signals whenever I think it’s appropriate, if it helps me deliver good work.

Of course the sound is the ultimate arbiter. If the objective measurement looks weird, but that’s the sound that’s right for the occasion, then the sound wins.

But aside from that, the goal is the best possible show. Denying oneself useful tools for creating that show, based on the bogus purity standards of a few people in the industry who AREN’T EVEN THERE…well, that’s ludicrous. It’s not their show. It’s YOURS. Do what works for YOU.

Call me an “engin-eyes” if you like, but if looking at a meter or analyzer helps me get a better show (and maybe learn something as well), then I will do it without offering any apology.

Pink Floyd Is A Bluegrass Band

If you beat the dynamics out of a band that manages itself with dynamics, well…

Just recently, I had the privilege of working on “The Last Floyd Show.” (The production provided the backdrop for that whole bit about the lighting upgrade that took forever.) We recorded the show to multitrack, and I was tasked with getting a mix done.

It was one of the toughest mixdowns I’ve attempted, mostly because I took the wrong approach when I got started. I attempted a “typical rock band” mix, and I ended up having to basically start over once…and then backtrack significantly twice more. Things started to work much more nicely when I backed WAY off on my channel compression – which is a little weird, because a lot of my “mix from live” projects actually do well with aggressive compression on individual channels. You grab the player’s level, hold it back from jumping around, fit ’em into the part of the spectrum that works, and everything’s groovy.

Not this time, though.

Because Pink Floyd is actually an old-timey bluegrass act that inhabits a space-rock body. They use “full-range” tones and dynamics extensively, which means that preventing those things from working is likely to wreck the band’s sound.

General Dynamics (Specific Dynamics, Too)

Not every Floyd tune is the same, of course, but take a listen over a range of their material and you’ll discover something: Pink Floyd gets a huge amount of artistic impact from big swings in overall dynamics, as well as the relative levels of individual players. Songs build into rolling, thunderous choruses, and then contract into gentle verses. There are “stings” where a crunchy guitar chord PUNCHES YOU IN THE FACE, and then backs away into clean, staccato notes. Different parts ebb and flow around each other, with great, full-range tones possible across multiple instruments – all because of dynamics. When it’s time for the synth, or organ, or guitar to be in the lead, that’s what is in the lead. They just go right past the other guys and “fill the space,” which is greatly enabled by the other guys dropping far into the background.

If you crush the dynamics out of any part of a Pink Floyd production, it isn’t Pink Floyd anymore. It’s people playing the same notes as Floyd songs without actually making those songs happen. If those dynamic swings are prevented, the arrangements stop working properly. The whole shebang becomes a tangled mess of sounds running into each other, which is EXACTLY what happened to me when I tried to “rock mix” our Floyd Show recording.

This usage of dynamics, especially as a self-mix tool, is something that you mostly see in “old school acoustic-music” settings. Rock and pop acts these days are more about a “frequency domain” approach than a “volume domain” sort of technique. It’s not that there’s no use of volume at all, it’s just that the overwhelming emphasis seems to be on everybody finding a piece of the spectrum, and then just banging away with the vocals on top. (I’m not necessarily complaining. This can be very fun when it’s done well.) With that emphasis being the case so often, it’s easy to get suckered into doing everything with a “rock” technique. Use that technique in the wrong place, though, and you’ll be in trouble.

And yes, this definitely applies to live audio. In fact, this tendency to work on everything with modern rock tools is probably why I haven’t always enjoyed Floyd Show productions as much as I’ve wanted.

In The Flesh

When you have a band like Floyd Show on the deck, in real life, in a small room, the band’s acoustical peaks can overrun the PA to some extent. This is especially true if (like me), you aggressively limit the PA in order to keep the band “in a manageable box.” This, coupled with the fact that the band’s stage volume is an enormous contributor to the sound that the audience hears, means that a compressed, “rock band” mix isn’t quite as ruinous as it otherwise would be. That is, with the recording, the only sound you can hear is the reproduced sound, so screwing up the production is fatal. Live, in a small venue, you hear a good bit of reproduction (the PA) and a LOT of stage volume. The stage volume counteracts some of the “reproduction” mistakes, and makes the issues less obvious.

Another thing that suppresses “not quite appropriate” production is that, live, you’re effectively automating the mix by hand in real time. When you hear that a part isn’t coming forward enough, you get on the appropriate fader and give it a push. Effectively, you put some of the dynamic swing back in as needed, which masks the mistakes made in the “steady state” mix setup. With the recording, though, the mix doesn’t start out as being automated – and that makes a fundamental “steady state” error stand out.

As I said before, I haven’t always had as much fun with Floyd Show gigs as I’ve desired. It’s not that the shows weren’t a blast, because they were definitely enjoyable for me, it’s just that they could have been better.

And it was because I was chasing myself into a corner as much as anyone else was, all by taking an approach to the mix that wasn’t truly appropriate for the music. I didn’t notice, though, because my errors were partially masked by virtue of the gigs happening in a small space. (That masking being a Not Bad Thing At All.™)

The Writing On The Wall

So…what can be generalized from all this? Well, you can boil this down to a couple of handy rules for live (and studio) production:

If you want to use “full bandwidth” tones for all of the parts in a production, then separation between the parts will have to be achieved primarily in the volume and note-choice domain.

If you’re working with a band that primarily achieves separation by way of the volume domain, then you should refrain from restricting the “width” of the volume domain any more than is necessary.

The first rule comes about because “full bandwidth” tones allow each part to potentially obscure each other part. For example, if a Pink Floyd organ sound can occupy the same frequency space as the bass guitar, then the organ either needs to be flat-out quieter or louder at the appropriate times to avoid clashing with the bass, or change its note choices. Notes played high enough will have fundamental frequencies that are away from the bass guitar’s fundamentals. This gives the separation that would otherwise be gotten by restricting the frequency range of the organ with EQ and/ or tone controls. (Of course, working the equalization AND note choice AND volume angles can make for some very powerful separation indeed.)
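The “note choice” side of that can be put in numbers. Assuming standard A440 equal temperament, a fundamental frequency is easy to compute from a MIDI note number, and the gap between a bass guitar’s low E and an organ line played up around middle C is obvious:

```python
def fundamental_hz(midi_note):
    """Equal-temperament fundamental frequency, with A4 (MIDI note 69) = 440 Hz."""
    return 440.0 * 2 ** ((midi_note - 69) / 12)

low_e_bass = fundamental_hz(28)   # bass guitar low E (E1): ~41.2 Hz
middle_c   = fundamental_hz(60)   # organ line around middle C (C4): ~261.6 Hz

# The organ's fundamentals sit well over two octaves above the bass's,
# which buys separation without touching an EQ.
```

Harmonics still overlap, of course, but keeping the fundamentals that far apart does a lot of the separating work before any tone controls get involved.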

The second rule is really just an extension of “getting out of the freakin’ way.” If the band is trying to be one thing, and the production is trying to force the band to be something else, the end result isn’t going to be as great as it could be. The production, however well intentioned, gets in the way of the band being itself. That sounds like an undesirable thing, because it is an undesirable thing.

Faithfully rendered Pink Floyd tunes use instruments with wide-ranging tones that run up and down – very significantly – in volume. These volume swings put different parts in the right places at the right time, and create the dramatic flourishes that make Pink Floyd what it is. Floyd is certainly a rock band. The approach is not exactly the same as an old-school bluegrass group playing around a single, omni mic…

…but it’s close enough that I’m willing to say: The lunatics on Pink Floyd’s grass are lying upon turf that’s rather more blue than one might think at first.

Speed Fishing

“Festival Style” reinforcement means you have to go fast and trust the musicians.

Last Sunday was the final day of the final iteration of a local music festival called “The Acoustic All-Stars.” It’s a celebration of music made with traditional or neo-traditional instruments – acoustic-electric guitars, fiddles, drums, mandolins, and all that sort of thing. My perception is that the musicians involved have a lot of anticipation wrapped up in playing the festival, because it’s a great opportunity to hear friends, play for friends, and make friends.

Of course, this anticipation can create some pressure. Each act’s set has a lot riding on it, but there isn’t time to take great care with any one setup. The longer it takes to dial up the band, the less time they have to play…and there are no “do overs.” There’s one shot, and it has to be the right shot for both the listeners and the players.

The prime illustrator for all this on Sunday was Jim Fish. Jim wanted to use his slot to the fullest, and so assembled a special team of musicians to accompany his songs. The show was clearly a big deal for him, and he wanted to do it justice. Trying to, in turn, do justice to his desires required that a number of things take place. It turns out that what had to happen for Jim can (I think) be generalized into guidelines for other festival-style situations.

Pre-Identify The Trouble Spots, Then Make The Compromises

The previous night, Jim had handed me a stage plot. The plot showed six musicians, all singing, wielding a variety of acoustic or acoustic-electric instruments. A lineup like that can easily have its show wrecked by feedback problems, because of the number of open mics and highly resonant instruments on the deck. Further, the mics and instruments are often run at (relatively) high gain. The PA and monitor rig need to help with getting some more SPL (Sound Pressure Level) for both the players and the audience, because acoustic music isn’t nearly as loud as a rock band…and we’re in a bar.

Also, there would be a banjo on stage right. Getting a banjo to “concert level” can be a tough test for an audio human, depending on the situation.

Now, there’s no way you’re going to get “rock” volume out of a show like this – and frankly, you don’t want to get that kind of volume out of it. Acoustic music isn’t about that. Even so, the priorities were clear:

I needed a setup that could run at a high total system gain with as little trouble as possible. As such, I ended up deploying my “rock show” mics on the deck, because they’re good for getting the rig barking in a pinch. The thing with the “rock” mics is that they aren’t really sweet-sounding transducers, which is unfortunate in an acoustic-country situation. A guy would love to have the smoothest possible sound for it all, but pulling that off in a potentially high-gain environment takes time.

And I would not have that time. Sweetness would have to take a back seat to survival.

Be Ready To Abandon Bits Of The Plan

On the day of the show, the lineup ended up not including two people: The bassist and the mandolin player. It was easy to embrace this, because it meant lower “loop gain” for the show.

I also found out that the fiddle player didn’t want to use her acoustic-electric fiddle. She wanted to hang one particular mic over her instrument, and then sing into that as well. We had gone with a similar setup at a previous show, and it had definitely worked. In this case, though, I was concerned about how it would all shake out. In the potentially high-gain environment we were facing, pointing this mic’s not-as-tight polar pattern partially into the monitor wash held the possibility for creating a touchy situation.

Now, there are times to discuss the options, and times to just go for it. This was a time to go for it. I was working with a seasoned player who knew what she wanted and why. Also, I would lose one more vocal mic, which would lower the total loop-gain in the system and maybe help us to get away with a different setup. I knew basically what I was getting into with the mic we chose for the task.

And, let’s be honest, there were only minutes to go before the band’s set-time. Discussing the pros and cons of a sound-reinforcement approach is something you do when you have hours or days of buffer. When a performer wants a simple change in order to feel more comfortable, then you should try to make that change.

That isn’t to say that I didn’t have a bit of a backup plan in mind in case things went sideways. When you’ve got to make things happen in a hurry, you need to be ready to declare a failing option as being unworkable and then execute your alternate. In essence, festival-style audio requires an initial plan, some kind of backup plan, the willingness to partially or completely drop the original plan, and an ability to formulate a backup plan to the new plan.

The fiddle player’s approach ended up working quite nicely, by the way.

Build Monitor World With FOH Open

If there was anything that helped us pull off Jim’s set, it was this. In a detail-oriented situation, it can be good to start with your FOH (Front Of House) channels/ sends/ etc. muted (or pulled back) while you build mixes for the deck. After the monitors are sorted out, you can carefully fill in just what you need with FOH. There are times, though, when such an approach is too costly in terms of the minutes that go by while you execute. This was one such situation.

In this kind of environment, you have to start by thinking not in terms of volume, but in terms of proportions. That is, you have to begin with proportions as an abstract sort of thing, and then arrive at a workable volume with all those proportions fully in effect. This works in an acoustic music situation because the PA being heavily involved is unlikely to tear anyone’s head off. As such, you can use the PA as a tool to tell you when the monitor mixes are basically balanced amongst the instruments.

It works like this:

You get all your instrument channels set up so that they have equal send levels in all the monitors, plus a bit of a boost in the wedge that corresponds to that instrument’s player. You also set their FOH channel faders to equal levels – probably around “unity” gain. At this point, the preamp gains should be as far down as possible. (I’m spoiled. I can put my instruments on channels with a two-stage preamp that lets me have a single-knob global volume adjustment from silence to “preamp gain +10 dB.” It’s pretty sweet.)

Now, you start with the instrument that’s likely to have the lowest gain before feedback. You begin the adventure there because everything else is going to have to be built around the maximum appropriate level for that source. If you start with something that can get louder, then you may end up discovering that you can’t get a matching level from the more finicky channel without things starting to ring. Rather than being forced to go back and drop everything else, it’s just better to begin with the instrument that will be your “limiting factor.”

You roll that first channel’s gain up until you’ve got a healthy overall volume for the instrument without feedback. Remember, FOH and monitor world should both be up. If you feel like your initial guess on FOH volume is blowing past the monitors too much (or getting swamped in the wash), make the adjustment now. Set the rest of the instruments’ FOH faders to that new level, if you’ve made a change.

Now, move on to the subsequent instruments. In your mind, remember what the overall volume in the room was for the first instrument. Roll the instruments’ gains up until you get to about that level on each one. Keep in mind that what I’m talking about here is the SPL, not the travel on the gain knob. One instrument might be halfway through the knob sweep, and one might be a lot lower than that. You’re trying to match acoustical volume, not preamp gain.

When you’ve gone through all the instruments this way, you should be pretty close to having a balanced instrument mix in both the house and on deck. Presetting your monitor and FOH sends, and using FOH as an immediate test of when you’re getting the correct proportionality is what lets you do this.

And it lets you do it in a big hurry.

Yes, there might be some adjustments necessary, but this approach can get you very close without having to scratch-build everything. Obviously, you need to have a handle on where the sends for the vocals have to sit, and your channels need to be ready to sound decent through both FOH and monitor-world without a lot of fuss…but that’s homework you should have done beforehand.
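The procedure above can be sketched in code. The channel names and all the numbers here are hypothetical, and real gain-before-feedback isn’t something you know precisely in advance, but the logic of “find the limiting channel, then match everyone else’s acoustic level to it” looks like this:

```python
# Each channel: acoustic source level at the mic (dB SPL, made up) and an
# estimated maximum usable gain before feedback (dB, also made up).
channels = {
    "banjo":  {"source_spl": 60, "gain_before_feedback": 18},
    "guitar": {"source_spl": 66, "gain_before_feedback": 24},
    "fiddle": {"source_spl": 64, "gain_before_feedback": 22},
}

HEADROOM_DB = 3  # stay a few dB shy of the feedback point

# The limiting channel is the one with the lowest achievable level.
achievable = {
    name: ch["source_spl"] + ch["gain_before_feedback"] - HEADROOM_DB
    for name, ch in channels.items()
}
target_spl = min(achievable.values())

# Match every channel's ACOUSTIC level to the target. The resulting gain
# settings differ per channel -- you're matching SPL, not knob travel.
gains = {name: target_spl - ch["source_spl"] for name, ch in channels.items()}
```

With these made-up numbers, the banjo sets the ceiling, and the louder sources end up with several dB less preamp gain to land at the same level in the room.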

Trust The Musicians

This is probably the nail that holds the whole thing together. Festival-style (especially in an acoustic context) does not work if you aren’t willing to let the players do their job, and my “get FOH and monitor world right at the same time” trick does NOT work if you can’t trust the musicians to know their own music. I generally discourage audio humans from trying to reinvent a band’s sound anyway, but in this kind of situation it’s even more of something to avoid. Experienced acoustic music players know what their songs and instruments are supposed to sound like. When you have only a couple of minutes to “throw ‘n go,” you have to be able to put your faith in the music being a thing that happens on stage. The most important work of live-sound does NOT occur behind a console. It happens on deck, and your job is to translate the deck to the audience in the best way possible.

In festival-style acoustic music, you simply can’t “fix” everything. There isn’t time.

And you don’t need to fix it, anyway.

Point a decent mic at whatever needs miking, put a working, active DI on the stuff that plugs in, and then get out of the musicians’ way.

They’ll be happier, you’ll be happier, you’ll be much more likely to stay on schedule…it’s just better to trust the musicians as much as you possibly can.

Case Study – Compression

A bit about how I personally use dynamic compression in a live-audio setting.

Generally speaking, I try not to go too far into specifics on this site. I avoid a “do exactly this, then exactly that” approach, because it can cause people to become “button pushers.” Button pushers are people who have memorized a procedure, but don’t actually know how that procedure works. I think we need a lot fewer of those folks in live-audio, and a lot more of people who understand WHY they’re doing something.

Of course, in a “how-to” kind of situation, you do need some specific instructions. Those posts are the exception to the rule.


My avoidance of explicit procedures means that this site has very little in the way of “set this processor this way for this effect” kinds of information. This information does have its place, especially when it’s used as a starting point for creating your own live-sound solutions. Not too long ago, a fellow audio-human needed some help in getting started with compression. What I presented to him was a sort of case-study on what I use compression for, and how I go about setting up the processor to get those results.

Here’s what I said to him, with some clarification and expansion as necessary.

Sledgehammers VS. Paintbrushes

Compression is a kind of processing that has a vast range of uses. Some of those uses are subtle, and some smash you in the face like a frisbee that’s been soaked in something flammable, set alight, and hurled at your nose. Of course, there’s also everything in between those two extremes.

As a creative tool, compression acts all the way from a tiny paintbrush to the heaviest sledgehammer in the shop. For my part, I tend to use compression as a sledgehammer.

The reason that I’m so heavy-handed with compression basically comes down to me working a small room. In a small room, doing something subtle with compression on, say, a snare drum is rarely helpful. That’s because the snare is usually so loud – even without the PA – that getting that subtle tweak across would require me to swamp the acoustic sound with the PA. That would be REALLY LOUD. Much too loud. The other piece of the small-room puzzle is that the acoustic contribution from the stage tends to wash over even very heavy compression with a large amount of transient content. Because of this, a lot of the compression I end up doing turns into “New York” or “parallel” compression by default. (Parallel compression is a technique where a signal is compressed to some degree, and then mixed together with an uncompressed version of itself. This is usually thought of in a purely electronic way. However, it can also happen with a signal that’s been compressed and reproduced by a PA, but also exists as an uncompressed acoustical event that’s independent of the electronics.) This partial “washing out” of even very heavy compression can let me get away with compression settings that wouldn’t sound very good if there were nothing else to listen to.
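Here’s a toy model of that “accidental” parallel compression. The static (no attack/release) compressor and all the numbers are simplifications I’ve made up for illustration; think of the “dry” path as the acoustic sound from the stage and the “wet” path as a heavily compressed PA feed:

```python
def compress(sample, threshold=0.3, ratio=4.0):
    """Static compression of a single sample: heavy-handed, no time constants."""
    mag = abs(sample)
    if mag <= threshold:
        return sample
    squashed = threshold + (mag - threshold) / ratio
    return squashed if sample >= 0 else -squashed

def parallel_mix(signal, wet=0.5):
    """Blend a compressed ('wet'/PA) copy with the untouched ('dry'/stage) signal."""
    return [wet * compress(s) + (1.0 - wet) * s for s in signal]

# A quiet 0.1 sample passes untouched. A 0.9 peak gets squashed to 0.45 in
# the wet path, but the 50/50 blend lands near 0.675 -- tamed, not flattened.
print(parallel_mix([0.1, 0.9]))
```

Even with the wet path crushed hard, the dry contribution keeps the peaks alive, which is why aggressive settings can survive in a small room.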

Also, there are logistical issues. A good number of small-venue shows are very “on the fly” experiences, where you just don’t have time to explore all the subtle options available for processing a sound. You need to get something workable NOW, and if you have time to refine it later then that’s great.

In another sense, you might say that I view compression less as a sculpting tool and more as a volume-management device. A utility of sorts. Compression, for me, is a hammer that helps me put signals into defined “volume boxes.” For signals like guitar and bass, sticking them into a well defined and not-wildly-changing volume box means that I can find a spot for them in the mix, and then not have to worry too much about level changes. If they get a bit quieter, they are much less likely to get lost, and if they get a bit louder, they probably won’t overwhelm everything. Used across the whole mix, compression lets me basically choose a “don’t exceed” volume level for the PA. I can then push the vocals – hard – into that mix limiter, which helps to keep things intelligible without having the vocal peaks completely flatten the audience.

WARNING REGARDING THE ABOVE: Pushing vocals into an aggressive compressor can be a way to invite feedback problems, because feedback is dependent on gain instead of absolute volume. You can also end up making the stagewash worse, because everything entering a vocal mic (besides vocals) is a kind of “noise.” Running hard into the dynamics processor effectively causes this “noisefloor” to go up. You have to be careful, and if you run into a problem, dropping the vocal level a bit and raising the limiter’s threshold a bit might be necessary. Listen, experiment, work settings against each other, iterate, iterate, iterate…

Setting Up The Sledgehammer

As I said, my tendency is to use compression as a way to stick something into a well-defined space of volume level. That goal is what drives my processor settings.

Attack: Attack time is how quickly the compressor reduces gain when the signal exceeds the threshold. Because my usage for compression is to keep things in a box, I have very little use for slow(er) attack times that allow things to escape the box. Further, I will probably have plenty of “peak” material from the deck anyway, so I don’t have to worry too much. If I notice a problem, I can always act on it. So…I prefer short attack times. As short as possible. Very short attack times can cause distortion, because the compressor acts in a time range that’s less than one wave cycle (which is a pretty good recipe for artifacts). Even so, I will often set a compressor to the minimum possible attack time – even 0 ms – and live with a touch of crunch here and there. It all depends on what I can get away with.

Release: Release time is how quickly the compressor’s gain returns toward normal (unity-gain) as the signal falls back toward the threshold. I prefer compressors that have an auto-release function which also partially honors a manual release time. With a compressor like that, setting a longer release time with the auto-release engaged means that the auto-release takes proportionally longer to do its thing – while still being program-dependent overall. If I can’t get auto-release, then a manual setting of 50 – 100 ms is about my speed. That’s fast enough to keep things consistent without getting into too many nasty artifacts. (Fifty ms is the time required for one complete cycle of a 20 Hz wave. Letting the compressor release just slowly enough to avoid artifacts at 20 Hz means that you’ll probably be fine everywhere else, because the rest of the audible spectrum cycles faster than the compressor releases.)
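Under the hood, a digital compressor commonly turns attack and release times into smoothing coefficients for an envelope follower. This is a generic sketch of that convention – the exp(-1/(fs·t)) form and the 48 kHz sample rate are common assumptions on my part, not details from any particular box:

```python
import math

FS = 48000.0  # sample rate in Hz (assumed)

def time_to_coef(seconds):
    """One-pole smoothing coefficient for a given time constant."""
    if seconds <= 0.0:
        return 0.0  # 0 ms: follow the input instantly
    return math.exp(-1.0 / (FS * seconds))

def follow(env, target, attack_s, release_s):
    """One step of the envelope follower: rise at the attack rate,
    fall at the release rate."""
    coef = time_to_coef(attack_s) if target > env else time_to_coef(release_s)
    return target + coef * (env - target)

# A 0 ms attack jumps straight to a new peak - which is exactly why it
# can add crunch: the gain moves within a single wave cycle. A 50 ms
# release only creeps back toward unity each sample.
jumped = follow(0.0, 1.0, 0.0, 0.05)  # lands on 1.0 immediately
eased = follow(1.0, 0.0, 0.0, 0.05)   # still very close to 1.0
```

The asymmetry is the whole design: grab fast, let go slow.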

Ratio: Ratio is the measure of how much the compressor should ultimately attempt to reduce the output relative to the input. A 1:1 ratio means that the compressor should do nothing except pass signal through its processing path. A 2:1 ratio means that, at or above the chosen threshold, 2 dB greater signal at the input should result in the compressor attempting to reduce the output to a signal that’s only 1 dB “hotter.” Since I’m all about shoving things into boxes, I tend to use pretty extreme ratios…unless it becomes problematic. If a compressor has an infinity:1 setting (where the threshold is the target for maximum output, period) I will almost always try that first.
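The ratio arithmetic can be worked directly. This sketch uses the 2:1 example from the text plus an infinity:1 case; the specific dB figures are mine:

```python
# Level out of an ideal compressor at a given ratio. Above the
# threshold, each "ratio" dB of extra input yields only 1 dB of extra
# output; infinity:1 turns the compressor into a limiter.

def output_db(input_db, threshold_db, ratio):
    if input_db <= threshold_db:
        return input_db
    return threshold_db + (input_db - threshold_db) / ratio

# 2:1 - a signal 2 dB over a -20 dB threshold comes out only 1 dB over it.
two_to_one = output_db(-18.0, -20.0, 2.0)           # -19.0

# infinity:1 - the threshold is the maximum output, period.
brick_wall = output_db(-10.0, -20.0, float("inf"))  # -20.0
```

Note the infinity:1 case: no matter how far the input climbs past the threshold, the output pins at the threshold – which is why that’s the “try it first” setting for shoving things into boxes.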

Seriously, the way I use compression would make lots of other engineers weep. I’m not gonna lie.


Threshold: The threshold is the point at which the compressor’s program-dependent processing begins changing from unity-gain to non-unity-gain, assuming that the ratio is not 1:1. In other words, a signal that reaches the threshold and then would otherwise continue to increase should have gain reduction applied. As a signal decreases towards the threshold, gain reduction should be released. If the signal falls below the threshold entirely, then the compressor’s program-dependent processing should return to unity gain – the signal should pass straight through. (I qualify unity-gain with “program-dependent processing” because there are potentially other gain stages in a compressor which can be non-unity and also program-invariant. For instance, a compressor’s makeup gain might be non-unity, but it doesn’t vary with the input signal. You manually set it somewhere and leave it there until you decide it needs to be something else.)

Threshold settings are chosen differently in different situations. If I need the compressor to just “ride” a signal a bit, then I’ll try to set the threshold in a place where I see 3 or so dB of gain reduction. If I want the compressor to really squeeze something, then I’ll crank down the threshold until I see 6 dB (or more) on the gain reduction meter. When I’m using a compressor as a “don’t exceed this point” on the main mix, the threshold is set in accordance with a target SPL (Sound Pressure Level) from the PA. However hard I end up hitting the compressor is immaterial unless I start having a problem.
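You can also run the “aim for N dB of gain reduction” logic backwards and solve for the threshold. The algebra here follows from the ratio formula above; the function names and the example numbers (a -6 dB peak, 4:1 ratio) are mine:

```python
# For an ideal compressor, gain reduction on a peak above threshold is
#   GR = (peak - T) * (1 - 1/ratio)
# so the threshold that yields a target GR at a given peak level is
#   T = peak - GR / (1 - 1/ratio)

def gain_reduction_db(peak_db, threshold_db, ratio):
    """How much an ideal compressor pulls down a peak above threshold."""
    if peak_db <= threshold_db:
        return 0.0
    over = peak_db - threshold_db
    return over - over / ratio

def threshold_for_gr(peak_db, target_gr_db, ratio):
    """Threshold producing the target gain reduction at a given peak."""
    return peak_db - target_gr_db / (1.0 - 1.0 / ratio)

# "Ride it a bit": ~3 dB of reduction on a -6 dB peak at 4:1
# means a threshold of -10 dB.
ride = threshold_for_gr(-6.0, 3.0, 4.0)  # -10.0
```

In practice you get to the same place by watching the gain reduction meter and nudging the threshold, but it’s handy to know the two settings trade off in a straight line.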

An Important Disclaimer

This case study was all about showing you how I tend to work. It may or may not work for you, and you should be aware that extreme compression can get an audio-human into BIG trouble in a BIG hurry. This isn’t meant to dissuade you from trying experiments, it’s just said so that you’ll be aware of the risks. I’m used to it all, but I’ll still trip over my own feet once a year or so.

When in doubt, hit “bypass” and let things ride.