Tag Archives: Feedback

Alan Parsons Is Absolutely Correct. And Wrong.

Live sound is not the studio, and it’s dangerous to treat it as such.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.


This article is the “closing vertex” of my semi-intentional “Gain, Stability, And The Best Holistic Show” trilogy.

I’m here to agree and disagree with Alan Parsons. Yes, that Parsons. The guy who engineered “Dark Side of the Moon.” A studio engineer with a career that most of us daydream about. An audio craftsperson who truly lives up to the title in the best way.

I am NOT here to trash the guy.

What I am here to do is to talk about a disagreement I have regarding the application of a specific bit of theory. It was a bit of theory that was first presented to me by the late Tim Hollinger (whom I greatly miss). Tim told me about an article he read where Alan Parsons explained why he (Parsons) mics guitar cabinets from a distance. Part of Parsons’ rationale is that nobody listens to guitar amps with their ear right up against the speaker, and also that guitar players are so loud that he doesn’t have a bleed problem.

I don’t know if the Premier Guitar article I found is the same one that Tim read, but it might be. You can read it here. The pertinent section is below the black and white picture of Parsons working in the studio.

Alan Parsons Is Academically Right

I don’t know of any guitar player who listens to their rig with an ear pressed up to the grill cloth. I can also tell you that, in lots of small-venue cases, a LOT of what the audience hears is the entirety of the guitar cab. A close-miced version of that sound might also be present in the PA, but it’s not the totality of the acoustical “solution.”

Also, yes, there are plenty of guitar players who run their rigs “hot.” Move a mic 18 inches from the cab when working with a player like that, and bleed might not be too problematic in a recording context, even if everybody’s in the same (largish) room. Solo the channel into a pair of headphones, and you’ll probably go, “Yup, there’s plenty of guitar in that mic.”

There’s not much to say about the correctness of Alan Parsons’ factual assertions, because they’re…well…correct.

The Problem Is Application

So, if Parsons is accurate about his rationale, how can there be a disagreement?

It’s pretty easy actually, and it comes from a statement that Parsons makes in the article I linked above: “Live sound engineers just don’t seem to get it.”

Parsons is correct about that too. Really! Concert-sound humans, in a live context, DON’T “get” studio recording applications. The disciplines are different. In precisely the same way, I can say that studio engineers don’t “get” live sound applications in the live context. This all comes back to what I’ve said in earlier articles: The live-audio craftsperson’s job is to produce the best holistic show possible at the lowest practicable gain. The studio craftsperson’s job is to capture the best possible sound for later reproduction. These goals are not always fully compatible, especially in a small-venue context.

(And before you write me hate-mail, it’s entirely possible for an audio human to become competent in both studio and live disciplines. What I’m getting at here is that each discipline ultimately has separate priorities.)

Obviously, there are some specifics that need addressing here. The divergent needs of the studio and live disciplines take different roads at a number of junctions.

I Don’t Want To Mic The Room, Thanks

In the studio, getting some great “ambience” is a prized piece of both the recording process and the choosing of a recording space. The very best studios have rooms that enhance the sound of the various sources put into them. Grabbing a bit of this along with the correct dose of the “direct” sound from a source is something to be desired. It enhances the playback of that recording, which takes place at another time in another room – or is delivered directly to a person’s ear canal, in the case of headphones.

But this is not at ALL what I want as a live-audio practitioner.

For me, more often than not, a really beautiful-sounding room is an unlikely thing to encounter. There are such things as venues with tremendously desirable acoustics, but most of the time, a venue is primarily built to satisfy the logistics of getting lots of people into one space in whatever way is practical. In general, I regard any environmental acoustics to be a hostile element. Even a relatively nice room is troublesome, because it still causes me to have to deal with multiple, indirect arrivals which smear and garble the overall sound of the show. Unlike in a recording context, I am guaranteed to hear the ambience of the room.

Lots of it.

Too much, in fact.

I do NOT want any more of it to get captured and shot out of the PA, thanks very much. I don’t need my problems to be compounded. In the very common case that I need to forge a total solution by combining room sound with PA sound, I want the sound in the PA to NOT reinforce the “room tone” at all. I’ve already got the sound of the room. What I need is something else.

Close micing prevents my transducers from capturing “the room” and passing that signal on to the rest of the system.

Specificity Is My Friend

For a recording engineer, a bit of “bleed” from the drumkit (and everything else) is not necessarily a bad thing. For me, though, it’s counterproductive. If I need more guitar because the drums are too strong, I do NOT want any more drums at ALL. I want guitar only, or vice versa.

Especially in small-venue live-sound, you tend to have sources that are very close together (often much closer than they would be in a nice studio), and loud wedges instead of headphones. On a large stage, this problem is mitigated somewhat, but that’s not what I tend to run into. Also, in a studio, it’s very possible to arrange the band such that directional microphone nulls help to minimize the effects of bleed. Small venues and expectations of what a band’s setup is “supposed” to look like often get in the way of doing this live.

In any case, live show bleed tends to be much more severe than what a studio engineer might encounter. This compounds the “I need more this, not that” problem above.

As an example, I recently worked with a band where the drummer specifically asked for his kit to be miced with overheads. I happily obliged, because I wanted to be accommodating. (Part of producing the best holistic show is to have comfortable, happy musicians.) At soundcheck, I took a quick guess at where the overheads should be. I wouldn’t say that we could really hear them, but hey, we had a decent total solution in the room pretty much immediately. I didn’t really think about the overheads much. About halfway through the show, though, I got curious. I soloed the overheads into my headphones.

In order to get the drum monitors where he wanted them, we had so much guitar and bass coming through that those instruments almost swamped the drums in the overheads(!). The overheads were basically useless as drum reinforcement, because they would pretty much end up reinforcing everything ELSE.

If a mic is going to be useful for live sound reinforcement, specificity is critical. Pulling a mic away from a source is counterproductive to that discrimination, so I prefer not to do it.

Lowest Practicable Gain

In general, higher gain is not a problem for studio folks. Yes, it might result in greater noise, and it can also reduce electronic component bandwidth, but it’s really a very small issue in the grand scheme of things.

In live audio, higher gain is an enemy. Because microphones encounter sounds that they have already picked up and passed along to the rest of the PA, they exist in a feedback loop. As the gain applied to the mic goes up, it becomes more and more likely that the feedback loop will destabilize and ring. If I can have lower gain, I will take it, even if that means a slightly unnatural sound.

Now, you might not think that feedback would be a problem with a source as loud as a guitar amp can be, but you also may not have been in situations that I’ve encountered. I have been in situations where players, even with reasonably loud amplifiers, have asked for a metric ton of level from the monitors. Yes, I’ve gotten feedback from mics on guitar amps. (And yes, we should have just turned up the amplifiers in the first place, but these situations developed in the middle of fluid shows where stopping to talk wasn’t really an option. Look, it’s complicated.)

Even if the chance of feedback is unlikely – as it usually is with louder sources – I do NOT want to do anything that causes me to have to run a signal path at higher gain. Close micing increases the apparent sound pressure level at the transducer capsule, which allows me to run at lower gain for a given signal strength.

The overall point of this is pretty simple: The desires of recording techs and the needs of live-sound humans don’t always intersect in a pretty way. When I disagree with Alan Parsons, it’s not because he doesn’t have his facts straight, and it’s not that I’m somehow more knowledgeable than he is. I disagree because applying his area of discipline to mine simply isn’t appropriate in the specific context of his comments, and the specific live-show contexts I tend to encounter.


The Inverse Relationship

The more gain you apply, the more unstable the system becomes.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.


If you want to louse up the sound of a PA system without actually damaging any components, there’s a really quick way to go:

1) Plug in some microphones.

2) Keep the PA and the microphones in the same room.

3) Apply enough gain to the microphones such that they actually become useful for sound reinforcement.

In other words, just go ahead and use the PA as you would normally expect to use it. As you add more gain to the system, the system’s sound quality will degrade progressively. If you want to avoid this degradation, don’t use the PA for anything except playback – not turntable playback, though! Those tone arms are sensitive to environmental vibration. Use a media player, or a phone with the right software.

Okay, so I’m kinda “winding you up” with this. To be practical, we have to use PA systems in the same room as the microphones they’re amplifying. We do this all the time. We tend not to agonize over the loss of sound reproduction quality, because it just isn’t worth it. The issue is just inherent to the activity.

The reason to present this in such a stark fashion, though, is to get your attention – especially if you’re new to live audio. There are plenty of inescapable facts in this business, but one of the most important bugaboos is this:

In any audio system that involves a closed or partially closed loop from the input to the output, the system’s stability decreases as the applied gain increases. Further, to use such a system means that the assemblage is at least partially destabilized as a matter of necessity.

Gain

We spend a lot of time working with and talking about “gain” in pro-audio, but we don’t usually try to formally define it very often. Gain is a multiplier applied to a signal’s amplitude. Negative gain is a multiplier that is less than one, and positive gain is a multiplier that is greater than one. A gain of exactly one (the multiplicative identity) is “unity,” where the input signal and the output signal amplitudes are the same.

For convenience, we usually express gain as the ratio of the output signal to the input signal, stated in decibels. Unity gain in decibels is zero, because 0 dB relative to a given amplitude is that same amplitude.

Because our systems work in partially closed loops, we can also talk about concepts like “loop gain.” Loop gain is the ratio between the system output and system input, where the output is at least partially connected to the input. A system with a loop gain greater than one is in the classic “hard feedback” scenario, where an unwanted signal aggressively self-reinforces until it can no longer do so – or somebody fixes the problem. A loop gain of exactly one is still a huge problem, because a signal just continues to repeat indefinitely. The sound may not be getting progressively louder, but it’s still tremendously annoying and a grossly incorrect rendition of the original sonic event.
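For the code-minded, here’s a quick Python sketch of my own that restates those definitions (amplitude gain, so 20 × log10 – nothing here is from any particular console or library):

```python
import math

def db_from_ratio(ratio):
    """Amplitude multiplier expressed in decibels."""
    return 20 * math.log10(ratio)

def ratio_from_db(db):
    """Decibels converted back to an amplitude multiplier."""
    return 10 ** (db / 20)

print(db_from_ratio(1.0))   # unity gain: 0.0 dB
print(db_from_ratio(0.5))   # negative gain: about -6.0 dB
print(db_from_ratio(2.0))   # positive gain: about +6.0 dB

# Stability hinges on the loop's total multiplier staying below one.
for loop_db in (-12.0, 0.0, 3.0):
    m = ratio_from_db(loop_db)
    if m < 1:
        state = "decays"
    elif m == 1:
        state = "repeats at the same level forever"
    else:
        state = "runs away"
    print(f"loop gain {loop_db:+.0f} dB: signal {state}")
```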

Especially in the context of system stability, it’s important to understand that there is a difference between gain settings and “effective loop gain.” For instance, a microphone with greater sensitivity increases the effective loop gain of a system, because it increases the system output for a given, re-entrant signal from an input…if the downstream gain settings remain fixed.

“We plugged in that condenser, and we got crazy feedback!”

“Of course we did. That condenser is 10 dB more sensitive than the mic it replaced, and you didn’t roll the preamp gain back at all. You would have gotten feedback with the original mic if you had suddenly gunned it +10 dB, that’s for sure.”

In the same vein, any physical change that increases the intensity of re-entrant signal relative to the original input is also an increase in effective loop gain. If somebody insists on having a microphone close to a PA speaker, then the system’s electronic gain structure has to be dropped if you want to compensate. (Sometimes, you don’t want to fully compensate, or you can’t for some reason.)

Stability

Okay, then.

What do I mean by “stability?”

For our purposes, “stability” is a tendency for a system to return to a desired equilibrium after having been disturbed. In an audio system, the “disturbance” is the input signal. If our sound rig was perfectly stable, the removal of the input signal would correspond with an instantaneous stoppage of output signal. The system would immediately come to “rest” at zero output (plus any self noise).

Systems used only for playback tend to have very high stability. When an input stops, the system stops making noise almost immediately.

Yes, there are limitations. Loudspeaker drivers don’t actually come to a stop instantly, for example.

Anyway.

Playback-only systems have such great stability because they tend to be “open loop.” The system’s output is not reintroduced to the system input in any meaningful way. (Record players are an exception to this, as I alluded to in the introduction.)

But PA systems being used for actual bands in an actual room are at least a “semi-closed” loop. Some portion of the output signal makes it back to the input devices, and travels through the system again. This increases the time necessary for the system to settle back to “zero output plus noise” for any given input signal – and, if you REALLY want to split hairs, you have to deal with the reality that the system never actually settles to zero at all. The signal runs through the loop indefinitely, until the loop is broken by way of a mute button, a fader being set to -∞, or the system having its power removed. To be fair, the repeating signal is usually lost completely to the noise floor in a relatively short amount of time. Even so.

Cooking up a “laboratory” example of this is fairly easy. You just take a sample of audio, run it through a delay line, and apply feedback to the delay line. To get a quantitative perspective on things, you can figure out the time required for the total output to decay into an arbitrary noisefloor: divide the noisefloor dB (a negative number indicating how much signal decay you want) by the per-traversal signal loss in dB, and then multiply that number by the loop traversal time.

For example, let’s say that I have a desired noisefloor of -100 dB, referenced to the original input signal level. The loop time is 10 ms, which I encounter regularly in real-life applications. If the loop traversal loss is -50 dB (meaning that the signal drops 50 decibels each time it exits and re-enters the system), then:

-100 dB/ -50 dB = 2

2 * 10 ms = 20 ms

In 20 ms, the signal has dropped far enough that I can ignore it.
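If you’d rather poke at the arithmetic with a script, here’s the same calculation as a tiny Python function of my own (nothing standard, just the formula above with the example’s numbers):

```python
def settle_time_ms(noisefloor_db, traversal_loss_db, loop_time_ms):
    """How long until a re-circulating signal decays into the noisefloor.

    noisefloor_db:     target decay, e.g. -100 (dB relative to the input)
    traversal_loss_db: signal loss per trip around the loop, e.g. -50
    loop_time_ms:      time for one trip around the loop
    """
    traversals = noisefloor_db / traversal_loss_db
    return traversals * loop_time_ms

print(settle_time_ms(-100, -50, 10))   # 20.0 ms, as above
```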

Fifty dB of rejection is REALLY high for a small-venue PA system. That kind of system “instability” is impossible for me to hear.

A traversal loss of 20 dB means that it takes over twice as long to hit the desired noisefloor – 50 ms. I can sorta start to hear some issues if I know what to look for, but it’s nothing that’s really bothersome.

A signal that decays at the rate of only 10 dB per loop traversal is audibly “smeared.” A 100 ms decay time is actually pretty easy to catch, and I’ll bet that if the instability was band-limited (as it usually is), we’d be well inside the area where the mic is starting to get “ringy and weird” in the monitors.

…and then the singer wants nine more dB on deck, which bumps the decay time to a full second. The monitor rig is getting closer and closer to flying out of control.

You get the idea. This simulation is rather abstract, but the connection to real life is that adding gain to a system reduces loop traversal loss. That is, if a signal has a loop traversal loss of -20 dB, and we increase the applied gain by 10 dB, the loop traversal loss is now only -10 dB. It takes longer for the signal to settle into the noisefloor. The system stability has decreased.
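Here’s the “laboratory” version as an actual simulation: a bare-bones Python feedback delay of my own construction (48 kHz sample rate assumed, impulse in, repeats out). It reproduces the settle times above, including the one-second decay from the nine-extra-dB scenario:

```python
import numpy as np

sr = 48000                        # assumed sample rate
loop = int(0.010 * sr)            # 10 ms loop time, in samples

def settle_ms(traversal_loss_db, floor_db=-100.0, seconds=1.5):
    """Feed an impulse through a feedback delay; report when the last
    repeat above the noisefloor occurs."""
    g = 10 ** (traversal_loss_db / 20)        # per-trip multiplier
    y = np.zeros(int(seconds * sr))
    y[0] = 1.0
    for n in range(loop, len(y)):
        y[n] += g * y[n - loop]               # re-entrant signal, one trip late
    floor = 10 ** (floor_db / 20) * 0.999     # tiny tolerance for float error
    hot = np.nonzero(np.abs(y) >= floor)[0]
    return 1000.0 * hot[-1] / sr

for loss in (-50, -20, -10, -1):
    print(f"{loss:>4} dB per trip: settles in about {settle_ms(loss):.0f} ms")
```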

And, of course, if we go far enough with our gain we’ll get the total loop gain to be one or greater. FEEEEEEEDBAAAAAACK!

The Upshot

What this all comes down to is pretty simple:

Anything that causes you to increase a system’s effective loop gain is undesirable…but sometimes you have to do undesirable things.

Live sound is not simply an academic exercise. There are all kinds of circumstances that end up pushing us into the increase of total loop gain, and while that’s not our most preferred circumstance, we often have no choice. Even though any increase in gain also increases the instability of our systems, there’s a certain amount of instability which can be tolerated. Also, because there’s always SOME amount of re-entrant signal, there’s no setup which is fully stable – unless we give everybody in the room a set of in-ears. ($$$)

Also, we can get a bit of help in that our systems aren’t uniformly unstable across the spectrum. We tend to get instabilities in strongly band-limited areas, which means that surgical EQ can patch certain problems without ruining the whole day. We reduce our loop gain in a very specific area, which hopefully buys us the ability to get more gain across the rest of the audible bandwidth.

Of course, if something comes along which lets us reduce our effective gain, that makes us happy. Because it helps keep us stable.


Why Broad EQ Can’t Save You

You can’t do microsurgery with an axe.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.


I don’t have anything against “British EQ” as a basic concept. As I’ve come to interpret it, “British EQ” is a marketing term that means “our filters are wide.” EQ filters with gentle, wide slopes tend to sound nice and are pretty easy to use, so they make sense as a design decision in consoles that can’t give you every bell and whistle.

When I’m trying to give a channel or group a push in a specific area, I do indeed prefer to use a filter that’s a bit wider. Especially if I have to really “get on the gas,” I need the EQ to NOT impart a strange or ugly resonance to the sound. Even so, I think my overall preference is still for a more focused filter than what other folks might choose. For instance, when adding 6 dB at about 1 kHz to an electric guitar (something I do quite often), the default behavior of my favorite EQ plugin is a two-octave wide filter:

[Figure: the 6 dB boost at 1 kHz, two octaves wide.]

What I generally prefer is a 1.5-octave filter, though.

[Figure: the same 6 dB boost at 1 kHz, 1.5 octaves wide.]

I still mostly avoid a weird, “peaky” sound, but I get a little bit less (1 dB) of that extra traffic at 2-3 kHz, which might be just enough to keep me from stomping on the intelligibility of my vocal channels.
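If you want to experiment with this yourself, here’s a rough Python sketch using the freely published RBJ “Audio EQ Cookbook” peaking filter. To be clear: this is my own stand-in, not the plugin from the screenshots, so the numbers will wander a little depending on filter topology, and the 48 kHz sample rate is an assumption:

```python
import numpy as np
from scipy.signal import freqz

fs = 48000  # assumed sample rate

def peaking_biquad(f0, gain_db, bw_octaves):
    """Peaking EQ coefficients per the RBJ Audio EQ Cookbook."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) * np.sinh(np.log(2) / 2 * bw_octaves * w0 / np.sin(w0))
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

check = np.array([1000.0, 2000.0, 3000.0])   # the boost center, plus 2-3 kHz
for bw in (2.0, 1.5):
    b, a = peaking_biquad(1000.0, 6.0, bw)
    _, h = freqz(b, a, worN=check, fs=fs)
    mags = ", ".join(f"+{m:.1f} dB @ {f:.0f} Hz"
                     for f, m in zip(check, 20 * np.log10(np.abs(h))))
    print(f"{bw:.1f} oct: {mags}")
```

Run it, and the readings at 2 kHz and 3 kHz should drop by roughly a decibel when the filter narrows from 2.0 to 1.5 octaves, which is the whole point.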

Especially in the rough-and-tumble world of live audio, EQ selectivity is a big deal. When everything is bleeding into everything else, you want to be able to grab and move only the frequency range that corresponds to what’s actually “signal” in a channel. Getting what you want…and also glomming onto a bunch of extra material isn’t all that helpful. In the context of, say, a vocal mic, only the actual vocal part is signal. Everything else is noise, even if it’s all music in the wider sense. If I want to work on something in a vocal channel, I don’t also want to be working on the bass, drums, guitar, and keyboard noises that are also arriving at the mic. Selective EQ helps with that.

What Your Channel EQ Is Doing To You

Selective EQ isn’t always a choice that you get, though. If a console manufacturer has a limited “budget” to decide what to give you on a channel-per-channel basis, they’ll probably choose a filter that’s fairly wide. For instance, here’s a 6 dB boost at 1 kHz on a channel from an inexpensive analog console (a Behringer SL2442):

[Figure: a 6 dB boost at 1 kHz from the SL2442’s channel EQ.]

The filter looks to be between 2.5 and 3 octaves wide. This is perfectly fine for basic tone shaping, but it’s not always great for solving problems. It would be nice to get control over the bandwidth of the filter, but that option chews up the budget available for internal components, and it hogs control-surface real estate. For those reasons, and also because of “ease of use” considerations, fully parametric EQ isn’t something that’s commonly found on small-venue, analog consoles. As such, their channel EQs are often metaphorical axes – or kitchen knives, if you’re lucky – when what you may need is a scalpel.

If you need to do something drastic in terms of gain, a big, fat EQ filter can start acting like a volume control across the entire channel. This is especially true when you need to work on two or more areas, and multiple filters overlap. You can kill your problem, but you’ll also kill everything else.

It’s like getting rid of a venomous spider by having the Air Force bomb your house.

I should probably stop with the metaphors…

Fighting Feedback

Of course, we don’t usually manage feedback issues with a console’s channel EQ. We tend to use graphic EQs that have been inserted or “inlined” on console outputs. (I do things VERY differently, but that’s not the point of this article.)

Why, though? Why use a graphic EQ, or a highly flexible parametric EQ for battling feedback?

Well, again, the issue is selectivity.

See, if what you’re trying to do is to maximize the amount of gain that can be applied to a system, any gain reduction works against that goal.

(Logical, right?)

Unfortunately, most feedback management is done by applying negative gain across some frequency range. The trick, then, is to apply that negative gain across as narrow a band as is practicable. The more selective a filter is, the more insane things you can do with its gain without having a large effect on the average level of the rest of the signal.

For example, here’s a (hypothetical) feedback management filter that’s 0.5 octaves wide and set for a gain of -9 dB.

[Figure: the half-octave, -9 dB feedback-management filter.]

It’s 1 dB down at about 600 Hz and 1700 Hz. That’s not too bad, but take a look at this quarter-octave notch filter:

[Figure: the quarter-octave notch filter.]

Its actual gain is negative infinity, although the analyzer “only” has enough resolution to show a loss of 30 dB. (That’s still a very deep cut.) Even with a cut that displays as more than three times as deep as the first filter, the -1 dB points are 850 Hz and 1200 Hz. The filter’s high selectivity makes it capable of obliterating a problem area while leaving almost everything else untouched.
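Here’s a companion sketch to check that selectivity claim. Again, these are stand-in filters of my own choosing (the RBJ peaking cut from the earlier sketch, plus SciPy’s iirnotch at a quarter-octave-ish Q of about 5.8), so expect the -1 dB points to land near the figures above rather than exactly on them:

```python
import numpy as np
from scipy.signal import freqz, iirnotch

fs = 48000  # assumed sample rate

def peaking_biquad(f0, gain_db, bw_octaves):
    """Same RBJ cookbook peaking filter as in the earlier sketch."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) * np.sinh(np.log(2) / 2 * bw_octaves * w0 / np.sin(w0))
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

def minus_1db_span(b, a):
    """First and last frequencies where the response is at least 1 dB down."""
    freqs = np.geomspace(200, 5000, 4000)
    _, h = freqz(b, a, worN=freqs, fs=fs)
    down = freqs[20 * np.log10(np.abs(h)) <= -1.0]
    return down[0], down[-1]

b, a = peaking_biquad(1000.0, -9.0, 0.5)       # the wide-ish cut
print("half-octave cut:   -1 dB from %.0f to %.0f Hz" % minus_1db_span(b, a))

b, a = iirnotch(1000.0, Q=5.8, fs=fs)          # ~quarter-octave notch
print("quarter-oct notch: -1 dB from %.0f to %.0f Hz" % minus_1db_span(b, a))
```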

To conclude, I want to reiterate: Wide EQ isn’t bad. It’s an important tool to have in the box. At the same time, I would caution craftspersons that are new to this business that a label like “British EQ” or “musical EQ” does not necessarily mean “good for everything.” In most cases, what that label likely means is that an equalizer is inoffensive by way of having a gentle slope.

And that’s fine.

But broad EQ can’t save you. Not from the really tough problems, anyway.


A Vocal Addendum

Forget about all the “sexy” stuff. Get ’em loud, and let ’em bark.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.


This article is a follow-on to my piece regarding the unsuckification of monitors. In a small-venue context, vocal monitoring is probably more important than any other issue for the “on deck” sound. Perhaps surprisingly, I didn’t talk directly about vocals and monitors AT ALL in the previous article.

But let’s face it. The unsuckification post was long, and meant to be generalized. Putting a specific discussion of vocal monitoring into the mix would probably have pushed the thing over the edge.

I’ll get into details below, but if you want a general statement about vocal monitors in a small-venue, “do-or-die,” floor-wedge situation, I’ll be happy to oblige: You do NOT need studio-quality vocals. You DO need intelligible, reasonably smooth vocals that can be heard above everything else. Forget the fluff – focus on the basics, and do your preparation diligently.

Too Loud Isn’t Loud Enough

One of the best things to ever come out of Pro Sound Web was this quiz on real-world monitoring. In particular, answer “C” on question 16 (“What are the main constituents of a great lead vocal mix?”) has stuck with me. Answer C reads: “The rest of the band is hiding 20 feet upstage because they can’t take it anymore.”

In my view, the more serious rendering of this is that vocal monitors should, ideally, make singing effortless. Good vocal monitors should allow a competent vocalist to deliver their performance without straining to hear themselves. To that end, an audio human doing show prep should be trying to get the vocal mics as loud as is practicable. In the ideal case, a vocal mic routed through a wedge should present no audible ringing, while also offering such a blast of sound that the singer will ask for their monitor send to be turned down.

(Indeed, one of my happiest “monitor guy” moments in recent memory occurred when a vocalist stepped up to a mic, said “Check!”, got a startled look on his face, and promptly declared that “Anyone who can’t hear these monitors is deaf.”)

Now, wait a minute. Doesn’t this conflict with the idea that too much volume and too much gain are a problem?

No.

Vocal monitors are a cooperative effort amongst the audio human, the singer(s), and the rest of the band. The singer has to have adequate power to perform with the band. The band has to run at a reasonable volume to play nicely with the singer. If those two conditions are met (and assuming there are no insurmountable equipment or acoustical problems), getting an abundance of sound pressure from a monitor should not require a superhuman effort or troublesome levels of gain.

So – if you’re prepping for a band, dial up as much vocal volume as you can without causing a loop-gain problem. If the vocals are tearing people’s heads off, you can always turn them down. Don’t be lazy! Get up on deck and listen to what it sounds like. If there are problem areas at certain frequencies, then get on the appropriate EQ and tame them. Yes, the feedback points can change a bit when things get moved around and people get in the room, but that’s not an excuse to just sit on your hands. Do some homework now, and life will be easier later.

Don’t Squeeze Me, Bro

A sort of corollary to the above is that anything which acts to restrict your vocal monitor volume is something you should think twice about. If you were thinking about inserting a compressor in such a way that it would affect monitor world, think again.

A compressor reduces dynamic range by reducing gain on signals that exceed a preset threshold. For a vocalist, this means that the monitor level of their singing may no longer track in a 1:1 ratio with their output at the mic. They sing with more force, but the return through the monitors doesn’t get louder at the same rate. If the singer is varying their dynamics to track with the band, this failure of the monitors to stay “in ratio” can cause the vocals to become swamped.

And, in certain situations, monitors that don’t track with vocal dynamics can cause a singer to hurt themselves. They don’t hear their voice getting as loud as it should, so they push themselves harder – maybe even to the point that they blow out their voice.

Of course, you could try to compensate for the loss of level by increasing the output or “makeup” gain on the compressor, but oh! There’s that “too much loop gain” problem again. (Compressors do NOT cause feedback. That’s a myth. Steady-state gain applied to compensate for compressor-applied, variable gain reduction, on the other hand…)
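If it helps to see the level math, here’s a minimal static-curve sketch of a hard-knee compressor. The threshold, ratio, and makeup values are invented purely for illustration:

```python
def comp_out_db(in_db, threshold_db=-20.0, ratio=4.0, makeup_db=0.0):
    """Static curve of a simple hard-knee compressor (illustrative numbers)."""
    if in_db <= threshold_db:
        return in_db + makeup_db
    return threshold_db + (in_db - threshold_db) / ratio + makeup_db

# The singer pushes 12 dB harder, but the wedge only answers with 3 dB:
print(comp_out_db(-20.0), comp_out_db(-8.0))   # -20.0 then -17.0

# Makeup gain restores the average level...and quietly parks 9 dB of
# steady-state gain in the acoustical loop.
print(comp_out_db(-20.0, makeup_db=9.0))       # -11.0
```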

The upshot?

Do NOT put a compressor across a vocalist such that monitor world will be affected. (The exception is if you have been specifically asked to do so by an artist that has had success with the compressor during a real, “live-fire” dress rehearsal.) If you don’t have an independent monitor console or monitor-only channels, then bus the vocals to a signal line that’s only directly audible in FOH, and compress that signal line.

The Bark Is The Bite

One thing I have been very guilty of in the past, and am still sometimes guilty of, is dialing up a “sounds good in the studio” vocal tone for monitor world. That doesn’t sound like it would be a problem, but it can be a huge one.

The issue at hand is that what sounds impressive in isolation often isn’t so great when the full band is blasting away. This is very similar to guitarists who have “bedroom” tone. When we’re only listening to a single source, we tend to want that source to consume the entire audible spectrum. We want that single instrument or voice to have extended lows and crisp, snappy HF information. We will sometimes dig out the midrange in order to emphasize the extreme ends of the audible spectrum. When all we’ve got to listen to is one thing, this can all sound very “sexy.”

And then the rest of the band starts up, and our super-sexy, radio-announcer vocals become the wrong thing. Without a significant amount of midrange “bark,” the parts of the spectrum truly responsible for vocal audibility get massacred by the guitars. And drums. And keyboards. All that’s left poking through is some sibilance. Then, when you get on the gas to compensate, the low-frequency material starts to feed back (because it’s loud, and the mic probably isn’t as directional as you think at low frequencies), and the high-frequency material also starts to ring (because it’s loud, and probably has some nasty peaks in it as well).

Yes – a good monitor mix means listenable vocals. You don’t want mud or nasty “clang” by any means, but you need the critical midrange zone – say, 500 Hz to 3 kHz or 4 kHz – to be at least as loud as the rest of the audible spectrum in the vocal channel. Midrange that jumps at you a little bit doesn’t sound as refined as a studio recording, but this isn’t the studio. It’s live-sound. Especially on the stage, hi-fi tone often has to give way to actually being able to differentiate the singer. There are certainly situations where studio-style vocal tone can work on deck, but those circumstances are rarely encountered with rock bands in small spaces.

Stay Dry

An important piece of vocal monitoring is intelligibility. Intelligibility has to do with getting the oh-so-important midrange in the right spot, but it also has to do with signals starting and stopping. Vocal sounds with sharply defined start and end points are easy for listeners to parse for words. As the beginnings and ends of vocal sounds get smeared together, the difficulty of parsing the language goes up.

Reverb and delay (especially) cause sounds to smear in the time domain. I mean, that’s what reverb and delay are for.

But as such, they can step on vocal monitoring’s toes a bit.

If it isn’t a specific need for the band, it’s best to leave vocals dry in monitor world. Being able to extract linguistic information from a sound is a big contributor to the perception that something is loud enough or not. If the words are hard to pick out because they’re all running together, then there’s a tendency to run things too hot in order to compensate.

The first step with vocal monitors is to get them loud enough. That’s the key goal. After that goal is met, then you can see how far you can go in terms of making things pretty. Pretty is nice, and very desirable, but it’s not the first task or the most important one.


Unsuckifying Your Monitor Mix

Communicate well, and try not to jam too much into any one mix.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.


Monitors can be a beautiful thing. Handled well, they can elicit bright-eyed, enthusiastic approbations like “I’ve never heard myself so well!” and “That was the best sounding show EVER!” They can very easily be the difference between a mediocre set and a killer show, because of how much they can influence the musicians’ ability to play as a group.

I’ve said it to many people, and I’m pretty sure I’ve said it here: As an audio-human, I spend much more time worrying about monitor world than FOH (Front Of House). If something is wrong out front, I can hear it. If something is wrong in monitor world, I won’t hear it unless it’s REALLY wrong. Or spiraling out of control.

…and there’s the issue. Bad monitor mixes can do a lot of damage. They can make the show less fun for the musicians, or totally un-fun for the musicians, or even cause so much on stage wreckage that the show for the audience becomes a disaster. On top of that, the speed at which the sound on deck can go wrong can be startlingly high. If you’ve ever lost control of monitor world, or have been a musician in a situation where someone else has had monitor world “get away” from them, you know what I mean. When monitors become suckified, so too does life.

So – how does one unsuckify (or, even better, prevent suckification of) monitor world?

Foundational Issues To Prevent Suckification

Know The Inherent Limits On The Engineer’s Perception

At the really high-class gigs, musicians and production techs alike are treated to a dedicated “monitor world” or “monitor beach.” This is an independent or semi-independent audio control rig that is used to mix the show for the musicians. There are even some cases where there are multiple monitor worlds, all run by separate people. These folks are likely to have a setup where they can quickly “solo” a particular monitor mix into their own set of in-ears, or a monitor wedge which is similar to what the musicians have. Obviously, this is very helpful to them in determining what a particular performer is hearing.

Even so, the monitor engineer is rarely in exactly the same spot as any particular musician. Consequently, if the musicians are on wedges, even listening to a cue wedge doesn’t exactly replicate the total acoustic situation being experienced by the players.

Now, imagine a typical small-venue gig. There’s probably one audio human doing everything, and they’re probably listening mostly to the FOH PA. The way that FOH combines with monitor world can be remarkably different out front versus on deck. If the engineer has a capable console, they can solo up a complete monitor mix, probably through a pair of headphones. (A cue wedge is pretty unlikely to have been set up. They’re expensive and consume space.) A headphone feed is better than nothing, but listening to a wedge mix in a set of cans only tells an operator so much. Especially when working on a drummer’s mix, listening to the feed through a set of headphones has limited utility. A guy or gal might set up a nicely balanced blend, but have no real way of knowing if that mix is even truly audible at the percussionist’s seat.

If you’re not so lucky as to have a flexible console, your audio human will be limited to soloing individual inputs.

The point is that, at most small-venue shows, an audio human at FOH can’t really be expected to know what a particular mix sounds like as a total acoustic event. Remote-controlled consoles can fix this temporarily, of course, but as soon as the operator leaves the deck…all bets are off. If you’re a musician, assume that the engineer does NOT have a thoroughly objective understanding of what you’re hearing. If you’re an audio human, make the same assumption about yourself. Having made those assumptions, be gentle with yourself and others. Recognize that anything “pre set” is just a wild guess, and further, recognize that trying to take a channel from “inaudible in a mix” to “audible” is going to take some work and cooperation.

Use Language That’s As Objective As Possible

Over the course of a career, audio humans create mental mappings between subjective statements and objective measurements. For instance, when I’m working with well-established monitor mixes, I translate requests like “Could I get just a little more guitar?” into “Could I get 3 dB more guitar?” This is a necessary thing for engineers to formulate for themselves, and it’s appropriate to expect that a pro-level operator has some ability to interpret subjective requests.

At the same time, though, it can make life much easier when everybody communicates using objective language. (Heck, it makes it easier if there’s two-way communication at all.)

For instance, let’s say you’re an audio human working with a performer on a monitor mix, and they ask you for “a little more guitar.” I strongly recommend making whatever change you translate “a little more” to mean, and then stating that change (in objective terms) over the talkback. Saying something like, “Okay, that’s 3 dB more guitar in mix 2” creates a helpful dialogue. If that 3 dB more guitar wasn’t enough, the stating of the change opens a door for the musician to say that they need more. Also, there’s an opportunity for the musician’s perception to become calibrated to an objective scale – meaning that they get an intuitive sense for what a certain dB boost “feels” like. Another opportunity that arises is for you and the musician to become calibrated to each other’s terminology.

Beyond that, a two-way dialogue fosters trust. If you’re working on monitors and are asked for a change, making a change and then stating what you did indicates that you are trying to fulfill the musician’s wishes. This, along with the understanding that gets built as the communication continues, helps to mentally place everybody on the same team.

For musicians, as you’re asking for changes in your monitor mixes, I strongly encourage you to state things in terms of a scale that the engineer can understand. You can often determine that scale by asking questions like, “What level is my vocal set at in my mix?” If the monitor sends are calibrated in decibels, the engineer will probably respond with a decibel number. If they’re calibrated in an arbitrary scale, then the reply will probably be an arbitrary number. Either way, you will have a reference point to use when asking for things, even if that reference point is a bit “coarse.” Even if all you’ve got is to request that something go from, say, “five to three,” that’s still functionally objective if the console is labeled using an arbitrary scale.

For decibels, a useful shorthand to remember is that 3 dB should be a noticeable change in level for something that’s already audible in your mix. “Three decibels” is a 2:1 power ratio, although you might personally feel that “twice as loud” is 6 dB (4:1) or even 10 dB (10:1).
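The arithmetic behind that shorthand, for the curious:

```python
import math

for power_ratio in (2, 4, 10):
    print(f"{power_ratio}:1 power ratio = {10 * math.log10(power_ratio):.1f} dB")
# 2:1 = 3.0 dB, 4:1 = 6.0 dB, 10:1 = 10.0 dB
```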

Realtime Considerations To Prevent And Undo Suckification

Too Much Loop Gain, Too Much Volume

Any instrument or device that is substantially affected by the sound from a monitor wedge, and is being fed through that same wedge, is part of that mix’s “loop gain.” Microphones, guitars, basses, acoustic drums, and anything else that involves body or airborne resonance are all factors. When their output is put through a monitor speaker, these devices combine with the monitor signal path to form an acoustical, tuned circuit. In tuned circuits, the load impedance determines whether the circuit “rings.” As the load impedance drops, the circuit is more and more likely to ring or resonate for a longer time.

If that last bit made your eyes glaze over, don’t worry. The point is that more gain (turning something up in the mix) REDUCES the impedance, or opposition, to the flow of sound in the loop. As the acoustic impedance drops, the acoustic circuit is more likely to ring. You know, feed back. *SQEEEEEALLLL* *WHOOOOOwoowooooOOOM*

Anyway.

The thing for everybody to remember – audio humans and musicians alike – is that a monitor mix feeding a wedge becomes progressively more unstable as gain is added. As ringing sets in, the sound quality of the mix drops off. Sounds that should start and then stop quickly begin to “smear,” and with more gain, certain frequency ranges become “peaky” as they ring. Too much gain can sometimes begin to manifest itself as an overall tone that seems harsh and tiring, because sonic energy in an irritating range builds up and sustains itself for too long. Further instability results in audible feedback that, while self-correcting, sounds bad and can be hard for an operator to zero-in on. As instability increases further, the mix finally erupts into “runaway” feedback that’s both distracting and unnerving to everyone.

The fix, then, is to keep each mix’s loop gain as low as possible. This often translates into keeping things OUT of the monitors.

As an example, there’s a phenomenon I’ve encountered many times where folks start with vocals that work…and then add a ton of other things to their feed. These other sources are often far more feedback resistant than their vocal mic can be, and so they can apply enough gain to end up with a rather loud monitor mix. Unfortunately, they fall in love with the sound of that loud mix, except for the vocals which have just been drowned. As a result, they ask for the vocals to be cranked up to match. The loop gain on the vocal mic increases, which destabilizes the mix, which makes monitor world harder to manage.

As an added “bonus,” that blastingly loud monitor mix is often VERY audible to everybody else on stage, which interferes with their mixes, which can cause everybody else to want their overall mix volume to go up, which increases loop gain, which… (You get the idea.)

The implication is that, if you’re having troubles with monitors, a good thing to do is to start pulling things out of the mixes. If the last thing you did before monitor world went bad was, say, adding gain to a vocal mic, try reversing that change and then rebuilding things to match the lower level.

And not to be harsh or combative, but if you’re a musician and you require high-gain monitors to even play at all, then what you really have is an arrangement, ensemble, ability, or equipment problem that is YOURS to fix. It is not an audio-human problem or a monitor-rig problem. It’s your problem. This doesn’t mean that an engineer won’t help you fix it, it just means that it’s not their ultimate responsibility.

Also, take notice of what I said up there: High-GAIN monitors. It is entirely possible to have a high-gain monitor situation without also having a lot of volume. For example, 80 dB SPL C is hardly “rock and roll” loud, but getting that output from a person who sings at the level of a whisper (50 – 60 dB SPL C) requires 20 – 30 dB of boost. For the acoustical circuits that I’ve encountered in small venues, that is definitely a high-gain situation. Gain is the relative level increase or decrease applied to a signal. Volume is the output associated with a signal level resultant from gain. They are related to each other, but the relationship isn’t fixed in terms of any particular gain setting.
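In code form, the distinction is almost embarrassingly simple. The SPL numbers below are assumptions for illustration, not measurements:

```python
singer_spl = 55.0   # a whisper-level vocalist at the mic (assumed, dB SPL C)
wedge_spl = 80.0    # the requested monitor output (assumed, dB SPL C)

required_gain = wedge_spl - singer_spl
print(f"{required_gain:.0f} dB of gain for a not-so-loud wedge")  # 25 dB
```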

Conflicting Frequency Content

Independent of being in a high-gain monitor conundrum, you can also have your day ruined by masking. Masking is what occurs when two sources with similar frequency content become overlaid. One source will tend to dominate the other, and you lose the ability to hear both sources at once. I’ve had this happen to me on numerous occasions with pianists and guitar players. They end up wanting to play at the same time, using substantially the same notes, and the sonic characteristics of the two instruments can be surprisingly close. What you get is either too-loud guitar, too-loud piano, or an indistinguishable mash of both.

In a monitor-mix situation, it’s helpful to identify when multiple sources are all trying to occupy the same sonic space. If sources can’t be distinguished from one another until one sound just gets obliterated, then you may have a frequency-content collision in progress. These collisions can result in volume wars, which can lead to high-gain situations, which result in the issues I talked about in the previous section. (Monitor problems are vicious creatures that breed like rabbits.)

After being identified, frequency-content issues can be solved in a couple of different ways. One way is to use equalization to alter the sonic content of one source or another. For instance, a guitar and a bass might be stepping on each other. It might be decided that the bass sound is fine, but the guitar needs to change. In that case, you might end up rolling down the guitar’s bottom end, and giving the mids a push. Of course, you also have to decide where this change needs to take place. If everything was distinct before the monitor rig got involved, then some equalization change from the audio human is probably in order. If the problem largely existed before any monitor mixes were established, then the issue likely lies in tone choice or song arrangement. In that case, it’s up to the musicians.

One thing to be aware of is that many small-venue mix rigs have monitor sends derived from the same channel that feeds FOH. While this means that the engineer’s channel EQ can probably be used to help fix a frequency collision, it also means that the change will affect the FOH mix as well. If FOH and monitor world sound significantly different from each other, a channel EQ configuration that’s correct for monitor world may not be all that nice out front. Polite communication and compromise are necessary from both the musicians and the engineer in this case. (Certain technical tricks are also possible, like “multing” a problem source into a monitors-only channel.)

Lack Of Localization

Humans have two ears so that we can determine the location and direction of sounds. In music, one way for us to distinguish sources is for us to recognize those instruments as coming from different places. When localization information gets lost, then distinguishing between sources requires more separation in terms of overall volume and frequency content. If that separation isn’t possible to get, then things can become very muddled.

This relates to monitors in more than one way.

One way is a “too many things in one place that’s too loud” issue. In this instance, more and more material gets put into a monitor mix, at a high enough volume that the wedge obscures the other sounds on deck. What the musician originally heard as multiple, individually localized sources is now a single source – the wedge. The loss of localization information may mean that frequency-content collisions become a problem, which may lead to a volume-war problem, which may lead to a loop-gain problem.

Another possible conundrum is “too much volume everywhere.” This happens when a particular source gets put through enough wedges at enough volume for it to feel as though that single source is everywhere. This can ruin localization for that particular source, which can also result in the whole cascade of problems that I’ve already alluded to.

Fixing a localization problem pretty much comes down to having sounds occupy their own spatial point as much as possible. The first thing to do is to figure out if all the volume used for that particular source is actually necessary in each mix. If the volume is basically necessary, then it may be feasible to move that volume to a different (but nearby) monitor mix. For some of the players, that sound will get a little muddier and a touch quieter, but the increase in localization may offset those losses. If the volume really isn’t necessary, then things get much easier. All that’s required is to pull back the monitor feeds from that source until localization becomes established again.

It’s worth noting that “extreme” cases are possible. In those situations, it may be necessary to find a way to generate the necessary volume from a single, localized source that’s audible to everyone on the deck. A well placed sidefill can do this, and an instrument amplifier in the correct position can take this role if a regular sidefill can’t be conjured up.

Wrapping Up

This can be a lot to take in, and a lot to think about. I will freely confess to not always having each of these concepts “top of mind.” Sometimes, audio turns into a pressure situation where both musicians and techs get chased into corners. It can be very hard for a person who’s not on deck to figure out what particular issue is in effect. For folks without a lot of technical experience who play or sing, identifying a problem beyond “something’s not right” can be too much to ask.

In the heat of the moment, it’s probably best to simply remember that yes, monitors are there to be used – but not to be overused. Effective troubleshooting is often centered around taking things out of a misbehaving equation until the equation begins to behave again. So, if you want to unsuckify your monitors, try getting as much out of them as possible. You may be surprised at what actually ends up working just fine.


Echoes Of Feedback

By accident, I seem to have discovered an effective, alternate method for “ringing out” PA systems and monitor rigs.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.


Sometimes, the best way to find something is to be looking for something else entirely.

A couple of weeks ago, I got it into my head to do a bit of testing. I wanted to see how much delay time I could introduce into a monitor feed before I noticed that something was amiss. To that end, I took a mic and monitor that were already set up, routed the mic through the speaker, and inserted a delay (with no internal feedback) on the signal path. I walked between FOH (Front Of House) and the stage, each time adding another millisecond of delay and then talking into the mic.

For several go-arounds, everything was pretty nondescript. I finally got to a delay time that was just noticeable, and then I thought, “What the heck. I should put in something crazy to see how it sounds.” I set the delay time to something like a full second, and then barked a few words into the mic.

That’s when it happened.

First, silence. Then, loud and clear, the delayed version of what I had said.

…and then, the delayed version of the delayed version of what I had just said, but rather more quietly.

“Whoops,” I thought, “I must have accidentally set the delay’s feedback to something audible.” I began walking back to FOH, only to suddenly realize that I hadn’t messed up the delay’s settings at all. I had simply failed to take into account the entire (and I do mean the ENTIRE) signal path I was working on.

Hold that thought.

Back In The Day

There was a time when delay effects weren’t the full-featured devices we’re used to. Whether the unit was using a bit of tape or some digital implementation, you didn’t always get a processor with a knob labeled “feedback,” or “regen,” or “echoes,” or whatever. There was a chance that your delay processor did one thing: It made audio late. Anything else was up to you.

Because of this, certain consoles of the day had a feature on their aux returns that allowed for the signal passing through the return to be “multed” (split), and then sent back through the aux send to the processor it came from. (On SSL consoles, this feature was called “spin.”) You used this to get the multiple echoes we usually associate with delay as an effect for vocals or guitar.

At some point, processor manufacturers decided that including this feature inside the actual box they were selling was a good idea, and we got the “feedback” knob. There’s nothing exotic about the control. It just routes some of the output back to the input. So, if you have a delay set for some number of milliseconds, and send a copy of the output back to the input end (at a reduced level), then you get a repeat every time your chosen number of milliseconds ticks by. Each repeat drops in level by the gain reduction applied at the feedback control…and eventually, the echo signal can’t be readily heard anymore.

But anyway, the key point here is that whether or not it’s handled “internally,” repeating echoes from a delay line are usually caused by some amount of the processor’s output returning to the front end to be processed again. (I say “usually” because it’s entirely possible to conceive of a digital unit that operates by taking an input sample, delaying the sample, playing the sample back at some volume, and then repeats the process for the sample a certain number of times before stopping the process. In this case, the device doesn’t need to listen to its own output to get an echo.)

I digress. Sorry.

If the output were to be routed back to the input at “unity gain” (with no reduction or increase in level relative to the original output signal), what would happen? That’s right – you’d get an unlimited number of repeats. If the output is routed back to the front end at greater than unity gain, what would happen? Each repeat would grow in level until the processor’s output was completely saturated in a hellacious storm of distorted echo.
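If you want to watch all of that happen without endangering any loudspeakers, here’s a toy feedback delay of my own in Python. The delay time and gain values are arbitrary:

```python
import numpy as np

def feedback_delay(x, delay_samples, fb_gain):
    """A delay line whose output is fed back to its own input, like the
    console 'spin' trick or a modern feedback knob."""
    y = x.astype(float)                     # work on a copy of the input
    for n in range(delay_samples, len(y)):
        y[n] += fb_gain * y[n - delay_samples]
    return y

x = np.zeros(48000)
x[0] = 1.0                                  # one impulse into one second @ 48 kHz
for fb in (0.5, 1.0, 1.1):
    y = feedback_delay(x, 12000, fb)        # 250 ms delay time
    print(fb, np.round(y[::12000][:4], 2))  # the first few repeats
# 0.5 -> each repeat 6 dB quieter; 1.0 -> repeats forever at the same level;
# 1.1 -> every repeat louder than the last, headed for saturation.
```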

Does that remind you of anything?

Acoustical Circuits

This is where my previous sentence comes into play: “I had simply failed to take into account the entire (and I do mean the ENTIRE) signal path I was working on.” I had temporarily forgotten that the delay line I was using for my tests had not magically started to exist in a vacuum, somehow divorced from the acoustical circuit it was attached to. Quite the opposite was true. The feedback setting on the processor might have been set at “negative infinity,” but that did NOT mean that processor output couldn’t return to the input.

It’s just that the output wasn’t returning to the input by a path that was internal to the delay processor.

I’ve talked about acoustical, resonant circuits before. We get feedback in live-audio rigs because, rather like a delay FX unit, our output from the loudspeakers is acoustically routed back to our input microphones. As the level of this re-entrant signal rises towards being equal with the original input, the hottest parts of the signal begin to “smear” and “ring.” If the level of the re-entrant signal reaches “unity,” then the ringing becomes continuous until we do something to reduce the gain. If the returning signal goes beyond unity gain, we get runaway feedback.

This is not fundamentally different from our delay FX unit. The signal output from the PA or monitor speakers takes some non-zero amount of time to get back into the microphone, just like the feedback to the delay takes a non-zero amount of time to return. We’re just not used to thinking of the microphone loop in that way. We don’t consciously set a delay time on the audio re-entering the mic, and we don’t intentionally set an amount of signal that we want to re-enter the capsule – we would, of course, prefer that ZERO signal re-entered the capsule.

And the “delay time” through the mic-loudspeaker loop is just naturally imposed on us. We don’t dial up “x number of milliseconds” on a display, or anything. However long it takes audio to find its way back through the inputs is however long it takes.

Even so, feedback through our mics is basically the same creature as our “hellacious storm” of echoes through a delay processor. The mic just squeals, howls, and bellows because of differences in overall gain at different frequencies. Those frequencies continue to echo – usually, so quickly that we don’t discern individual repeats – while the other frequencies die down. That’s why the fighting of feedback so often involves equalization: If we can selectively reduce the gain of the frequencies that are ringing, we can get their “re-entry level” down to the point where they don’t noticeably ring anymore. The echoes decay so far and so fast that we don’t notice them, and we say that the system has stabilized.
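To put some hypothetical numbers on that: suppose the loop gain at a ringing frequency is only a hair below unity. Here’s the back-of-the-napkin arithmetic, with invented figures:

```python
# Suppose the loop gain at the ringing frequency -- mic, to console, to
# speaker, through the air, back into the mic -- is -0.5 dB per round
# trip (a hypothetical number). The dB change is additive per trip.
def level_after_trips(loop_gain_db, trips):
    return loop_gain_db * trips

for cut_db in (0.0, -3.0, -6.0):
    loop = -0.5 + cut_db   # an EQ cut adds directly to the loop gain
    print(f"EQ cut {cut_db:+.0f} dB -> after 20 round trips: "
          f"{level_after_trips(loop, 20):+.1f} dB")
```

At half a dB of decay per trip, the ring hangs around long enough to be very audible. Add even a 3 dB cut at that frequency, and the echoes fall off a cliff.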

All of this is yet another specific case where the patterns of audio behavior mirror and repeat themselves in places you might not expect.

As it turns out, you can put this to very powerful use.

The Application

As I discussed in “Transdimensional Noodle Baking,” we can do some very interesting things with audio when it comes to manipulating it in time. Making light “late” is a pretty unwieldy thing for people to do, but making audio late is almost trivial in comparison.

And making audio events late, or spreading them out in time, allows you to examine them more carefully.

Now, you might not associate careful examination with fighting feedback issues, but being able to slow things down is a big help when you’re trying to squeeze the maximum gain-before-feedback out of something like a monitor rig. It’s an especially big help when you’re like me – that is to say, NOT an audio ninja.

What I mean by not being an audio ninja is that I’m really quite poor at identifying frequencies. Those guys who can hear a frequency start to smear a bit, and instantly know which fader to grab on their graphic EQ? That’s not me. As such, I hate graphic EQs and avoid putting them into systems whenever possible. I suppose that I could dive into some ear-training exercises, but I just can’t seem to be bothered. I have other things to do. As such, I have to replace ability with effort and technology.

Now, couple another issue with that. The other issue is that the traditional method of “ringing out” a PA or monitor rig really isn’t that great.

Don’t get me wrong! Your average ringout technique is certainly useful. It’s a LOT better than nothing. Even so, the method is flawed.

The problem with a traditional ringout procedure is that it doesn’t always simulate all the variables that contribute to feedback. You can ring out a mic on deck, walk up, check it, and feel pretty good…right up until the performer asks for “more me,” and you get a high-pitched squeal as you roll the gain up beyond where you had it. The reason you didn’t find that high-pitched squeal during the ringout is that you didn’t have a person with their face parked in front of the mic. Humans are good absorbers, but we’re also partially reflective. Stick a person in front of the mic, and a somewhat greater portion of the monitor’s output gets deflected back into the capsule.

You can definitely test for this problem if you have an assistant, or a remote for the console, but what if you have neither of those things? What if you’ve got some other weird, phantom ring that’s definitely there, and definitely annoying, but hard to pin down? It might be too quiet to easily catch on a regular RTA (Real Time Analyzer), and even if you can carry an RTA with you (if you have a smartphone, you can carry a basic analyzer everywhere – for free), you still might not be able to accurately whistle or sing the offending frequency while standing where you can easily read the display.

But what if you could spread out the ringing into a series of discrete echoes? What if you could visually record and inspect those echoes? You’d have a very powerful tuning tool at your disposal.

The Implementation

I admit, I’m pretty lucky. Everything I need to implement this super-nifty feedback finding tool lives inside my mixing console. For other folks, there’s going to be more “doing” involved. Nevertheless, you really only need to add two key things to your audio setup to have access to all this:

1) A digital delay that can pass all audio frequencies equally, is capable of long (1 second or more) delays, and can be run with no internal feedback.

2) A spectrograph that will show you a range of 10 seconds or more, and will also show you the frequency under a cursor that you can move around to different points of interest.

A spectrograph is a type of audio analysis system that is specifically meant to show frequency magnitude over a certain amount of time. This is similar to “waterfall” plots that show spectral decay, but a spectrograph is probably much easier to read for this application.
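If you don’t have a spectrograph handy, a rough-and-ready one isn’t hard to improvise. Here’s a minimal Python sketch, assuming you’ve captured the signal path under test to a WAV file (the filename is hypothetical):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

# Read the recording of the signal path under test.
fs, audio = wavfile.read("ringout_test.wav")
audio = audio.astype(np.float64)
if audio.ndim > 1:
    audio = audio.mean(axis=1)        # fold multichannel to mono

f, t, Sxx = spectrogram(audio, fs=fs, nperseg=4096, noverlap=2048)

# Plot magnitude in dB; ringing frequencies show up as horizontal
# stripes that persist while everything else fades to the floor.
plt.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-12), shading="auto")
plt.ylim(0, 8000)                     # an arbitrary zoom for vocal mics
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.colorbar(label="Level (dB)")
plt.show()
```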

The delay is inserted in the audio path of the microphone, in such a way that the only signal audible in the path is the output of the delay. The delay time should be set to somewhere around 1.5 to 2 seconds, long enough to speak a complete phrase into the mic. The output of the signal path is otherwise routed to the PA or monitors as normal, and the spectrograph is hooked up so that it can directly (that is, via an electrical connection) “listen” to the signal path you’re testing. The spectrograph should be set up so that ambient noise is too low to be visible on the analysis – otherwise, the output will be harder to interpret.

To start, you apply a “best guess” amount of gain to the mic pre and monitor sends. You’ll need to wait several seconds to see if the system starts to ring out of control, because the delay is making everything “late.” If the system does start to ring, the problem frequencies should be very obvious on the spectrograph. Adjust the appropriate EQs accordingly, or pull the gain back a bit.

With the spectrograph still running, walk up to the mic. Stick your face right up on the mic, and clearly but quickly say, “Check, test, one, two.” (“Check, test, one, two” is a phrase that covers most of the audible frequency spectrum, and has consonant sounds that rely on high-mid and high frequency reproduction to sound good.)

DON’T FREAKIN’ MOVE.

See, what you’re effectively doing is finding the “hot spots” in the sound that’s re-entrant to the microphone, and if you move away from the mic you change where those hot spots are. So…

Stay put and listen. The first thing you’ll hear is the actual, unadulterated signal that went through the microphone and got delivered through the loudspeaker. The repeats you will hear subsequently are what is making it back into the microphone and getting re-amplified. If you hear the repeats getting more and more “odd” and “peaky” sounding, that’s actually good – it means that you’re finding problem areas.

After the echoes have decayed mostly into silence, or are just repeating and repeating with no end in sight, walk back to your spectrograph and freeze the display. If everything is set up correctly, you should be able to visually identify sounds that are repeating. The really nifty thing is that the problem areas will repeat more times than the non-problem areas. While other frequencies drop off into black (or whatever color is considered “below the scale” by your spectrograph), the ringy frequencies will still be visible.

You can now use the appropriate EQs to pull your problem frequencies down.

Keep iterating the procedure until you feel like you have a decent amount of monitor level. Try to run the tests with gains and mix levels set as close as possible to what they’ll be at the show. Lots of open mics going to lots of different places will ring differently than a few mics each going to a single destination.

Also, remember to disengage the delay, walk up on deck, and do a “sanity” check to make sure that everything you did was actually helpful.



If you’re having trouble visualizing this, here are some screenshots depicting one of my own trips through this process:

This spectrograph reading clearly shows some big problems in the low-mid area.

Some corrective EQ goes in, and I retest.

That’s better, but we’re not quite there.

More EQ.

That seems to have done the trick.



I can certainly recognize that this might be more involved than what some folks are prepared to do. I also have to acknowledge that this doesn’t work very well in a noisy environment.

Even so, turning feedback problems into a series of discrete, easily examined echoes has been quite a revelation for me. You might want to give it a try yourself.


Vocal Processors, And The Most Dangerous Knob On Them

If you were wondering, the most dangerous knob is the one that controls compression.



Not every noiseperson is a fan of vocal processors.

(Vocal processors, if you didn’t know, are devices that are functionally similar to guitar multi-fx units – with the exception that they expect input to come from a vocal mic, and so include a microphone preamp.)

Vocal processors can be deceptively powerful devices, and as such, can end up painting an audio-human into a corner that they can’t get out of. The other side of that coin is that they can allow you to intuitively dial up a sound that you like, without you having to translate your intuitive choices into technical language while at a gig.

What I mean by that last bit is this: Let’s say that you like a certain kind of delay effect on your voice. There’s a specific delay time that just seems perfect, a certain number of repeat echoes that feels exactly right, an exact wet/ dry mix that gives you goosebumps, and an effect tonality that works beautifully for you. With your own vocal processor, you can go into rehearsal and fiddle with the knobs for as long as it takes to get exactly that sound. Further, you don’t have to be fully acquainted with what all the settings mean in a scientific sense. You just try a bit more or less of this or that, and eventually…you arrive. If you then save that sound, and take that vocal processor to a gig, that very exact sound that you love comes with you.

Which is great, because otherwise you have to either go without FX, or (if you’re non-technical) maybe struggle a bit with the sound person. The following are some conversations that you might have.

You: Could I have both reverb and delay on my vocal?

FOH (Front Of House) Engineer: Ummm…we only have reverb.

You: Oh.

You: Gimme a TON of delay in the monitors.

Audio Human: Oh, sorry, my FX returns can only be sent to the main mix.

You: Aw, man…

You: Could I have a touch more mid in my voice?

[Your concept of “a touch more mid” might be +6 dB at 2000 Hz, with a 2-octave-wide filter. The sound-wrangler’s concept of “a touch more mid” might be +3 dB at 750 Hz, with a one-octave-wide filter. Further, you might not be able to put a number on what frequency you want, especially if what I just said sounds like gobbledygook. Heck, the audio human might not even be able to connect a precise number with what they’re doing.]

Sound Wrangler: How’s that?

You: That’s not quite right. Um…

[This one’s directly in line with my original example.]

You: Could I get some delay on my voice?

Audio Human: Sure!

[The audio human dials up their favorite vocal-delay sound.]

You: Actually, it’s more of a slap-delay.

[Your concept of slap-delay might be 50 ms of delay time. The audio-human’s concept of slap-delay might be 75 ms.]

Audio Human: How’s that?

You: That’s…better. It’s not quite it, though. Maybe if there was one less repeat?

[The audio-human’s delay processor doesn’t work in “repeats.” It works in the dB level of the signal that’s fed back into the processor. The audio-human takes a guess, and ends up with what sounds like half a repeat less.]

Audio Human: Is that better?

You: Yeah, but it’s still not quite there. Um…

Having your own vocal processor can spare you from all this. It also spares the engineer from having to manage when the FX should be “in” or bypassed. (This often isn’t a huge issue, but it can become one if you’re really specific about what you want to happen where.) There are real advantages to being self-contained.

There are negative sides, though, as I alluded to earlier. Having lots of power at your disposal feels good, but if you’re not well-acquainted with what that power is actually doing, you can easily sabotage yourself. And your band. And the engineer who’s trying to help you.

EQ Is A Pet Dog

The reason that I say that “EQ is a pet dog” is twofold.

1) EQ is often your friend. Most of the time, it’s fun to play with, and it “likes” to help you out.

2) In certain situations, an EQ setting that was nice and sweet can suddenly turn around and “bite” you. This isn’t because EQ is “a bad dog,” it’s because certain equalization tweaks in certain situations just don’t work acoustically.

What I’ve encountered on more than one occasion are vocal-unit EQ settings that are meant to sound good in either low-volume or studio contexts. I’ve also encountered vocal-unit EQ that seems to have been meant to correct a problem with the rehearsal PA…which then CAUSES a problem in a venue PA that doesn’t need that correction.

To be more specific, I’ve been in various situations where folks had a whole busload of top-end added to their vocal sound. High-frequency boosts often sound good on “bedroom” or “headphone” vocals. Things get nice and crisp. “Breathy.” Even “airy,” if I dare to say so. In a rehearsal situation, this can still work. The rehearsal PA might not be able to get loud enough for the singer to really hear themselves when everybody’s playing, especially if feedback can’t be easily corrected. However, the singer hears that nice, crisp vocal while everybody’s NOT playing, and remembers that sound even when they get swamped.

Anyway.

The problem with having overly hyped high-end in a live vocal (especially with a louder band in a small room) is really multiple problems. First, it tends to focus your feedback issues into the often finicky and unpredictable zone of high-frequency material. If there’s a place where both positionally dependent and positionally independent frequency response for mics, monitors, and FOH speakers is likely to get “weird” and “peaky,” the high-frequency zone is that place. (What I mean by “positionally dependent” is that high-frequency response is pretty easy to focus into a defined area…and what THAT means is that you can be in a physical position where you have no HF feedback problems, and then move a couple of steps and make a quarter turn and SQUEEEEAAALLL!)

The second bugbear associated with cranked high-end is that, when the vocals are no longer isolated, the rest of the band can bleed into the vocal mic LIKE MAD. That HF boost that sounds so nice on vocals by themselves is now a cymbal and guitar-hash louder-ization device. If we get into a high-gain situation (which can happen even with relatively quiet bands), what we then end up doing is making the band sound even louder when compared to your voice. If the band started out a bit loud, we may just have gotten to the audience’s tipping point – especially since high-frequency information at “rock” volume can be downright painful. Further, we’re now spending electrical and acoustical headroom on what we don’t want (more of the band’s top end), instead of what we do want (your vocal’s critical range).

Now, I’m not saying that you can’t touch the EQ in your vocal processor, or that you shouldn’t use your favorite manufacturer preset. What I am saying, though, is that dramatic vocal-processor EQ can really wreck your day at the actual show. You might want to find a way to quickly get the EQ bypassed or “flattened,” if you can.

“Compression” Is The Most Dangerous Knob On That Thing

Now, why would I say that, especially after all my ranting about EQ?

Well, it’s like this.

An experienced audio tech with flexible EQ tools can probably “undo” enough of an unhelpful in-the-box equalization solution, given a bit of time. Compression, on the other hand, really can’t be fully “undone” in a practical sense in most situations. (Yes – there is a process called “companding” which involves compression and complementary expansion, but to make it work you have to have detailed knowledge of the compression parameters.) Like EQ, compression can contribute to feedback problems, but it does so in a “full bandwidth” sense that is also much more weird and hard to tame. It can also cause the “we’re making the band louder via the vocal mic” problem, but in a much more pronounced way. It can prevent the vocalist from actually getting loud enough to separate from the rest of the band – and it can even cause a vocalist to injure themselves.

Let’s pick all that apart by talking about what a compressor does.

A compressor’s purpose is to be an automatic fader that can react at least as quickly as a human (if not a lot more quickly), and just as consistently (if not a lot more consistently). When a signal exceeds a certain set-point, called the threshold, the automatic fader pulls the signal down based on the “ratio” parameter. When the signal falls back towards the threshold, the fader begins to return to its original gain setting. “Attack” is the speed at which the fader reduces gain, and “release” is the speed at which the fader returns to its original gain.
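Here’s a toy model of that automatic fader, just to make the parameters concrete. The numbers are arbitrary, and real compressors implement attack and release as proper time constants rather than the simple smoothing used here:

```python
import numpy as np

def compress(levels_db, threshold=-20.0, ratio=4.0,
             attack=0.5, release=0.05):
    """Toy compressor gain computer, working on per-block levels in dB.
    attack/release are plain smoothing factors (0..1) -- a sketch, not
    a product."""
    gain_db = 0.0
    out = []
    for level in levels_db:
        over = max(0.0, level - threshold)
        target = -over * (1.0 - 1.0 / ratio)   # gain reduction wanted
        if target < gain_db:                    # signal went over: attack
            gain_db += attack * (target - gain_db)
        else:                                   # signal dropped: release
            gain_db += release * (target - gain_db)
        out.append(gain_db)
    return np.array(out)

# A quiet verse (-30 dB), a loud chorus (-10 dB), then quiet again:
levels = np.array([-30.0] * 5 + [-10.0] * 5 + [-30.0] * 5)
for lvl, g in zip(levels, compress(levels)):
    print(f"in {lvl:+.0f} dB  ->  gain change {g:+.1f} dB")
```

Notice how the gain reduction lingers after the loud passage ends, then slowly lets go – that slow “letting go” is exactly when the phantom rings described below creep back in.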

Now, how can an automatic fader cause problems?

If the compressor threshold is set too low, and the ratio is too high, the vocalist is effectively pulled WAY down whenever they try to deliver any real power. If I were to set a vocalist so that they were comfortably audible when the band was silent, but then pulled that same vocalist down 10 dB when the band was actually playing, the likely result with quite a few singers would be drowned vocals. This is effectively what happens with an over-aggressive compressor. The practical way for the tech to “fight back” is to add, say, 10 dB (or whatever) of gain on their end – which is fine, except that most small-venue live-sound contexts can’t really tolerate that kind of compensating gain boost. In my experience, small room sound tends to be run pretty close to the feedback point, say, 3-6 dB away from the “Zone of Weird Ringing and Other Annoyances.” When that’s the case, going up 10 dB puts you 4-7 dB INTO the “Zone.”

But the thing is, the experience of that trouble area is extra odd, because your degree of being in it varies. When the singer really goes for it, the processor’s compressor reduces the vocal mic’s gain, and your feedback problem disappears. When they back off a bit, though, the compressor releases, which means the gain goes back up, which means that the strange, phantom rings and feedback chirps come back. It’s not like an uncompressed situation, where feedback builds at a consistent rate because the overall gain is also consistent. The feedback becomes the worst kind of problem – an intermittent one. Feedback and ringing that quickly comes and goes is the toughest kind to fight.

Beyond just that, there’s also the problem of bleed. If you have to add 10 dB of gain to a vocal-mic to battle against the compressor, then you’ve also added 10 dB of gain to whatever else the mic is hearing when the vocalist isn’t singing. Depending on the situation, this can lead to a very-markedly extra-loud band, with all kinds of unwanted FX applied, and maybe with ear-grating EQ across the whole mess. There’s also the added artistic issue of losing dynamic “swing” between vocal and instrumental passages. That is, the music is just LOUD, all the time, with no breaks. (An audience wears down very quickly under those conditions.) In the circumstance of a singer who’s not very strong when compared to the band, you can get the even more troublesome issue of the vocal’s intelligibility being wrecked by the bleed, even though the vocal is somewhat audible.

Last, there’s the rare-but-present monster of a vocalist hurting themselves. The beauty of a vocal processor is that the singer essentially hears what’s being presented to the audience. The ugliness behind the beauty is that this isn’t always a good thing. Especially in the contexts of rock and metal, vocal monitors are much less about sounding “hi-fi” and polished, and much more about “barking” at a volume and frequency range that has a fighting chance of telling the singer where they are. Even in non-rock situations, a vital part of the singer knowing where they are is knowing how much volume they’re producing when compared to the band. The most foolproof way for this to happen is for the monitors to “track” the vocalist’s dynamics on a 1:1 basis – if the singer sings 3 dB louder, the monitors get 3 dB louder.

When compression is put across the vocalist immediately after the vocal mic, the monitors suddenly fail to track the singer’s volume in a linear fashion. The singer sings with more power, but then the compressor kicks in and holds the monitor sound back. The vocalist, having lost the full volume advantage of their own voice plus the monitors, can feel that they’re too quiet. Thus, they try to sing louder to compensate. If this goes too far, the poor singer just might blow out their voice, and/ or be at risk for long-term health issues. An experienced vocalist with a great band can learn to hear, enjoy, and stop compensating for compression…but a green(er) singer in a pressure situation might not do so well.

(This is also why I advocate against inserting compression on a vocal when your monitor sends are post-insert.)

To be brutally honest, the best setting for a vocal-processor’s compressor is “bypass.” Exceptions can be made, but I think they have to be made on a venue-to-venue, show-to-show basis.

All of this might make it sound like I advocate against the vocal processor. That’s not true. I think they’re great for people in the same way that other powerful tools are great. It’s just that power tools can really hurt you if you’re not careful.


Tiding You Over

Here are a couple of audio-related thoughts to “hold you” until my next article gets done.


I did actually write a regular article this week, but it’s a guest piece for Schwilly Family Musicians. That means that everybody (including me) will have to wait a little while before it goes “live.” I figured it would be nice to at least have something else to look at this week – so here are two “snippet”-sized ideas about the live-sound life.

For the live-audio tech, life is a giant, resonant acoustical circuit, with the total system gain as the load impedance. More gain means LESS load impedance, and a higher tendency for the acoustical circuit to “ring” (feedback).

Consider the implications of this carefully.

If you have no choice but to constantly ride the faders during a mix, there is a good chance that:

Your mix might be deficient.
Or the song arrangements might be deficient.
Or the band might be deficient.

Or a combination of the above.


A Vocal Group Can Be Very Helpful

Microsurgery is great, but sometimes you need a sledgehammer.


Folks tend to get set in their ways, and I’m no exception. For ages, I have resisted doing a lot of “grouping” or “busing” in a live context, leaving such things for the times when I’ve been putting together a studio mix. I think this stems from wanting maximum flexibility, disliking the idea of hacking at an EQ that affects lots of inputs, and just generally being in a small-venue context.

Stems. Ha! Funny, because that’s a term that’s used for submixes that feed a larger mix. Submixes that are derived from grouping/ busing tracks together. SEE WHAT I DID THERE?

I’m in an odd mood today.

Anyway…

See, in a small-venue context, you don’t often get to mix in the same way as you would for a recording. It’s often not much help to, say, bus the guitars and bass together into a “tonal backline” group. It’s not usually useful because getting a proper mix solution so commonly comes down to pushing individual channels – or just bits of those channels – into cohesion with the acoustic contribution that’s already in the room with you. That is, I rarely need to create a bed for the vocals to sit in that I can carefully and subtly re-blend on a moment’s notice. No…what I usually need to do is fill in individual pieces of the mix, each in its own way. One guitar might have its fader down just far enough that the contribution from the PA is inaudible (but not so far down that I can’t quickly push a solo over the top), while the other guitar is very much a part of the FOH mix at all times.

The bass might be another issue entirely.

Anyway, I don’t need to bus things together for that. There’s no point. What I need to do for each channel is so individualized that a subgroup is redundant. Just push ’em all through the main mix, one at a time, and there you go. I don’t have to babysit the overall guitar/ bass backline level – I probably have plenty already, and my main problem is getting the vocals over the whole thing anyway.

The same overall reasoning works if you’ve only got one vocal mic. There’s no reason to chew up a submix bus with one vocal channel – I mean, there’s nothing there to “group.” It’s one channel. However, there are some very good reasons to bus multiple vocal inputs into one signal line, especially if you’re working in a small venue. It’s a little embarrassing that it’s taken me so long to embrace this thinking, but hey…here we are NOW, so let’s go!

The Efficient Killing Of Feedback Monsters

I’m convinced that a big part of the small-venue life is the running of vocal mics at relatively high “loop gain.” That is, by virtue of being physically close to the FOH PA (not to mention being in an enclosed and often reflective space), your vocal mics “hear” a lot more of themselves than they might otherwise. As such, you can very quickly find yourself in a situation where the vocal sound is getting “ringy,” “weird,” “squirrely,” or even into full-on sustained feedback.

A great way to fight back is a vocal group with a flexible EQ across the group signal.

As I said, I’ve resisted this for years. Part of the resistance came from not having a console that could readily insert an EQ across a group. (I can’t figure out why the manufacturer didn’t allow for it. It seems like an incredibly bizarre limitation to put on a digital mixer.) Another bit of my resistance came from not wanting to do the whole “hack up the house graph” routine. I’ve prided myself on having a workflow where the channel with the problem gets a surgical fix, and everything else is left untouched. I think it’s actually a pretty good mentality overall, but there’s a point where a guy finally recognizes that he’s sacrificing results on the altar of ideology.

Anyway, the point is that a vocals-only subgroup with an EQ is a pretty good (if not really good) compromise. When you’ve got a bunch of open vocal mics on deck, the ringing in the resonant acoustical circuit that I like to call “real music in a real room” is often a composite problem. If all the mics are relatively close in overall gain, then hunting around for the one vocal channel that’s the biggest problem is just busywork. All of them together are the problem, so you may as well work on a fix that’s all of them together. Ultra-granular control over individual sources is a great thing, and I applaud it, but pulling 4 kHz (or whatever) down a couple of dB on five individual channels is a waste of time.

You might as well just put all those potential problem-children into one signal pipe, pull your offending frequency out of the whole shebang, and be done with the problem in a snap. (Yup, I’m preaching to myself with this one.)
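For the curious, here’s roughly what that “one cut across the whole pipe” looks like as a sketch, using the widely published RBJ “Audio EQ Cookbook” peaking-filter recipe. The channel signals and settings here are hypothetical:

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq(fs, f0, gain_db, q):
    """Biquad peaking EQ coefficients per the RBJ 'Audio EQ Cookbook'."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

fs = 48000
# Five hypothetical vocal channels, as one second of float audio each:
vocals = [np.random.randn(fs) * 0.05 for _ in range(5)]

vocal_bus = np.sum(vocals, axis=0)          # bus them all together...
b, a = peaking_eq(fs, f0=4000, gain_db=-3.0, q=2.0)
vocal_bus = lfilter(b, a, vocal_bus)        # ...and one cut fixes all five
```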

The Efficient Addition Of FX Seasoning

Now, you don’t always want every single vocal channel to have the same amount of reverb, or delay, or whatever else you might end up using. I definitely get that.

But sometimes you do.

So, instead of setting multiple aux sends to the same level, why not just bus all the vocals together, set a pleasing wet/ dry mix level on the FX processor, and be done? Yes, there are a number of situations where you should NOT do this: If you need FX in FOH and monitor world, then you definitely need a separate, 100% “wet” FX channel. (Even better is having separate FX for monitor world, but that’s a whole other topic.) Also, if you can’t easily bypass the FX chain between songs, you’ll want to go the traditional route of “aux to FX to mutable return channel.”

Even so, if the fast and easy way will work appropriately, you might as well go the fast and easy way.

Compress To Impress

Yet another reason to bus a bunch of vocals together is to deal with the whole issue of “when one guy sings, it’s in the right place, but when they all do a chorus it’s overwhelming.” You can handle the issue manually, of course, but you can also use compression on the vocal group to free your attention for other things. Just set the compressor to hold the big, loud choruses down to a comfortable level, and you’ll be most of the way (if not all the way) there.

In my own case, I have a super-variable brickwall limiter on my full-range output, a limiter that I use as an overall “keep the PA at a sane level” control. A strategy that’s worked very well for me over the last while is to set that limiter’s threshold as low as I can possibly get away with…and then HAMMER the limiter with my vocal channels. The overall level of the PA stays in the smallest box possible, while vocal intelligibility remains pretty decent.

Even if you don’t have the processing flexibility that my mix rig does, you can still achieve essentially the same thing by using compression on your vocal group. Just be aware that setting the threshold too low can cause you to push into feedback territory as you “fight” the compressor. You have to find the happy medium between letting too little and too much level through.

Busing your vocals into a subgroup can be a very handy thing for live-audio humans to do. It’s surprising that it’s taken me so long to truly embrace it as a technique, but hey – we’re all learning as we go, right?


Dirty Secrets About Power

The amount of power actually being delivered to your loudspeakers might not be what you think. What power IS getting delivered might not be doing what you think.


I’m pretty sure that power – that is, energy delivered to loudspeaker drivers – is one of the most misunderstood topics in live-audio. It’s an area of the art that’s often presented in a simplified way for the sake of convenience. Convenience is hardly a bad thing, but simplifying a complex and mission-critical set of concepts can be troublesome. For one, misinformation (or just misinterpretation) starts to be viewed as fact. Going hand-in-hand with that is the phenomenon of folks who mean well, but make bad decisions. These bad decisions lead to the death of loudspeakers, over and under spending on amps and speakers, seemingly reckless system operation…the list goes on.

So, with all the potential problems that can be caused by the oversimplification of the topic “Powering Loudspeakers,” why does “reduction for the sake of convenience” continue to occur?

I think the answer to that is ironically simple: The proper powering of loudspeakers is, in truth, maddeningly complex. There are lots of “microfactors” involved that are quite simple, but when they all get stuck together…things get hairy. At some point, educators with limited time, equipment manufacturers with limited space in instruction manuals, and established pros with limited patience have to decide on what to gloss over. (I’ve done it myself. Certain parts of my article on clipping let some intricacies go without complete explanation.)

With that being the case, this article can’t possibly cover every little counter-intuitive detail. What it can do, however, is give you some idea of how many more particulars are actually out there, while also giving you some insight into a few of those particulars.

So, in no particular order…

Dirty Secret #1: Amp And Speaker Manufacturers Assume A Lot

You may have heard the phrase “Assume Nothing.” That saying does NOT apply to the people who build mass-produced loudspeakers and amplifiers. It doesn’t apply because it CAN NOT apply – otherwise, they’d never get anything built, or their instruction manuals would be gigantic.

Amplifier manufacturers, on their part, assume that you’re going to use their product with mostly “musical” signals. They also assume that you can put together a sane system with the “how to make this thing work” information they provide in their documentation. Further, they make suggestions about using amplifiers with continuous power ratings that are greater than the continuous power ratings of your speakers, because they assume that you’re not going to drive the amp up to its clip lights all the time.

Loudspeaker manufacturers also assume that you’re going to drive their boxes with music. They also ship products with the assumption that you’ll use the speaker in accordance with the instructions. They publish power ratings that are contingent on you being sane, especially with your system equalizers.

The upshot of it all is that the folks who make your gear also make VERY powerful assumptions about your ability to use their products within the design limits. They do this (and disclaim a lot of responsibility), because a ton of factors related to actual system use have traditionally been outside their control. Anytime you read an instruction manual – especially the specifications page – take care to remember that the numbers you see are simplifications and averages that reflect a mountain of assumptions.

Dirty Secret #2: Musical Signals Don’t Get You Your Continuous Power Rating

The reason that technical folks distinguish between signals like sine waves, pink noise, and “music” is that they have very different power densities. Sine waves, for instance, have a continuous level that’s 3 dB below their peak level. Pink noise often has to have an accompanying specification of “crest factor” (the ratio between the peak and average level), because different noise generators can give you different results. Some pink noise generators give you a signal with 6 dB between the peak and average levels. Others might give you 12 dB.
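You can check the sine-wave figure for yourself. Here’s a quick sketch (using white noise instead of pink, but the point about differing power densities stands):

```python
import numpy as np

def crest_factor_db(x):
    """Peak-to-RMS ratio in dB."""
    return 20 * np.log10(np.max(np.abs(x)) / np.sqrt(np.mean(x ** 2)))

fs = 48000
t = np.arange(fs) / fs
sine = np.sin(2 * np.pi * 1000 * t)
noise = np.random.randn(fs)   # white noise, not pink, for simplicity

print(f"sine crest factor:  {crest_factor_db(sine):.1f} dB")   # ~3 dB
print(f"noise crest factor: {crest_factor_db(noise):.1f} dB")  # ~12 dB
```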

Music is all over the map.

Some music signals have peaks that are 20+ dB above the average power. Of course, in our current age of “compress and limit everything,” it’s common to see ratios that are much smaller. I myself use rather aggressive limiting, because I need to keep a pretty tight rein on how loud the PA system can go. Even so, my peak levels tend to be about 10 dB above the average level.

So if you’ve got an amp that’s rated for “x” continuous watts, and you drive the unit all the way to its undistorted peak, music is probably giving you x/10 watts…or less. In my case, the brickwall limit that I set is usually 10 dB below clip, which means that my actual continuous power is something like 5 watts per channel. This calculation is pretty consistent with what I think the speakers are actually doing, because they get about 96 dB @ 1 watt @ 1 meter. Five watts continuous would mean about 103 dB SPL per full-range box, and there are two full-range boxes in the PA, so that’s 106 dB total…yup, that seems about right.
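If you want to check my arithmetic, here it is as a tiny function. The +3 dB for the second box assumes the two boxes’ outputs are uncorrelated:

```python
import math

def spl(sensitivity_db, watts, boxes=1):
    """Rough SPL at 1 m: sensitivity + 10*log10(watts), plus +3 dB per
    doubling of (uncorrelated) boxes."""
    return sensitivity_db + 10 * math.log10(watts) + 10 * math.log10(boxes)

# The numbers from the paragraph above:
print(f"{spl(96, 5):.0f} dB SPL")           # one box at 5 W  -> ~103 dB
print(f"{spl(96, 5, boxes=2):.0f} dB SPL")  # two boxes       -> ~106 dB
```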

Yeah, so, your system? If you’re driving it with actual music that isn’t insanely limited, you can go ahead and divide your amp’s continuous power rating by about 10. Don’t get overconfident, though, because you can still wreck your drivers. It’s all because…

Dirty Secret #3: Power Isn’t Always Evenly Distributed

Remember that bit up there about manufacturers making assumptions? Think about this sentence: “They publish power ratings that are contingent on you being sane, especially with your system equalizers.”

Dirty secret #2 may have you feeling pretty safe. In fact, you may be thinking that secret #2 directly contravenes some of the things that I said about cooking your loudspeakers with an amp that’s too big.

Hold up there, chum!

When a loudspeaker builder says that the system will handle, say, 500 watts, what they actually mean is: “This system will survive 500 watts of continuous input, as long as the input is distributed with roughly equal power per octave.” Not everything in the box will take 500 watts without dying. In particular, the HF driver may be rated for a tenth – or less – of what the total system is advertised to do. Now, if you combine that with a system operator who just loves to emphasize high-frequency material (“I love that top-end snap and sizzle, dude!”), you may just be delivering a LOT of juice to a rather fragile component…

…especially if the operator uses a huge amp, because they’re under the false impression that amp headroom = safety. A 1000 watt amplifier, combined with a tech who drives hard, scoops the mids, and has boxes with passive crossovers, is plenty capable of beating a 50-watt-rated HF driver into the ground.

On the flipside, a system without protective filtering on the low-frequency side can get killed in a similar way. Some audio-humans just HAVE to “gun” the low-frequency bands on their system EQ, because “boom and thump are what get the girls dancing, dude!” Well, that’s all fine and good, but most live-sound speakers that are reasonably affordable can’t handle deep bass at high power. Heck, the box that the drivers are in often acts as a filter for material that’s below about 40 Hz.

Of course, there may not be an electronic filter to keep 40 Hz and below out of the amplifier, or out of the LF driver. Thus, our system operator might just be dumping a huge amount of energy into a woofer without actually being able to hear it. The power doesn’t just disappear, of course, which means that “driver failure because of too much power at too low a frequency” might be just around the corner.

Dirty Secret #4: Accidents Aren’t Usually Musical Signals

Building on what I’ve said above, I should be clear that folks do get away with using overpowered amps (for a time) because they feed them “music.” They end up keeping the peaks at a reasonable level, and so the continuous power stays in a safe place as well.

Then, something goes wrong.

Maybe some feedback gets really out of control. Maybe somebody drops a microphone. All of a sudden, you might have a high-frequency sine wave with peaks – and a continuous level – far beyond what a horn driver can live with. In the blink of an eye, you might have a low-frequency peak that can rip a subwoofer cone.

Ouch.

Dirty Secret #5: Squeezing Every Drop Of Performance From Something Is For Either Amateurs Or Rich People

This secret connects pretty directly with #3 and #4. Lots of folks worry about getting every single dollar’s worth of output from a live-audio rig. It’s very understandable, and also very unhealthy. To extract every possible ounce of output from a loudspeaker system requires powerful, expensive amplifiers that have the capability to flat-out murder the speakers. For this reason, “performance enthusiasts” are either people who can’t afford to buy both more power AND more speakers, or they’re people who can afford to buy (and fix, and fix, and fix again) a lot of gear that’s run very hard.

The moral of the story is that your expectation needs to be that – in line with secret #2 – getting continuous output consistent with about 1/10th of a rig’s rated power is actually getting your money’s worth. If you don’t have enough acoustical output at that level, then you either need to upgrade to a system that gets louder with the same number of boxes, or you need to buy more loudspeakers and more amps to expand your system.

Dirty Secret #6: More Power Means More Than Just Buying More Amps

This follows along with secret #5. If you want more power, then you need more gear. That seems simple enough, but I’m convinced that linear PA growth is accompanied by geometric “support” growth.

What I mean by this is that getting ahold of a more powerful PA is more than just getting the amps and speakers together. More power means heavier and more expensive amp racks, or more (and more expensive because of quantity) amp racks. It may mean that you have to construct patch panels to keep everything organized. More PA power also means that you need more AC power “from the wall” in the venue. Past a certain point, you have to start thinking about an actual power distro system – and that can be a major project with huge pitfalls in and of itself. You need more space for storage. You need a bigger vehicle, if you’re going to transport it all.

Getting more power doesn’t just mean more of the “core” gear that creates and uses that power. It means more of everything that’s connected to that gear.

Dirty Secret #7: The Point Of Diminishing Returns Occurs Very Quickly. Immediately, In Fact.

The last secret is also, in some ways, the biggest bummer. Audio is a logarithmic affair, which means that the gains you get from spending more money and providing more power to a system begin decreasing as soon as you even get started. I’m dead serious.

For example, let’s say you’ve got a loudspeaker that averages about 95 dB SPL @ 1 watt @ 1 meter. You put one continuous watt – one measly watt – across the box, and stand roughly three feet away. That 95 dB SPL seems pretty good. Now, you go up to two watts. Did you get 95 dB more? Nope – that would mean that you could get “space shuttle takeoff” levels out of one loudspeaker. Not gonna happen.

So…did you get 20 dB more?

No.

10 dB?

Nope.

You doubled the power, and got three decibels more level out of the speaker. That’s just enough of a difference to definitively notice that things have gotten louder. If you want three more dB, you’ll have to double the power again. So far we’re only at four watts, but I think you can see just how fast the battle for more output starts to go against you. If your system is running at full tilt, and you want more output, you’re going to have to find a way to “double” the system – and even when you do, you’ll only get a little more out of it. If you want 10 dB more – which most listeners judge as only about “twice as loud” – you need 10 times as much total PA.
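Here’s that whole dispiriting progression in a few lines:

```python
import math

def db_gain(power_ratio):
    """Level change, in dB, for a given ratio of electrical power."""
    return 10 * math.log10(power_ratio)

for watts in (1, 2, 4, 8, 16, 32):
    print(f"{watts:>2} W -> +{db_gain(watts):.0f} dB over the first watt")
# 1 W -> +0 dB, 2 W -> +3 dB ... and 32 W buys you only +15 dB
```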

The vast majority of a PA system’s output comes from the first watt going into each box. It’s a fact that’s in plain sight, but it (and its ramifications) often aren’t talked about very much.

That makes it one of the dirtiest secrets of all.