
Virtually Unusable Soundcheck

Virtual soundchecks are a neat idea, but in reality they have lots of limitations.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.


Before we dive in to anything, let’s go over what I’m not saying:

I’m not saying that virtual soundchecks can never be useful in any situation.

I’m not saying that you shouldn’t try them out.

I’m not saying that you’re dumb for using them if you’re using them.

What I am definitely saying, though, is that the virtual soundcheck is of limited usefulness to folks working in small rooms.

What The Heck Is A Virtual Soundcheck?

A virtual soundcheck starts with a recording. This recording is a multitrack capture of the band playing “live,” using all the same mics and DI boxes as would be set up for the show. The multitrack is then fed, channel-per-channel, into a live-sound console. The idea is that the audio-human can tweak everything to their heart’s delight, without having the band on deck for hours at a time. The promise is that you can dial up those EQs, compressors, and FX blends “just so,” maybe even while sitting at home.

This is a great idea. Brilliant, even.

But it’s flawed.

Flaw 1: Home is not where the show is.

It may be possible to make your headphones or studio monitors sound like a live venue. You may even be able to use a convolution reverb to make a playback system in one space sound almost exactly like a PA system in another space. Unless you go to that trouble, though, you’re mixing for a different “target” than what’s actually going to be in play during the actual show. Using a virtual soundcheck system to rough things in is plenty possible, even with a mix solution that’s not exactly tailored for the real thing, but spending a large amount of time on tiny details isn’t worth it. In the end, you’re still going to have to mix the concert in the real space, for that EXACT, real space. You just can’t get around that entirely.

As such, a virtual soundcheck might as well be done in the venue it concerns, using the audio rig deployed for the show.

Flaw 2: Live audio is not an open loop.

A virtual soundcheck removes one of the major difficulties involved in live audio: it opens the feedback loop. Because everything is driven from playback that the system output cannot directly affect, the setup is immune to many of the oddities and pitfalls inherent in mics and speakers that “talk” to each other. A playback-based shakedown might lead an operator to believe that they can crank up the total gain applied to a channel with impunity, but physics will ALWAYS throw the book at you for trying to bend the rules.

The further implication is that “going offline” is about as helpful to the process of mixing wedge monitors as a house stuffed with meth-addled meerkats. In-ears are a different story, but a huge part of getting wedges right is knowing exactly what you can and can not pull off for that band in that space. Knowing what you can get away with requires having the feedback loop factored in, but a virtual check deletes the loop entirely.

Flaw 3: We’re not going to be listening to only the sound rig.

As I’ve been mentioning here, over and over, anybody who has ever heard a real band in a real room knows that real bands make a LOT of noise. Even acoustic shows can have very large “stage wash” components to their total acoustical output. A virtual soundcheck means that the band isn’t there to make noise, and so your mix gets built without taking that into account. The problem is that, in small venues, taking the band’s acoustical contribution into account is critical.

And yes, you could certainly set up the feeds so that monitor-world also gets fed – but that still doesn’t fully fix the issue. Drummers and players of amplified instruments have a lot to say, even before the roar of monitor loudspeakers gets added. This is even true for “unplugged” shows. If the PA isn’t supposed to be drowningly loud, you might be surprised at just how well an acoustic guitar can carry.


As I said before, the whole idea is not useless. You can certainly get something out of playback. You might be able to chase down some weird rattle or other artifact from an instrument that you couldn’t find when everything was banging away in realtime. Virtual soundchecks also become much more helpful when you’re in a big space, with a big PA that’s going to be – far and away – the loudest thing that the audience is listening to.

For those of us in smaller spaces, though, the value of dialing up a simulation is pretty small. For my part, the whole point of soundcheck is to get THE band and THE backline ready for THE show in THE room with THE monitors and THE Front-Of-House system. In my situation, a virtual soundcheck does none of that.


Let ‘Em Get Away From It

Maximum coverage isn’t always appropriate for small venues.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.


I love the idea of a high-end, concert-centric install.

It excites me to think of a music venue where the coverage is so even that every patron is getting the same mix, +/- 3 dB. Creating audio rigs where “there isn’t a bad seat in the house” is a point of pride for concert-system installers, as well it should be.

Maximum coverage isn’t always appropriate, though. It can sometimes even be harmful. The good news is that making an educated guess at how much coverage a show truly needs isn’t all that hard. It starts with audience behavior.

What Is The Audience Trying To Do?

Another way to put that question is, “What is the audience’s purpose?” At my regular gig, the answer is that they want to hang out, listen pretty informally, and socialize. This is an “averaged” assessment, by the way: Some folks want to focus entirely on the music. Some people barely want to focus on the tunes at all. Some folks would hate to be stuck in their seat. Some folks wouldn’t care.

The point is that there’s a mix of objectives in play.

This differs from going to a show at, say, The State Room or, even more so, at Red Butte Garden. My perception of those events is that people go to them – paying a bit of a premium – with the intent to focus on the music.

At my regular gig, where there’s such a diversity of audience intent, perfectly even coverage of all areas in the room is counterproductive to that diversity. It forces a singular decision on everyone in the room. It essentially requires that everybody in attendance has the goal of being primarily focused on the music as a foreground element. This is a bad thing, because denying a large section of the audience their intended enjoyment is likely to encourage them to leave.

If they leave, that hurts us, and it hurts the band. As much as possible, we should avoid doing things that encourage folks to vamoose.

So, I’m perfectly happy to NOT cover everything. The FOH PA is slightly “toed in” to focus its output primarily on the area nearest the stage. The sound intensity is allowed to drop off naturally towards the back of the room, and there’s no attempt at all to fill the coverage gap off to the stage-left side. People often seem to congregate there, and my perception is that many of them do it to take a break from being in the direct fire of the PA. They can still hear the show, but the high-frequency content is significantly rolled off (at least for whatever is actually “in” the audio rig).

If I knew that almost everybody in the room was primarily focused on the music, I would take steps to cover the room more evenly. That’s not the case, though, so there are “hot” and “cool” coverage zones.

Cost/Choice Parametrization

Another way to view the question of how much coverage is appropriate is to try to define the value that an attendee places on being at a show, and how much choice they have in terms of their position at the show. This is another sort of thing that has to be averaged. Not all events (or people) in a certain venue are the same, so you have to look at what’s most likely to happen.

When you state the problem in terms of those parameters, you get something like this:

[Chart: how necessary consistent coverage is, as a function of the cost of attending and the freedom to choose a listening position]

If the cost of being at the show is high (in terms of money, effort spent, overall commitment required, etc.) and the choice of precisely where to take in the show is low (say, assigned seating), then it’s very important to have consistent audio coverage for everyone. If people are paying hundreds of dollars and traveling long distances to see a huge band’s farewell or reunion, and they’re stuck in one seat at a theater, there had better be good sound at that seat!

On the other hand, it’s not necessary to cover every square inch of an inexpensive, “in town” show, where folks are free to move around. If the coverage isn’t what someone wants, they can move to where it is what they want – and, if they can’t get into the exact coverage area they desire, it’s not a huge loss. For a lot of small venues, this is probably what’s encountered most often.

Now, please don’t misconstrue what I’m saying. What I’m definitely NOT saying is that we should just “punt” on some gigs.

No.

As much as possible, we should assume that the most important show of our careers is the one we’re doing now.

What I’m saying is that we need to spend our effort on things that matter. We have to have a priorities list. If people want (and also have) options available for how they experience a show, then there’s no reason for us to agonize about perfect coverage. As I said above, academically perfect PA deployment might even be bad for us. Some listeners might not even want to be in the direct throw of our boxes, so why force them to be? In the world of audio, we have finite resources and rapidly diminishing returns. We have to focus on the primary issues, and if our primary issue is something OTHER than completely homogeneous sound throughout the venue, then we need to direct our efforts appropriately.


Danny’s Unofficial Sound System Taxonomy

Actual “concert rigs” are capable of being really loud. They’re also really expensive.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.


There’s a question in this business that’s rather like the quandary of what someone means when they say “twice as loud.” It’s the question of how a PA system “classes.”

To a certain degree, the query is unanswerable. What might be a perfectly acceptable rock-band PA for one group might not be adequate for a different band. Even so, if you ask the first group whether or not they play through a “rock-band” system, they will probably say yes. In the end, it all comes down to whether a rig satisfies people’s needs or not. The systems I work on are just fine for what I need them to do (most of the time). If you gave them to Dave Rat, however, they wouldn’t fit the bill.

Even if the question can’t be definitively put to rest, it can still be talked about. In my mind, it’s possible to classify FOH PA systems and monitor rigs by means of acoustical output.

Right away, I do have to acknowledge that acoustical output is a sloppy metric. It doesn’t tell you if a rig sounds nice, or is user-friendly, or if it’s likely to survive through the entire show. Reducing the measure of a system to one number involves a LOT of other assumptions being made, and being made “invisibly.” It’s sort of like the whole problem of simple, passive loudspeakers. The manufacturer suggests a certain, broadband wattage number to use, all while assuming that major “edge cases” will be avoided by the end user.

But one-number metrics sure do make things simple…

Anyway.

My Proposed “Rule Of Quarters”

So, as I present my personal taxonomy of audio rigs, let me also mention some of my other assumptions for a “pro” PA system:

1) I assume that a system can be tuned such that any particular half-octave range of frequencies will have an average level of no more than +/- 6 dB from an arbitrary reference point. Whether the system is actually tuned that way is a whole other matter. (My assumption might also be too lenient. I would certainly prefer for a rig’s third-octave averages to be no more than +/- 3 dB from the reference, to be perfectly frank. I’d also like a $10 million estate where I can hold concerts.)

2) I assume that the system can provide its stated output from 50 Hz to 15 kHz. Yes, some shows require “very deep” low-frequency reproduction, but it seems that 50 Hz is low enough to cover the majority of shows being done, especially in a small-venue context. On the HF side, it seems to me that very few people can actually hear above 16 kHz, so there’s no point in putting superhuman effort into reproducing the last half-octave of theoretical audio bandwidth. Don’t get me wrong – it’s great if the rig can actually go all the way out to 20 kHz, but it’s not really a critical thing for me.

3) I assume that the system has only a 1:100 chance (or less) of developing a major problem during the show. To me, a major problem is one that is an actual PA equipment failure, is noticeable to over 50% of the audience, and takes more than 5 minutes to fix.

If all the above is in the right place, then I personally class PA systems into four basic categories. The categories follow a “rule of quarters,” where each PA class is capable of four times the output of its predecessor. Please note that I merely said “capable.” I’m not saying that a PA system SHOULD be producing the stated output, I’m only saying that it should be ABLE to produce it.

Also, as a note about the math I’m using for these numbers, I do make it a point to use “worst case” models for things. That is, I knock 12 dB off the peak output of a loudspeaker just to start, and I also treat every doubling of distance from a box to result in a 6 dB loss of apparent SPL. I also neglect to account for the use of subwoofers, and assume that full-range boxes are doing all the work. I prefer to underestimate PA performance, because it’s better to have deployed a Full-Concert rig and wish you’d brought a Foreground Music system than to be in the opposite situation.
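If you want to see that worst-case math spelled out, here’s a quick Python sketch of the model. The function just encodes the assumptions above (12 dB knocked off the peak rating, 6 dB lost per doubling of distance, and a 3 dB power-sum bump per doubling of boxes); the 124 dB peak figure in the example is my own stand-in for an inexpensive box driven at its rated power, not a measured spec.

```python
import math

def worst_case_spl(peak_spl_1m, distance_m, num_boxes=2):
    """Rough, pessimistic continuous SPL at the listener: derate the peak
    rating by 12 dB, lose 6 dB per doubling of distance from 1 meter, and
    add 3 dB (a power sum) per doubling of boxes."""
    continuous_1m = peak_spl_1m - 12.0
    distance_loss = 6.0 * math.log2(distance_m)
    box_gain = 3.0 * math.log2(num_boxes)
    return continuous_1m - distance_loss + box_gain

# "Spoken word" case from below: two boxes, audience center 25 feet
# (7.62 m) away, each box assumed to manage roughly 124 dB peak at 1 m.
print(round(worst_case_spl(peak_spl_1m=124, distance_m=7.62, num_boxes=2), 1))
# ~97 dB continuous, which is right at the spoken-word threshold
```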

Spoken Word

Minimum potential SPL at audience center, continuous: 97 dB

This isn’t too tough to achieve, especially in a small space. If the audience center is 25 feet (7.62 meters) from the PA, and they can hear two boxes firing together, then each box has to produce about 112 dB at one meter. A relatively inexpensive loudspeaker (like a Peavey PVx12) with an amp rated for 400 watts continuous power should be able to do that with a little bit of room left over – but not much room, to be brutally honest.

Also, it’s important to note that 97 dB SPL, continuous, is REALLY LOUD for speech. Something like 75 – 85 dB is much more natural.

Background Music

Minimum potential SPL at audience center, continuous: 103 dB

This is rather more demanding. For a 25-foot audience centerpoint being covered by two boxes, each box has to produce about 118 dB continuous at close range. This means that you would already be in the territory of something like a JBL PRX425, powered by an amp rated for 1200 watts continuous output. (It’s a bit sobering to realize that what looks like a pretty beefy rig might only qualify as a “background” system.)

Foreground Music

Minimum potential SPL at audience center, continuous: 109 dB

Doing this at 25 feet with two boxes requires something like a Peavey QW4F…and a lot of amplifier power.

Full Concert

Minimum potential SPL at audience center, continuous: 115 dB

If you want to know why live-sound is so expensive, especially at larger scales, this is an excellent example. With $4800 worth of loudspeakers (not to mention the cost of the amps, cabling, processing, subwoofer setup, and so on), it’s possible to, er, NOT QUITE make the necessary output. Even in a small venue.

Also, there’s the whole issue that just building a big pile of PA doesn’t always sound so great. Boxes combining incoherently cause all kinds of coverage hotspots and comb filtering. It’s up to you to figure out what you can tolerate, of course.

And, of course, just because a system can make 115 dB continuous doesn’t mean that you actually have to hit that mark.

Don’t Be Depressed

Honest-to-goodness, varsity-level audio requires a lot of gear. It requires a lot of gear because varsity-level audio means having a ton of output available, even if you don’t use it.

In the small-venue world, the chances of us truly doing varsity-level audio are pretty small, and that’s okay. That doesn’t mean we can’t have a varsity-level attitude about what we’re doing, and that doesn’t mean that our shows have to be disappointing. We just have to realize where we stack up, and take pride in our work regardless.

As an example, at my regular gig, “full-throttle” for an FOH loudspeaker is 117 dB SPL at one meter. “Crowd center” is only about 12 feet from the boxes, so their worst-case output is 106 dB continuous individually, or 109 dB continuous as a pair. According to my own classification methods, the system just barely qualifies as a “foreground music” rig.
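For anybody who wants to check that arithmetic, it’s the same worst-case falloff model as above (6 dB per doubling of distance, plus 3 dB for the pair); there’s no 12 dB derating this time, because 117 dB is already the real-world full-throttle number.

```python
import math

# 117 dB continuous at 1 m per box, crowd center about 12 feet (3.66 m) away.
per_box = 117 - 6.0 * math.log2(3.66)  # ~106 dB at crowd center, worst case
pair = per_box + 3.0                   # ~109 dB for the two boxes together
print(round(per_box), round(pair))     # 106 109 -> barely a "foreground" rig
```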

But I rarely run it at full tilt.

In fact, I often limit the PA to 10 dB below its full output capability.

“Full Concert” capability is nice, but it’s a difficult bar to reach – and you may not actually need it.


Alan Parsons Is Absolutely Correct. And Wrong.

Live sound is not the studio, and it’s dangerous to treat it as such.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.


This article is the “closing vertex” of my semi-intentional “Gain, Stability, And The Best Holistic Show” trilogy.

I’m here to agree and disagree with Alan Parsons. Yes, that Parsons. The guy who engineered “Dark Side of the Moon.” A studio engineer with a career that most of us daydream about. An audio craftsperson who truly lives up to the title in the best way.

I am NOT here to trash the guy.

What I am here to do is to talk about a disagreement I have regarding the application of a specific bit of theory. It was a bit of theory that was first presented to me by the late Tim Hollinger (whom I greatly miss). Tim told me about an article he read where Alan Parsons explained why he (Parsons) mics guitar cabinets from a distance. Part of Parsons’ rationale is that nobody listens to guitar amps with their ear right up against the speaker, and also that guitar players are so loud that he doesn’t have a bleed problem.

I don’t know if the Premier Guitar article I found is the same one that Tim read, but it might be. You can read it here. The pertinent section is below the black and white picture of Parsons working in the studio.

Alan Parsons Is Academically Right

I don’t know of any guitar player who listens to their rig with an ear pressed up to the grill cloth. I can also tell you that, in lots of small-venue cases, a LOT of what the audience hears is the entirety of the guitar cab. A close-miced version of that sound might also be present in the PA, but it’s not the totality of the acoustical “solution.”

Also, yes, there are plenty of guitar players who run their rigs “hot.” Move a mic 18 inches from the cab when working with a player like that, and bleed might not be too problematic in a recording context, even if everybody’s in the same (largish) room. Solo the channel into a pair of headphones, and you’ll probably go, “Yup, there’s plenty of guitar in that mic.”

There’s not much to say about the correctness of Alan Parsons’ factual assertions, because they’re…well…correct.

The Problem Is Application

So, if Parsons is accurate about his rationale, how can there be a disagreement?

It’s pretty easy actually, and it comes from a statement that Parsons makes in the article I linked above: “Live sound engineers just don’t seem to get it.”

Parsons is correct about that too. Really! Concert-sound humans, in a live context, DON’T “get” studio recording applications. The disciplines are different. In precisely the same way, I can say that studio engineers don’t “get” live sound applications in the live context. This all comes back to what I’ve said in earlier articles: The live-audio craftsperson’s job is to produce the best holistic show possible at the lowest practicable gain. The studio craftsperson’s job is to capture the best possible sound for later reproduction. These goals are not always fully compatible, especially in a small-venue context.

(And before you write me hate-mail, it’s entirely possible for an audio human to become competent in both studio and live disciplines. What I’m getting at here is that each discipline ultimately has separate priorities.)

Obviously, there are some specifics that need addressing here. The divergent needs of the studio and live disciplines take different roads at a number of junctions.

I Don’t Want To Mic The Room, Thanks

In the studio, getting some great “ambience” is a prized piece of both the recording process and the choosing of a recording space. The very best studios have rooms that enhance the sound of the various sources put into them. Grabbing a bit of this along with the correct dose of the “direct” sound from a source is something to be desired. It enhances the playback of that recording, which takes place at another time in another room – or is delivered directly to a person’s ear canal, in the case of headphones.

But this is not at ALL what I want as a live-audio practitioner.

For me, more often than not, a really beautiful-sounding room is an unlikely thing to encounter. There are such things as venues with tremendously desirable acoustics, but most of the time, a venue is primarily built to satisfy the logistics of getting lots of people into one space in whatever way is practical. In general, I regard any environmental acoustics to be a hostile element. Even a relatively nice room is troublesome, because it still causes me to have to deal with multiple, indirect arrivals which smear and garble the overall sound of the show. Unlike in a recording context, I am guaranteed to hear the ambience of the room.

Lots of it.

Too much, in fact.

I do NOT want any more of it to get captured and shot out of the PA, thanks very much. I don’t need my problems to be compounded. In the very common case where I need to forge a total solution by combining room sound with PA sound, I want the sound in the PA to NOT reinforce the “room tone” at all. I’ve already got the sound of the room. What I need is something else.

Close micing prevents my transducers from capturing “the room” and passing that signal on to the rest of the system.

Specificity Is My Friend

For a recording engineer, a bit of “bleed” from the drumkit (and everything else) is not necessarily a bad thing. For me, though, it’s counterproductive. If I need more guitar because the drums are too strong, I do NOT want any more drums at ALL. I want guitar only, or vice versa.

Especially in small-venue live-sound, you tend to have sources that are very close together (often much closer than they would be in a nice studio), and loud wedges instead of headphones. On a large stage, this problem is mitigated somewhat, but that’s not what I tend to run into. Also, in a studio, it’s very possible to arrange the band such that directional microphone nulls help to minimize the effects of bleed. Small venues and expectations of what a band’s setup is “supposed” to look like often get in the way of doing this live.

In any case, live show bleed tends to be much more severe than what a studio engineer might encounter. This compounds the “I need more this, not that” problem above.

As an example, I recently worked with a band where the drummer specifically asked for his kit to be miced with overheads. I happily obliged, because I wanted to be accommodating. (Part of producing the best holistic show is to have comfortable, happy musicians.) At soundcheck, I took a quick guess at where the overheads should be. I wouldn’t say that we could really hear them, but hey, we had a decent total solution in the room pretty much immediately. I didn’t really think about the overheads much. About halfway through the show, though, I got curious. I soloed the overheads into my headphones.

In order to get the drum monitors where he wanted them, there was so much guitar and bass bleeding into the overheads that the bleed almost swamped the drums(!). The overheads were basically useless as drum reinforcement, because they would pretty much end up reinforcing everything ELSE.

If a mic is going to be useful for live sound reinforcement, specificity is critical. Pulling a mic away from a source is counterproductive to that discrimination, so I prefer not to do it.

Lowest Practicable Gain

In general, higher gain is not a problem for studio folks. Yes, it might result in greater noise, and it can also reduce electronic component bandwidth, but it’s really a very small issue in the grand scheme of things.

In live audio, higher gain is an enemy. Because microphones encounter sounds that they have already picked up and passed along to the rest of the PA, they exist in a feedback loop. The more gain that gets applied to the mic, the more likely it is that the feedback loop will destabilize and ring. If I can have lower gain, I will take it, even if that means a slightly unnatural sound.
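If it helps to see the loop in code form, here’s a toy sketch. It’s a deliberate oversimplification (real stability also depends on phase), and the numbers in the example path are invented, but it shows the basic rule: the system rings wherever the gain you apply plus the acoustic path from the loudspeakers back into the mic reaches unity.

```python
import numpy as np

def will_ring(applied_gain_db, path_gain_db):
    """Toy stability check: the loop rings wherever the applied gain plus
    the speaker-to-mic path gain reaches 0 dB (unity) or more.
    path_gain_db is a per-frequency array of (negative) path gains."""
    loop_gain_db = applied_gain_db + np.asarray(path_gain_db)
    return bool(np.any(loop_gain_db >= 0.0))

# Invented acoustic path with a -12 dB "hot spot" at one frequency:
path = np.array([-30.0, -25.0, -12.0, -28.0, -35.0])
print(will_ring(10.0, path))  # False: 2 dB of margin left at the hot spot
print(will_ring(14.0, path))  # True: 2 dB over unity -> feedback
```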

Now, you might not think that feedback would be a problem with a source as loud as a guitar amp can be, but you also may not have been in situations that I’ve encountered. I have been in situations where players, even with reasonably loud amplifiers, have asked for a metric ton of level from the monitors. Yes, I’ve gotten feedback from mics on guitar amps. (And yes, we should have just turned up the amplifiers in the first place, but these situations developed in the middle of fluid shows where stopping to talk wasn’t really an option. Look, it’s complicated.)

Even if the chance of feedback is unlikely – as it usually is with louder sources – I do NOT want to do anything that causes me to have to run a signal path at higher gain. Close micing increases the apparent sound pressure level at the transducer capsule, which allows me to run at lower gain for a given signal strength.

The overall point of this is pretty simple: The desires of recording techs and the needs of live-sound humans don’t always intersect in a pretty way. When I disagree with Alan Parsons, it’s not because he doesn’t have his facts straight, and it’s not that I’m somehow more knowledgeable than he is. I disagree because applying his area of discipline to mine simply isn’t appropriate in the specific context of his comments, and the specific live-show contexts I tend to encounter.


It’s Not Actually About The Best Sound

What we really want is the best possible show at the lowest practical gain.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.


As it happens, there’s a bit of a trilogy forming around my last article – the one about gain vs. stability. In discussions like this, the opening statement tends to be abstract. The “abstractness” is nice in a way, because it doesn’t restrict the application too much. If the concept is purified sufficiently, it should be usable in any applicable context.

At the same time, it’s nice to be able to make the abstract idea more practical. That is, the next step after stating the concept is to talk about ways in which it applies.

In live audio, gain is both a blessing and a curse. We often need gain to get mic-level signals up to line-level. We sometimes need gain to correct for “ensemble imbalances” that the band hasn’t yet fixed. We sometimes need gain to make a quiet act audible against a noisy background. Of course, the more gain we add, the more we destabilize the PA system, and the louder the show gets. The day-to-day challenge is to find the overall gain which lets us get the job done while maintaining acceptable system stability and sound pressure.

If this is the overall task, then there’s a precept which I think can be derived from it. It might only be derivable indirectly, depending on your point of view. Nevertheless:

Live sound is NOT actually about getting the best sound, insofar as “the best sound” is divorced from other considerations. Rather, the goal of live sound is to get the best possible holistic SHOW, at the lowest practical gain.

Fixing Everything Is A Bad Idea

The issue with a phrase like “the best sound” is that it morphs into different meanings for different people. For instance, at this stage in my career, I have basically taken the label saying “The Best Sound” and stuck it firmly on the metaphorical box containing the sound that gets the best show. For that reason alone, the semantics can be a little difficult. That’s why I made the distinction above – the distinction that “the best sound” or “the coolest sound” or “the best sound quality” is sometimes thought of without regard to the show as a whole.

This kind of compartmentalized thinking can be found in both concert-audio veterans and greenhorns. My gut feeling is that the veterans who still section off their thinking are the ones who never had their notions challenged while they were still new.

…and I think it’s quite common among new audio humans to think that the best sound creates the best show. That is, if we get an awesome drum sound, and a killer guitar tone, and a thundering bass timbre, and a “studio ready” vocal reproduction, we will then have a great show.

The problem with this line of thinking is that it tends to create situations where a tech is trying to “fix” almost everything about the band. The audio rig is used as a tool to change the sound of the group into a processed and massaged version of themselves – a larger than life interpretation. The problem with turning a band into a “bigger than real” version of itself is that doing so can easily require the FOH PA to outrun the acoustical output of the band AND monitor world by 10 dB or more. Especially in a small-venue context, this can mean lots and lots of gain, coupled with a great deal of SPL. The PA system may be perched on the edge of feedback for the duration of the show, and it may even tip over into uncontrolled ringing on occasion. Further, the show can easily be so loud that the audience is chased off.

To be blunt, your “super secret” snare-drum mojo is worthless if nobody wants to be in the same room with it. (If you follow me.)

Removed from other factors, the PA does sound great…but with the other factors being considered, that “great” sound is creating a terrible show.

Granularity

The correction for trying to fix everything is to only reinforce what actually needs help. This approach obeys the “lowest practical gain” rule. PA system gain is applied only to the sources that are being acoustically swamped, and only in enough quantity that those sources stop being swamped.

In a sense, you might say that there’s a certain amount of total gain (and total resultant volume) that fits within an acceptable “window.” When you’ve used up your allotted amount of gain and volume, you need to stop there.

At first, the selectivity of what gets gain applied is not very narrow. For newer operators and/or simplified PA systems, the choice tends to be “reproduce most of the source or none of it.” You might have, say, one guitar that’s in the PA, plus a vocal that’s cranked up, and some kick drum, and that’s all. Since the broadband content of the source is getting reproduced by the PA, adding any particular source into the equation chews up your total allowable gain in a fairly big hurry. This limits the correction (if actually necessary) that the PA system can apply to the total acoustical solution.

The above, by the way, is a big reason why it’s so very important for bands to actually sound like a band without any help from the PA system. That does NOT mean “so loud that the PA is unnecessary,” but rather that everything is audible in the proper proportions.

Anyway.

As an operator learns more and gains more flexible equipment, they can be more selective about what gets a piece of the gain allotment. For instance, let’s consider a situation where one guitar sound is not complementing another. The overall volumes are basically correct, but the guitar tones mask each other…or are masked by something else on stage. An experienced and well-equipped audio human might throw away everything in one guitar’s sound, except for a relatively narrow area that is “out of the way” of the other guitar. The audio human then introduces just enough of that band-limited sound into the PA to change the acoustical “solution” for the appropriate guitar. The stage volume of that guitar rig is still producing the lion’s share of the SPL in the room. The PA is just using that SPL as a foundation for a limited correction, instead of trying to run right past the total onstage SPL. The operator is using granular control to get a better show (where the guitars each have their own space) while adding as little gain and SPL to the experience as possible.
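Here’s a rough sketch of that kind of band-limited feed, using an off-the-shelf band-pass filter. The 2 – 3 kHz slice and the -12 dB trim are invented for the example; the real choices depend entirely on the two guitars and the room.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def band_limited_feed(channel, sample_rate, lo_hz, hi_hz, trim_db):
    """Throw away everything in a channel except one narrow band, then
    trim that band down so only 'just enough' goes to the PA."""
    sos = butter(4, [lo_hz, hi_hz], btype="bandpass", fs=sample_rate, output="sos")
    narrow = sosfilt(sos, channel)
    return narrow * 10 ** (trim_db / 20.0)

# Hypothetical: carve a 2-3 kHz slice out of guitar #2 and tuck it into the
# mix 12 dB down, letting the amp on stage do the heavy lifting.
sample_rate = 48000
guitar_2 = np.random.randn(sample_rate)  # stand-in for a real channel
pa_feed = band_limited_feed(guitar_2, sample_rate, 2000, 3000, trim_db=-12)
```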

If soloed up, the guitar sound in the PA is terrible, but the use of minimal gain creates a total acoustical solution that is pleasing.

Of course, the holistic experience still needs to be considered. It’s entirely possible to be in a situation that’s so loud that an “on all the time” addition of even band-limited reinforcement is too much. It might be that the band-limited channel should only be added into the PA during a solo. This keeps the total gain of the show as low as is practicable, again, because of granularity. The positive gain is restricted in the frequency domain AND the time domain – as little as possible is added to the signal, and that addition is made as rarely as possible.

An interesting, and perhaps ironic, consequence of granularity is that you can put more sources into the PA and apply more correction without breaking your gain/volume budget. Selective reproduction of narrow frequency ranges can mean that many more channels end up in the PA. The highly selective reproduction lets you tweak the sound of a source without having to mask all of it. You might not be able to turn a given source into the best sound of that type, but granular control just might let you get the best sound practical for that source at that show. (Again, this is where the semantics can get a little weird.)

Especially for the small-venue audio human, the academic version of “the best sound” might not mean the best show. This also goes for the performers. As much as “holy grail” instrument tones can be appreciated, they often involve so much volume that they wreck the holistic experience. Especially when getting a certain sound requires driving a system hard – or “driving” an audience hard – the best show is probably not being delivered. The amount of signal being thrown around needs to be reduced.

Because we want the best possible show at the lowest practical gain.


The Board Feed Problem

Getting a good “board feed” is rarely as simple as just splitting an output.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.


I’ve lost count of the number of times I’ve been asked for a “board mix.” A board mix or feed is, in theory, a quick and dirty way to get a recording of a show. The idea is that you take either an actual split from the console’s main mix bus, or you construct a “mirror” of what’s going into that bus, and then record that signal. What you’re hoping for is that the engineer will put together a show where everything is audible and has a basically pleasing tonality, and then you’ll do some mastering work to get a usable result.

It’s not a bad idea in general, but the success of the operation relies on a very powerful assumption: That the overwhelming majority of the show’s sound comes from the console’s output signal.

In very large venues – especially if they are open-air – this can be true. The PA does almost all the work of getting the show’s audio out to the audience, so the console output is (for most practical purposes) what the folks in the seats are listening to. Assuming that the processing audible in the feed-affecting path is NOT being used to fix issues with the PA or the room, a good mix should basically translate to a recorded context. That is, if you were to record the mix and then play it back through the PA, the sonic experience would be essentially the same as it was when it was live.

In small venues, on the other hand…

The PA Ain’t All You’re Listening To

The problem with board mixes in small venues is that the total acoustical result is often heavily weighted AWAY from what the FOH PA is producing. This doesn’t mean that the show sounds bad. What it does mean is that the mix you’re hearing is the PA, AND monitor world, AND the instruments’ stage volume, hopefully all blended together into a pleasing, convergent solution. That total acoustic solution is dependent on all of those elements being present. If you record the mix from the board, and then play it back through the PA, you will NOT get the same sonic experience that occurred during the live show. The other acoustical elements, no longer being present, leave you with whatever was put through the console in order to make the acoustical solution converge.

You might get vocals that sound really thin, and are drowning everything else out.

You might not have any electric guitar to speak of.

You might have only a little bit of the drumkit’s bottom end added into the bleed from the vocal mics.

In short, a quick-n-dirty board mix isn’t so great if the console’s output wasn’t the dominant signal (by far) that the audience heard. While this can be a revealing insight as to how the show came together, it’s not so great as a demo or special release.

So, what can you do?

Overwhelm Or Bypass

Probably the most direct solution to the board feed problem is to find a way to make the PA the overwhelmingly dominant acoustic factor in the show. Some ways of doing this are better than others.

An inadvisable solution is to change nothing about the show and just allow FOH to drown everything. This isn’t so good because it has a tendency to create a painfully loud experience for the audience. Especially in a rock context, getting FOH in front of everything else might require a mid-audience continuous sound pressure of 110 dB SPL or more. Getting away with that in a small room is a sketchy proposition at best.

A much better solution is to lose enough volume from monitor world and the backline that, even with FOH dominant, the total show volume ends up at (or below) its original level. This requires some planning and experimentation, because achieving that kind of volume loss usually means finding a way of killing off 10 – 20 dB SPL of noise. Finding a way to divide the sonic intensity of your performance by anywhere from 10 to 100(!) isn’t trivial. Shielding drums (or using a different kit setup), blocking or “soaking” instrument amps (or changing them out), and switching to in-ear monitoring solutions are all things that you might have to try.
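That 10-to-100 figure isn’t hand-waving, by the way; it falls straight out of what a decibel is:

```python
def intensity_divisor(db_drop):
    """How much the acoustic power must be divided by to lose db_drop decibels."""
    return 10 ** (db_drop / 10.0)

print(intensity_divisor(10))  # 10.0  -> a 10 dB drop is 1/10th the power
print(intensity_divisor(20))  # 100.0 -> a 20 dB drop is 1/100th the power
```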

Alternatively, you can get a board feed that isn’t actually the FOH mix.

One way of going about this is to give up one pre-fade monitor path to use as a record feed. You might also get lucky and be in a situation where a spare output can be configured this way, requiring you to give up nothing on deck. A workable mix gets built for the send, you record the output, and you hope that nothing too drastic happens. That is, the mix doesn’t follow the engineer’s fader moves, so you want to strenuously avoid large changes in the relative balances of the sources involved. Even with that downside, the nice thing about this solution is that, large acoustical contributions from the stage or not, you can set up any blend you like. (With the restriction that you avoid doing weird things with channel processing, of course. Insane EQ and weird compression will still be problematic, even if the overall level is okay.)

Another method is to use a post-fade path, with the send levels set to compensate for sources being too low or too hot at FOH. As long as the engineer doesn’t yank a fader all the way down to -∞ or mute the channel, you’ll be okay. You’ll also get the benefit of having FOH fader moves being reflected in the mix. This can still be risky, however, if a fader change has to compensate for something being almost totally drowned acoustically. Just as with the pre-fade method, the band still has to work together as an actual ensemble in the room.

If you want to get really fancy, you can split all the show inputs to a separate console and have a mix built there. It grants a lot of independence (even total independence) from the PA console, and even lets you assign your own audio human to the task of mixing the recording in realtime. You can also just arrange to have the FOH mix person run the separate console, but managing the mix for the room and “checking in” with the record mix can be a tough workload. It’s unwise to simply expect that a random tech will be able to pull it off.

Of course, if you’re going to the trouble of patching in a multichannel input split, I would say to just multitrack the show and mix it later “offline” – but that wouldn’t be a board feed anymore.

Board mixes of various sorts are doable, but if you’re playing small rooms you probably won’t be happy with a straight split from FOH. If you truly desire to get something usable, some “homework” is necessary.


Why Broad EQ Can’t Save You

You can’t do microsurgery with an axe.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.


I don’t have anything against “British EQ” as a basic concept. As I’ve come to interpret it, “British EQ” is a marketing term that means “our filters are wide.” EQ filters with gentle, wide slopes tend to sound nice and are pretty easy to use, so they make sense as a design decision in consoles that can’t give you every bell and whistle.

When I’m trying to give a channel or group a push in a specific area, I do indeed prefer to use a filter that’s a bit wider. Especially if I have to really “get on the gas,” I need the EQ to NOT impart a strange or ugly resonance to the sound. Even so, I think my overall preference is still for a more focused filter than what other folks might choose. For instance, when adding 6 dB at about 1kHz to an electric guitar (something I do quite often), the default behavior of my favorite EQ plugin is a two-octave wide filter:

[Screenshot: +6 dB peaking boost at 1 kHz, two octaves wide]

What I generally prefer is a 1.5-octave filter, though.

[Screenshot: +6 dB peaking boost at 1 kHz, 1.5 octaves wide]

I still mostly avoid a weird, “peaky” sound, but I get a little bit less (1 dB) of that extra traffic at 2-3 kHz, which might be just enough to keep me from stomping on the intelligibility of my vocal channels.
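If you’re curious where that 1 dB figure comes from, here’s a sketch that estimates it with a textbook (RBJ cookbook) peaking filter. My favorite plugin’s exact curves may differ a bit, so treat the output as approximate rather than gospel.

```python
import numpy as np
from scipy.signal import freqz

def peaking_eq_db(f0, gain_db, bw_octaves, freqs, fs=48000):
    """Magnitude response (dB) of an RBJ-cookbook peaking EQ, evaluated at
    the given frequencies (Hz). An approximation of a console/plugin bell."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) * np.sinh(np.log(2) / 2 * bw_octaves * w0 / np.sin(w0))
    b = [1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A]
    a = [1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A]
    _, h = freqz(b, a, worN=freqs, fs=fs)
    return 20 * np.log10(np.abs(h))

freqs = np.array([1000.0, 2000.0, 3000.0])
wide = peaking_eq_db(1000, 6.0, 2.0, freqs)      # two-octave boost
narrower = peaking_eq_db(1000, 6.0, 1.5, freqs)  # 1.5-octave boost
print(np.round(wide - narrower, 1))  # extra dB the wider filter adds at 1, 2, 3 kHz
```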

Especially in the rough-and-tumble world of live audio, EQ selectivity is a big deal. When everything is bleeding into everything else, you want to be able to grab and move only the frequency range that corresponds to what’s actually “signal” in a channel. Getting what you want…and also glomming onto a bunch of extra material isn’t all that helpful. In the context of, say, a vocal mic, only the actual vocal part is signal. Everything else is noise, even if it’s all music in the wider sense. If I want to work on something in a vocal channel, I don’t also want to be working on the bass, drums, guitar, and keyboard noises that are also arriving at the mic. Selective EQ helps with that.

What Your Channel EQ Is Doing To You

Selective EQ isn’t always a choice that you get, though. If a console manufacturer has a limited “budget” to decide what to give you on a channel-per-channel basis, they’ll probably choose a filter that’s fairly wide. For instance, here’s a 6 dB boost at 1 kHz on a channel from an inexpensive analog console (a Behringer SL2442):

[Screenshot: +6 dB boost at 1 kHz on a channel of a Behringer SL2442]

The filter looks to be between 2.5 and 3 octaves wide. This is perfectly fine for basic tone shaping, but it’s not always great for solving problems. It would be nice to get control over the bandwidth of the filter, but that option both chews up what can be spent on internal components and hogs control-surface real estate. For those reasons, and also because of “ease of use” considerations, fully parametric EQ isn’t something that’s commonly found on small-venue, analog consoles. As such, their channel EQs are often metaphorical axes – or kitchen knives, if you’re lucky – when what you may need is a scalpel.

If you need to do something drastic in terms of gain, a big, fat EQ filter can start acting like a volume control across the entire channel. This is especially true when you need to work on two or more areas, and multiple filters overlap. You can kill your problem, but you’ll also kill everything else.

It’s like getting rid of a venomous spider by having the Air Force bomb your house.

I should probably stop with the metaphors…

Fighting Feedback

Of course, we don’t usually manage feedback issues with a console’s channel EQ. We tend to use graphic EQs that have been inserted or “inlined” on console outputs. (I do things VERY differently, but that’s not the point of this article.)

Why, though? Why use a graphic EQ, or a highly flexible parametric EQ for battling feedback?

Well, again, the issue is selectivity.

See, if what you’re trying to do is to maximize the amount of gain that can be applied to a system, any gain reduction works against that goal.

(Logical, right?)

Unfortunately, most feedback management is done by applying negative gain across some frequency range. The trick, then, is to apply that negative gain across as narrow a band as is practicable. The more selective a filter is, the more insane things you can do with its gain without having a large effect on the average level of the rest of the signal.

For example, here’s a (hypothetical) feedback management filter that’s 0.5 octaves wide and set for a gain of -9 dB.

[Screenshot: 0.5-octave-wide filter set for a 9 dB cut]

It’s 1 dB down at about 600 Hz and 1700 Hz. That’s not too bad, but take a look at this quarter-octave notch filter:

[Screenshot: quarter-octave notch filter]

Its actual gain is negative infinity, although the analyzer “only” has enough resolution to show a loss of 30 dB. (That’s still a very deep cut.) Even with a cut that displays as more than three times as deep as the first filter, the -1 dB points are 850 Hz and 1200 Hz. The filter’s high selectivity makes it capable of obliterating a problem area while leaving almost everything else untouched.
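Here’s a companion sketch for the narrow-cut case, again using a textbook notch rather than whatever filter the analyzer above was actually measuring, so the exact -1 dB points it finds will differ somewhat from the plot.

```python
import numpy as np
from scipy.signal import freqz

def notch_response_db(f0, bw_octaves, freqs, fs=48000):
    """Magnitude response (dB) of an RBJ-cookbook notch (an infinitely deep cut)."""
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) * np.sinh(np.log(2) / 2 * bw_octaves * w0 / np.sin(w0))
    b = [1.0, -2 * np.cos(w0), 1.0]
    a = [1 + alpha, -2 * np.cos(w0), 1 - alpha]
    _, h = freqz(b, a, worN=freqs, fs=fs)
    return 20 * np.log10(np.maximum(np.abs(h), 1e-12))  # floor to avoid log(0)

freqs = np.linspace(200, 5000, 4801)         # 1 Hz steps
resp = notch_response_db(1000, 0.25, freqs)  # quarter-octave notch at 1 kHz
at_least_1db_down = freqs[resp <= -1.0]
print(at_least_1db_down.min(), at_least_1db_down.max())  # approximate -1 dB points
```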

To conclude, I want to reiterate: Wide EQ isn’t bad. It’s an important tool to have in the box. At the same time, I would caution craftspersons that are new to this business that a label like “British EQ” or “musical EQ” does not necessarily mean “good for everything.” In most cases, what that label likely means is that an equalizer is inoffensive by way of having a gentle slope.

And that’s fine.

But broad EQ can’t save you. Not from the really tough problems, anyway.


A Vocal Addendum

Forget about all the “sexy” stuff. Get ’em loud, and let ’em bark.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.


This article is a follow-on to my piece regarding the unsuckification of monitors. In a small-venue context, vocal monitoring is probably more important than any other issue for the “on deck” sound. Perhaps surprisingly, I didn’t talk directly about vocals and monitors AT ALL in the previous article.

But let’s face it. The unsuckification post was long, and meant to be generalized. Putting a specific discussion of vocal monitoring into the mix would probably have pushed the thing over the edge.

I’ll get into details below, but if you want a general statement about vocal monitors in a small-venue, “do-or-die,” floor-wedge situation, I’ll be happy to oblige: You do NOT need studio-quality vocals. You DO need intelligible, reasonably smooth vocals that can be heard above everything else. Forget the fluff – focus on the basics, and do your preparation diligently.

Too Loud Isn’t Loud Enough

One of the best things to ever come out of Pro Sound Web was this quiz on real-world monitoring. In particular, answer “C” on question 16 (“What are the main constituents of a great lead vocal mix?”) has stuck with me. Answer C reads: “The rest of the band is hiding 20 feet upstage because they can’t take it anymore.”

In my view, the more serious rendering of this is that vocal monitors should, ideally, make singing effortless. Good vocal monitors should allow a competent vocalist to deliver their performance without straining to hear themselves. To that end, an audio human doing show prep should be trying to get the vocal mics as loud as is practicable. In the ideal case, a vocal mic routed through a wedge should present no audible ringing, while also offering such a blast of sound that the singer will ask for their monitor send to be turned down.

(Indeed, one of my happiest “monitor guy” moments in recent memory occurred when a vocalist stepped up to a mic, said “Check!”, got a startled look on his face, and promptly declared that “Anyone who can’t hear these monitors is deaf.”)

Now, wait a minute. Doesn’t this conflict with the idea that too much volume and too much gain are a problem?

No.

Vocal monitors are a cooperative effort amongst the audio human, the singer(s), and the rest of the band. The singer has to have adequate power to perform with the band. The band has to run at a reasonable volume to play nicely with the singer. If those two conditions are met (and assuming there are no insurmountable equipment or acoustical problems), getting an abundance of sound pressure from a monitor should not require a superhuman effort or troublesome levels of gain.

So – if you’re prepping for a band, dial up as much vocal volume as you can without causing a loop-gain problem. If the vocals are tearing people’s heads off, you can always turn it down. Don’t be lazy! Get up on deck and listen to what it sounds like. If there are problem areas at certain frequencies, then get on the appropriate EQ and tame them. Yes, the feedback points can change a bit when things get moved around and people get in the room, but that’s not an excuse to just sit on your hands. Do some homework now, and life will be easier later.

Don’t Squeeze Me, Bro

A sort of corollary to the above is that anything which acts to restrict your vocal monitor volume is something you should think twice about. If you were thinking about inserting a compressor in such a way that it would affect monitor world, think again.

A compressor reduces dynamic range by reducing gain on signals that exceed a preset threshold. For a vocalist, this means that the monitor level of their singing may no longer track in a 1:1 ratio with their output at the mic. They sing with more force, but the return through the monitors doesn’t get louder at the same rate. If the singer is varying their dynamics to track with the band, this failure of the monitors to stay “in ratio” can cause the vocals to become swamped.
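A toy static “gain computer” makes the no-longer-1:1 behavior obvious. The -20 dB threshold and 4:1 ratio here are arbitrary example settings, not a recommendation.

```python
def compressor_output_db(input_db, threshold_db=-20.0, ratio=4.0):
    """Static curve of a simple compressor: below threshold, output tracks
    input 1:1; above it, each extra input dB only yields 1/ratio dB out."""
    if input_db <= threshold_db:
        return input_db
    return threshold_db + (input_db - threshold_db) / ratio

# The singer pushes 12 dB harder, but the compressed feed only rises 3 dB:
print(compressor_output_db(-20.0), compressor_output_db(-8.0))  # -20.0 -17.0
```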

And, in certain situations, monitors that don’t track with vocal dynamics can cause a singer to hurt themselves. They don’t hear their voice getting as loud as it should, so they push themselves harder – maybe even to the point that they blow out their voice.

Of course, you could try to compensate for the loss of level by increasing the output or “makeup” gain on the compressor, but oh! There’s that “too much loop gain” problem again. (Compressors do NOT cause feedback. That’s a myth. Steady-state gain applied to compensate for compressor-applied, variable gain reduction, on the other hand…)

The upshot?

Do NOT put a compressor across a vocalist such that monitor world will be affected. (The exception is if you have been specifically asked to do so by an artist that has had success with the compressor during a real, “live-fire” dress rehearsal.) If you don’t have an independent monitor console or monitor-only channels, then bus the vocals to a signal line that’s only directly audible in FOH, and compress that signal line.

The Bark Is The Bite

One thing I have been very guilty of in the past, and am still sometimes guilty of, is dialing up a “sounds good in the studio” vocal tone for monitor world. That doesn’t sound like it would be a problem, but it can be a huge one.

The issue at hand is that what sounds impressive in isolation often isn’t so great when the full band is blasting away. This is very similar to guitarists who have “bedroom” tone. When we’re only listening to a single source, we tend to want that source to consume the entire audible spectrum. We want that single instrument or voice to have extended lows and crisp, snappy HF information. We will sometimes dig out the midrange in order to emphasize the extreme ends of the audible spectrum. When all we’ve got to listen to is one thing, this can all sound very “sexy.”

And then the rest of the band starts up, and our super-sexy, radio-announcer vocals become the wrong thing. Without a significant amount of midrange “bark,” the parts of the spectrum truly responsible for vocal audibility get massacred by the guitars. And drums. And keyboards. All that’s left poking through is some sibilance. Then, when you get on the gas to compensate, the low-frequency material starts to feed back (because it’s loud, and the mic probably isn’t as directional as you think at low frequencies), and the high-frequency material also starts to ring (because it’s loud, and probably has some nasty peaks in it as well).

Yes – a good monitor mix means listenable vocals. You don’t want mud or nasty “clang” by any means, but you need the critical midrange zone – say, 500 Hz to 3 kHz or 4 kHz – to be at least as loud as the rest of the audible spectrum in the vocal channel. Midrange that jumps at you a little bit doesn’t sound as refined as a studio recording, but this isn’t the studio. It’s live-sound. Especially on the stage, hi-fi tone often has to give way to actually being able to differentiate the singer. There are certainly situations where studio-style vocal tone can work on deck, but those circumstances are rarely encountered with rock bands in small spaces.

Stay Dry

An important piece of vocal monitoring is intelligibility. Intelligibility has to do with getting the oh-so-important midrange in the right spot, but it also has to do with signals starting and stopping. Vocal sounds with sharply defined start and end points are easy for listeners to parse for words. As the beginnings and ends of vocal sounds get smeared together, the difficulty of parsing the language goes up.

Reverb and delay (especially) cause sounds to smear in the time domain. I mean, that’s what reverb and delay are for.

But as such, they can step on vocal monitoring’s toes a bit.

If it isn’t a specific need for the band, it’s best to leave vocals dry in monitor world. Being able to extract linguistic information from a sound is a big contributor to the perception that something is loud enough or not. If the words are hard to pick out because they’re all running together, then there’s a tendency to run things too hot in order to compensate.

The first step with vocal monitors is to get them loud enough. That’s the key goal. After that goal is met, then you can see how far you can go in terms of making things pretty. Pretty is nice, and very desirable, but it’s not the first task or the most important one.


Unsuckifying Your Monitor Mix

Communicate well, and try not to jam too much into any one mix.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.

Want to use this image for something else? Great! Click it for the link to a high-res or resolution-independent version.

Monitors can be a beautiful thing. Handled well, they can elicit bright-eyed, enthusiastic approbations like “I’ve never heard myself so well!” and “That was the best sounding show EVER!” They can very easily be the difference between a mediocre set and a killer show, because of how much they can influence the musicians’ ability to play as a group.

I’ve said it to many people, and I’m pretty sure I’ve said it here: As an audio-human, I spend much more time worrying about monitor world than FOH (Front Of House). If something is wrong out front, I can hear it. If something is wrong in monitor world, I won’t hear it unless it’s REALLY wrong. Or spiraling out of control.

…and there’s the issue. Bad monitor mixes can do a lot of damage. They can make the show less fun for the musicians, or totally un-fun for the musicians, or even cause so much onstage wreckage that the show becomes a disaster for the audience. On top of that, the speed at which the sound on deck can go wrong can be startling. If you’ve ever lost control of monitor world, or have been a musician in a situation where someone else has had monitor world “get away” from them, you know what I mean. When monitors become suckified, so too does life.

So – how does one unsuckify (or, even better, prevent suckification of) monitor world?

Foundational Issues To Prevent Suckification

Know The Inherent Limits On The Engineer’s Perception

At the really high-class gigs, musicians and production techs alike are treated to a dedicated “monitor world” or “monitor beach.” This is an independent or semi-independent audio control rig that is used to mix the show for the musicians. There are even some cases where there are multiple monitor worlds, all run by separate people. These folks are likely to have a setup where they can quickly “solo” a particular monitor mix into their own set of in-ears, or a monitor wedge which is similar to what the musicians have. Obviously, this is very helpful to them in determining what a particular performer is hearing.

Even so, the monitor engineer is rarely in exactly the same spot as any particular musician. Consequently, if the musicians are on wedges, even listening to a cue wedge doesn’t exactly replicate the total acoustic situation being experienced by the players.

Now, imagine a typical small-venue gig. There’s probably one audio human doing everything, and they’re probably listening mostly to the FOH PA. The way that FOH combines with monitor world can be remarkably different out front versus on deck. If the engineer has a capable console, they can solo up a complete monitor mix, probably through a pair of headphones. (A cue wedge is pretty unlikely to have been set up. They’re expensive and consume space.) A headphone feed is better than nothing, but listening to a wedge mix in a set of cans only tells an operator so much. Especially when working on a drummer’s mix, listening to the feed through a set of headphones has limited utility. A guy or gal might set up a nicely balanced blend, but have no real way of knowing if that mix is even truly audible at the percussionist’s seat.

If you’re not so lucky as to have a flexible console, your audio human will be limited to soloing individual inputs.

The point is that, at most small-venue shows, an audio human at FOH can’t really be expected to know what a particular mix sounds like as a total acoustic event. Remote-controlled consoles can fix this temporarily, of course, but as soon as the operator leaves the deck…all bets are off. If you’re a musician, assume that the engineer does NOT have a thoroughly objective understanding of what you’re hearing. If you’re an audio human, make the same assumption about yourself. Having made those assumptions, be gentle with yourself and others. Recognize that anything “pre-set” is just a wild guess, and further, recognize that trying to take a channel from “inaudible in a mix” to “audible” is going to take some work and cooperation.

Use Language That’s As Objective As Possible

Over the course of a career, audio humans create mental mappings between subjective statements and objective measurements. For instance, when I’m working with well-established monitor mixes, I translate requests like “Could I get just a little more guitar?” into “Could I get 3 dB more guitar?” This is a necessary thing for engineers to formulate for themselves, and it’s appropriate to expect that a pro-level operator has some ability to interpret subjective requests.

At the same time, though, it can make life much easier when everybody communicates using objective language. (Heck, it makes it easier if there’s two-way communication at all.)

For instance, let’s say you’re an audio human working with a performer on a monitor mix, and they ask you for “a little more guitar.” I strongly recommend making whatever change you’ve decided “a little more” corresponds to, and then stating that change (in objective terms) over the talkback. Saying something like, “Okay, that’s 3 dB more guitar in mix 2” creates a helpful dialogue. If 3 dB more guitar wasn’t enough, stating the change opens a door for the musician to say that they need more. It also gives the musician’s perception a chance to become calibrated to an objective scale – meaning that they get an intuitive sense for what a certain dB boost “feels” like. Another opportunity that arises is for you and the musician to become calibrated to each other’s terminology.

Beyond that, a two-way dialogue fosters trust. If you’re working on monitors and are asked for a change, making a change and then stating what you did indicates that you are trying to fulfill the musician’s wishes. This, along with the understanding that gets built as the communication continues, helps to mentally place everybody on the same team.

For musicians, as you’re asking for changes in your monitor mixes, I strongly encourage you to state things in terms of a scale that the engineer can understand. You can often determine that scale by asking questions like, “What level is my vocal set at in my mix?” If the monitor sends are calibrated in decibels, the engineer will probably respond with a decibel number. If they’re calibrated in an arbitrary scale, then the reply will probably be an arbitrary number. Either way, you will have a reference point to use when asking for things, even if that reference point is a bit “coarse.” Even if all you’ve got is to request that something go from, say, “five to three,” that’s still functionally objective if the console is labeled using an arbitrary scale.

For decibels, a useful shorthand to remember is that 3 dB should be a noticeable change in level for something that’s already audible in your mix. “Three decibels” is a 2:1 power ratio, although you might personally feel that “twice as loud” is 6 dB (4:1) or even 10 dB (10:1).
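
If you’re curious where those ratios come from, the arithmetic is just the decibel definition for power:

```python
# dB to power ratio: ratio = 10 ** (dB / 10)
for db in (3, 6, 10):
    print(f"{db} dB -> {10 ** (db / 10):.1f}:1 power ratio")

# 3 dB -> 2.0:1, 6 dB -> 4.0:1, 10 dB -> 10.0:1
```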

Realtime Considerations To Prevent And Undo Suckification

Too Much Loop Gain, Too Much Volume

Any instrument or device that is substantially affected by the sound from a monitor wedge, and is being fed through that same wedge, is part of that mix’s “loop gain.” Microphones, guitars, basses, acoustic drums, and anything else that involves body or airborne resonance are all factors. When their output is put through a monitor speaker, these devices combine with the monitor signal path to form an acoustical, tuned circuit. In tuned circuits, the load impedance determines whether the circuit “rings.” As the load impedance drops, the circuit becomes more and more likely to ring, or to resonate for a longer time.

If that last bit made your eyes glaze over, don’t worry. The point is that more gain (turning something up in the mix) REDUCES the impedance, or opposition, to the flow of sound in the loop. As the acoustic impedance drops, the acoustic circuit is more likely to ring. You know, feed back. *SQEEEEEALLLL* *WHOOOOOwoowooooOOOM*

Anyway.

The thing for everybody to remember – audio humans and musicians alike – is that a monitor mix feeding a wedge becomes progressively more unstable as gain is added. As ringing sets in, the sound quality of the mix drops off. Sounds that should start and then stop quickly begin to “smear,” and with more gain, certain frequency ranges become “peaky” as they ring. Too much gain can sometimes begin to manifest itself as an overall tone that seems harsh and tiring, because sonic energy in an irritating range builds up and sustains itself for too long. Further instability results in audible feedback that, while self-correcting, sounds bad and can be hard for an operator to zero-in on. As instability increases further, the mix finally erupts into “runaway” feedback that’s both distracting and unnerving to everyone.
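
If you like toy models, here’s a tiny Python sketch of that behavior. Each trip around the mic-to-console-to-wedge-to-mic loop multiplies a burst of sound by the loop gain at some frequency; the closer that gain gets to unity, the longer the burst hangs around, and past unity it runs away. The numbers are purely illustrative:

```python
# Toy model: how long a burst of sound "rings" in the loop for a given gain.
def trips_to_decay_60db(loop_gain):
    """Trips around the loop until a burst falls 60 dB below its start."""
    level, trips = 1.0, 0
    while level > 0.001:          # 0.001 = -60 dB (amplitude)
        level *= loop_gain
        trips += 1
        if trips > 10_000:        # never decays: runaway feedback territory
            return None
    return trips

for g in (0.5, 0.9, 0.99, 1.01):
    print(g, trips_to_decay_60db(g))

# 0.5 -> 10 trips, 0.9 -> 66 trips, 0.99 -> 688 trips, 1.01 -> None (runaway)
```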

The fix, then, is to keep each mix’s loop gain as low as possible. This often translates into keeping things OUT of the monitors.

As an example, there’s a phenomenon I’ve encountered many times where folks start with vocals that work…and then add a ton of other things to their feed. These other sources are often far more feedback-resistant than their vocal mic can be, and so they can apply enough gain to end up with a rather loud monitor mix. Unfortunately, they fall in love with the sound of that loud mix – except for the vocals, which have just been drowned out. As a result, they ask for the vocals to be cranked up to match. The loop gain on the vocal mic increases, which destabilizes the mix, which makes monitor world harder to manage.

As an added “bonus,” that blastingly loud monitor mix is often VERY audible to everybody else on stage, which interferes with their mixes, which can cause everybody else to want their overall mix volume to go up, which increases loop gain, which… (You get the idea.)

The implication is that, if you’re having troubles with monitors, a good thing to do is to start pulling things out of the mixes. If the last thing you did before monitor world went bad was, say, adding gain to a vocal mic, try reversing that change and then rebuilding things to match the lower level.

And not to be harsh or combative, but if you’re a musician and you require high-gain monitors to even play at all, then what you really have is an arrangement, ensemble, ability, or equipment problem that is YOURS to fix. It is not an audio-human problem or a monitor-rig problem. It’s your problem. This doesn’t mean that an engineer won’t help you fix it, it just means that it’s not their ultimate responsibility.

Also, take notice of what I said up there: High-GAIN monitors. It is entirely possible to have a high-gain monitor situation without also having a lot of volume. For example, 80 dB SPL C is hardly “rock and roll” loud, but getting that output from a person who sings at the level of a whisper (50 – 60 dB SPL C) requires 20 – 30 dB of boost. For the acoustical circuits that I’ve encountered in small venues, that is definitely a high-gain situation. Gain is the relative level increase or decrease applied to a signal. Volume is the output that results from a signal level after that gain has been applied. The two are related, but the relationship isn’t fixed in terms of any particular gain setting.
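
The arithmetic behind that example, in hypothetical numbers and ignoring distance losses and everything else that complicates real rooms:

```python
source_spl = 55.0   # whisper-level singer at the mic, dB SPL (C)
target_spl = 80.0   # desired level from the wedge, dB SPL (C)

print(f"Required gain: {target_spl - source_spl:.0f} dB")  # 25 dB

# 25 dB is a lot of gain for a small-room acoustic loop, even though
# 80 dB SPL is not a lot of volume.
```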

Conflicting Frequency Content

Independent of being in a high-gain monitor conundrum, you can also have your day ruined by masking. Masking is what occurs when two sources with similar frequency content become overlaid. One source will tend to dominate the other, and you lose the ability to hear both sources at once. I’ve had this happen to me on numerous occasions with pianists and guitar players. They end up wanting to play at the same time, using substantially the same notes, and the sonic characteristics of the two instruments can be surprisingly close. What you get is either too-loud guitar, too-loud piano, or an indistinguishable mash of both.

In a monitor-mix situation, it’s helpful to identify when multiple sources are all trying to occupy the same sonic space. If sources can’t be distinguished from one another until one sound just gets obliterated, then you may have a frequency-content collision in progress. These collisions can result in volume wars, which can lead to high-gain situations, which result in the issues I talked about in the previous section. (Monitor problems are vicious creatures that breed like rabbits.)

After being identified, frequency-content issues can be solved in a couple of different ways. One way is to use equalization to alter the sonic content of one source or another. For instance, a guitar and a bass might be stepping on each other. It might be decided that the bass sound is fine, but the guitar needs to change. In that case, you might end up rolling down the guitar’s bottom end, and giving the mids a push. Of course, you also have to decide where this change needs to take place. If everything was distinct before the monitor rig got involved, then some equalization change from the audio human is probably in order. If the problem largely existed before any monitor mixes were established, then the issue likely lies in tone choice or song arrangement. In that case, it’s up to the musicians.

One thing to be aware of is that many small-venue mix rigs have monitor sends derived from the same channel that feeds FOH. While this means that the engineer’s channel EQ can probably be used to help fix a frequency collision, it also means that the change will affect the FOH mix as well. If FOH and monitor world sound significantly different from each other, a channel EQ configuration that’s correct for monitor world may not be all that nice out front. Polite communication and compromise are necessary from both the musicians and the engineer in this case. (Certain technical tricks are also possible, like “multing” a problem source into a monitors-only channel.)

Lack Of Localization

Humans have two ears so that we can determine the location and direction of sounds. In music, one way for us to distinguish sources is for us to recognize those instruments as coming from different places. When localization information gets lost, then distinguishing between sources requires more separation in terms of overall volume and frequency content. If that separation isn’t possible to get, then things can become very muddled.

This relates to monitors in more than one way.

One way is a “too many things in one place that’s too loud” issue. In this instance, more and more gets put into a monitor mix, at a high enough volume that the wedge obscures the other sounds on deck. What the musician originally heard as multiple, individually localized sources is now a single source – the wedge. The loss of localization information may mean that frequency-content collisions become a problem, which may lead to a volume-war problem, which may lead to a loop-gain problem.

Another possible conundrum is “too much volume everywhere.” This happens when a particular source gets put through enough wedges at enough volume for it to feel as though that single source is everywhere. This can ruin localization for that particular source, which can also result in the whole cascade of problems that I’ve already alluded to.

Fixing a localization problem pretty much comes down to having sounds occupy their own spatial point as much as possible. The first thing to do is to figure out whether all the volume used for that particular source is actually necessary in each mix. If the volume is basically necessary, then it may be feasible to move that volume to a different (but nearby) monitor mix. For some of the players, that sound will get a little muddier and a touch quieter, but the increase in localization may offset those losses. If the volume really isn’t necessary, then things get much easier. All that’s required is to pull back the monitor feeds from that source until localization becomes established again.

It’s worth noting that “extreme” cases are possible. In those situations, it may be necessary to find a way to generate the necessary volume from a single, localized source that’s audible to everyone on the deck. A well-placed sidefill can do this, and an instrument amplifier in the right position can take on this role if a regular sidefill can’t be conjured up.

Wrapping Up

This can be a lot to take in, and a lot to think about. I will freely confess to not always having each of these concepts “top of mind.” Sometimes, audio turns into a pressure situation where both musicians and techs get chased into corners. It can be very hard for a person who’s not on deck to figure out what particular issue is in effect. For folks without a lot of technical experience who play or sing, identifying a problem beyond “something’s not right” can be too much to ask.

In the heat of the moment, it’s probably best to simply remember that yes, monitors are there to be used – but not to be overused. Effective troubleshooting is often centered around taking things out of a misbehaving equation until the equation begins to behave again. So, if you want to unsuckify your monitors, try getting as much out of them as possible. You may be surprised at what actually ends up working just fine.


What Can You Do For Two People?

Quite a bit, actually, because even the small things have a large effect.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.

Want to use this image for something else? Great! Click it for the link to a high-res or resolution-independent version.

MiNX is a treat to see on the show schedule. They’re not just a high-energy performance, but a high-energy performance delivered by only two people, and without resorting to ear-splitting volume. How could an audio-human not appreciate that?

A MiNX show is hardly an exercise in finding the boundaries of one’s equipment. Their channel count is only slightly larger than a singer-songwriter open mic. It looks something like this:

  1. Raffi Vocal Mic
  2. Ischa Vocal Mic
  3. Guitar Amp Mic
  4. Acoustic Guitar DI
  5. Laptop DI

That’s it. When you compare those five inputs with the unbridled hilarity that is a full rock band with 3+ vocals, two guitars, a bass rig, keys, and a full kit of acoustic drums, a bit of temptation creeps in. You get the urge to think that because the quantity of things to manage has gone down, the amount of attention you have to devote to the show is reduced. This is, of course, an incorrect assumption.

But why?

Low Stage Volume Magnifies FOH

A full-on rock band tends to produce a fair amount of stage volume. In a small room, this stage volume is very much “in parallel” with the contribution from the PA. If you mute the PA, you may very well still have concert-level SPL (Sound Pressure Level) in the seats. There are plenty of situations where, for certain instruments, the contribution from the PA is nothing, or something but hardly audible, or something audible but in a restricted frequency area that just “touches up” the audio from stage.

So, you might have 12 things connected to the console, but only really be using – say – the three vocal channels. Everything else could very well be taking care of itself (or mostly so), and thus the full-band mix is actually LESS complex and subtle than a MiNX-esque production. The PA isn’t overwhelmingly dominant for a lot of the channels, and so changes to those channels’ volumes or tones are substantially “washed out.”
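
To put some made-up numbers on that “washed out” idea: uncorrelated sources combine on a power basis, so when the stage wash is already near concert level, the PA’s contribution barely nudges the total. Here’s a quick sketch (the SPL figures are purely illustrative):

```python
import math

def combine_spl(*levels_db):
    """Incoherent (power) summation of sound pressure levels."""
    return 10.0 * math.log10(sum(10 ** (l / 10.0) for l in levels_db))

print(combine_spl(98.0, 92.0))  # loud stage wash + PA  -> ~99.0 dB SPL
print(combine_spl(78.0, 92.0))  # quiet stage wash + PA -> ~92.2 dB SPL
```

In the first case, pulling a channel up or down in the PA changes the total by a decibel or so at best; in the second, the PA basically is the show.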

But that’s not the way it is with MiNX and acts similar to them.

In the case of a production like MiNX, the volume coming off the stage is rather lower than that of a typical rock act. It’s also much more “directive.” With the exception of the guitar amplifier, everything else is basically running through the monitors. Pro-audio monitors – relative to most instruments and instrument amps – are designed to throw audio in a controlled pattern. There’s much less “splatter” from sonic information that’s being thrown rearward and to the sides. What this all means is that even a very healthy monitor volume can be eclipsed by the PA without tearing off the audience’s heads.

That is, unlike a typical small-room rock show, the audience can potentially be hearing a LOT of PA relative to everything else.

And that means that changes to FOH (Front Of House) level and tonality are far less washed out than they would normally be.

And that means that little changes matter much more than they usually do.

You’ve Got To Pay Attention

It’s easy to be taken by surprise by this. Issues that you might normally let go suddenly become fixable, but you might not notice the first few go-arounds because you’re just used to letting those issues slide. Do the show enough times, though, and you start noticing things. For instance, the last time I worked on a MiNX show was when I finally realized that some subtle dips at 2.5 kHz in the acoustic guitar and backing tracks allowed me to run those channels a bit hotter without stomping on Ischa’s vocals. This allows for a mix that sounds less artificially “separated,” but still retains intelligibility.
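
For the curious, here’s roughly what that kind of dip looks like as a parametric (peaking) filter, using the standard Audio EQ Cookbook biquad math. The 2.5 kHz center, -3 dB depth, and Q of 2 are illustrative guesses on my part, not a recipe:

```python
# Sketch of a subtle parametric dip (Audio EQ Cookbook peaking biquad).
# Center frequency, depth, and Q here are illustrative, not a recipe.
import math

def peaking_biquad(fs, f0, gain_db, q):
    a_lin = 10 ** (gain_db / 40.0)
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    cos_w0 = math.cos(w0)
    b = [1 + alpha * a_lin, -2 * cos_w0, 1 - alpha * a_lin]
    a = [1 + alpha / a_lin, -2 * cos_w0, 1 - alpha / a_lin]
    return [x / a[0] for x in b], [x / a[0] for x in a]  # normalized

b, a = peaking_biquad(fs=48000, f0=2500.0, gain_db=-3.0, q=2.0)
# Feed b, a to any standard IIR routine (e.g. scipy.signal.lfilter).
```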

That’s a highly specific example, but the generalized takeaway is this: An audio-human can be tempted to just handwave a simpler, quieter show, but that really isn’t a good thing to do. Less complexity and lower volume actually means that the details matter more than ever…and beyond that, you actually have the golden opportunity to work on those details in a meaningful way.

The moment when the tech REALLY needs to be paying attention to the small details of the mix is when the PA system’s “tool metaphor” changes from a sledgehammer to a precision scalpel.

When you’ve only got a couple of people on deck, try hard to stay sharp. There might be a lot you can do for ’em, and for their audience.