Tag Archives: Mixing

The Board Feed Problem

Getting a good “board feed” is rarely as simple as just splitting an output.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.


I’ve lost count of the number of times I’ve been asked for a “board mix.” A board mix or feed is, in theory, a quick and dirty way to get a recording of a show. The idea is that you take either an actual split from the console’s main mix bus, or you construct a “mirror” of what’s going into that bus, and then record that signal. What you’re hoping for is that the engineer will put together a show where everything is audible and has a basically pleasing tonality, and then you’ll do some mastering work to get a usable result.

It’s not a bad idea in general, but the success of the operation relies on a very powerful assumption: That the overwhelming majority of the show’s sound comes from the console’s output signal.

In very large venues – especially if they are open-air – this can be true. The PA does almost all the work of getting the show’s audio out to the audience, so the console output is (for most practical purposes) what the folks in the seats are listening to. Assuming that the processing audible in the feed-affecting path is NOT being used to fix issues with the PA or the room, a good mix should basically translate to a recorded context. That is, if you were to record the mix and then play it back through the PA, the sonic experience would be essentially the same as it was when it was live.

In small venues, on the other hand…

The PA Ain’t All You’re Listening To

The problem with board mixes in small venues is that the total acoustical result is often heavily weighted AWAY from what the FOH PA is producing. This doesn’t mean that the show sounds bad. What it does mean is that the mix you’re hearing is the PA, AND monitor world, AND the instruments’ stage volume, hopefully all blended together into a pleasing, convergent solution. That total acoustic solution is dependent on all of those elements being present. If you record the mix from the board, and then play it back through the PA, you will NOT get the same sonic experience that occurred during the live show. The other acoustical elements, no longer being present, leave you with whatever was put through the console in order to make the acoustical solution converge.

You might get vocals that sound really thin, and are drowning everything else out.

You might not have any electric guitar to speak of.

You might have only a little bit of the drumkit’s bottom end added into the bleed from the vocal mics.

In short, a quick-n-dirty board mix isn’t so great if the console’s output wasn’t the dominant signal (by far) that the audience heard. While this can be a revealing insight as to how the show came together, it’s not so great as a demo or special release.

So, what can you do?

Overwhelm Or Bypass

Probably the most direct solution to the board feed problem is to find a way to make the PA the overwhelmingly dominant acoustic factor in the show. Some ways of doing this are better than others.

An inadvisable solution is to change nothing about the show and just allow FOH to drown everything. This isn’t so good because it has a tendency to create a painfully loud experience for the audience. Especially in a rock context, getting FOH in front of everything else might require a mid-audience continuous sound pressure of 110 dB SPL or more. Getting away with that in a small room is a sketchy proposition at best.

A much better solution is to lose enough volume from monitor world and the backline, such that FOH being dominant brings the total show volume back up to (or below) the original sound level. This requires some planning and experimentation, because achieving that kind of volume loss usually means finding a way of killing off 10 – 20 dB of noise. Finding a way to divide the sonic intensity of your performance by anywhere from 10 to 100(!) isn’t trivial. Shielding drums (or using a different kit setup), blocking or “soaking” instrument amps (or changing them out), and switching to in-ear monitoring solutions are all things that you might have to try.
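If the math behind that claim isn’t obvious: a level change in decibels corresponds to an intensity ratio of ten raised to one-tenth of the change. Here’s a trivial sketch (plain Python, nothing but the standard dB formula):

```python
# Relative sound intensity implied by a level change in decibels.
# A drop of 10 dB means 1/10th the intensity; a drop of 20 dB means 1/100th.

def intensity_ratio(delta_db: float) -> float:
    """Return the intensity ratio corresponding to a level change in dB."""
    return 10 ** (delta_db / 10)

for cut in (-10, -15, -20):
    print(f"{cut:+d} dB -> intensity x {intensity_ratio(cut):.3f}")
# -10 dB -> intensity x 0.100
# -15 dB -> intensity x 0.032
# -20 dB -> intensity x 0.010
```

In other words, knocking 20 dB off the stage really does mean shedding 99% of the acoustic power it was producing.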

Alternatively, you can get a board feed that isn’t actually the FOH mix.

One way of going about this is to give up one pre-fade monitor path to use as a record feed. You might also get lucky and be in a situation where a spare output can be configured this way, requiring you to give up nothing on deck. A workable mix gets built for the send, you record the output, and you hope that nothing too drastic happens. Because the mix doesn’t follow the engineer’s fader moves, you want to strenuously avoid large changes in the relative balances of the sources involved. Even with that downside, the nice thing about this solution is that, large acoustical contributions from the stage or not, you can set up any blend you like. (With the restriction that you avoid doing weird things with channel processing, of course. Insane EQ and weird compression will still be problematic, even if the overall level is okay.)

Another method is to use a post-fade path, with the send levels set to compensate for sources being too low or too hot at FOH. As long as the engineer doesn’t yank a fader all the way down to -∞ or mute the channel, you’ll be okay. You’ll also get the benefit of having FOH fader moves being reflected in the mix. This can still be risky, however, if a fader change has to compensate for something being almost totally drowned acoustically. Just as with the pre-fade method, the band still has to work together as an actual ensemble in the room.

If you want to get really fancy, you can split all the show inputs to a separate console and have a mix built there. This approach grants a lot of independence (even total independence) from the PA console, and even lets you assign your own audio human to the task of mixing the recording in realtime. You can also just arrange to have the FOH mix person run the separate console, but managing the mix for the room and “checking in” with the record mix can be a tough workload. It’s unwise to simply expect that a random tech will be able to pull it off.

Of course, if you’re going to the trouble of patching in a multichannel input split, I would say to just multitrack the show and mix it later “offline” – but that wouldn’t be a board feed anymore.

Board mixes of various sorts are doable, but if you’re playing small rooms you probably won’t be happy with a straight split from FOH. If you truly desire to get something usable, some “homework” is necessary.


Why Broad EQ Can’t Save You

You can’t do microsurgery with an axe.



I don’t have anything against “British EQ” as a basic concept. As I’ve come to interpret it, “British EQ” is a marketing term that means “our filters are wide.” EQ filters with gentle, wide slopes tend to sound nice and are pretty easy to use, so they make sense as a design decision in consoles that can’t give you every bell and whistle.

When I’m trying to give a channel or group a push in a specific area, I do indeed prefer to use a filter that’s a bit wider. Especially if I have to really “get on the gas,” I need the EQ to NOT impart a strange or ugly resonance to the sound. Even so, I think my overall preference is still for a more focused filter than what other folks might choose. For instance, when adding 6 dB at about 1kHz to an electric guitar (something I do quite often), the default behavior of my favorite EQ plugin is a two-octave wide filter:

[Figure: +6 dB at 1 kHz, 2-octave-wide filter]

What I generally prefer is a 1.5-octave filter, though.

[Figure: +6 dB at 1 kHz, 1.5-octave-wide filter]

I still mostly avoid a weird, “peaky” sound, but I get a little bit less (1 dB) of that extra traffic at 2-3 kHz, which might be just enough to keep me from stomping on the intelligibility of my vocal channels.
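If you want to experiment with this yourself, the curves above can be approximated with the widely used “Audio EQ Cookbook” (RBJ) peaking biquad. This is a representative model – not necessarily the exact filter my favorite plugin implements – and the 48 kHz sample rate is just an assumed value:

```python
import cmath
import math

def peaking_gain_db(f, f0=1000.0, gain_db=6.0, bw_octaves=2.0, fs=48000.0):
    """Magnitude response (dB) at frequency f of an RBJ-cookbook peaking EQ."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    # Bandwidth in octaves, with the cookbook's frequency-warping correction.
    alpha = math.sin(w0) * math.sinh(math.log(2) / 2 * bw_octaves * w0 / math.sin(w0))
    b0, b1, b2 = 1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A
    a0, a1, a2 = 1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A
    z = cmath.exp(-2j * math.pi * f / fs)
    h = (b0 + b1 * z + b2 * z * z) / (a0 + a1 * z + a2 * z * z)
    return 20 * math.log10(abs(h))

# Both filters peak at +6 dB at the 1 kHz center...
print(round(peaking_gain_db(1000, bw_octaves=2.0), 2))  # 6.0
print(round(peaking_gain_db(1000, bw_octaves=1.5), 2))  # 6.0
# ...but the 1.5-octave filter picks up noticeably less at 2.5 kHz.
print(round(peaking_gain_db(2500, bw_octaves=2.0), 2))
print(round(peaking_gain_db(2500, bw_octaves=1.5), 2))
```

Same boost at the center, less collateral boost a couple of kilohertz away – which is the whole point of choosing the narrower bandwidth.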

Especially in the rough-and-tumble world of live audio, EQ selectivity is a big deal. When everything is bleeding into everything else, you want to be able to grab and move only the frequency range that corresponds to what’s actually “signal” in a channel. Getting what you want while also glomming onto a bunch of extra material isn’t all that helpful. In the context of, say, a vocal mic, only the actual vocal part is signal. Everything else is noise, even if it’s all music in the wider sense. If I want to work on something in a vocal channel, I don’t also want to be working on the bass, drums, guitar, and keyboard noises that are also arriving at the mic. Selective EQ helps with that.

What Your Channel EQ Is Doing To You

Selective EQ isn’t always a choice that you get, though. If a console manufacturer has a limited “budget” to decide what to give you on a channel-per-channel basis, they’ll probably choose a filter that’s fairly wide. For instance, here’s a 6 dB boost at 1 kHz on a channel from an inexpensive analog console (a Behringer SL2442):

[Figure: +6 dB at 1 kHz on a Behringer SL2442 channel]

The filter looks to be between 2.5 and 3 octaves wide. This is perfectly fine for basic tone shaping, but it’s not always great for solving problems. It would be nice to get control over the bandwidth of the filter, but that option chews up both the internal-component budget and control-surface real estate. For those reasons, and also because of “ease of use” considerations, fully parametric EQ isn’t something that’s commonly found on small-venue, analog consoles. As such, their channel EQs are often metaphorical axes – or kitchen knives, if you’re lucky – when what you may need is a scalpel.

If you need to do something drastic in terms of gain, a big, fat EQ filter can start acting like a volume control across the entire channel. This is especially true when you need to work on two or more areas, and multiple filters overlap. You can kill your problem, but you’ll also kill everything else.

It’s like getting rid of a venomous spider by having the Air Force bomb your house.

I should probably stop with the metaphors…

Fighting Feedback

Of course, we don’t usually manage feedback issues with a console’s channel EQ. We tend to use graphic EQs that have been inserted or “inlined” on console outputs. (I do things VERY differently, but that’s not the point of this article.)

Why, though? Why use a graphic EQ, or a highly flexible parametric EQ for battling feedback?

Well, again, the issue is selectivity.

See, if what you’re trying to do is to maximize the amount of gain that can be applied to a system, any gain reduction works against that goal.

(Logical, right?)

Unfortunately, most feedback management is done by applying negative gain across some frequency range. The trick, then, is to apply that negative gain across as narrow a band as is practicable. The more selective a filter is, the more insane things you can do with its gain without having a large effect on the average level of the rest of the signal.

For example, here’s a (hypothetical) feedback management filter that’s 0.5 octaves wide and set for a gain of -9 dB.

[Figure: 0.5-octave-wide filter set to -9 dB]

It’s 1 dB down at about 600 Hz and 1700 Hz. That’s not too bad, but take a look at this quarter-octave notch filter:

[Figure: quarter-octave notch filter]

Its actual gain is negative infinity, although the analyzer “only” has enough resolution to show a loss of 30 dB. (That’s still a very deep cut.) Even with a cut that displays as more than three times as deep as the first filter, the -1 dB points are 850 Hz and 1200 Hz. The filter’s high selectivity makes it capable of obliterating a problem area while leaving almost everything else untouched.
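Those -1 dB points can be sanity-checked by modeling the two filters as RBJ-cookbook biquads – a peaking cut for the wide filter, and a true (infinitely deep) notch for the narrow one. The 48 kHz sample rate and the 1 Hz scan resolution are arbitrary choices of mine:

```python
import cmath
import math

FS = 48000.0  # assumed sample rate

def _response_db(b, a, f):
    """Magnitude (dB) of a biquad at frequency f."""
    z = cmath.exp(-2j * math.pi * f / FS)
    h = (b[0] + b[1] * z + b[2] * z * z) / (a[0] + a[1] * z + a[2] * z * z)
    return 20 * math.log10(max(abs(h), 1e-12))  # floor avoids log(0) at a perfect notch

def _w0_alpha(f0, bw_octaves):
    w0 = 2 * math.pi * f0 / FS
    alpha = math.sin(w0) * math.sinh(math.log(2) / 2 * bw_octaves * w0 / math.sin(w0))
    return w0, alpha

def peaking_cut_db(f, f0=1000.0, gain_db=-9.0, bw_octaves=0.5):
    """RBJ peaking filter used as a wide, finite-depth cut."""
    A = 10 ** (gain_db / 40)
    w0, alpha = _w0_alpha(f0, bw_octaves)
    b = [1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A]
    a = [1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A]
    return _response_db(b, a, f)

def notch_db(f, f0=1000.0, bw_octaves=0.25):
    """RBJ notch: an infinitely deep cut at f0."""
    w0, alpha = _w0_alpha(f0, bw_octaves)
    b = [1.0, -2 * math.cos(w0), 1.0]
    a = [1 + alpha, -2 * math.cos(w0), 1 - alpha]
    return _response_db(b, a, f)

def one_db_band(resp):
    """Lowest/highest integer frequency (Hz) where the response is at least 1 dB down."""
    hits = [f for f in range(100, 5000) if resp(f) <= -1.0]
    return hits[0], hits[-1]

print("wide cut,  -1 dB band:", one_db_band(peaking_cut_db))  # roughly 600 to 1660 Hz
print("notch cut, -1 dB band:", one_db_band(notch_db))        # roughly 845 to 1185 Hz
```

With this model, the half-octave, -9 dB cut is 1 dB down from about 600 Hz to about 1700 Hz, while the infinitely deep quarter-octave notch only disturbs roughly 850 Hz to 1200 Hz – numbers consistent with the figures above.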

To conclude, I want to reiterate: Wide EQ isn’t bad. It’s an important tool to have in the box. At the same time, I would caution craftspersons that are new to this business that a label like “British EQ” or “musical EQ” does not necessarily mean “good for everything.” In most cases, what that label likely means is that an equalizer is inoffensive by way of having a gentle slope.

And that’s fine.

But broad EQ can’t save you. Not from the really tough problems, anyway.


Convergent Solutions

FOH and monitor world have to work together if you want the best results.



In a small venue, there’s something that you know, even if you’re not conscious of knowing it:

The sound from the monitors on deck has an enormous effect on the sound that the audience hears. The reverse is also true. The sound from the FOH PA has an enormous effect on the sound that the musicians hear on stage.

I’m willing to wager that there are shows you’ve had where getting a mix put together seemed like a huge struggle, and others where creating blends that made everybody happy occurred with little effort. One of the major factors in the ease or frustration of whole-show sound is “convergence.” When the needs of the folks on deck manage to converge with the needs of the audience, sound reinforcement gets easier. When those needs diverge, life can be quite a slog.

Incompatible Solutions

But…why would the audience’s needs and the musicians’ needs diverge?

Well, taste, for one thing.

Out front, you have an interpreter for the audience, i.e. the audio human. This person has to make choices about what the audience is going to hear, and they have to do this through the filter of their own assumptions. Yes, they can get input from the band, and yes, they will sometimes get input from the audience, but they still have to make a lot of snap decisions that are colored by their immediate perceptions.

When it comes to the sound on deck, the noise-management professional becomes more of an “executor.” The tech turns the knobs, but there can be a lot more guidance from the players. The musicians are the ones who try to get things to match their needs and tastes, and this can happen on an individual level if enough monitor mixes are available.

If the musicians’ tastes and the tech’s taste don’t line up, you’re likely to have divergent solutions. One example I can give is from quite a while ago, where a musician playing a sort of folk-rock wanted a lot of “kick” in the wedges. A LOT of kick. There was so much bass-drum material in the monitors that I had none at all out front. Even then, it was a little much. (I was actually pretty impressed at the amount of “thump” the monitor rig would deliver.) I ended up having to push the rest of the mix up around the monitor bleed, which made us just a bit louder than we really needed to be for an acoustic-rock show.

I’ve also experienced plenty of examples where we were chasing vocals and instruments around in monitor world, and I began to get the sneaky suspicion that FOH was being a hindrance. More than once, I’ve muted FOH and heard, “Yeah! That sounds good now.” (Uh oh.)

In any case, the precipitating factors differ, but the main issue remains the same: The “solutions” for the sound on stage and the sound out front are incompatible to some degree.

I say “solutions” because I really do look at live-sound as a sort of math or science “problem.” There’s an outcome that you want, and you have to work your way through a process which gets you that outcome. You identify what’s working against your desired result, find a way to counteract that issue, and then re-evaluate. Eventually, you find a solution – a mix that sounds the way you think it should.

And that’s great.

Until you have to solve for multiple solutions that don’t agree, because one solution invalidates the others.

Live Audio Is Nonlinear Math

The analogy that I think of for all this is a very parabolic one. Literally.

If you remember high school, you probably also remember something about finding “solutions” for parabolic curves. You set the function as being equal to zero, and then tried to figure out the inputs to the function that would satisfy that condition. Very often, you would get two numbers as solutions because nonlinear functions can output zero more than once.

In my mind, this is a pretty interesting metaphor for what we try to do at a show.

For the sake of brevity, let’s simplify things down so that “the sound on stage” and “the sound out front” are each a single solution. If we do that, we can look at this issue via a model which I shall dub “The Live-Sound Parabola.” The Live-Sound Parabola represents a “metaproblem” which encompasses two smaller problems. We can solve each sub-problem in isolation, but there’s a high likelihood that the metaproblem will remain unsolved. The metaproblem is that we need a good show for everyone, not just for the musicians or just for the audience.

In the worst-case scenario, neither sub-problem is even close to being solved. The show is bad for everybody. Interestingly, the indication of the “badness” of the show is the area under the curve. (Integral calculus. It’s everywhere.) In other words, the integral of The Live Sound Parabola is a measure of how much the sub-solutions functionally diverge.

[Graph: no solution – the curve never reaches zero]

(Sorry about the look of the graphs. Wolfram Alpha doesn’t give you large-size graphics unless you subscribe. It’s still a really cool website, though.)

Anyway.

A fairly common outcome is that we don’t quite solve the “on deck” and “out front” problems, but instead arrive at a compromise which is imperfect – but not fatally flawed. The area between the curve and the x-axis is comparatively small.

[Graph: a compromise – a small area between the curve and the x-axis]

When things really go well, however, we get a convergent solution. The Live-Sound Parabola becomes equal to zero at exactly one point. Everybody gets what they want, and the divergence factor (the area under the curve) is minimized. (It’s not eliminated, but simply brought to its minimum value.)

[Graph: a convergent solution – the curve touches zero at exactly one point]

What’s interesting is that The Live Sound Parabola still works when the graph drops below zero. When it does, it’s showing a situation where two diverging solutions actually work independently. This is possible with in-ear monitors, where the solution for the musicians can be almost (if not completely) unaffected by the FOH mix. The integral still shows how much divergence exists, but in this case the divergence is merely instructive rather than problematic.

[Graph: in-ear monitoring – the curve drops below zero]
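If you want to actually play with the metaphor, it only takes a few lines of code. Every coefficient below is invented purely for illustration; the point is just that the “divergence” (the area under the curve over the interval we care about) shrinks as the curve gets closer to touching zero:

```python
# Toy version of "The Live-Sound Parabola": the area under each curve over
# the interval of interest stands in for how badly the two mixes diverge.
# All curves and the [0, 2] interval are made up purely for illustration.

def area(f, lo, hi, steps=100_000):
    """Midpoint-rule numerical integral of f over [lo, hi]."""
    dx = (hi - lo) / steps
    return sum(f(lo + (i + 0.5) * dx) for i in range(steps)) * dx

no_solution = lambda x: (x - 1) ** 2 + 2    # never reaches zero: bad for everybody
compromise  = lambda x: (x - 1) ** 2 + 0.5  # close, but neither problem quite solved
convergent  = lambda x: (x - 1) ** 2        # tangent to zero: minimal divergence

for name, f in [("no solution", no_solution),
                ("compromise", compromise),
                ("convergent", convergent)]:
    print(f"{name:12s} divergence = {area(f, 0, 2):.3f}")
# no solution  divergence = 4.667
# compromise   divergence = 1.667
# convergent   divergence = 0.667
```

Note that even the convergent case doesn’t integrate to zero – exactly as the article says, the divergence is minimized, not eliminated.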

How To Converge

At this point, you may be wanting to shout, “Yeah, yeah, but what do we DO?”

I get that.

The first thing is to start out as close to convergence as possible. The importance of this is VERY high. It’s one of the reasons why I say that sounding like a band without any help from sound reinforcement is critical. It’s also why I discourage audio techs from automatically trying to reinvent everything. If the band already sounds basically right, and the audio human does only what’s necessary to transfer that “already right sound” to the audience, any divergence that occurs will tend to be minimal. Small divergence problems are simple to fix, or easy to ignore. If (on the other hand) you come out of the gate with a pronounced disagreement between the stage and FOH, you’re going to be swimming against a very strong current.

Beyond that, though, you need two things: Time, and willingness to use that time for iteration.

One of my favorite things to do is to have a nice, long soundcheck where the musicians can play in the actual room. This “settling in” period is ideally started with minimal PA and minimal monitors. The band is given a chance to get themselves sorted out “acoustically,” as much as is practical. As the basic onstage sound comes together, some monitor reinforcement can be added to get things “just so.” Then, some tweaks at FOH can be applied if needed.

At that point, it’s time to evaluate how much the house and on-deck solutions are diverging. If they are indeed diverging, then some changes can be applied to either or both solutions to correct the problem. The musicians then continue to settle in for a bit, and after that you can evaluate again. You can repeat this process until everybody is satisfied, or until you run out of time.

With a seasoned band and experienced audio human, this iteration can happen very fast. It’s not instant, though, which is another reason to actually budget enough time for it to happen. Sometimes that’s not an option, and you just have to “throw and go.” However, I have definitely been in situations where bands wanted to be very particular about a complex show…after they arrived with only 30 minutes left until downbeat. It’s not that I didn’t want to do everything to help them, it’s just that there wasn’t time for everything to be done. (Production craftspersons aren’t blameless, either. There are audio techs who seem to believe that all shows can be checked in the space of five minutes, and remain conspicuously absent from the venue until five minutes is all they have. Good luck with that, I guess.)

But…

If everybody does their homework, and is willing to spend an appropriate amount of prep-time on show day, your chances of enjoying some convergent solutions are much higher.


Why Techs Should Work Some All-Ages Shows

It’s an excellent way to learn and be tested.


[Photo: an all-ages show]

That picture up there is one of the last from my days at New Song Underground. Underground was an all-ages venue in Salt Lake City that I helped to create and run. It was a BLAST.

I miss it.

Looking back, Underground was a formative experience that I would not have traded for anything. Going to school for audio was an important part of my education, but Underground was downright critical. If you’re looking to become a production craftsperson of some kind (audio, lights, staging, video, you name it), I highly encourage you to spend some time doing work in an all-ages context.

Why? Well…

You’ll Meet People Who Love The Craft For The Craft

It’s not that there aren’t people at every level of this business who “love the art.” Loving the art is what got a lot of folks to those lofty heights.

At the same time, though, the (often) brutally unprofitable nature of the all-ages scene means something: That the people who don’t love the art for its own sake tend to get filtered more aggressively than elsewhere. Sure, there are folks who enter the scene for a perceived, externalized payoff, but they probably won’t last too long. To a large degree, the bands that keep playing do so because they want so badly to play. The venue operators that actually stick with it are in the game because they can’t NOT be in it. The techs that stay around are still there because there are interesting shows to do.

Money doesn’t necessarily make art less pure, but the lack of it acts to encourage the “pure form” to emerge. The question of “will this be cool?” gets just as much weight, if not more, than “will this make money?” That’s how many great things are made.

You’ll Meet People Who Feed (And Are) The Future Of Art

It was through all-ages work that I met two particularly amazing people in the local music world. One was Julia Hollingsworth, who used to run Rising Artists Studios. The other is David Murphy, who runs The Wasatch Music Coaching Academy. Both of them have done mountains of work with performers learning the craft. They’re the kinds of people who are inspiring to be around, and they’re surrounded by players and singers of great talent. The raw potential of some young musicians is enough to make your hair stand on end; the Julias and Davids of the world help to shape that potential.

Through Julia and Dave, I got a chance to work on shows and recordings that displayed stunning performances. There were teenagers turning out the kind of material that folks twice their age couldn’t match.

And the best part is that you get to participate. In some cases, you may be giving “some kid” their first taste of a real show on a real stage. You get to make their day and whet their appetite for more. You get to help performers on their journey towards…whatever they’re journeying toward. I can’t adequately communicate how that feels, or what a privilege it is. There’s nothing quite like it in the world of music. Maybe there’s nothing quite like it in the world, period.

Such experiences are certainly not confined to the all-ages circuit, but I believe they exist there in high concentration.

You’ll Be Challenged

There’s a lot of talent out there in all-ages world, but some of it is undeveloped. There are also a lot of people who just can’t hack the whole “live performance” thing, but haven’t yet learned that they can’t.

Working with folks who are naturally professional, or have learned to be, is easy.

Working with folks who haven’t learned many lessons on professionalism is a challenge – a challenge that’s good for you.

The accessibility and fluidity of all-ages gigs means that you, as a production craftsperson, will have to deal with situations that aren’t under control. Show-orders will change at a moment’s notice. Nobody will submit an input list. Another band will jump on the bill unexpectedly. Nobody will know what’s going on. You will encounter a good number of bands and artists who are well intentioned, but have yet to master the art of show logistics.

And you HAVE to deal with it. You have to do professional work in unprofessional situations, with limited resources, and with limited preparation. You will learn how to be diplomatic, how to find and stay on the critical path for show execution, how to cheerfully chuck out your expectations and just “go for it,” or you will be consumed and excreted by the raging dragon that is “The Show.” You will think nothing of switching out six full bands in a night.

If you want the ultimate education in how to run a PA system at the ragged edge, all-ages gigs are an Ivy League school. You will experience VERY high-gain monitors, with multiple mixes put together for people who haven’t learned how to communicate effectively with audio humans. Both the deck and the house will teeter precariously on the edge of runaway feedback. You will struggle with FOH blends that fight every step of the way, as you wrestle with players who are too loud for each other, and too loud for the poor vocalist…who wants a SCREAMING wedge while they make no more noise than a normal conversation. Also, they’ll want to be three feet from the mic. You will learn very quickly that the loudest dude on stage is as quiet as you can be.

You will not have enough PA. Nobody ever does, of course, but you will have even more not enough PA than lots of other people.

You either swim or sink, and it’s exhilarating. There’s no other learning experience like it, and the best part is that everything else seems much easier afterwards. (You will also learn to be very grateful for people who are professional, that’s for sure.)

So – if learning tough lessons while also experiencing some brilliant moments is something you want to do?

Work some all-ages shows.


A Vocal Addendum

Forget about all the “sexy” stuff. Get ’em loud, and let ’em bark.



This article is a follow-on to my piece regarding the unsuckification of monitors. In a small-venue context, vocal monitoring is probably more important than any other issue for the “on deck” sound. Perhaps surprisingly, I didn’t talk directly about vocals and monitors AT ALL in the previous article.

But let’s face it. The unsuckification post was long, and meant to be generalized. Putting a specific discussion of vocal monitoring into the mix would probably have pushed the thing over the edge.

I’ll get into details below, but if you want a general statement about vocal monitors in a small-venue, “do-or-die,” floor-wedge situation, I’ll be happy to oblige: You do NOT need studio-quality vocals. You DO need intelligible, reasonably smooth vocals that can be heard above everything else. Forget the fluff – focus on the basics, and do your preparation diligently.

Too Loud Isn’t Loud Enough

One of the best things to ever come out of Pro Sound Web was this quiz on real-world monitoring. In particular, answer “C” on question 16 (“What are the main constituents of a great lead vocal mix?”) has stuck with me. Answer C reads: “The rest of the band is hiding 20 feet upstage because they can’t take it anymore.”

In my view, the more serious rendering of this is that vocal monitors should, ideally, make singing effortless. Good vocal monitors should allow a competent vocalist to deliver their performance without straining to hear themselves. To that end, an audio human doing show prep should be trying to get the vocal mics as loud as is practicable. In the ideal case, a vocal mic routed through a wedge should present no audible ringing, while also offering such a blast of sound that the singer will ask for their monitor send to be turned down.

(Indeed, one of my happiest “monitor guy” moments in recent memory occurred when a vocalist stepped up to a mic, said “Check!”, got a startled look on his face, and promptly declared that “Anyone who can’t hear these monitors is deaf.”)

Now, wait a minute. Doesn’t this conflict with the idea that too much volume and too much gain are a problem?

No.

Vocal monitors are a cooperative effort amongst the audio human, the singer(s), and the rest of the band. The singer has to have adequate power to perform with the band. The band has to run at a reasonable volume to play nicely with the singer. If those two conditions are met (and assuming there are no insurmountable equipment or acoustical problems), getting an abundance of sound pressure from a monitor should not require a superhuman effort or troublesome levels of gain.

So – if you’re prepping for a band, dial up as much vocal volume as you can without causing a loop-gain problem. If the vocals are tearing people’s heads off, you can always turn them down. Don’t be lazy! Get up on deck and listen to what it sounds like. If there are problem areas at certain frequencies, then get on the appropriate EQ and tame them. Yes, the feedback points can change a bit when things get moved around and people get in the room, but that’s not an excuse to just sit on your hands. Do some homework now, and life will be easier later.

Don’t Squeeze Me, Bro

A sort of corollary to the above is that anything which acts to restrict your vocal monitor volume is something you should think twice about. If you were thinking about inserting a compressor in such a way that it would affect monitor world, think again.

A compressor reduces dynamic range by reducing gain on signals that exceed a preset threshold. For a vocalist, this means that the monitor level of their singing may no longer track in a 1:1 ratio with their output at the mic. They sing with more force, but the return through the monitors doesn’t get louder at the same rate. If the singer is varying their dynamics to track with the band, this failure of the monitors to stay “in ratio” can cause the vocals to become swamped.
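The static math makes the “no longer 1:1” point concrete. The threshold and ratio below are arbitrary example values, and the curve is an idealized hard-knee model rather than any specific unit:

```python
def compressor_out_db(in_db: float, threshold_db: float = -20.0, ratio: float = 4.0) -> float:
    """Static gain curve of an idealized hard-knee compressor (levels in dB)."""
    if in_db <= threshold_db:
        return in_db  # below threshold: output tracks input 1:1
    # Above threshold: every extra dB in yields only (1 / ratio) dB out.
    return threshold_db + (in_db - threshold_db) / ratio

# The singer pushes 6 dB harder, starting right at the threshold...
quiet, loud = -20.0, -14.0
rise = compressor_out_db(loud) - compressor_out_db(quiet)
print(f"Input rose 6.0 dB, monitor return rose only {rise:.1f} dB")
# Input rose 6.0 dB, monitor return rose only 1.5 dB
```

At a 4:1 ratio, a 6 dB push from the vocalist comes back through the wedge only 1.5 dB louder – which is exactly the “swamped vocal” failure mode described above.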

And, in certain situations, monitors that don’t track with vocal dynamics can cause a singer to hurt themselves. They don’t hear their voice getting as loud as it should, so they push themselves harder – maybe even to the point that they blow out their voice.

Of course, you could try to compensate for the loss of level by increasing the output or “makeup” gain on the compressor, but oh! There’s that “too much loop gain” problem again. (Compressors do NOT cause feedback. That’s a myth. Steady-state gain applied to compensate for compressor-applied, variable gain reduction, on the other hand…)
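For the curious, here's a minimal sketch of the arithmetic behind that "no longer 1:1" behavior, using an idealized hard-knee compressor. (The threshold and ratio here are made-up illustration numbers, not any real unit's behavior.)

```python
# Rough sketch of an idealized hard-knee compressor's static gain curve.
# Threshold and ratio are illustrative numbers, not from any real device.

def compressed_level(input_db, threshold_db=-20.0, ratio=4.0, makeup_db=0.0):
    """Output level (dB) for a given input level (dB)."""
    if input_db <= threshold_db:
        return input_db + makeup_db
    # Above threshold, the output rises at only 1/ratio the rate of the input.
    return threshold_db + (input_db - threshold_db) / ratio + makeup_db

# The singer pushes 12 dB harder...
compressed_level(-18.0)  # -19.5 dB
compressed_level(-6.0)   # -16.5 dB: the monitor return only rose 3 dB
```

With a 4:1 ratio, a 12 dB push from the singer comes back through the wedge only 3 dB louder. And any makeup gain added to compensate is exactly the kind of steady-state gain that destabilizes the loop.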

The upshot?

Do NOT put a compressor across a vocalist such that monitor world will be affected. (The exception is if you have been specifically asked to do so by an artist that has had success with the compressor during a real, “live-fire” dress rehearsal.) If you don’t have an independent monitor console or monitor-only channels, then bus the vocals to a signal line that’s only directly audible in FOH, and compress that signal line.

The Bark Is The Bite

One thing I have been very guilty of in the past, and am still sometimes guilty of, is dialing up a “sounds good in the studio” vocal tone for monitor world. That doesn’t sound like it would be a problem, but it can be a huge one.

The issue at hand is that what sounds impressive in isolation often isn’t so great when the full band is blasting away. This is very similar to guitarists who have “bedroom” tone. When we’re only listening to a single source, we tend to want that source to consume the entire audible spectrum. We want that single instrument or voice to have extended lows and crisp, snappy HF information. We will sometimes dig out the midrange in order to emphasize the extreme ends of the audible spectrum. When all we’ve got to listen to is one thing, this can all sound very “sexy.”

And then the rest of the band starts up, and our super-sexy, radio-announcer vocals become the wrong thing. Without a significant amount of midrange “bark,” the parts of the spectrum truly responsible for vocal audibility get massacred by the guitars. And drums. And keyboards. All that’s left poking through is some sibilance. Then, when you get on the gas to compensate, the low-frequency material starts to feed back (because it’s loud, and the mic probably isn’t as directional as you think at low frequencies), and the high-frequency material also starts to ring (because it’s loud, and probably has some nasty peaks in it as well).

Yes – a good monitor mix means listenable vocals. You don’t want mud or nasty “clang” by any means, but you need the critical midrange zone – say, 500 Hz to 3 kHz or 4 kHz – to be at least as loud as the rest of the audible spectrum in the vocal channel. Midrange that jumps at you a little bit doesn’t sound as refined as a studio recording, but this isn’t the studio. It’s live sound. Especially on the stage, hi-fi tone often has to give way to actually being able to differentiate the singer. There are certainly situations where studio-style vocal tone can work on deck, but those circumstances are rarely encountered with rock bands in small spaces.

Stay Dry

An important piece of vocal monitoring is intelligibility. Intelligibility has to do with getting the oh-so-important midrange in the right spot, but it also has to do with signals starting and stopping. Vocal sounds with sharply defined start and end points are easy for listeners to parse for words. As the beginnings and ends of vocal sounds get smeared together, the difficulty of parsing the language goes up.

Reverb and delay (especially) cause sounds to smear in the time domain. I mean, that’s what reverb and delay are for.

But as such, they can step on vocal monitoring’s toes a bit.

If it isn’t a specific need for the band, it’s best to leave vocals dry in monitor world. Being able to extract linguistic information from a sound is a big contributor to the perception that something is loud enough or not. If the words are hard to pick out because they’re all running together, then there’s a tendency to run things too hot in order to compensate.

The first step with vocal monitors is to get them loud enough. That’s the key goal. After that goal is met, then you can see how far you can go in terms of making things pretty. Pretty is nice, and very desirable, but it’s not the first task or the most important one.


Unsuckifying Your Monitor Mix

Communicate well, and try not to jam too much into any one mix.


Monitors can be a beautiful thing. Handled well, they can elicit bright-eyed, enthusiastic approbations like “I’ve never heard myself so well!” and “That was the best sounding show EVER!” They can very easily be the difference between a mediocre set and a killer show, because of how much they can influence the musicians’ ability to play as a group.

I’ve said it to many people, and I’m pretty sure I’ve said it here: As an audio-human, I spend much more time worrying about monitor world than FOH (Front Of House). If something is wrong out front, I can hear it. If something is wrong in monitor world, I won’t hear it unless it’s REALLY wrong. Or spiraling out of control.

…and there’s the issue. Bad monitor mixes can do a lot of damage. They can make the show less fun for the musicians, or totally un-fun for the musicians, or even cause so much on-stage wreckage that the show for the audience becomes a disaster. On top of that, the speed at which the sound on deck can go wrong can be startlingly high. If you’ve ever lost control of monitor world, or have been a musician in a situation where someone else has had monitor world “get away” from them, you know what I mean. When monitors become suckified, so too does life.

So – how does one unsuckify (or, even better, prevent suckification of) monitor world?

Foundational Issues To Prevent Suckification

Know The Inherent Limits On The Engineer’s Perception

At the really high-class gigs, musicians and production techs alike are treated to a dedicated “monitor world” or “monitor beach.” This is an independent or semi-independent audio control rig that is used to mix the show for the musicians. There are even some cases where there are multiple monitor worlds, all run by separate people. These folks are likely to have a setup where they can quickly “solo” a particular monitor mix into their own set of in-ears, or a monitor wedge which is similar to what the musicians have. Obviously, this is very helpful to them in determining what a particular performer is hearing.

Even so, the monitor engineer is rarely in exactly the same spot as any particular musician. Consequently, if the musicians are on wedges, even listening to a cue wedge doesn’t exactly replicate the total acoustic situation being experienced by the players.

Now, imagine a typical small-venue gig. There’s probably one audio human doing everything, and they’re probably listening mostly to the FOH PA. The way that FOH combines with monitor world can be remarkably different out front versus on deck. If the engineer has a capable console, they can solo up a complete monitor mix, probably through a pair of headphones. (A cue wedge is pretty unlikely to have been set up. They’re expensive and consume space.) A headphone feed is better than nothing, but listening to a wedge mix in a set of cans only tells an operator so much. Especially when working on a drummer’s mix, listening to the feed through a set of headphones has limited utility. A guy or gal might set up a nicely balanced blend, but have no real way of knowing if that mix is even truly audible at the percussionist’s seat.

If you’re not so lucky as to have a flexible console, your audio human will be limited to soloing individual inputs.

The point is that, at most small-venue shows, an audio human at FOH can’t really be expected to know what a particular mix sounds like as a total acoustic event. Remote-controlled consoles can fix this temporarily, of course, but as soon as the operator leaves the deck…all bets are off. If you’re a musician, assume that the engineer does NOT have a thoroughly objective understanding of what you’re hearing. If you’re an audio human, make the same assumption about yourself. Having made those assumptions, be gentle with yourself and others. Recognize that anything “pre set” is just a wild guess, and further, recognize that trying to take a channel from “inaudible in a mix” to “audible” is going to take some work and cooperation.

Use Language That’s As Objective As Possible

Over the course of a career, audio humans create mental mappings between subjective statements and objective measurements. For instance, when I’m working with well-established monitor mixes, I translate requests like “Could I get just a little more guitar?” into “Could I get 3 dB more guitar?” This is a necessary thing for engineers to formulate for themselves, and it’s appropriate to expect that a pro-level operator has some ability to interpret subjective requests.

At the same time, though, it can make life much easier when everybody communicates using objective language. (Heck, it makes it easier if there’s two-way communication at all.)

For instance, let’s say you’re an audio human working with a performer on a monitor mix, and they ask you for “a little more guitar.” I strongly recommend making the change you’ve translated “a little more” into, and then stating that change (in objective terms) over the talkback. Saying something like, “Okay, that’s 3 dB more guitar in mix 2” creates a helpful dialogue. If that 3 dB more guitar wasn’t enough, the stating of the change opens a door for the musician to say that they need more. Also, there’s an opportunity for the musician’s perception to become calibrated to an objective scale – meaning that they get an intuitive sense for what a certain dB boost “feels” like. Another opportunity that arises is for you and the musician to become calibrated to each other’s terminology.

Beyond that, a two-way dialogue fosters trust. If you’re working on monitors and are asked for a change, making a change and then stating what you did indicates that you are trying to fulfill the musician’s wishes. This, along with the understanding that gets built as the communication continues, helps to mentally place everybody on the same team.

For musicians, as you’re asking for changes in your monitor mixes, I strongly encourage you to state things in terms of a scale that the engineer can understand. You can often determine that scale by asking questions like, “What level is my vocal set at in my mix?” If the monitor sends are calibrated in decibels, the engineer will probably respond with a decibel number. If they’re calibrated in an arbitrary scale, then the reply will probably be an arbitrary number. Either way, you will have a reference point to use when asking for things, even if that reference point is a bit “coarse.” Even if all you’ve got is to request that something go from, say, “five to three,” that’s still functionally objective if the console is labeled using an arbitrary scale.

For decibels, a useful shorthand to remember is that 3 dB should be a noticeable change in level for something that’s already audible in your mix. “Three decibels” is a 2:1 power ratio, although you might personally feel that “twice as loud” is 6 dB (4:1) or even 10 dB (10:1).
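If you want to check those ratios yourself, the decibel-to-power-ratio relationship is just a base-10 logarithm. A quick sketch:

```python
import math

def db_from_power_ratio(ratio):
    """Decibels corresponding to a given power ratio."""
    return 10.0 * math.log10(ratio)

db_from_power_ratio(2)   # ~3.01 dB: a noticeable step up
db_from_power_ratio(4)   # ~6.02 dB
db_from_power_ratio(10)  # 10 dB: often perceived as "twice as loud"
```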

Realtime Considerations To Prevent And Undo Suckification

Too Much Loop Gain, Too Much Volume

Any instrument or device that is substantially affected by the sound from a monitor wedge, and is being fed through that same wedge, is part of that mix’s “loop gain.” Microphones, guitars, basses, acoustic drums, and anything else that involves body or airborne resonance is a factor. When their output is put through a monitor speaker, these devices combine with the monitor signal path to form an acoustical, tuned circuit. In tuned circuits, the load impedance determines whether the circuit “rings.” As the load impedance drops, the circuit is more and more likely to ring or resonate for a longer time.

If that last bit made your eyes glaze over, don’t worry. The point is that more gain (turning something up in the mix) REDUCES the impedance, or opposition, to the flow of sound in the loop. As the acoustic impedance drops, the acoustic circuit is more likely to ring. You know, feed back. *SQEEEEEALLLL* *WHOOOOOwoowooooOOOM*

Anyway.

The thing for everybody to remember – audio humans and musicians alike – is that a monitor mix feeding a wedge becomes progressively more unstable as gain is added. As ringing sets in, the sound quality of the mix drops off. Sounds that should start and then stop quickly begin to “smear,” and with more gain, certain frequency ranges become “peaky” as they ring. Too much gain can sometimes begin to manifest itself as an overall tone that seems harsh and tiring, because sonic energy in an irritating range builds up and sustains itself for too long. Further instability results in audible feedback that, while self-correcting, sounds bad and can be hard for an operator to zero-in on. As instability increases further, the mix finally erupts into “runaway” feedback that’s both distracting and unnerving to everyone.
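As a toy illustration (my own simplification, nothing like a real room model), treat each trip around the mic-to-console-to-wedge-to-mic loop as multiplying the signal by the loop gain at some frequency. Counting how many trips it takes a sound to die away by 60 dB shows why things smear and ring as gain creeps up:

```python
import math

def passes_to_decay_60db(loop_gain):
    """Loop trips until a sound decays by 60 dB (amplitude ratio 1/1000).
    Returns None for runaway feedback (loop gain >= 1)."""
    if loop_gain >= 1.0:
        return None
    return math.ceil(math.log(1e-3) / math.log(loop_gain))

passes_to_decay_60db(0.5)   # 10 trips: tight and stable
passes_to_decay_60db(0.9)   # 66 trips: audibly "ringy" and smeared
passes_to_decay_60db(0.99)  # 688 trips: on the edge of howling
passes_to_decay_60db(1.0)   # None: runaway feedback
```

Notice that the decay time doesn't grow linearly with gain; it explodes as the loop gain approaches unity, which is why a mix can go from "a touch ringy" to "screaming" with one small fader push.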

The fix, then, is to keep each mix’s loop gain as low as possible. This often translates into keeping things OUT of the monitors.

As an example, there’s a phenomenon I’ve encountered many times where folks start with vocals that work…and then add a ton of other things to their feed. These other sources are often far more feedback resistant than their vocal mic can be, and so they can apply enough gain to end up with a rather loud monitor mix. Unfortunately, they fall in love with the sound of that loud mix, except for the vocals which have just been drowned. As a result, they ask for the vocals to be cranked up to match. The loop gain on the vocal mic increases, which destabilizes the mix, which makes monitor world harder to manage.

As an added “bonus,” that blastingly loud monitor mix is often VERY audible to everybody else on stage, which interferes with their mixes, which can cause everybody else to want their overall mix volume to go up, which increases loop gain, which… (You get the idea.)

The implication is that, if you’re having troubles with monitors, a good thing to do is to start pulling things out of the mixes. If the last thing you did before monitor world went bad was, say, adding gain to a vocal mic, try reversing that change and then rebuilding things to match the lower level.

And not to be harsh or combative, but if you’re a musician and you require high-gain monitors to even play at all, then what you really have is an arrangement, ensemble, ability, or equipment problem that is YOURS to fix. It is not an audio-human problem or a monitor-rig problem. It’s your problem. This doesn’t mean that an engineer won’t help you fix it, it just means that it’s not their ultimate responsibility.

Also, take notice of what I said up there: High-GAIN monitors. It is entirely possible to have a high-gain monitor situation without also having a lot of volume. For example, 80 dB SPL C is hardly “rock and roll” loud, but getting that output from a person who sings at the level of a whisper (50 – 60 dB SPL C) requires 20 – 30 dB of boost. For the acoustical circuits that I’ve encountered in small venues, that is definitely a high-gain situation. Gain is the relative level increase or decrease applied to a signal; volume is the acoustic output that results once the gain has been applied. The two are related, but no particular gain setting corresponds to any particular volume.
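In sketch form (using the whisper-singer numbers above), the required gain is simply the difference between the volume you want and the volume the source produces on its own:

```python
def required_gain_db(source_spl, target_spl):
    """Boost (or cut, if negative) needed to get from source SPL to target SPL."""
    return target_spl - source_spl

required_gain_db(55, 80)  # 25 dB: a whisper-quiet singer needs high gain
required_gain_db(95, 80)  # -15 dB: a loud singer needs the fader pulled DOWN
```

Same 80 dB SPL of volume in both cases, yet one is a white-knuckle, high-gain loop and the other barely stresses the loop gain at all.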

Conflicting Frequency Content

Independent of being in a high-gain monitor conundrum, you can also have your day ruined by masking. Masking is what occurs when two sources with similar frequency content become overlaid. One source will tend to dominate the other, and you lose the ability to hear both sources at once. I’ve had this happen to me on numerous occasions with pianists and guitar players. They end up wanting to play at the same time, using substantially the same notes, and the sonic characteristics of the two instruments can be surprisingly close. What you get is either too-loud guitar, too-loud piano, or an indistinguishable mash of both.

In a monitor-mix situation, it’s helpful to identify when multiple sources are all trying to occupy the same sonic space. If sources can’t be distinguished from one another until one sound just gets obliterated, then you may have a frequency-content collision in progress. These collisions can result in volume wars, which can lead to high-gain situations, which result in the issues I talked about in the previous section. (Monitor problems are vicious creatures that breed like rabbits.)

After being identified, frequency-content issues can be solved in a couple of different ways. One way is to use equalization to alter the sonic content of one source or another. For instance, a guitar and a bass might be stepping on each other. It might be decided that the bass sound is fine, but the guitar needs to change. In that case, you might end up rolling down the guitar’s bottom end, and giving the mids a push. Of course, you also have to decide where this change needs to take place. If everything was distinct before the monitor rig got involved, then some equalization change from the audio human is probably in order. If the problem largely existed before any monitor mixes were established, then the issue likely lies in tone choice or song arrangement. In that case, it’s up to the musicians.

One thing to be aware of is that many small-venue mix rigs have monitor sends derived from the same channel that feeds FOH. While this means that the engineer’s channel EQ can probably be used to help fix a frequency collision, it also means that the change will affect the FOH mix as well. If FOH and monitor world sound significantly different from each other, a channel EQ configuration that’s correct for monitor world may not be all that nice out front. Polite communication and compromise are necessary from both the musicians and the engineer in this case. (Certain technical tricks are also possible, like “multing” a problem source into a monitors-only channel.)

Lack Of Localization

Humans have two ears so that we can determine the location and direction of sounds. In music, one way for us to distinguish sources is for us to recognize those instruments as coming from different places. When localization information gets lost, then distinguishing between sources requires more separation in terms of overall volume and frequency content. If that separation isn’t possible to get, then things can become very muddled.

This relates to monitors in more than one way.

One way is a “too many things in one place that’s too loud” issue. In this instance, more and more gets put into a monitor mix, at a high enough volume that the monitor obscures the other sounds on deck. What the musician originally heard as multiple, individually localized sources is now a single source – the wedge. The loss of localization information may mean that frequency-content collisions become a problem, which may lead to a volume-war problem, which may lead to a loop-gain problem.

Another possible conundrum is “too much volume everywhere.” This happens when a particular source gets put through enough wedges at enough volume for it to feel as though that single source is everywhere. This can ruin localization for that particular source, which can also result in the whole cascade of problems that I’ve already alluded to.

Fixing a localization problem pretty much comes down to having sounds occupy their own spatial point as much as possible. The first thing to do is to figure out if all the volume used for that particular source is actually necessary in each mix. If the volume is basically necessary, then it may be feasible to move that volume to a different (but nearby) monitor mix. For some of the players, that sound will get a little muddier and a touch quieter, but the increase in localization may offset those losses. If the volume really isn’t necessary, then things get much easier. All that’s required is to pull back the monitor feeds from that source until localization becomes established again.

It’s worth noting that “extreme” cases are possible. In those situations, it may be necessary to find a way to generate the necessary volume from a single, localized source that’s audible to everyone on the deck. A well-placed sidefill can do this, and an instrument amplifier in the correct position can take this role if a regular sidefill can’t be conjured up.

Wrapping Up

This can be a lot to take in, and a lot to think about. I will freely confess to not always having each of these concepts “top of mind.” Sometimes, audio turns into a pressure situation where both musicians and techs get chased into corners. It can be very hard for a person who’s not on deck to figure out what particular issue is in effect. For folks without a lot of technical experience who play or sing, identifying a problem beyond “something’s not right” can be too much to ask.

In the heat of the moment, it’s probably best to simply remember that yes, monitors are there to be used – but not to be overused. Effective troubleshooting is often centered around taking things out of a misbehaving equation until the equation begins to behave again. So, if you want to unsuckify your monitors, try getting as much out of them as possible. You may be surprised at what actually ends up working just fine.


Engin-eyes

Trust your ears – but verify.


It might be that I just don’t want to remember who it was, but a famous engineer once became rather peeved. His occasion to be irritated arose when a forum participant had the temerity to load one of the famous engineer’s tracks into a DAW and look at the waveform. The forum participant (not me) was actually rather complimentary, saying that the track LOOKED very compressed, but didn’t SOUND crushed at all.

This ignited a mini-rant from the famous guy, where he pointedly claimed that the sound was all that mattered, and he wasn’t interested in criticism from “engin-eyes.” (You know, because audio humans are supposed to be “engine-ears.”)

To be fair, the famous engineer hadn’t flown into anything that would pass as a “vicious, violent rage,” but the relative ferocity of his response was a bit stunning to me. I was also rather put off by his apparent philosophy that the craft of audio has no need of being informed by senses other than hearing.

Now, let’s be fair. The famous engineer in question is known for a reason. He’s had a much more monetarily successful career than I have. He’s done excellent work, and is probably still continuing to do excellent work at the very moment of this writing. He’s entitled to his opinions and philosophies.

But I am also entitled to mine, and in regards to this topic, here’s what I think:

The idea that an audio professional must rely solely upon their sense of hearing when performing their craft is, quite simply, a bogus “purity standard.” It gets in the way of people’s best work being done, and is therefore an inappropriate restriction in an environment that DEMANDS that the best work be done.

Ears Are Truthful. Brains Are Liars.

Your hearing mechanism, insofar as it works properly, is entirely trustworthy. A sound pressure wave enters your ear, bounces your tympanic membrane around, and ultimately causes some cilia deep in your ear to fire electrical signals down your auditory nerve. To the extent that I understand it all, this process is functionally deterministic – for any given input, you will get the same output until the system changes. Ears are dispassionate detectors of aural events.

The problem with ears is that they are hooked up to a computer (your brain) which can perform very sophisticated pattern matching and pattern synthesis.

That’s actually incredibly neat. It’s why you can hear a conversation in a noisy room. Your brain receives all the sound, performs realtime, high-fidelity pattern matching, tries to figure out what events correlate only to your conversation, and then passes only those events to the language center. Everything else is labeled “noise,” and left unprocessed. On the synthesis side, this remarkable ability is one reason why you can enjoy a song, even against noise or compression artifacts. You can remember enough of the hi-fi version to mentally reconstruct what’s missing, based on the pattern suggested by the input received. Your emotional connection to the tune is triggered, and it matters very little that the particular playback doesn’t sound all that great.

As I said, all that is incredibly neat.

But it’s not necessarily deterministic, because it doesn’t have to be. Your brain’s pattern matching and synthesis operations don’t have to be perfect, or 100% objective, or 100% consistent. They just have to be good enough to get by. In the end, what this means is that your brain’s interpretation of the signals sent by your ears can easily be false. Whether that falsehood is great or minor is a whole other issue, very personalized, and beyond the scope of this article.

Hearing What You See

It’s very interesting to consider what occurs when your hearing correlates with your other senses. Vision, for instance.

As an example, I’ll recall an “archetype” story from Pro Sound Web’s LAB: A system tech for a large-scale show works to fulfill the requests of the band’s live-audio engineer. The band engineer has asked that the digital console be externally “clocked” to a high-quality time reference. (In a digital system, the time reference or “wordclock” is what determines exactly when a sample is supposed to occur. A more consistent timing reference should result in more accurate audio.) The system tech dutifully connects a cable from the wordclock generator to the console. The band engineer gets some audio flowing through the system, and remarks at how much better the rig sounds now that the change had been made.

The system tech, being diplomatic, keeps quiet about the fact that the console has not yet been switched over from its internal reference. The external clock was merely attached. The console wasn’t listening to it yet. The band engineer expected to hear something different, and so his brain synthesized it for him.

(Again, this is an “archetype” story. It’s not a description of a singular event, but an overview of the functional nature of multiple events that have occurred.)

When your other senses correlate with your hearing, they influence it. When the correlation involves something subjective, such as “this cable will make everything sound better,” your brain will attempt to fulfill your expectations – especially when no “disproving” input is presented.

But what if the correlating input is objective? What then?

Calibration

What I mean by “an objective, correlated input” is an unambiguously labeled measurement of an event, presented in the abstract. A waveform in a DAW (like I mentioned in the intro) fits this description. The timescale, “zero point,” and maximum levels are clearly identifiable. The waveform is a depiction of audio events over time, in a visual medium. It’s abstract.

In the same way, audio analyzers of various types can act as objective, correlated inputs. To the extent that their accuracy allows, they show the relative intensities of audio frequencies on an unambiguous scale. They’re also abstract. An analyzer depicts sonic information in a visual way.

When used alongside your ears, these objective measurements cause a very powerful effect: They calibrate your hearing. They allow you to attach objective, numerical information to your brain’s perception of the output from your ears.

And this makes it harder for your brain to lie to you. Not impossible, but harder.

Using measurement to confirm or deny what you think you hear is critical to doing your best work. Yes, audio-humans are involved in art, and yes, art has subjective results. However, all art is created in a universe governed by the laws of physics. The physical processes involved are objective, even if our usage of the processes is influenced by taste and preference. Measurement tools help us to better understand how our subjective decisions intersect with the objective universe, and to me, that’s really important.

If you’re wondering if this is a bit of a personal “apologetic,” you’re correct. If there’s anything I’m not, it’s a “sound ninja.” There are audio-humans who can hear a tiny bit of ringing in a system, and can instantly pinpoint that ring with 1/3rd octave accuracy – just by ear. I am not that guy. I’m very slowly getting better, but my brain lies to me like the guy who “hired” me right out of school to be an engineer for his record label. (It’s a doozy of a story…when I’m all fired up and can remember the best details, anyway.) This being the case, I will gladly correlate ANY sense with my hearing if it helps me create a better show. I will use objective analysis of audio signals whenever I think it’s appropriate, if it helps me deliver good work.

Of course, the sound is the ultimate arbiter. If the objective measurement looks weird, but that’s the sound that’s right for the occasion, then the sound wins.

But aside from that, the goal is the best possible show. Denying oneself useful tools for creating that show, based on the bogus purity standards of a few people in the industry who AREN’T EVEN THERE…well, that’s ludicrous. It’s not their show. It’s YOURS. Do what works for YOU.

Call me an “engin-eyes” if you like, but if looking at a meter or analyzer helps me get a better show (and maybe learn something as well), then I will do it without offering any apology.


The Festival Patch

Hierarchies are handy, and if you’ve got the channels, use ’em.


Last weekend, my regular gig hosted a Leonard Cohen tribute show. It was HUGE. The crowd was capacity, and a veritable gaggle of musicians stepped up to pay their respects to the songwriter. The guy in charge of it all (JT Draper) did a brilliant job of managing all the personnel logistics.

On my end, probably the most important piece of prep was getting the patch sorted out. If you’re new to this whole thing, the “patch” is what gets plugged into where. It’s synonymous with “input list,” when it all comes down to it.

For a festival-style show (where multiple acts perform shorter sets and switch out during the gig), getting the patch right is crucial. It’s a pillar of making festival-style reinforcement basically feasible and functionally manageable. A multi-act, fluidly-progressing show stands or falls based on several factors – and the patch is one of those supercritical, “load-bearing” parts that holds a massive quantity of weight.

If it fails to hold that weight, the wreck can be staggering.

But we got the patch right, which contributed greatly to the show being well-behaved.

Here’s the patch that actually got implemented, as far as I remember. The stage locations used are traditional stage directions, given from the perspective of someone on the deck and looking out at the audience:

  1. Vocal (Down-Right)
  2. Vocal (Down-Center)
  3. Vocal (Down-Left)
  4. Vocal (Drums)
  5. Guitar Amp (Center-Left)
  6. Guitar Amp (Center-Center)
  7. Guitar DI 1
  8. Guitar DI 2
  9. Guitar Mic
  10. Bass DI (Unused)
  11. Bass Amp DI
  12. Keys DI (Unused)
  13. Percussion Mic
  14. Guitar Amp DI
  15. SM58 Special
  16. Empty
  17. Empty
  18. Empty
  19. Kick
  20. Snare
  21. Tom 1
  22. Tom 2
  23. Tom 3
  24. Tom 4 (Unused)

Why did it turn out that way?

You Have To Get Around Swiftly

Festival-style reinforcement demands that you can find the channels you need in a hurry. The biggest hurry is to get to the channels that are absolutely critical for the show to go forward. Thus, the vocals (with one exception) are all grouped together at the top of the patch. It’s very easy to find the channels on the “ends” of a console, whereas the middle is a little bit slower. If everything else went by the wayside – not that we would want that, or accept it without a fight, but if it happened – the show could still go on if we had decent vocals. Thus, they’re patched so they can be gotten to, grabbed, and controlled with the least amount of effort.

You’ll also notice that things are generally grouped into similar classes. The vocals are all mostly stuck together, followed by the inputs related to the guitars, then the basses, and so on. It’s easier to first find a group of channels and then a specific channel, as opposed to one specific channel in a sea of dissimilar sources. If you know that, say, all the guitars are in a general area, then it’s quite snappy to go to that general area of the console and then spot the specific thing you want.

A final factor in maintaining high-speed, low-drag operation is making the internals of each patch group “look” like the stage. That is, for a console that’s numbered in ascending order from left to right, a lower-numbered patch point denotes an item that is closer to the left side of the stage…from the perspective of the tech. When I look up, the first vocal mic should be the farthest one to my left (which is STAGE right). The point of this is to remove as much conscious thought as possible from figuring out where each individual mic or input is within a logical group. Numbering left-to-right from the stage’s perspective might be academically satisfying, but it requires at least a small amount of abstract thought to reverse that left-to-right order on the fly. Skipping that abstraction gives one less thing to worry about, and that saves brainpower for other tasks.
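As a loose illustration of the grouping idea (the group names, stage positions, and helper function here are hypothetical sketches, not the actual show paperwork), the numbering logic can be expressed as:

```python
# Hypothetical sketch of a festival patch: channels are assigned in
# ascending order, group by group, with each group's sources listed in
# the TECH'S left-to-right view of the stage. Stage directions come from
# the performer's perspective, so "down-right" is the tech's far left.

patch_groups = [
    ("vocals", ["down-right", "down-center", "down-left", "drums"]),
    ("guitars", ["center-left", "center-center"]),
]

def build_patch(groups):
    """Assign channel numbers so that scanning the console left-to-right
    matches scanning the stage left-to-right from the mix position."""
    channel = 1
    patch = {}
    for group_name, positions in groups:
        for position in positions:
            patch[(group_name, position)] = channel
            channel += 1
    return patch

patch = build_patch(patch_groups)
# The vocal on the tech's far left (STAGE right) gets channel 1,
# so no on-the-fly mental reversal is needed during the show.
print(patch[("vocals", "down-right")])  # -> 1
```

Grouping first and ordering within the group second is what makes the lookup fast: find the group, then slide to the spot that matches what you see on the deck.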

Of course, now that I’ve said that, you’ll notice that the first guitar amp is actually on the wrong side of the stage. That leads into the next section:

Things Don’t Go Precisely To Plan

So…why are there some inputs that don’t seem to be numbered or grouped correctly? Why are there channels marked as unused? Didn’t we plan this thing out carefully?

Yes, the night was planned carefully. However, plans change, and things can be left unclear.

Let me explain.

Not everything in a small-venue festival-style show is necessarily nailed down. Getting a detailed stage plot from everybody is often overkill for a one-nighter, especially if the production style is “throw and go.” Further, circumstances that occur in the moment can overtake the desire to have a perfect patch. In the case of the guitar amps, I had thought that I was only going to have two on the deck, and I had also thought that the placement would be basically a traditional “left/ right” sort of affair. That’s not what happened, though, and so I had to react quickly. Because the console was already labeled and prepped for my original understanding, bumping the whole patch down by one would have been much harder than just patching into the empty channels at the end. Also, from a physical standpoint, it turned out to be more expedient to run the first guitar line over to the other side of the stage than to pull the center-center microphone from its place.

I clearly labeled the console to avoid confusion, and that was that.

The unused channels were a case of “leaving a channel unused is easy, patching in the middle of the show is hard.” During the planning for the night, it was unclear whether we’d have acoustic bass or not, and it was also unclear if we’d have keys or not. When the time came to actually plug in the show, those unknowns remained. As such, the wise thing to do was to have those channels ready to go. If sources for those inputs materialized, I’d be ready with zero fuss required. If I wasn’t ready on those channels, and it turned out that they were needed, I would have to get them in place – potentially in the middle of the night. If those channels were never needed, all I had lost were a couple of inputs, and a few minutes of running cable at my leisure.

Look at all those “if” statements, and it’s pretty clear: The penalty for setting up the channels and not using them was very small compared to the advantage of having them in place.

Spend Channels, Get “Easy”

Now, what about that SM58? Why not just swap one of the other vocal mics, save time, and save space on the deck?

That seems like it would be easier, but it actually would have been harder. For starters, the other mics on the stage were VERY unlike the SM58 in terms of both output level and tonality. Yes – I could have set up a separate mix for the act that used the 58 (which would have fixed the tonality issue), but my console doesn’t currently have recallable preamp levels. I would have had to remember to roll the appropriate preamp back down when that act was finished. That might not seem like much to remember, and it isn’t really, but it’s very easy to forget if you get distracted by something else. Using one more channel to host the special mic basically removed the possibility of me making that mistake. It also removed the need for the act and me to execute a whole series of actions – on the fly – just to make the mic work. I set a preamp level for a channel that was ALWAYS going to belong to that microphone, and built EQ settings that would ALWAYS apply to that microphone, and we did the show without having to futz with swapping mics, changing mix presets, or rolling preamp gains around.

In a festival-style show, trading one spare input for a whole lot of “easy” is a no-brainer. (This is one reason why it’s good to have more channels available than you might think you actually need. You’ll probably end up with a surprise or two that becomes much easier to manage if you can just plug things into their very own channels.)

An orderly, quickly navigable festival patch is a must for getting through a multi-act gig. Even when something happens unexpectedly and partially upsets the order of that patch, starting with a good channel layout helps to contain the chaos. If you start with chaos and then add more entropy, well…


Pink Floyd Is A Bluegrass Band

If you beat the dynamics out of a band that manages itself with dynamics, well…



Just recently, I had the privilege of working on “The Last Floyd Show.” (The production provided the backdrop for that whole bit about the lighting upgrade that took forever.) We recorded the show to multitrack, and I was tasked with getting a mix done.

It was one of the toughest mixdowns I’ve attempted, mostly because I took the wrong approach when I got started. I attempted a “typical rock band” mix, and I ended up having to basically start over once…and then backtrack significantly twice more. Things started to work much more nicely when I backed WAY off on my channel compression – which is a little weird, because a lot of my “mix from live” projects actually do well with aggressive compression on individual channels. You grab the player’s level, hold it back from jumping around, fit ’em into the part of the spectrum that works, and everything’s groovy.

Not this time, though.

Because Pink Floyd is actually an old-timey bluegrass act that inhabits a space-rock body. They use “full-range” tones and dynamics extensively, which means that preventing those things from working is likely to wreck the band’s sound.

General Dynamics (Specific Dynamics, Too)

Not every Floyd tune is the same, of course, but take a listen over a range of their material and you’ll discover something: Pink Floyd gets a huge amount of artistic impact from big swings in overall dynamics, as well as the relative levels of individual players. Songs build into rolling, thunderous choruses, and then contract into gentle verses. There are “stings” where a crunchy guitar chord PUNCHES YOU IN THE FACE, and then backs away into clean, staccato notes. Different parts ebb and flow around each other, with great, full-range tones possible across multiple instruments – all because of dynamics. When it’s time for the synth, or organ, or guitar to be in the lead, that’s what is in the lead. They just go right past the other guys and “fill the space,” which is greatly enabled by the other guys dropping far into the background.

If you crush the dynamics out of any part of a Pink Floyd production, it isn’t Pink Floyd anymore. It’s people playing the same notes as Floyd songs without actually making those songs happen. If those dynamic swings are prevented, the arrangements stop working properly. The whole shebang becomes a tangled mess of sounds running into each other, which is EXACTLY what happened to me when I tried to “rock mix” our Floyd Show recording.
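To put rough numbers on that crushing effect, here’s a minimal sketch of an ideal hard-knee compressor’s static curve. The threshold, ratio, and levels are illustrative values I picked for the example, not settings from the actual mix:

```python
# A rough sketch of why heavy channel compression flattens a band that
# self-mixes with dynamics. All levels are in dB; the threshold and
# ratio are illustrative, not a preset from any real session.

def compressed_level(input_db, threshold_db=-30.0, ratio=4.0):
    """Output level of an idealized hard-knee compressor."""
    if input_db <= threshold_db:
        return input_db
    return threshold_db + (input_db - threshold_db) / ratio

quiet_verse = -30.0   # gentle passage
big_chorus = -6.0     # thunderous peak

swing_in = big_chorus - quiet_verse                        # 24 dB of drama
swing_out = compressed_level(big_chorus) - compressed_level(quiet_verse)
print(swing_in, swing_out)
# At 4:1, the 24 dB swing that made the arrangement work collapses
# to a 6 dB swing - the parts stop getting out of each other's way.
```

The arithmetic is simple, but it shows the point: the compressor doesn’t just tame peaks, it shrinks exactly the level differences the band is using as an arrangement tool.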

This usage of dynamics, especially as a self-mix tool, is something that you mostly see in “old school acoustic-music” settings. Rock and pop acts these days are more about a “frequency domain” approach than a “volume domain” sort of technique. It’s not that there’s no use of volume at all, it’s just that the overwhelming emphasis seems to be on everybody finding a piece of the spectrum, and then just banging away with the vocals on top. (I’m not necessarily complaining. This can be very fun when it’s done well.) With that emphasis being the case so often, it’s easy to get suckered into doing everything with a “rock” technique. Use that technique in the wrong place, though, and you’ll be in trouble.

And yes, this definitely applies to live audio. In fact, this tendency to work on everything with modern rock tools is probably why I haven’t always enjoyed Floyd Show productions as much as I’ve wanted.

In The Flesh

When you have a band like Floyd Show on the deck, in real life, in a small room, the band’s acoustical peaks can overrun the PA to some extent. This is especially true if (like me) you aggressively limit the PA in order to keep the band “in a manageable box.” This, coupled with the fact that the band’s stage volume is an enormous contributor to the sound that the audience hears, means that a compressed, “rock band” mix isn’t quite as ruinous as it otherwise would be. That is, with the recording, the only sound you can hear is the reproduced sound, so screwing up the production is fatal. Live, in a small venue, you hear a good bit of reproduction (the PA) and a LOT of stage volume. The stage volume counteracts some of the “reproduction” mistakes, and makes the issues less obvious.

Another thing that suppresses “not quite appropriate” production is that you’re prepared to run an effectively automated mix in real time. When you hear that a part isn’t coming forward enough, you get on the appropriate fader and give it a push. Effectively, you put some of the dynamic swing back in as needed, which masks the mistakes made in the “steady state” mix setup. With the recording, though, the mix doesn’t start out as being automated – and that makes a fundamental “steady state” error stand out.

As I said before, I haven’t always had as much fun with Floyd Show gigs as I’ve desired. It’s not that the shows weren’t a blast, because they were definitely enjoyable for me, it’s just that they could have been better.

And it was because I was chasing myself into a corner as much as anyone else was, all by taking an approach to the mix that wasn’t truly appropriate for the music. I didn’t notice, though, because my errors were partially masked by virtue of the gigs happening in a small space. (That masking being a Not Bad Thing At All.™)

The Writing On The Wall

So…what can be generalized from all this? Well, you can boil this down to a couple of handy rules for live (and studio) production:

If you want to use “full bandwidth” tones for all of the parts in a production, then separation between the parts will have to be achieved primarily in the volume and note-choice domain.

If you’re working with a band that primarily achieves separation by way of the volume domain, then you should refrain from restricting the “width” of the volume domain any more than is necessary.

The first rule comes about because “full bandwidth” tones allow each part to potentially obscure every other part. For example, if a Pink Floyd organ sound can occupy the same frequency space as the bass guitar, then the organ either needs to be flat-out quieter or louder than the bass at the appropriate times, or it needs to change its note choices. Notes played high enough will have fundamental frequencies that are away from the bass guitar’s fundamentals. This gives the separation that would otherwise be gotten by restricting the frequency range of the organ with EQ and/ or tone controls. (Of course, working the equalization AND note choice AND volume angles can make for some very powerful separation indeed.)
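The note-choice arithmetic can be sketched with the standard equal-temperament formula (f = 440 × 2^((n − 69)/12), where n is a MIDI note number). The instrument ranges chosen below are rough assumptions for illustration:

```python
# Illustrative fundamentals for the note-choice point, using the
# standard MIDI-note-to-frequency relation with A4 (note 69) = 440 Hz.
# The instrument ranges are rough assumptions, not measurements.

def note_freq(midi_note):
    """Fundamental frequency (Hz) of an equal-tempered MIDI note."""
    return 440.0 * 2 ** ((midi_note - 69) / 12)

bass_low_e = note_freq(28)   # E1, lowest string of a 4-string bass (~41 Hz)
bass_high = note_freq(55)    # G3, roughly the top of typical bass lines
organ_line = note_freq(72)   # C5, an organ part voiced well above the bass

print(round(bass_low_e, 1), round(bass_high, 1), round(organ_line, 1))
# An organ line around C5 (~523 Hz) keeps its fundamentals well clear
# of bass fundamentals (~41-196 Hz), so the parts separate by note
# choice alone, without carving either one up with EQ.
```

That’s the whole trick of “separation by note choice” in number form: move the fundamentals apart and the parts stop fighting over the same spectral real estate.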

The second rule is really just an extension of “getting out of the freakin’ way.” If the band is trying to be one thing, and the production is trying to force the band to be something else, the end result isn’t going to be as great as it could be. The production, however well intentioned, gets in the way of the band being itself. That sounds like an undesirable thing, because it is an undesirable thing.

Faithfully rendered Pink Floyd tunes use instruments with wide-ranging tones that run up and down – very significantly – in volume. These volume swings put different parts in the right places at the right time, and create the dramatic flourishes that make Pink Floyd what it is. Floyd is certainly a rock band. The approach is not exactly the same as an old-school bluegrass group playing around a single, omni mic…

…but it’s close enough that I’m willing to say: The lunatics on Pink Floyd’s grass are lying upon turf that’s rather more blue than one might think at first.


Speed Fishing

“Festival Style” reinforcement means you have to go fast and trust the musicians.



Last Sunday was the final day of the final iteration of a local music festival called “The Acoustic All-Stars.” It’s a celebration of music made with traditional or neo-traditional instruments – acoustic-electric guitars, fiddles, drums, mandolins, and all that sort of thing. My perception is that the musicians involved have a lot of anticipation wrapped up in playing the festival, because it’s a great opportunity to hear friends, play for friends, and make friends.

Of course, this anticipation can create some pressure. Each act’s set has a lot riding on it, but there isn’t time to take great care with any one setup. The longer it takes to dial up the band, the less time they have to play…and there are no “do overs.” There’s one shot, and it has to be the right shot for both the listeners and the players.

The prime illustrator for all this on Sunday was Jim Fish. Jim wanted to use his slot to the fullest, and so assembled a special team of musicians to accompany his songs. The show was clearly a big deal for him, and he wanted to do it justice. Trying to, in turn, do justice to his desires required that a number of things take place. It turns out that what had to happen for Jim can (I think) be generalized into guidelines for other festival-style situations.

Pre-Identify The Trouble Spots, Then Make The Compromises

The previous night, Jim had handed me a stage plot. The plot showed six musicians, all singing, wielding a variety of acoustic or acoustic-electric instruments. A lineup like that can easily have its show wrecked by feedback problems, because of the number of open mics and highly-resonant instruments on the deck. Further, the mics and instruments are often run at (relatively) high gain. The PA and monitor rig need to help with getting some more SPL (Sound Pressure Level) for both the players and the audience, because acoustic music isn’t nearly as loud as a rock band…and we’re in a bar.

Also, there would be a banjo on stage right. Getting a banjo to “concert level” can be a tough test for an audio human, depending on the situation.

Now, there’s no way you’re going to get “rock” volume out of a show like this – and frankly, you don’t want to get that kind of volume out of it. Acoustic music isn’t about that. Even so, the priorities were clear:

I needed a setup that was based on being able to run with a total system gain that was high, and that could do so with as little trouble as possible. As such, I ended up deploying my “rock show” mics on the deck, because they’re good for getting the rig barking in a pinch. The thing with the “rock” mics is that they aren’t really sweet-sounding transducers, which is unfortunate in an acoustic-country situation. A guy would love to have the smoothest possible sound for it all, but pulling that off in a potentially high-gain environment takes time.

And I would not have that time. Sweetness would have to take a back seat to survival.

Be Ready To Abandon Bits Of The Plan

On the day of the show, the lineup ended up not including two people: The bassist and the mandolin player. It was easy to embrace this, because it meant lower “loop gain” for the show.

I also found out that the fiddle player didn’t want to use her acoustic-electric fiddle. She wanted to hang one particular mic over her instrument, and then sing into that as well. We had gone with a similar setup at a previous show, and it had definitely worked. In this case, though, I was concerned about how it would all shake out. In the potentially high-gain environment we were facing, pointing this mic’s not-as-tight polar pattern partially into the monitor wash held the possibility of creating a touchy situation.

Now, there are times to discuss the options, and times to just go for it. This was a time to go for it. I was working with a seasoned player who knew what she wanted and why. Also, I would lose one more vocal mic, which would lower the total loop-gain in the system and maybe help us to get away with a different setup. I knew basically what I was getting into with the mic we chose for the task.

And, let’s be honest, there were only minutes to go before the band’s set-time. Discussing the pros and cons of a sound-reinforcement approach is something you do when you have hours or days of buffer. When a performer wants a simple change in order to feel more comfortable, then you should try to make that change.

That isn’t to say that I didn’t have a bit of a backup plan in mind in case things went sideways. When you’ve got to make things happen in a hurry, you need to be ready to declare a failing option as being unworkable and then execute your alternate. In essence, festival-style audio requires an initial plan, some kind of backup plan, the willingness to partially or completely drop the original plan, and an ability to formulate a backup plan to the new plan.

The fiddle player’s approach ended up working quite nicely, by the way.

Build Monitor World With FOH Open

If there was anything that helped us pull off Jim’s set, it was this. In a detail-oriented situation, it can be good to start with your FOH (Front Of House) channels/ sends/ etc. muted (or pulled back) while you build mixes for the deck. After the monitors are sorted out, then you can carefully fill in just what you need to with FOH. There are times, though, that such an approach is too costly in terms of the minutes that go by while you execute. This was one such situation.

In this kind of environment, you have to start by thinking not in terms of volume, but in terms of proportions. That is, you have to begin with proportions as an abstract sort of thing, and then arrive at a workable volume with all those proportions fully in effect. This works in an acoustic music situation because the PA being heavily involved is unlikely to tear anyone’s head off. As such, you can use the PA as a tool to tell you when the monitor mixes are basically balanced amongst the instruments.

It works like this:

You get all your instrument channels set up so that they have equal send levels in all the monitors, plus a bit of a boost in the wedge that corresponds to that instrument’s player. You also set their FOH channel faders to equal levels – probably around “unity” gain. At this point, the preamp gains should be as far down as possible. (I’m spoiled. I can put my instruments on channels with a two-stage preamp that lets me have a single-knob global volume adjustment from silence to “preamp gain +10 dB.” It’s pretty sweet.)

Now, you start with the instrument that’s likely to have the lowest gain before feedback. You begin the adventure there because everything else is going to have to be built around the maximum appropriate level for that source. If you start with something that can get louder, then you may end up discovering that you can’t get a matching level from the more finicky channel without things starting to ring. Rather than being forced to go back and drop everything else, it’s just better to begin with the instrument that will be your “limiting factor.”

You roll that first channel’s gain up until you’ve got a healthy overall volume for the instrument without feedback. Remember, both FOH and monitor world should be up. If you feel like your initial guess on FOH volume is blowing past the monitors too much (or getting swamped in the wash), make the adjustment now. Set the rest of the instruments’ FOH faders to that new level, if you’ve made a change.

Now, move on to the subsequent instruments. In your mind, remember what the overall volume in the room was for the first instrument. Roll the instruments’ gains up until you get to about that level on each one. Keep in mind that what I’m talking about here is the SPL, not the travel on the gain knob. One instrument might be halfway through the knob sweep, and one might be a lot lower than that. You’re trying to match acoustical volume, not preamp gain.

When you’ve gone through all the instruments this way, you should be pretty close to having a balanced instrument mix in both the house and on deck. Presetting your monitor and FOH sends, and using FOH as an immediate test of when you’re getting the correct proportionality, is what lets you do this.

And it lets you do it in a big hurry.

Yes, there might be some adjustments necessary, but this approach can get you very close without having to scratch-build everything. Obviously, you need to have a handle on where the sends for the vocals have to sit, and your channels need to be ready to sound decent through both FOH and monitor-world without a lot of fuss…but that’s homework you should have done beforehand.
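For what it’s worth, the procedure above can be caricatured in code. Everything here is a stand-in – the “gain before feedback” numbers, the headroom figure, and the idea of reducing an ear-based level match to a dB value are all hypothetical simplifications of what is really a by-ear process:

```python
# A loose sketch of the "proportions first" balancing procedure:
# start with the source that has the LOWEST gain before feedback,
# let it set the ceiling, then match every other instrument's acoustic
# level to that reference. All numbers are hypothetical stand-ins for
# judgments that are actually made by ear.

def balance_instruments(instruments, headroom_db=3.0):
    """Return a target acoustic level (dB SPL-ish) per instrument.

    The touchiest channel (lowest gain before feedback) defines the
    show's ceiling; everything else is matched to it, never exceeding
    its own safe maximum.
    """
    ordered = sorted(instruments, key=lambda i: i["gain_before_feedback_db"])
    reference = ordered[0]
    target = reference["gain_before_feedback_db"] - headroom_db
    levels = {}
    for inst in ordered:
        safe_max = inst["gain_before_feedback_db"] - headroom_db
        levels[inst["name"]] = min(target, safe_max)
    return levels

band = [
    {"name": "banjo", "gain_before_feedback_db": 92.0},
    {"name": "guitar", "gain_before_feedback_db": 105.0},
    {"name": "fiddle", "gain_before_feedback_db": 98.0},
]
print(balance_instruments(band))
# Every instrument lands at the banjo's safe level: the most
# feedback-prone source defines the whole mix's ceiling.
```

The reason for starting with the touchiest source falls straight out of the math: if you had started with the guitar and run it to its own safe maximum, the banjo could never have matched it without ringing.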

Trust The Musicians

This is probably the nail that holds the whole thing together. Festival-style (especially in an acoustic context) does not work if you aren’t willing to let the players do their job, and my “get FOH and monitor world right at the same time” trick does NOT work if you can’t trust the musicians to know their own music. I generally discourage audio humans from trying to reinvent a band’s sound anyway, but in this kind of situation it’s even more of something to avoid. Experienced acoustic music players know what their songs and instruments are supposed to sound like. When you have only a couple of minutes to “throw ‘n go,” you have to be able to put your faith in the music being a thing that happens on stage. The most important work of live-sound does NOT occur behind a console. It happens on deck, and your job is to translate the deck to the audience in the best way possible.

In festival-style acoustic music, you simply can’t “fix” everything. There isn’t time.

And you don’t need to fix it, anyway.

Point a decent mic at whatever needs miking, put a working, active DI on the stuff that plugs in, and then get out of the musicians’ way.

They’ll be happier, you’ll be happier, you’ll be much more likely to stay on schedule…it’s just better to trust the musicians as much as you possibly can.