
Convergent Solutions

FOH and monitor world have to work together if you want the best results.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.


In a small venue, there’s something that you know, even if you’re not conscious of knowing it:

The sound from the monitors on deck has an enormous effect on the sound that the audience hears. The reverse is also true. The sound from the FOH PA has an enormous effect on the sound that the musicians hear on stage.

I’m willing to wager that there are shows you’ve had where getting a mix put together seemed like a huge struggle. There are other shows where – on the other hand – creating blends that made everybody happy occurred with little effort. One of the major factors in the ease or frustration of whole-show sound is “convergence.” When the needs of the folks on deck manage to converge with the needs of the audience, sound reinforcement gets easier. When those needs diverge, life can be quite a slog.

Incompatible Solutions

But…why would the audience’s needs and the musicians’ needs diverge?

Well, taste, for one thing.

Out front, you have an interpreter for the audience, i.e. the audio human. This person has to make choices about what the audience is going to hear, and they have to do this through the filter of their own assumptions. Yes, they can get input from the band, and yes, they will sometimes get input from the audience, but they still have to make a lot of snap decisions that are colored by their immediate perceptions.

When it comes to the sound on deck, the noise-management professional becomes more of an “executor.” The tech turns the knobs, but there can be a lot more guidance from the players. The musicians are the ones who try to get things to match their needs and tastes, and this can happen on an individual level if enough monitor mixes are available.

If the musicians’ tastes and the tech’s taste don’t line up, you’re likely to have divergent solutions. One example I can give is from quite a while ago, where a musician playing a sort of folk-rock wanted a lot of “kick” in the wedges. A LOT of kick. There was so much bass-drum material in the monitors that I had none at all out front. Even then, it was a little much. (I was actually pretty impressed at the amount of “thump” the monitor rig would deliver.) I ended up having to push the rest of the mix up around the monitor bleed, which made us just a bit louder than we really needed to be for an acoustic-rock show.

I’ve also experienced plenty of examples where we were chasing vocals and instruments around in monitor world, and I began to get the sneaking suspicion that FOH was being a hindrance. More than once, I’ve muted FOH and heard, “Yeah! That sounds good now.” (Uh oh.)

In any case, the precipitating factors differ, but the main issue remains the same: The “solutions” for the sound on stage and the sound out front are incompatible to some degree.

I say “solutions” because I really do look at live-sound as a sort of math or science “problem.” There’s an outcome that you want, and you have to work your way through a process which gets you that outcome. You identify what’s working against your desired result, find a way to counteract that issue, and then re-evaluate. Eventually, you find a solution – a mix that sounds the way you think it should.

And that’s great.

Until you have to solve for multiple solutions that don’t agree, because one solution invalidates the others.

Live Audio Is Nonlinear Math

The analogy that I think of for all this is a very parabolic one. Literally.

If you remember high-school algebra, you probably also remember something about finding “solutions” for parabolic curves. You set the function equal to zero, and then tried to figure out the inputs that would satisfy that condition. Very often, you would get two numbers as solutions, because a quadratic can cross zero at up to two points.

In my mind, this is a pretty interesting metaphor for what we try to do at a show.

For the sake of brevity, let’s simplify things down so that “the sound on stage” and “the sound out front” are each a single solution. If we do that, we can look at this issue via a model which I shall dub “The Live-Sound Parabola.” The Live-Sound Parabola represents a “metaproblem” which encompasses two smaller problems. We can solve each sub-problem in isolation, but there’s a high likelihood that the metaproblem will remain unsolved. The metaproblem is that we need a good show for everyone, not just for the musicians or just for the audience.

In the worst-case scenario, neither sub-problem is even close to being solved. The show is bad for everybody. Interestingly, the indication of the “badness” of the show is the area under the curve. (Integral calculus. It’s everywhere.) In other words, the integral of The Live Sound Parabola is a measure of how much the sub-solutions functionally diverge.

[Graph: “nosolution” – the curve never reaches zero, and the area under it is large.]

(Sorry about the look of the graphs. Wolfram Alpha doesn’t give you large-size graphics unless you subscribe. It’s still a really cool website, though.)
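If you want to poke at the metaphor numerically, here’s a toy sketch in pure Python. The function shape, the evaluation window, and all the names are illustrative assumptions on my part – not anything taken from the graphs:

```python
# Toy model of "The Live-Sound Parabola": f(x) = (x - c)^2 + h,
# where h is the vertex height. h > 0 means neither sub-problem is
# fully solved; h == 0 is the convergent, one-point solution. The
# area under the curve over a fixed window stands in for how badly
# the stage and FOH solutions diverge.

def divergence_area(h, c=0.0, window=(-1.0, 1.0), steps=10_000):
    """Numerically integrate f(x) = (x - c)^2 + h over the window."""
    a, b = window
    dx = (b - a) / steps
    total = 0.0
    for i in range(steps):
        x = a + (i + 0.5) * dx  # midpoint rule
        total += ((x - c) ** 2 + h) * dx
    return total

# As the vertex height h drops toward zero, the "badness" area
# shrinks toward its minimum -- it never disappears entirely.
print(divergence_area(1.0))  # worst case: far from any solution
print(divergence_area(0.0))  # convergent solution: minimum area
```

The point of the sketch is simply that lowering the vertex (getting the two sub-solutions to agree) minimizes the divergence area, but never drives it all the way to nothing.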

Anyway.

A fairly common outcome is that we don’t quite solve the “on deck” and “out front” problems, but instead arrive at a compromise which is imperfect – but not fatally flawed. The area between the curve and the x-axis is comparatively small.

[Graph: “compromise” – the curve comes close to zero, with a comparatively small area beneath it.]

When things really go well, however, we get a convergent solution. The Live-Sound Parabola becomes equal to zero at exactly one point. Everybody gets what they want, and the divergence factor (the area under the curve) is minimized. (It’s not eliminated, but simply brought to its minimum value.)

[Graph: “solution” – the curve touches zero at exactly one point.]

What’s interesting is that The Live Sound Parabola still works when the graph drops below zero. When it does, it’s showing a situation where two diverging solutions actually work independently. This is possible with in-ear monitors, where the solution for the musicians can be almost (if not completely) unaffected by the FOH mix. The integral still shows how much divergence exists, but in this case the divergence is merely instructive rather than problematic.

[Graph: “in-ears” – the curve drops below zero; the two solutions work independently.]

How To Converge

At this point, you may be wanting to shout, “Yeah, yeah, but what do we DO?”

I get that.

The first thing is to start out as close to convergence as possible. The importance of this is VERY high. It’s one of the reasons why I say that sounding like a band without any help from sound reinforcement is critical. It’s also why I discourage audio techs from automatically trying to reinvent everything. If the band already sounds basically right, and the audio human does only what’s necessary to transfer that “already right sound” to the audience, any divergence that occurs will tend to be minimal. Small divergence problems are simple to fix, or easy to ignore. If (on the other hand) you come out of the gate with a pronounced disagreement between the stage and FOH, you’re going to be swimming against a very strong current.

Beyond that, though, you need two things: Time, and willingness to use that time for iteration.

One of my favorite things to do is to have a nice, long soundcheck where the musicians can play in the actual room. This “settling in” period is ideally started with minimal PA and minimal monitors. The band is given a chance to get themselves sorted out “acoustically,” as much as is practical. As the basic onstage sound comes together, some monitor reinforcement can be added to get things “just so.” Then, some tweaks at FOH can be applied if needed.

At that point, it’s time to evaluate how much the house and on-deck solutions are diverging. If they are indeed diverging, then some changes can be applied to either or both solutions to correct the problem. The musicians then continue to settle in for a bit, and after that you can evaluate again. You can repeat this process until everybody is satisfied, or until you run out of time.

With a seasoned band and experienced audio human, this iteration can happen very fast. It’s not instant, though, which is another reason to actually budget enough time for it to happen. Sometimes that’s not an option, and you just have to “throw and go.” However, I have definitely been in situations where bands wanted to be very particular about a complex show…after they arrived with only 30 minutes left until downbeat. It’s not that I didn’t want to do everything to help them, it’s just that there wasn’t time for everything to be done. (Production craftspersons aren’t blameless, either. There are audio techs who seem to believe that all shows can be checked in the space of five minutes, and remain conspicuously absent from the venue until five minutes is all they have. Good luck with that, I guess.)

But…

If everybody does their homework, and is willing to spend an appropriate amount of prep-time on show day, your chances of enjoying some convergent solutions are much higher.


A Vocal Addendum

Forget about all the “sexy” stuff. Get ’em loud, and let ’em bark.



This article is a follow-on to my piece regarding the unsuckification of monitors. In a small-venue context, vocal monitoring is probably more important than any other issue for the “on deck” sound. Perhaps surprisingly, I didn’t talk directly about vocals and monitors AT ALL in the previous article.

But let’s face it. The unsuckification post was long, and meant to be generalized. Putting a specific discussion of vocal monitoring into the mix would probably have pushed the thing over the edge.

I’ll get into details below, but if you want a general statement about vocal monitors in a small-venue, “do-or-die,” floor-wedge situation, I’ll be happy to oblige: You do NOT need studio-quality vocals. You DO need intelligible, reasonably smooth vocals that can be heard above everything else. Forget the fluff – focus on the basics, and do your preparation diligently.

Too Loud Isn’t Loud Enough

One of the best things to ever come out of Pro Sound Web was this quiz on real-world monitoring. In particular, answer “C” on question 16 (“What are the main constituents of a great lead vocal mix?”) has stuck with me. Answer C reads: “The rest of the band is hiding 20 feet upstage because they can’t take it anymore.”

In my view, the more serious rendering of this is that vocal monitors should, ideally, make singing effortless. Good vocal monitors should allow a competent vocalist to deliver their performance without straining to hear themselves. To that end, an audio human doing show prep should be trying to get the vocal mics as loud as is practicable. In the ideal case, a vocal mic routed through a wedge should present no audible ringing, while also offering such a blast of sound that the singer will ask for their monitor send to be turned down.

(Indeed, one of my happiest “monitor guy” moments in recent memory occurred when a vocalist stepped up to a mic, said “Check!”, got a startled look on his face, and promptly declared that “Anyone who can’t hear these monitors is deaf.”)

Now, wait a minute. Doesn’t this conflict with the idea that too much volume and too much gain are a problem?

No.

Vocal monitors are a cooperative effort amongst the audio human, the singer(s), and the rest of the band. The singer has to have adequate power to perform with the band. The band has to run at a reasonable volume to play nicely with the singer. If those two conditions are met (and assuming there are no insurmountable equipment or acoustical problems), getting an abundance of sound pressure from a monitor should not require a superhuman effort or troublesome levels of gain.

So – if you’re prepping for a band, dial up as much vocal volume as you can without causing a loop-gain problem. If the vocals are tearing people’s heads off, you can always turn them down. Don’t be lazy! Get up on deck and listen to what it sounds like. If there are problem areas at certain frequencies, then get on the appropriate EQ and tame them. Yes, the feedback points can change a bit when things get moved around and people get in the room, but that’s not an excuse to just sit on your hands. Do some homework now, and life will be easier later.

Don’t Squeeze Me, Bro

A sort of corollary to the above is that anything which acts to restrict your vocal monitor volume is something you should think twice about. If you were thinking about inserting a compressor in such a way that it would affect monitor world, think again.

A compressor reduces dynamic range by reducing gain on signals that exceed a preset threshold. For a vocalist, this means that the monitor level of their singing may no longer track in a 1:1 ratio with their output at the mic. They sing with more force, but the return through the monitors doesn’t get louder at the same rate. If the singer is varying their dynamics to track with the band, this failure of the monitors to stay “in ratio” can cause the vocals to become swamped.
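To make the “no longer 1:1” point concrete, here’s a minimal static-compression sketch in the dB domain. The threshold, ratio, and function name are arbitrary examples of my own, not any particular unit’s behavior:

```python
# A static compressor curve: below the threshold, output tracks
# input 1:1; above it, output rises at only 1/ratio dB per dB.

def compress_db(level_db, threshold_db=-20.0, ratio=4.0):
    """Return the output level (dB) for a given input level (dB)."""
    if level_db <= threshold_db:
        return level_db                 # below threshold: 1:1
    over = level_db - threshold_db
    return threshold_db + over / ratio  # above threshold: squashed

# A singer pushes 12 dB harder, but at 4:1 the compressed feed only
# rises by 3 dB -- the monitors no longer track their dynamics.
print(compress_db(-20.0))  # -20.0
print(compress_db(-8.0))   # -17.0
```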

And, in certain situations, monitors that don’t track with vocal dynamics can cause a singer to hurt themselves. They don’t hear their voice getting as loud as it should, so they push themselves harder – maybe even to the point that they blow out their voice.

Of course, you could try to compensate for the loss of level by increasing the output or “makeup” gain on the compressor, but oh! There’s that “too much loop gain” problem again. (Compressors do NOT cause feedback. That’s a myth. Steady-state gain applied to compensate for compressor-applied, variable gain reduction, on the other hand…)

The upshot?

Do NOT put a compressor across a vocalist such that monitor world will be affected. (The exception is if you have been specifically asked to do so by an artist that has had success with the compressor during a real, “live-fire” dress rehearsal.) If you don’t have an independent monitor console or monitor-only channels, then bus the vocals to a signal line that’s only directly audible in FOH, and compress that signal line.

The Bark Is The Bite

One thing I have been very guilty of in the past, and am still sometimes guilty of, is dialing up a “sounds good in the studio” vocal tone for monitor world. That doesn’t sound like it would be a problem, but it can be a huge one.

The issue at hand is that what sounds impressive in isolation often isn’t so great when the full band is blasting away. This is very similar to guitarists who have “bedroom” tone. When we’re only listening to a single source, we tend to want that source to consume the entire audible spectrum. We want that single instrument or voice to have extended lows and crisp, snappy HF information. We will sometimes dig out the midrange in order to emphasize the extreme ends of the audible spectrum. When all we’ve got to listen to is one thing, this can all sound very “sexy.”

And then the rest of the band starts up, and our super-sexy, radio-announcer vocals become the wrong thing. Without a significant amount of midrange “bark,” the parts of the spectrum truly responsible for vocal audibility get massacred by the guitars. And drums. And keyboards. All that’s left poking through is some sibilance. Then, when you get on the gas to compensate, the low-frequency material starts to feed back (because it’s loud, and the mic probably isn’t as directional as you think at low frequencies), and the high-frequency material also starts to ring (because it’s loud, and probably has some nasty peaks in it as well).

Yes – a good monitor mix means listenable vocals. You don’t want mud or nasty “clang” by any means, but you need the critical midrange zone – say, 500 Hz to 3 kHz or 4 kHz – to be at least as loud as the rest of the audible spectrum in the vocal channel. Midrange that jumps at you a little bit doesn’t sound as refined as a studio recording, but this isn’t the studio. It’s live sound. Especially on the stage, hi-fi tone often has to give way to actually being able to differentiate the singer. There are certainly situations where studio-style vocal tone can work on deck, but those circumstances are rarely encountered with rock bands in small spaces.

Stay Dry

An important piece of vocal monitoring is intelligibility. Intelligibility has to do with getting the oh-so-important midrange in the right spot, but it also has to do with signals starting and stopping. Vocal sounds with sharply defined start and end points are easy for listeners to parse for words. As the beginnings and ends of vocal sounds get smeared together, the difficulty of parsing the language goes up.

Reverb and delay (especially) cause sounds to smear in the time domain. I mean, that’s what reverb and delay are for.

But as such, they can step on vocal monitoring’s toes a bit.

If it isn’t a specific need for the band, it’s best to leave vocals dry in monitor world. Being able to extract linguistic information from a sound is a big contributor to the perception that something is loud enough or not. If the words are hard to pick out because they’re all running together, then there’s a tendency to run things too hot in order to compensate.

The first step with vocal monitors is to get them loud enough. That’s the key goal. After that goal is met, then you can see how far you can go in terms of making things pretty. Pretty is nice, and very desirable, but it’s not the first task or the most important one.


Unsuckifying Your Monitor Mix

Communicate well, and try not to jam too much into any one mix.



Monitors can be a beautiful thing. Handled well, they can elicit bright-eyed, enthusiastic approbations like “I’ve never heard myself so well!” and “That was the best sounding show EVER!” They can very easily be the difference between a mediocre set and a killer show, because of how much they can influence the musicians’ ability to play as a group.

I’ve said it to many people, and I’m pretty sure I’ve said it here: As an audio-human, I spend much more time worrying about monitor world than FOH (Front Of House). If something is wrong out front, I can hear it. If something is wrong in monitor world, I won’t hear it unless it’s REALLY wrong. Or spiraling out of control.

…and there’s the issue. Bad monitor mixes can do a lot of damage. They can make the show less fun for the musicians, or totally un-fun for the musicians, or even cause so much on-stage wreckage that the show for the audience becomes a disaster. On top of that, the speed at which the sound on deck can go wrong can be startlingly high. If you’ve ever lost control of monitor world, or have been a musician in a situation where someone else has had monitor world “get away” from them, you know what I mean. When monitors become suckified, so too does life.

So – how does one unsuckify (or, even better, prevent suckification of) monitor world?

Foundational Issues To Prevent Suckification

Know The Inherent Limits On The Engineer’s Perception

At the really high-class gigs, musicians and production techs alike are treated to a dedicated “monitor world” or “monitor beach.” This is an independent or semi-independent audio control rig that is used to mix the show for the musicians. There are even some cases where there are multiple monitor worlds, all run by separate people. These folks are likely to have a setup where they can quickly “solo” a particular monitor mix into their own set of in-ears, or a monitor wedge which is similar to what the musicians have. Obviously, this is very helpful to them in determining what a particular performer is hearing.

Even so, the monitor engineer is rarely in exactly the same spot as any particular musician. Consequently, if the musicians are on wedges, even listening to a cue wedge doesn’t exactly replicate the total acoustic situation being experienced by the players.

Now, imagine a typical small-venue gig. There’s probably one audio human doing everything, and they’re probably listening mostly to the FOH PA. The way that FOH combines with monitor world can be remarkably different out front versus on deck. If the engineer has a capable console, they can solo up a complete monitor mix, probably through a pair of headphones. (A cue wedge is pretty unlikely to have been set up. They’re expensive and consume space.) A headphone feed is better than nothing, but listening to a wedge mix in a set of cans only tells an operator so much. Especially when working on a drummer’s mix, listening to the feed through a set of headphones has limited utility. A guy or gal might set up a nicely balanced blend, but have no real way of knowing if that mix is even truly audible at the percussionist’s seat.

If you’re not so lucky as to have a flexible console, your audio human will be limited to soloing individual inputs.

The point is that, at most small-venue shows, an audio human at FOH can’t really be expected to know what a particular mix sounds like as a total acoustic event. Remote-controlled consoles can fix this temporarily, of course, but as soon as the operator leaves the deck…all bets are off. If you’re a musician, assume that the engineer does NOT have a thoroughly objective understanding of what you’re hearing. If you’re an audio human, make the same assumption about yourself. Having made those assumptions, be gentle with yourself and others. Recognize that anything “pre set” is just a wild guess, and further, recognize that trying to take a channel from “inaudible in a mix” to “audible” is going to take some work and cooperation.

Use Language That’s As Objective As Possible

Over the course of a career, audio humans create mental mappings between subjective statements and objective measurements. For instance, when I’m working with well-established monitor mixes, I translate requests like “Could I get just a little more guitar?” into “Could I get 3 dB more guitar?” This is a necessary thing for engineers to formulate for themselves, and it’s appropriate to expect that a pro-level operator has some ability to interpret subjective requests.

At the same time, though, it can make life much easier when everybody communicates using objective language. (Heck, it makes it easier if there’s two-way communication at all.)

For instance, let’s say you’re an audio human working with a performer on a monitor mix, and they ask you for “a little more guitar.” I strongly recommend making the change that you translate “a little more” into, and then stating that change (in objective terms) over the talkback. Saying something like, “Okay, that’s 3 dB more guitar in mix 2” creates a helpful dialogue. If that 3 dB more guitar wasn’t enough, the stating of the change opens a door for the musician to say that they need more. Also, there’s an opportunity for the musician’s perception to become calibrated to an objective scale – meaning that they get an intuitive sense for what a certain dB boost “feels” like. Another opportunity that arises is for you and the musician to become calibrated to each other’s terminology.

Beyond that, a two-way dialogue fosters trust. If you’re working on monitors and are asked for a change, making a change and then stating what you did indicates that you are trying to fulfill the musician’s wishes. This, along with the understanding that gets built as the communication continues, helps to mentally place everybody on the same team.

For musicians, as you’re asking for changes in your monitor mixes, I strongly encourage you to state things in terms of a scale that the engineer can understand. You can often determine that scale by asking questions like, “What level is my vocal set at in my mix?” If the monitor sends are calibrated in decibels, the engineer will probably respond with a decibel number. If they’re calibrated in an arbitrary scale, then the reply will probably be an arbitrary number. Either way, you will have a reference point to use when asking for things, even if that reference point is a bit “coarse.” Even if all you’ve got is to request that something go from, say, “five to three,” that’s still functionally objective if the console is labeled using an arbitrary scale.

For decibels, a useful shorthand to remember is that 3 dB should be a noticeable change in level for something that’s already audible in your mix. “Three decibels” is a 2:1 power ratio, although you might personally feel that “twice as loud” is 6 dB (4:1) or even 10 dB (10:1).
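If you want to check those ratios for yourself, the conversion is one line in each direction (the helper names here are mine):

```python
import math

# Decibel <-> power-ratio helpers, to put numbers on the shorthand:
# 3 dB ~ 2:1 power, 10 dB = 10:1 power.

def db_to_power_ratio(db):
    return 10 ** (db / 10)

def power_ratio_to_db(ratio):
    return 10 * math.log10(ratio)

print(round(db_to_power_ratio(3), 2))   # ~2.0 (2:1 power)
print(round(db_to_power_ratio(10), 2))  # 10.0 (10:1 power)
```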

Realtime Considerations To Prevent And Undo Suckification

Too Much Loop Gain, Too Much Volume

Any instrument or device that is substantially affected by the sound from a monitor wedge, and is being fed through that same wedge, is part of that mix’s “loop gain.” Microphones, guitars, basses, acoustic drums, and anything else that involves body or airborne resonance is a factor. When their output is put through a monitor speaker, these devices combine with the monitor signal path to form an acoustical, tuned circuit. In tuned circuits, the load impedance determines whether the circuit “rings.” As the load impedance drops, the circuit is more and more likely to ring or resonate for a longer time.

If that last bit made your eyes glaze over, don’t worry. The point is that more gain (turning something up in the mix) REDUCES the impedance, or opposition, to the flow of sound in the loop. As the acoustic impedance drops, the acoustic circuit is more likely to ring. You know, feed back. *SQEEEEEALLLL* *WHOOOOOwoowooooOOOM*

Anyway.

The thing for everybody to remember – audio humans and musicians alike – is that a monitor mix feeding a wedge becomes progressively more unstable as gain is added. As ringing sets in, the sound quality of the mix drops off. Sounds that should start and then stop quickly begin to “smear,” and with more gain, certain frequency ranges become “peaky” as they ring. Too much gain can sometimes begin to manifest itself as an overall tone that seems harsh and tiring, because sonic energy in an irritating range builds up and sustains itself for too long. Further instability results in audible feedback that, while self-correcting, sounds bad and can be hard for an operator to zero-in on. As instability increases further, the mix finally erupts into “runaway” feedback that’s both distracting and unnerving to everyone.
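A crude way to see the gain-versus-stability relationship is to treat the mic-to-wedge path as a loop that multiplies the signal on every pass. This toy sketch ignores frequency, phase, and acoustics entirely; it only shows decay versus runaway:

```python
# Each trip around mic -> console -> wedge -> mic multiplies the
# signal by the loop gain. Below 1.0, a ring dies away; at or above
# 1.0, it runs away. Purely illustrative numbers.

def ring_decay(loop_gain, passes=20):
    """Level (linear) of an impulse over repeated trips round the loop."""
    level = 1.0
    history = []
    for _ in range(passes):
        history.append(level)
        level *= loop_gain
    return history

stable = ring_decay(0.7)    # decays: the ring fades out
unstable = ring_decay(1.1)  # grows: runaway feedback
print(stable[-1] < 0.01, unstable[-1] > 5.0)  # True True
```

Notice that even a loop gain just under 1.0 takes a long time to decay – which is the “smearing” and “peakiness” described above, well before audible feedback sets in.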

The fix, then, is to keep each mix’s loop gain as low as possible. This often translates into keeping things OUT of the monitors.

As an example, there’s a phenomenon I’ve encountered many times where folks start with vocals that work…and then add a ton of other things to their feed. These other sources are often far more feedback resistant than their vocal mic can be, and so they can apply enough gain to end up with a rather loud monitor mix. Unfortunately, they fall in love with the sound of that loud mix, except for the vocals which have just been drowned. As a result, they ask for the vocals to be cranked up to match. The loop gain on the vocal mic increases, which destabilizes the mix, which makes monitor world harder to manage.

As an added “bonus,” that blastingly loud monitor mix is often VERY audible to everybody else on stage, which interferes with their mixes, which can cause everybody else to want their overall mix volume to go up, which increases loop gain, which… (You get the idea.)

The implication is that, if you’re having troubles with monitors, a good thing to do is to start pulling things out of the mixes. If the last thing you did before monitor world went bad was, say, adding gain to a vocal mic, try reversing that change and then rebuilding things to match the lower level.

And not to be harsh or combative, but if you’re a musician and you require high-gain monitors to even play at all, then what you really have is an arrangement, ensemble, ability, or equipment problem that is YOURS to fix. It is not an audio-human problem or a monitor-rig problem. It’s your problem. This doesn’t mean that an engineer won’t help you fix it, it just means that it’s not their ultimate responsibility.

Also, take notice of what I said up there: High-GAIN monitors. It is entirely possible to have a high-gain monitor situation without also having a lot of volume. For example, 80 dB SPL C is hardly “rock and roll” loud, but getting that output from a person who sings at the level of a whisper (50 – 60 dB SPL C) requires 20 – 30 dB of boost. For the acoustical circuits that I’ve encountered in small venues, that is definitely a high-gain situation. Gain is the relative level increase or decrease applied to a signal. Volume is the acoustic output that results once that gain is applied. The two are related, but the relationship isn’t fixed at any particular gain setting.
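The whisper arithmetic above is trivial, but spelling it out helps keep gain and volume distinct (the helper function is hypothetical):

```python
# Required gain is just the gap between the target SPL and what the
# source delivers acoustically: same target volume, wildly different
# gain depending on the performer.

def required_gain_db(source_spl_db, target_spl_db):
    return target_spl_db - source_spl_db

print(required_gain_db(55, 80))  # 25 dB of boost for a whisperer
print(required_gain_db(95, 80))  # -15: a loud singer needs a cut
```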

Conflicting Frequency Content

Independent of being in a high-gain monitor conundrum, you can also have your day ruined by masking. Masking is what occurs when two sources with similar frequency content become overlaid. One source will tend to dominate the other, and you lose the ability to hear both sources at once. I’ve had this happen to me on numerous occasions with pianists and guitar players. They end up wanting to play at the same time, using substantially the same notes, and the sonic characteristics of the two instruments can be surprisingly close. What you get is either too-loud guitar, too-loud piano, or an indistinguishable mash of both.

In a monitor-mix situation, it’s helpful to identify when multiple sources are all trying to occupy the same sonic space. If sources can’t be distinguished from one another until one sound just gets obliterated, then you may have a frequency-content collision in progress. These collisions can result in volume wars, which can lead to high-gain situations, which result in the issues I talked about in the previous section. (Monitor problems are vicious creatures that breed like rabbits.)

After being identified, frequency-content issues can be solved in a couple of different ways. One way is to use equalization to alter the sonic content of one source or another. For instance, a guitar and a bass might be stepping on each other. It might be decided that the bass sound is fine, but the guitar needs to change. In that case, you might end up rolling down the guitar’s bottom end, and giving the mids a push. Of course, you also have to decide where this change needs to take place. If everything was distinct before the monitor rig got involved, then some equalization change from the audio human is probably in order. If the problem largely existed before any monitor mixes were established, then the issue likely lies in tone choice or song arrangement. In that case, it’s up to the musicians.
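As a sketch of the “roll down the guitar’s bottom end” move, here’s a one-pole high-pass filter in the standard RC form. The cutoff and sample rate are arbitrary example values, not a recommendation:

```python
import math

# One-pole high-pass: y[n] = a * (y[n-1] + x[n] - x[n-1]),
# with a = RC / (RC + dt). Low frequencies (including DC) are
# attenuated; highs pass through.

def one_pole_highpass(samples, cutoff_hz, sample_rate=48_000):
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    a = rc / (rc + dt)
    out, y_prev, x_prev = [], 0.0, 0.0
    for x in samples:
        y = a * (y_prev + x - x_prev)
        out.append(y)
        y_prev, x_prev = y, x
    return out

# Feed it a DC step (pure "bottom end"): the output decays away,
# which is exactly the low-frequency content being rolled off.
stepped = one_pole_highpass([1.0] * 1000, cutoff_hz=120)
```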

One thing to be aware of is that many small-venue mix rigs have monitor sends derived from the same channel that feeds FOH. While this means that the engineer’s channel EQ can probably be used to help fix a frequency collision, it also means that the change will affect the FOH mix as well. If FOH and monitor world sound significantly different from each other, a channel EQ configuration that’s correct for monitor world may not be all that nice out front. Polite communication and compromise are necessary from both the musicians and the engineer in this case. (Certain technical tricks are also possible, like “multing” a problem source into a monitors-only channel.)

Lack Of Localization

Humans have two ears so that we can determine the location and direction of sounds. In music, one way for us to distinguish sources is for us to recognize those instruments as coming from different places. When localization information gets lost, then distinguishing between sources requires more separation in terms of overall volume and frequency content. If that separation isn’t possible to get, then things can become very muddled.

This relates to monitors in more than one way.

One way is a “too many things in one place that’s too loud” issue. In this instance, a monitor mix has more and more put into it, at a high enough volume that the monitor obscures the other sounds on deck. What the musician originally heard as multiple, individually localized sources is now a single source – the wedge. The loss of localization information may mean that frequency-content collisions become a problem, which may lead to a volume-war problem, which may lead to a loop-gain problem.

Another possible conundrum is “too much volume everywhere.” This happens when a particular source gets put through enough wedges at enough volume for it to feel as though that single source is everywhere. This can ruin localization for that particular source, which can also result in the whole cascade of problems that I’ve already alluded to.

Fixing a localization problem pretty much comes down to having sounds occupy their own spatial point as much as possible. The first thing to do is to figure out if all the volume used for that particular source is actually necessary in each mix. If the volume is basically necessary, then it may be feasible to move that volume to a different (but nearby) monitor mix. For some of the players, that sound will get a little muddier and a touch quieter, but the increase in localization may offset those losses. If the volume really isn’t necessary, then things get much easier. All that’s required is to pull back the monitor feeds from that source until localization becomes established again.

It’s worth noting that “extreme” cases are possible. In those situations, it may be necessary to find a way to generate the necessary volume from a single, localized source that’s audible to everyone on deck. A well-placed sidefill can do this, and an instrument amplifier in the correct position can take this role if a regular sidefill can’t be conjured up.

Wrapping Up

This can be a lot to take in, and a lot to think about. I will freely confess to not always having each of these concepts “top of mind.” Sometimes, audio turns into a pressure situation where both musicians and techs get chased into corners. It can be very hard for a person who’s not on deck to figure out what particular issue is in effect. For folks without a lot of technical experience who play or sing, identifying a problem beyond “something’s not right” can be too much to ask.

In the heat of the moment, it’s probably best to simply remember that yes, monitors are there to be used – but not to be overused. Effective troubleshooting is often centered on taking things out of a misbehaving equation until the equation begins to behave again. So, if you want to unsuckify your monitors, try taking as much out of them as possible. You may be surprised at what actually ends up working just fine.


Mysteriously Clean

“Clean sound” has to do with more than just volume. Where that volume goes is also important.


So – you might be wondering what that picture of V-drum cymbals has to do with all this. I’ll gladly tell you.

Just a couple of weeks ago, the band Sake Shot was playing at my regular gig. They were the opening act, and the drummer decided that the changeover would be facilitated by the simplicity and speed of just pulling his E-kit off the deck.

During Sake Shot’s set, Brian from The Daylates walked up to FOH (Front Of House) control. After saying hello, he made a single comment that caused me to do some thinking. What he said was: “The drums sound great. It’s so clean!”

He was absolutely correct, of course. The drums were very clear, and highly separated from the other sources on stage. If the sound of the drums had been a photograph, the image would have been razor sharp. The question was, “Why?” It wasn’t just volume. The mix was somewhat quieter than some other rock bands I’ve done, but we were definitely louder than a jazz trio playing a hotel lobby (if you get my drift). No…there were other factors in play besides how much SPL (Sound Pressure Level) was involved.

I’ll start out by putting it this way: It’s not just how much volume there is. It’s also about where that volume goes.

Let me explain.

Drums, Drums, Everywhere

If you were to take a measurement microphone and walk around an acoustic drumkit, I’m reasonably sure that the overall plot of SPL levels would look something like this:

[Figure: estimated polar plot of SPL coverage for an acoustic drumkit]

Behind the drummer, you might lose about 6 dB (or maybe not even that much), but overall, the drums just go everywhere. Sound POURS from the kit in all directions. In other words, the drumkit is NOT directional in any real way. This has a number of consequences:

1) Sound (and LOTS of it) travels forward from the kit, into the most sensitive part of the downstage vocal mics’ polar patterns. What’s wanted in those vocal mics is, of course, vocals. Anything that isn’t vocals that makes it into the mic is “noise,” which partially washes out the desired vocal signal.

2) The same sound that just hit the vocal mics continues forward to arrive at the ears of the audience.

3) That same sound also travels through the PA, courtesy of the vocal mics. Especially in a system that uses digital processing of some kind, latency is introduced. The sonic event being reproduced by the PA arrives slightly later than the acoustical event.

4) The sound traveling in directions other than straight towards the audience is – in a small venue – extremely likely to meet some sort of boundary. Some of these boundaries may have significant acoustical absorption qualities, and some of them may have almost no absorption at all. The boundaries that mostly act as reflectors (hard walls, hard ceilings, hard floors, etc) cause the sound to re-emit into the room, and that re-emitted sound can travel into the audience’s ears. These reflections also arrive later than the direct acoustical radiation from the kit. The reflections may exist in the closely packed, smooth wash of reverberation, or they might manifest as distinct “slaps” or “flutter.”

The upshot is that you have sonic events with multiple arrivals. One particular snare hit makes several journeys to the ears of the audience members, and what would otherwise be a nice, clean “crack” becomes smeared in time to some extent. Each drum transient gets sonically blurred, which means inter- and intra-drum events become harder to discern from each other. (Inter-drum events are hits on different drums, whereas intra-drum events are the beginnings and ends of sounds produced by one hit on one drum.)
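The multiple-arrival smear can be roughed out with a little arithmetic. The distances and the 5 ms of processing latency below are hypothetical small-room numbers; only the ~343 m/s speed of sound is a physical constant:

```python
SPEED_OF_SOUND = 343.0  # m/s, roughly, in a warm room

def arrival_ms(path_m, processing_latency_ms=0.0):
    """Arrival time of one sonic journey: acoustic travel plus system latency."""
    return path_m / SPEED_OF_SOUND * 1000.0 + processing_latency_ms

# Hypothetical journeys made by one snare hit to a listener 8 m from the kit:
direct = arrival_ms(8.0)                              # kit straight to the ear
via_pa = arrival_ms(7.0, processing_latency_ms=5.0)   # vocal mics -> digital PA
reflection = arrival_ms(8.0 + 6.0)                    # one bounce off a hard wall
for label, t in (("direct", direct), ("via PA", via_pa), ("reflection", reflection)):
    print(f"{label}: {t:.1f} ms")
```

Even with made-up geometry, the pattern holds: the same hit lands several times over a window of many milliseconds, which is exactly the blurring described above.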

In short, the reflected sound of the drumkit partially garbles the direct sound of the kit. On top of that, the drum sound is now partially garbling the vocals.

This isn’t necessarily a disaster. Bands and techs deal with it all the time, and it’s possible to get perfectly acceptable sonics with an acoustic drumkit in a small venue. The point of this article isn’t to sell electronic drums to everybody. Even so, the effects of an acoustic kit’s sound careening around a room can’t be ignored.

Directivity Matters

Now then.

What was different enough about Sake Shot’s set to make Brian say that the sound was really clean?

It really wasn’t the SPL involved. When it came right down to it, the monitor rig and PA system were creating enough level to make the V-drums sound reasonably like a regular kit. The key was where that SPL was going…directivity, in other words.

Most pro-audio loudspeakers are far more directional than a drumkit. Sure, if you walk around the back of a PA speaker, you’ll still hear something. Even so, the amount of “spill” is enormously reduced. Here’s my estimate of what the average SPL coverage of an “affordable, garden-variety” pro-audio box looks like.

[Figure: estimated polar plot of SPL coverage for an affordable, garden-variety pro-audio loudspeaker]

This is exceptionally important in the context of my regular gig, because the upstage and stage-right walls, along with a portion of the stage ceiling, are acoustically treated. Not only do the downstage monitors fire into the parts of the vocal mic patterns that are LEAST sensitive, they also fire into a boundary which is highly absorptive. Further, the drum monitors fire into the drummer’s ears, and partially into the absorptive back wall. There’s a lot less spill that can hit the reflective boundaries in the room.

What this means is that the non-direct arrivals of the E-kit’s sounds were – relative to an acoustic kit – very low in relation to the direct arrivals from the FOH PA. Further, there was very little “wash” in the vocal mics. All this added up to a sound that was very clean and defined, because each transient from the drums had a sharply defined beginning and end. This makes it much easier for a listener to figure out where drum sounds stop, and where other things (like vocal consonants) begin. Further, the vocal mics were generally delivering a rather higher signal-to-noise ratio than they otherwise might have been, which cleaned up the vocals AND the sound of the drums.
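As a back-of-the-envelope illustration of that signal-to-noise improvement, here's a sketch with hypothetical capsule levels (nothing at the actual gig was measured; the numbers just show the shape of the effect):

```python
import math

def power_sum_db(*levels_db):
    """Incoherent power sum of SPL contributions, in dB."""
    return 10 * math.log10(sum(10 ** (l / 10) for l in levels_db))

# Hypothetical levels arriving at a vocal mic capsule, in dB SPL:
vocal = 95.0
acoustic_kit_bleed = 88.0   # non-directional kit pouring sound forward
ekit_bleed = 72.0           # directional monitors firing into treated walls

for label, bleed in (("acoustic kit", acoustic_kit_bleed), ("E-kit", ekit_bleed)):
    print(f"{label}: vocal-to-bleed ratio {vocal - bleed:.0f} dB, "
          f"capsule total {power_sum_db(vocal, bleed):.1f} dB SPL")
```

The second case is the "rather higher signal-to-noise ratio" in action: the wash riding along inside the vocal channel is a small fraction of what it would otherwise be.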

All the different sounds from the show were doing a lot less “running into each other.”

As such, the mysteriously clean sound of the show wasn’t so mysterious after all.


Speed Fishing

“Festival Style” reinforcement means you have to go fast and trust the musicians.


Last Sunday was the final day of the final iteration of a local music festival called “The Acoustic All-Stars.” It’s a celebration of music made with traditional or neo-traditional instruments – acoustic-electric guitars, fiddles, drums, mandolins, and all that sort of thing. My perception is that the musicians involved have a lot of anticipation wrapped up in playing the festival, because it’s a great opportunity to hear friends, play for friends, and make friends.

Of course, this anticipation can create some pressure. Each act’s set has a lot riding on it, but there isn’t time to take great care with any one setup. The longer it takes to dial up the band, the less time they have to play…and there are no “do overs.” There’s one shot, and it has to be the right shot for both the listeners and the players.

The prime illustrator for all this on Sunday was Jim Fish. Jim wanted to use his slot to the fullest, and so assembled a special team of musicians to accompany his songs. The show was clearly a big deal for him, and he wanted to do it justice. Trying to, in turn, do justice to his desires required that a number of things take place. It turns out that what had to happen for Jim can (I think) be generalized into guidelines for other festival-style situations.

Pre-Identify The Trouble Spots, Then Make The Compromises

The previous night, Jim had handed me a stage plot. The plot showed six musicians, all singing, wielding a variety of acoustic or acoustic-electric instruments. A lineup like that can easily have its show wrecked by feedback problems, because of the number of open mics and highly-resonant instruments on the deck. Further, the mics and instruments are often run at (relatively) high-gain. The PA and monitor rig need to help with getting some more SPL (Sound Pressure Level) for both the players and the audience, because acoustic music isn’t nearly as loud as a rock band…and we’re in a bar.

Also, there would be a banjo on stage right. Getting a banjo to “concert level” can be a tough test for an audio human, depending on the situation.

Now, there’s no way you’re going to get “rock” volume out of a show like this – and frankly, you don’t want to get that kind of volume out of it. Acoustic music isn’t about that. Even so, the priorities were clear:

I needed a setup that was based on being able to run with a total system gain that was high, and that could do so with as little trouble as possible. As such, I ended up deploying my “rock show” mics on the deck, because they’re good for getting the rig barking in a pinch. The thing with the “rock” mics is that they aren’t really sweet-sounding transducers, which is unfortunate in an acoustic-country situation. A guy would love to have the smoothest possible sound for it all, but pulling that off in a potentially high-gain environment takes time.

And I would not have that time. Sweetness would have to take a back seat to survival.

Be Ready To Abandon Bits Of The Plan

On the day of the show, the lineup ended up not including two people: The bassist and the mandolin player. It was easy to embrace this, because it meant lower “loop gain” for the show.

I also found out that the fiddle player didn’t want to use her acoustic-electric fiddle. She wanted to hang one particular mic over her instrument, and then sing into that as well. We had gone with a similar setup at a previous show, and it had definitely worked. In this case, though, I was concerned about how it would all shake out. In the potentially high-gain environment we were facing, pointing this mic’s not-as-tight polar pattern partially into the monitor wash held the possibility for creating a touchy situation.

Now, there are times to discuss the options, and times to just go for it. This was a time to go for it. I was working with a seasoned player who knew what she wanted and why. Also, I would lose one more vocal mic, which would lower the total loop-gain in the system and maybe help us to get away with a different setup. I knew basically what I was getting into with the mic we chose for the task.

And, let’s be honest, there were only minutes to go before the band’s set-time. Discussing the pros and cons of a sound-reinforcement approach is something you do when you have hours or days of buffer. When a performer wants a simple change in order to feel more comfortable, then you should try to make that change.

That isn’t to say that I didn’t have a bit of a backup plan in mind in case things went sideways. When you’ve got to make things happen in a hurry, you need to be ready to declare a failing option as being unworkable and then execute your alternate. In essence, festival-style audio requires an initial plan, some kind of backup plan, the willingness to partially or completely drop the original plan, and an ability to formulate a backup plan to the new plan.

The fiddle player’s approach ended up working quite nicely, by the way.

Build Monitor World With FOH Open

If there was anything that helped us pull off Jim’s set, it was this. In a detail-oriented situation, it can be good to start with your FOH (Front Of House) channels/ sends/ etc. muted (or pulled back) while you build mixes for the deck. After the monitors are sorted out, then you can carefully fill in just what you need to with FOH. There are times, though, that such an approach is too costly in terms of the minutes that go by while you execute. This was one such situation.

In this kind of environment, you have to start by thinking not in terms of volume, but in terms of proportions. That is, you have to begin with proportions as an abstract sort of thing, and then arrive at a workable volume with all those proportions fully in effect. This works in an acoustic music situation because the PA being heavily involved is unlikely to tear anyone’s head off. As such, you can use the PA as a tool to tell you when the monitor mixes are basically balanced amongst the instruments.

It works like this:

You get all your instrument channels set up so that they have equal send levels in all the monitors, plus a bit of a boost in the wedge that corresponds to that instrument’s player. You also set their FOH channel faders to equal levels – probably around “unity” gain. At this point, the preamp gains should be as far down as possible. (I’m spoiled. I can put my instruments on channels with a two-stage preamp that lets me have a single-knob global volume adjustment from silence to “preamp gain +10 dB.” It’s pretty sweet.)

Now, you start with the instrument that’s likely to have the lowest gain before feedback. You begin the adventure there because everything else is going to have to be built around the maximum appropriate level for that source. If you start with something that can get louder, then you may end up discovering that you can’t get a matching level from the more finicky channel without things starting to ring. Rather than being forced to go back and drop everything else, it’s just better to begin with the instrument that will be your “limiting factor.”
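That ordering rule can be sketched in a few lines. The headroom numbers here are invented for illustration; in practice you estimate gain-before-feedback from experience or by ringing out the rig, not from a lookup table:

```python
# Hypothetical gain-before-feedback headroom for each source, in dB.
headroom = {"banjo mic": 6.0, "fiddle mic": 9.0, "vocal mic": 12.0,
            "guitar DI": 25.0}

# The least-forgiving source sets the ceiling for the whole show, so dial
# it in first, then match everything else to its acoustic level by ear.
for name in sorted(headroom, key=headroom.get):
    print(f"dial in next: {name} ({headroom[name]:.0f} dB before feedback)")
```

The sort is the whole trick: by starting at the bottom of the list, you never discover too late that your "limiting factor" channel can't keep up with everything you already set.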

You roll that first channel’s gain up until you’ve got a healthy overall volume for the instrument without feedback. Remember, FOH and monitor world should both be up. If you feel like your initial guess on FOH volume is blowing past the monitors too much (or getting swamped in the wash), make the adjustment now. Set the rest of the instruments’ FOH faders to that new level, if you’ve made a change.

Now, move on to the subsequent instruments. In your mind, remember what the overall volume in the room was for the first instrument. Roll the instruments’ gains up until you get to about that level on each one. Keep in mind that what I’m talking about here is the SPL, not the travel on the gain knob. One instrument might be halfway through the knob sweep, and one might be a lot lower than that. You’re trying to match acoustical volume, not preamp gain.

When you’ve gone through all the instruments this way, you should be pretty close to having a balanced instrument mix in both the house and on deck. Presetting your monitor and FOH sends, and using FOH as an immediate test of when you’re getting the correct proportionality is what lets you do this.

And it lets you do it in a big hurry.

Yes, there might be some adjustments necessary, but this approach can get you very close without having to scratch-build everything. Obviously, you need to have a handle on where the sends for the vocals have to sit, and your channels need to be ready to sound decent through both FOH and monitor-world without a lot of fuss…but that’s homework you should have done beforehand.

Trust The Musicians

This is probably the nail that holds the whole thing together. Festival-style (especially in an acoustic context) does not work if you aren’t willing to let the players do their job, and my “get FOH and monitor world right at the same time” trick does NOT work if you can’t trust the musicians to know their own music. I generally discourage audio humans from trying to reinvent a band’s sound anyway, but in this kind of situation it’s even more of something to avoid. Experienced acoustic music players know what their songs and instruments are supposed to sound like. When you have only a couple of minutes to “throw ‘n go,” you have to be able to put your faith in the music being a thing that happens on stage. The most important work of live-sound does NOT occur behind a console. It happens on deck, and your job is to translate the deck to the audience in the best way possible.

In festival-style acoustic music, you simply can’t “fix” everything. There isn’t time.

And you don’t need to fix it, anyway.

Point a decent mic at whatever needs miking, put a working, active DI on the stuff that plugs in, and then get out of the musicians’ way.

They’ll be happier, you’ll be happier, you’ll be much more likely to stay on schedule…it’s just better to trust the musicians as much as you possibly can.


The Party-Band Setup To Rule Lots Of Them

A guest post for Schwilly Family Musicians.

“Party-bands” can make a fair bit of money playing at upmarket events. A setup that lets you pick just about any arbitrary volume to play at can help you secure a wider variety of bookings. For a full article on all this, pay a visit to Schwilly Family Musicians.


What Can You Do For Two People?

Quite a bit, actually, because even the small things have a large effect.


MiNX is a treat to see on the show schedule. They’re not just a high-energy performance, but a high-energy performance delivered by only two people, and without resorting to ear-splitting volume. How could an audio-human not appreciate that?

A MiNX show is hardly an exercise in finding the boundaries of one’s equipment. Their channel count is only slightly larger than a singer-songwriter open mic. It looks something like this:

  1. Raffi Vocal Mic
  2. Ischa Vocal Mic
  3. Guitar Amp Mic
  4. Acoustic Guitar DI
  5. Laptop DI

That’s it. When you compare those five inputs with the unbridled hilarity that is a full rock band with 3+ vocals, two guitars, a bass rig, keys, and full kit of acoustic drums, a bit of temptation creeps in. You get the urge to think that because the quantity of things to potentially manage has gone down, the amount of attention that you have to devote to the show is reduced. This is, of course, an incorrect assumption.

But why?

Low Stage Volume Magnifies FOH

A full-on rock band tends to produce a fair amount of stage volume. In a small room, this stage volume is very much “in parallel” with the contribution from the PA. If you mute the PA, you may very well still have concert-level SPL (Sound Pressure Level) in the seats. There are plenty of situations where, for certain instruments, the contribution from the PA is nothing, or something but hardly audible, or something audible but in a restricted frequency area that just “touches up” the audio from stage.

So, you might have 12 things connected to the console, but only really be using – say – the three vocal channels. Everything else could very well be taking care of itself (or mostly so), and thus the full-band mix is actually LESS complex and subtle than a MiNX-esque production. The PA isn’t overwhelmingly dominant for a lot of the channels, and so changes to those channel volumes or tones are substantially “washed out.”

But that’s not the way it is with MiNX and acts similar to them.

In the case of a production like MiNX, the volume coming off the stage is rather lower than that of a typical rock act. It’s also much more “directive.” With the exception of the guitar amplifier, everything else is basically running through the monitors. Pro-audio monitors – relative to most instruments and instrument amps – are designed to throw audio in a controlled pattern. There’s much less “splatter” from sonic information that’s being thrown rearward and to the sides. What this all means is that even a very healthy monitor volume can be eclipsed by the PA without tearing off the audience’s heads.

That is, unlike a typical small-room rock show, the audience can potentially be hearing a LOT of PA relative to everything else.

And that means that changes to FOH (Front Of House) level and tonality are far less washed out than they would normally be.

And that means that little changes matter much more than they usually do.
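Here's a rough power-sum sketch of why a fader move gets "washed out" on a loud stage but comes through almost fully on a quiet one. All the SPL figures are hypothetical:

```python
import math

def power_sum_db(*levels_db):
    """Incoherent power sum of SPL contributions, in dB."""
    return 10 * math.log10(sum(10 ** (l / 10) for l in levels_db))

# Push one channel's PA contribution up 3 dB (95 -> 98 dB SPL) against
# two hypothetical stage levels: a loud rock stage and a quiet stage.
for stage_spl in (95.0, 80.0):
    before = power_sum_db(stage_spl, 95.0)
    after = power_sum_db(stage_spl, 98.0)
    print(f"stage at {stage_spl:.0f} dB SPL: "
          f"net change {after - before:.2f} dB in the seats")
```

When the stage matches the PA, the 3 dB move nets well under 2 dB out front; when the PA dominates, nearly the whole 3 dB arrives at the audience. That's the "magnification" in numbers.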

You’ve Got To Pay Attention

It’s easy to be taken by surprise by this. Issues that you might normally let go suddenly become fixable, but you might not notice the first few go-arounds because you’re just used to letting those issues slide. Do the show enough times, though, and you start noticing things. For instance, the last time I worked on a MiNX show was when I finally realized that some subtle dips at 2.5 kHz in the acoustic guitar and backing tracks allowed me to run those channels a bit hotter without stomping on Ischa’s vocals. This allows for a mix that sounds less artificially “separated,” but still retains intelligibility.

That’s a highly specific example, but the generalized takeaway is this: An audio-human can be tempted to just handwave a simpler, quieter show, but that really isn’t a good thing to do. Less complexity and lower volume actually means that the details matter more than ever…and beyond that, you actually have the golden opportunity to work on those details in a meaningful way.

The time when the tech REALLY needs to be paying attention to the small details of the mix is when the PA system’s “tool metaphor” changes from a sledgehammer to a precision scalpel.

When you’ve only got a couple of people on deck, try hard to stay sharp. There might be a lot you can do for ’em, and for their audience.


Vocal Processors, And The Most Dangerous Knob On Them

If you were wondering, the most dangerous knob is the one that controls compression.


Not every noiseperson is a fan of vocal processors.

(Vocal processors, if you didn’t know, are devices that are functionally similar to guitar multi-fx units – with the exception that they expect input to come from a vocal mic, and so include a microphone preamp.)

Vocal processors can be deceptively powerful devices, and as such, can end up painting an audio-human into a corner that they can’t get out of. The other side of that coin is that they can allow you to intuitively dial up a sound that you like, without you having to translate your intuitive choices into technical language while at a gig.

What I mean by that last bit is this: Let’s say that you like a certain kind of delay effect on your voice. There’s a specific delay time that just seems perfect, a certain number of repeat echoes that feels exactly right, an exact wet/ dry mix that gives you goosebumps, and an effect tonality that works beautifully for you. With your own vocal processor, you can go into rehearsal and fiddle with the knobs for as long as it takes to get exactly that sound. Further, you don’t have to be fully acquainted with what all the settings mean in a scientific sense. You just try a bit more or less of this or that, and eventually…you arrive. If you then save that sound, and take that vocal processor to a gig, that very exact sound that you love comes with you.

Which is great, because otherwise you have to either go without FX, or (if you’re non-technical) maybe struggle a bit with the sound person. The following are some conversations that you might have.

You: Could I have both reverb and delay on my vocal?

FOH (Front Of House) Engineer: Ummm…we only have reverb.

You: Oh.

You: Gimme a TON of delay in the monitors.

Audio Human: Oh, sorry, my FX returns can only be sent to the main mix.

You: Aw, man…

You: Could I have a touch more mid in my voice?

[Your concept of “a touch more mid” might be +6 dB at 2000 Hz, with a 2-octave-wide filter. The sound-wrangler’s concept of “a touch more mid” might be +3 dB at 750 Hz, with a one-octave-wide filter. Further, you might not be able to put a number on what frequency you want, especially if what I just said sounds like gobbledygook. Heck, the audio human might not even be able to connect a precise number with what they’re doing.]

Sound Wrangler: How’s that?

You: That’s not quite right. Um…
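The mismatch in that exchange can be made concrete. Using the standard bandwidth-in-octaves-to-Q approximation, the two mental models of "a touch more mid" turn out to be very different filters:

```python
import math

def octaves_to_q(n_octaves):
    """Approximate Q for a peaking filter with its bandwidth given in octaves."""
    return math.sqrt(2 ** n_octaves) / (2 ** n_octaves - 1)

# The two interpretations of "a touch more mid" from the exchange above:
singer_idea = {"f0_hz": 2000, "gain_db": 6.0, "q": round(octaves_to_q(2), 3)}
tech_idea = {"f0_hz": 750, "gain_db": 3.0, "q": round(octaves_to_q(1), 3)}
print("singer:", singer_idea)
print("tech:  ", tech_idea)
```

Different center frequency, half the gain, double the Q: neither person is wrong, but without numbers in common they're describing two different curves.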

[This one’s directly in line with my original example.]

You: Could I get some delay on my voice?

Audio Human: Sure!

[The audio human dials up their favorite vocal-delay sound.]

You: Actually, it’s more of a slap-delay.

[Your concept of slap-delay might be 50 ms of delay time. The audio-human’s concept of slap-delay might be 75 ms.]

Audio Human: How’s that?

You: That’s…better. It’s not quite it, though. Maybe if there was one less repeat?

[The audio-human’s delay processor doesn’t work in “repeats.” It works in the dB level of the signal that’s fed back into the processor. The audio-human takes a guess, and ends up with what sounds like half a repeat less.]

Audio Human: Is that better?

You: Yeah, but it’s still not quite there. Um…
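The "repeats" confusion in that last exchange can also be put in numbers. If each pass through the delay's feedback path is some number of dB quieter than the previous one, the audible repeat count falls out directly. The -60 dB audibility floor here is a hypothetical choice; where echoes really vanish depends on the mix around them:

```python
import math

def audible_repeats(feedback_db, floor_db=-60.0):
    """Count echoes before they drop below an audibility floor, when each
    repeat is feedback_db (a negative number) quieter than the last."""
    return math.floor(floor_db / feedback_db)

for fb in (-6.0, -10.0, -20.0):
    print(f"feedback at {fb:.0f} dB per repeat: "
          f"about {audible_repeats(fb)} audible repeats")
```

So "one less repeat" isn't a knob; it's an indirect consequence of a dB value, which is why the tech in the story had to guess.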

Having your own vocal processor can spare you from all this. It also spares the engineer from having to manage when the FX should be “in” or bypassed. (This often isn’t a huge issue, but it can become one if you’re really specific about what you want to happen where.) There are real advantages to being self-contained.

There are negative sides, though, as I alluded to earlier. Having lots of power at your disposal feels good, but if you’re not well-acquainted with what that power is actually doing, you can easily sabotage yourself. And your band. And the engineer who’s trying to help you.

EQ Is A Pet Dog

The reason that I say that “EQ is a pet dog” is twofold.

1) EQ is often your friend. Most of the time, it’s fun to play with, and it “likes” to help you out.

2) In certain situations, an EQ setting that was nice and sweet can suddenly turn around and “bite” you. This isn’t because EQ is “a bad dog,” it’s because certain equalization tweaks in certain situations just don’t work acoustically.

What I’ve encountered on more than one occasion are vocal-unit EQ settings that are meant to sound good in either low-volume or studio contexts. I’ve also encountered vocal-unit EQ that seems to have been meant to correct a problem with the rehearsal PA…which then CAUSES a problem in a venue PA that doesn’t need that correction.

To be more specific, I’ve been in various situations where folks had a whole busload of top-end added to their vocal sound. High-frequency boosts often sound good on “bedroom” or “headphone” vocals. Things get nice and crisp. “Breathy.” Even “airy,” if I dare to say so. In a rehearsal situation, this can still work. The rehearsal PA might not be able to get loud enough for the singer to really hear themselves when everybody’s playing, especially if feedback can’t be easily corrected. However, the singer hears that nice, crisp vocal while everybody’s NOT playing, and remembers that sound even when they get swamped.

Anyway.

The problem with having overly hyped high-end in a live vocal (especially with a louder band in a small room) is really multiple problems. First, it tends to focus your feedback issues into the often finicky and unpredictable zone of high-frequency material. If there’s a place where both positionally dependent and positionally independent frequency response for mics, monitors, and FOH speakers is likely to get “weird” and “peaky,” the high-frequency zone is that place. (What I mean by “positionally dependent” is that high-frequency response is pretty easy to focus into a defined area…and what THAT means is that you can be in a physical position where you have no HF feedback problems, and then move a couple of steps and make a quarter turn and SQUEEEEAAALLL!)

The second bugbear associated with cranked high-end is that, when the vocals are no longer isolated, the rest of the band can bleed into the vocal mic LIKE MAD. That HF boost that sounds so nice on vocals by themselves is now a cymbal and guitar-hash louder-ization device. If we get into a high-gain situation (which can happen even with relatively quiet bands), what we then end up doing is making the band sound even louder when compared to your voice. If the band started out a bit loud, we may just have gotten to the audience’s tipping point – especially since high-frequency information at “rock” volume can be downright painful. Further, we’re now spending electrical and acoustical headroom on what we don’t want (more of the band’s top end), instead of what we do want (your vocal’s critical range).

Now, I’m not saying that you can’t touch the EQ in your vocal processor, or that you shouldn’t use your favorite manufacturer preset. What I am saying, though, is that dramatic vocal-processor EQ can really wreck your day at the actual show. You might want to find a way to quickly get the EQ bypassed or “flattened,” if you can.

“Compression” Is The Most Dangerous Knob On That Thing

Now, why would I say that, especially after all my ranting about EQ?

Well, it’s like this.

An experienced audio tech with flexible EQ tools can probably “undo” enough of an unhelpful in-the-box equalization solution, given a bit of time. Compression, on the other hand, really can’t be fully “undone” in a practical sense in most situations. (Yes – there is a process called “companding” which involves compression and complementary expansion, but to make it work you have to have detailed knowledge of the compression parameters.) Like EQ, compression can contribute to feedback problems, but it does so in a “full bandwidth” sense that is also much weirder and harder to tame. It can also cause the “we’re making the band louder via the vocal mic” problem, but in a much more pronounced way. It can prevent the vocalist from actually getting loud enough to separate from the rest of the band – and it can even cause a vocalist to injure themselves.

Let’s pick all that apart by talking about what a compressor does.

A compressor’s purpose is to be an automatic fader that can react at least as quickly as a human (if not a lot more quickly), and at least as consistently as a human (if not a lot more consistently). When a signal exceeds a certain set-point, called the threshold, the automatic fader pulls the signal down based on the “ratio” parameter. When the signal falls back towards the threshold, the fader begins to return to its original gain setting. “Attack” is the speed at which the fader reduces gain, and “release” is the speed at which the fader returns to its original gain.
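If you like, you can picture those parameters as a tiny bit of math. Here’s a minimal sketch (in Python, with made-up numbers; attack and release are ignored, so this only shows what threshold and ratio do):

```python
def compressor_output_db(input_db, threshold_db, ratio):
    """Static gain curve of a simple compressor.

    Below the threshold, the signal passes through unchanged.
    Above it, the overshoot is divided by the ratio - so at 4:1,
    a signal 12 dB over the threshold comes out only 3 dB over.
    """
    if input_db <= threshold_db:
        return input_db
    return threshold_db + (input_db - threshold_db) / ratio

# A vocal peak at -8 dB against a -20 dB threshold and a 4:1 ratio:
print(compressor_output_db(-8, -20, 4.0))  # -17.0, i.e. 9 dB of gain reduction
```

The threshold and ratio values above are purely illustrative, not a recommendation.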

Now, how can an automatic fader cause problems?

If the compressor threshold is set too low, and the ratio is too high, the vocalist is effectively pulled WAY down whenever they try to deliver any real power. If I were to set a vocalist so that they were comfortably audible when the band was silent, but then pulled that same vocalist down 10 dB when the band was actually playing, the likely result with quite a few singers would be drowned vocals. This is effectively what happens with an over-aggressive compressor. The practical way for the tech to “fight back” is to add, say, 10 dB (or whatever) of gain on their end – which is fine, except that most small-venue live-sound contexts can’t really tolerate that kind of compensating gain boost. In my experience, small room sound tends to be run pretty close to the feedback point, say, 3-6 dB away from the “Zone of Weird Ringing and Other Annoyances.” When that’s the case, going up 10 dB puts you 4-7 dB INTO the “Zone.”
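To put rough numbers on that (these are just the hypothetical figures from the scenario above, not measurements from any particular room):

```python
def db_into_feedback_zone(makeup_gain_db, margin_db):
    """How far past the feedback point a compensating gain boost lands you.

    makeup_gain_db: gain the tech adds to fight the compressor's pulldown.
    margin_db: how far below the feedback point the system was running.
    """
    return makeup_gain_db - margin_db

# Adding 10 dB to fight the compressor, in a room with 3-6 dB of margin:
print(db_into_feedback_zone(10.0, 3.0))  # 7.0 dB into the "Zone"
print(db_into_feedback_zone(10.0, 6.0))  # 4.0 dB into the "Zone"
```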

But the thing is, the experience of that trouble area is extra odd, because your degree of being in it varies. When the singer really goes for it, the processor’s compressor reduces the vocal mic’s gain, and your feedback problem disappears. When they back off a bit, though, the compressor releases, which means the gain goes back up, which means that the strange, phantom rings and feedback chirps come back. It’s not like an uncompressed situation, where feedback builds at a consistent rate because the overall gain is also consistent. The feedback becomes the worst kind of problem – an intermittent one. Feedback and ringing that quickly comes and goes is the toughest kind to fight.

Beyond just that, there’s also the problem of bleed. If you have to add 10 dB of gain to a vocal mic to battle against the compressor, then you’ve also added 10 dB of gain to whatever else the mic is hearing when the vocalist isn’t singing. Depending on the situation, this can lead to a markedly louder band, with all kinds of unwanted FX applied, and maybe with ear-grating EQ across the whole mess. There’s also the added artistic issue of losing dynamic “swing” between vocal and instrumental passages. That is, the music is just LOUD, all the time, with no breaks. (An audience wears down very quickly under those conditions.) In the case of a singer who’s not very strong when compared to the band, you can get the even more troublesome issue of the vocal’s intelligibility being wrecked by the bleed, even though the vocal is somewhat audible.

Last, there’s the rare-but-present monster of a vocalist hurting themselves. The beauty of a vocal processor is that the singer essentially hears what’s being presented to the audience. The ugliness behind the beauty is that this isn’t always a good thing. Especially in the contexts of rock and metal, vocal monitors are much less about sounding “hi-fi” and polished, and much more about “barking” at a volume and frequency range that has a fighting chance of telling the singer where they are. Even in non-rock situations, a vital part of the singer knowing where they are is knowing how much volume they’re producing when compared to the band. The most foolproof way for this to happen is for the monitors to “track” the vocalist’s dynamics on a 1:1 basis – if the singer sings 3 dB louder, the monitors get 3 dB louder.

When compression is put across the vocal immediately after the mic, the monitors suddenly fail to track the singer’s volume in a linear fashion. The singer sings with more power, but then the compressor kicks in and holds the monitor sound back. The vocalist, having lost the full volume advantage of their own voice plus the monitors, can feel that they’re too quiet. Thus, they try to sing louder to compensate. If this goes too far, the poor singer just might blow out their voice, and/or be at risk for long-term health issues. An experienced vocalist with a great band can learn to hear, enjoy, and stop compensating for compression…but a green(er) singer in a pressure situation might not do so well.
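A quick sketch of what “1:1 tracking” means, and how compression breaks it (Python again, with a purely hypothetical ratio):

```python
def monitor_delta_db(vocal_delta_db, ratio):
    """Change in monitor level when the singer changes level by vocal_delta_db.

    With no compression (a 1:1 ratio), the wedges track the voice exactly.
    With compression engaged above the threshold, the change is divided
    by the ratio - so most of the singer's extra effort disappears.
    """
    return vocal_delta_db / ratio

print(monitor_delta_db(3.0, 1.0))  # uncompressed: +3.0 dB in the wedges
print(monitor_delta_db(3.0, 4.0))  # 4:1 compression: only +0.75 dB
```

That missing 2.25 dB is exactly what tempts the singer to push harder.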

(This is also why I advocate against inserting compression on a vocal when your monitor sends are post-insert.)

To be brutally honest, the best setting for a vocal-processor’s compressor is “bypass.” Exceptions can be made, but I think they have to be made on a venue-to-venue, show-to-show basis.

All of this might make it sound like I advocate against the vocal processor. That’s not true. I think they’re great for people in the same way that other powerful tools are great. It’s just that power tools can really hurt you if you’re not careful.


My Interview On AMR

I was invited to do a radio show on AMR.fm! Here are some key bits.


About a week ago, I was invited into “The Cat’s Den.” While that might sound like a place where a number of felines reside, it’s actually the show hosted by John, the owner of AMR.fm. We talked about a number of subjects related to local music and small venues. John was kind enough to make the show’s audio available to me, and I thought it would be nifty to chop it all up into topical segments.

The key word up there being “chop.”

That is, what you’re hearing in these files has been significantly edited. The whole thing was about two hours long, and there was a lot of “verbal processing” that occurred. That’s what happens during a live, long-form interview, but it’s not the best way to present the discussion afterwards. Even with having tightened up the key points of the show, I’ve taken pains not to misrepresent what either of us was getting at. The meaning of each bit should be fully intact, even if every sentence hasn’t been included.

So…

The Introduction

Supatroy

A quick reference to an earlier show that featured Supatroy Fillmore. (Supatroy has done a lot of work in our local music scene.)

Why The Computerization Of Live-Audio Is A Great Thing

Computerizing live-sound allows guys like me to do things that were previously much harder (or even impossible) to do.

How I Got Started

A little bit about my pro-audio beginnings…way back in high-school.

Building And Breaking Things

I’m not as “deep into the guts” of audio equipment as the folks who came before me. I give a quick shout-out to Tim Hollinger from The Floyd Show in this bit.

Functional Is 95%

A segment about why I’m pretty much satisfied by gear that simply passes signal in a predictable and “clean” way.

The Toughest Shows

The most challenging shows aren’t always the loudest shows. Also, the toughest shows can be the most fun. I use two “big production” bands as examples: Floyd Show and Juana Ghani. The question touches on an interview that I did with Trevor Hale.

I Worry Most About Monitor World

If something’s wrong in FOH, I can probably hear it. If something’s not quite right on the stage, it’s quite possible that I WON’T hear it – and that worries me.

Communication Between Bands And Audio Humans

I’m not as good at communicating with bands as I’d like to be. Also, I’m a big proponent of people politely (but very audibly) asking for what they need.

The Most Important Thing For Bands To Do

If a band doesn’t sound like a cohesive ensemble without the PA, there’s no guarantee that the PA and audio-human will be able to fix that.

Why Talk About Small-Venue Issues?

I believe that small-venue shows are the backbone of the live-music industry. As such, I think it’s worthwhile to talk about how to do those shows well.

Merchant Royal

John asks me about who’s come through Fats Grill and really grabbed my attention. I proceed to pretty much gush about how cool I think Merchant Royal is.

What Makes A Great Cover Tune?

In my opinion, doing a great job with a cover means getting the song to showcase your own band’s strengths. I also briefly mention that Luke Benson’s version of “You Can’t Always Get What You Want” actually gets me to like the song. (I don’t normally like that song.)

The Issues Of A Laser-Focused Audience

I’m convinced that most people only go to shows with their favorite bands in their favorite rooms. Folks who go to a bar or club “just to check out who’s playing” seem to have become incredibly rare. (Some of these very rare “scene supporting” people are John McCool and Brian Young of The Daylates, as well as Christian Coleman.) If a band is playing a room that the general public sees as a “venue” as opposed to a “hangout,” then the band isn’t being paid to play music. The band is being paid based on their ability to be an attraction.

Look – it’s complicated. Just listen to the audio.

Everybody Has Due Diligence

Bands and venues both need to promote shows. Venues also need to be a place where people are happy to go. When all that’s been done, pointing fingers and getting mad when the turnout is low isn’t a very productive thing.

Also: “Promoting more” simply doesn’t turn disinterested people into interested people – at least as far as I can tell.

Shout Outs

This bit is the wrap up, where I say thanks to everybody at Fats Grill for making the place happen. John and I also list off some of our favorite local acts.



Get Monitor-World Right First

If it sounds wrong on deck, it will probably sound wrong out front (for a variety of reasons).


If you don’t get monitor-world to do what the musicians need it to do, then you may find it hard to get FOH to do what the audience needs it to do.