Category Archives: Live Audio Tactics

Tips, tricks, and strategies for concert sound in small venues.

Up In The Air

A good rigger is an important person.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.

This is one of those topics that’s a little outside of a small-venue context.

But it’s still good to talk about.

I recently had the opportunity to work on a “big-rig” show. What I mean by that is we had six JBL SRX subwoofers deployed, along with two hangs (four boxes each) of JBL VRX. For some folks, that’s not a huge system, but for me it’s pretty darn large. Going in, I was excited to be on the crew for the event – and also a bit apprehensive. I had never before had any “hands-on” experience with rigging and flying a PA system.

As it turned out, my anxiety was misplaced. When you finally get up close and personal with a box like VRX, you realize that the box-to-box flyware is really easy to understand and operate. Constant-curvature arrays are hard to get wrong in and of themselves. You would basically have to actively attempt to screw up the hang in order to run into a problem. The boxes have a built-in angle, so you don’t have to think about much other than lining a couple of ’em up, flipping the connection flanges into place, and inserting the fly pins.

Beyond that, there were two more reasons my anxiety was misplaced:

1) We had a good rigger on hand.

2) Everybody implicitly agreed that the rigger was the “lead dog.”

What I mean by point two is that I consider there to be exactly one proper attitude towards an honest-to-goodness, card-carrying rigger. That attitude is that you listen to the rigger, and do EXACTLY as the rigger tells you.

I don’t think I can stress that enough.

An actual rigger is somebody who can safely hang very heavy things above people’s heads, and has the maturity to do it the right way (with no tolerance for shortcuts or other horse-dip). They realize that getting a hang wrong may be a very efficient way to end people’s lives. They distinguish between “reasonably safe” and “truly safe,” and will not allow anyone to settle for the former.

As such, their word is law.

I DO think that safe rigging is within the mental capacity of the average human. However, I also think that there are numerous particulars of equipment and technique which are not immediately intuitive or obvious. I think it’s easy for an uneducated person to hang things the wrong way without realizing it. That’s why, when a rigger shows up in a situation where everybody else is NOT a rigger, the rigger immediately becomes the person in charge. Somebody else may be making executive decisions on what’s wanted for a hang, but the human with the most experience at actually flying things makes the final call on what can be done and how.

(If you ever get into a situation that appears to be the opposite of that, I think you should be concerned.)

Like I said, the case on this show was that everybody was listening to the rigger.

And that meant that everything got up in the air safely, stayed up in the air safely, and came down safely after everything was done.


Entering Flatland

I encourage live-audio humans to spend lots of time listening to studio monitors.

Do you work in live audio? Are you new to the field? An old hand? Somewhere in between?

I want to encourage you to do something.

I want you to get yourself a pair of basically decent studio monitors. They shouldn’t be huge, or expensive. They just have to be basically flat in terms of their magnitude response. Do NOT add a subwoofer. You don’t need LF drivers bigger than 8″ – anything advertised to play down to about 40 Hz or 50 Hz is probably fine.

I want you to run them as “flat” as possible. I want you to do as much listening with them as possible. Play your favorite music through them. Watch YouTube videos with them passing the audio. When you play computer games, let the monitors make all the noises.

I want you to get used to how they sound.

Oh, and try to tune your car stereo to sound like your studio monitors. If you can only do so coarsely, still do so.

Why?

Because I think it’s very helpful to “calibrate” yourself to un-hyped audio.

A real problem in live music is the tendency to try to make everything “super enhanced.” It’s the idea that loud, deep bass and razor-sharp HF information are the keys to good sound. There’s a problem, though. The extreme ends of the audible spectrum actually aren’t that helpful in concert audio. They are nice to have available, of course. The very best systems can reproduce all (or almost all) of the audible range at high volume, with very low distortion. The issue is over-emphasis. The sacrifice of the absolutely critical midrange – where almost all the musical information actually lives – on the altar of being impressive for 10 seconds.

I’m convinced that part of what drives a tendency to dial up “hyped” audio in a live situation is audio humans listening to similar tonalities when they’re off-duty. They build a recreational system that produces booming bass and slashing treble, yank the midrange down, and get used to that as being “right.” Then, when they’re louderizing noises for a real band in a real room, they try to get the same effect at large scale. This eats power at an incredible rate (especially the low-end), and greatly reduces the ability of the different musical parts to take their appointed place in the mix. If everything gets homogenized into a collection of crispy thuds, the chance of distinctly hearing everything drops like a bag of rocks tied to an even bigger rock that’s been thrown off a cliff made of other rocks.

But it does sound cool!

At first.

A few minutes in, especially at high volume, and the coolness gives way to fatigue.

In my mind, it’s a far better approach to try to get the midrange, or about 100 Hz to 5 kHz, really worked out as well as possible first. Then, you can start thinking about where you are with the four octaves on the top and bottom, and what’s appropriate to do there.

In my opinion, “natural” is actually much more impressive than “impressive,” especially when you don’t have massive reserves of output available. Getting a handle on what’s truly natural is much easier when that kind of sonic experience is what you’ve trained yourself to think of as normal and correct.

So get yourself some studio monitors, and make them your new reference point for what everything is supposed to sound like. I can’t guarantee that it will make you better at mixing bands, but I think there’s a real chance of it.


A Weird LFE Routing Solution

Getting creative to obtain more bottom end.

This is another one of those case studies where you get to see how strange my mind is. As such, be aware that it may not be applicable to you at all. I had a bit of a conundrum, and I solved it in a creative way. Some folks might call it “too creative.”

Maybe those people are boring.

Or they’re reasonable and I’m a little nuts.

Anyway.

I’ve previously mentioned that I handle the audio at my church. We’ve recently added some light percussion to complement our bass-guitar situation, and there was a point where our previous worship leader/ music director wanted more thump. That is, low frequency material that was audible AND a bit “tactile.” In any case, the amount of bass we had happening wasn’t really satisfying.

Part of our problem was how I use system limiting. I’ve long nursed a habit of using a very aggressive limiter across the main mix bus as a “stop the volume here” utility. I decide how loud I want to get (which is really not very loud on Sundays), set the dynamics across the output such that we can’t get any louder, and then smack that processor with a good deal of signal. I’ve gotten to a point where I can get it right most of the time, and “put the band in a box” in terms of volume. Drive the vocals hard and they stay on top, while not jumping out and tearing anyone’s face off when the singers push harder.

At the relatively quiet volume levels that we run things, though, this presents a problem for LF content. To get that extended low-frequency effect that can be oh-so-satisfying, you need to be able to run the bass frequencies rather hotter than everything else. The limiter, though, puts a stop to that. If you’re already hitting the threshold with midrange and high-frequency information, you don’t have anywhere to go.

So, what can you do?

For a while, we took the route of patching into the house system’s subwoofer drive “line.” I would run (effectively) unlimited aux-fed subs to that line, while keeping the mains in check as normal, and we got what we wanted.

But it was a bit of a pain, as patching to the house system required unpatching some of their frontend, pulling an amp partially out of a cabinet, doing our thing, and then reversing the process at the end. I’m not opposed to work, but I like “easy” when I can get it. I eventually came to the conclusion that I didn’t really need the house subs.

This was because:

1) We were far, far below the maximum output capacity of our main speakers.

2) Our main speakers were entirely capable of producing content between 50 – 100 Hz at the level I needed for people to feel the low end a little bit. (Not a lot, just a touch.)

If we hadn’t had significant headroom, we would have been sunk. Low Frequency Effects (LFE) require significant power, as I said before. If my artificial headroom reduction had been close to the actual maximum output of the system, finding a way around it for bass frequencies wouldn’t have done much. Also, I had to be realistic about what we could get. A full-range, pro-audio box with a 15″ or 12″ LF driver can do the “thump” range at low to moderate volumes without too much trouble. Asking for a bunch of building-rattling boom, which is what you get below about 50 Hz, is not really in line with what such an enclosure can deliver.

With those concerns handled, I simply had to solve a routing problem. For all intents and purposes, I had to create a multiband limiter that was bypassed in the low-frequency band. If you look at the diagram above, that’s what I did.

I now have one bus which is filtered to pass content at 100 Hz and above. It gets the same, super-aggressive limiter as it’s always had.

I also have a separate bus for LFE. That bus is filtered to restrict its information to the range between 50 Hz and 100 Hz, with no limiter included in the path.

Those two buses are then combined into the console’s main output bus.
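
For the signal-flow-inclined, here’s a rough offline sketch of the idea in Python (numpy/scipy). The filter orders, the crossover points, and the crude hard-clip “limiter” are illustrative stand-ins, not a copy of my actual console processing:

```python
# Rough offline model of the routing: everything at/above 100 Hz gets the
# aggressive "stop the volume here" limiter; 50-100 Hz bypasses it entirely.
# Filter orders, crossover points, and the hard-clip limiter are stand-ins.
import numpy as np
from scipy.signal import butter, sosfilt

FS = 48000  # sample rate, Hz

def hard_limit(x, ceiling_db=-20.0):
    """Crude brick-wall stand-in for the main-bus limiter."""
    ceiling = 10 ** (ceiling_db / 20.0)
    return np.clip(x, -ceiling, ceiling)

main_hpf = butter(4, 100, btype='highpass', fs=FS, output='sos')       # main bus
lfe_bpf = butter(4, [50, 100], btype='bandpass', fs=FS, output='sos')  # LFE bus

def process(mix):
    main_bus = hard_limit(sosfilt(main_hpf, mix))  # >= 100 Hz, smashed as usual
    lfe_bus = sosfilt(lfe_bpf, mix)                # 50-100 Hz, no limiter
    return main_bus + lfe_bus                      # summed at the console output

# Demo: 1 kHz content already sitting at the ceiling, plus a 70 Hz "thump"
# that can still get on the gas because it never touches the limiter.
t = np.arange(FS) / FS
mix = 0.1 * np.sin(2 * np.pi * 1000 * t) + 0.3 * np.sin(2 * np.pi * 70 * t)
print("peak in:", np.max(np.abs(mix)), "peak out:", np.max(np.abs(process(mix))))
```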

With this configuration, I can “get on the gas” with low end, while retaining my smashing and smooshing of midrange content. I can have a little bit of fun with percussion and bass, while retaining a small, self-contained system that’s easy to patch. I would certainly not recommend this as a general-purpose solution, but hey – it fits my needs for now.


The Unterminated Line

If nothing’s connected and there’s still a lot of noise, you might want to call the repair shop.

“I thought we fixed the noise on the drum-brain inputs?” I mused aloud, as one of the channels in question hummed like hymenoptera in flight. I had come in to help with another rehearsal for the band called SALT, and I was perplexed. We had previously chased down a bit of noise that was due to a ground loop; Getting everything connected to a common earthing conductor seemed to have helped.

Yet here we were, channel two stubbornly buzzing away.

Another change to the power distribution scheme didn’t help.

Then, I disconnected the cables from the drum-brain. Suddenly – the noise continued, unchanged. Curious. I pulled the connections at the mixer side. Abruptly, nothing happened. Or rather, the noise continued to happen. Oh, dear.


When chasing unwanted noise, disconnecting things is one of your most powerful tools. As you move along a signal chain, you can break the connection at successive places. When you open the circuit and the noise stops, you know that the supplier of your spurious signal is upstream of the break.

Disconnecting the cable to the mixer input should have resulted in relative silence. An unterminated line, that is, an input that is NOT connected to upstream electronics, should be very quiet in this day and age. If something unexplained is driving a console input hard enough to show up on an input meter, yanking out the patch should yield a big drop in the visible and audible level. When that didn’t happen, logic dictated an uncomfortable reality:

1) The problem was still audible, and sounded the same.

2) The input meter was unchanged, continuing to show electrical activity.

3) Muting the input stopped the noise.

4) The problem was, therefore, post the signal cable and pre the channel mute.

In a digital console, this strongly indicates that something to do with the analog input has suffered some sort of failure. Maybe the jack’s internals weren’t quite up to spec. Maybe a solder joint was just good enough to make it through Quality Control, but then let go after some time passed.

In any case, we didn’t have a problem we could fix directly. Luckily, we had some spare channels at the other end of the input count, so we moved the drum-brain connections there. The result was a pair of inputs that were free of the annoying hum, which was nice.

But if you looked at the meter for channel two, there it still was: A surprisingly large amount of input on an unterminated line.


It’s Gonna Take A Minute

The secret to better shows is practice. Practice requires time.

The Video

The Summary

We should strive to do our best work. The best work possible on the first try is usually not as good as the best work possible on subsequent tries – and we need to be okay with that.


More Features VS Groundwork

In this case, groundwork won: There wasn’t a compelling reason to lose it.

The Video

The Summary

If you have significant prep that’s already done for one mixing system, you might want to avoid losing that effort – even if it would be to put a more powerful/ flexible mix rig into play.


The Power Of The Solo Bus

It’s very handy to be able to pick part of a signal path and route that sound directly to your head.

The Video

The Summary

Need to figure out which channel is making that weird noise in the midst of the chaos of a show? Wondering whether your drum mics have been switched around? Wish you could directly hear the signal running to the monitor mix that’s giving people fits? Your solo bus is here to save the day!


Case Study: FX When FOH Is Also Monitor World

Two reverbs can help you square certain circles.

The Video

The Script

Let’s say that a band has a new mixing console – one of those “digital rigs in a box” that have come on the scene. The musicians call you in because they need some help getting their monitors dialed up. At some point, the players ask for effects in the monitors: The vocals are too dry, and some reverb would be nice.

So, you crank up an FX send with a reverb inserted on the appropriate bus – and nothing happens.

You then remember that this is meant to be a basic setup, with one console handling both FOH and monitors. Your inputs from the band use pre-fader sends for monitor world, but post-fader sends for FX. Since you weren’t building a mix for FOH, all your faders were all the way down. You don’t know where they would be for a real FOH mix, anyway. If the faders are down, a post-fader send can’t get any signal to an FX bus.

Now, you typically don’t want the monitors to track every level tweak made for FOH, but you DO want the FX sends to be dependent on fader position – otherwise, the “wet-to-dry” ratio would change with every fader adjustment.

So, what do you do?

You can square the circle if you can change the pre/ post send configuration to the FX buses, AND if you can also have two reverbs.

Reverb One becomes the monitor reverb. The sends to that reverb are configured to be pre-fader, so that you don’t have to guess at a fader level. The sends from the reverb return channel should also be pre-fader, so that the monitor reverb doesn’t end up in the main mix.

Reverb Two is then set up as the FOH reverb. The sends to this reverb from the channels are configured as post-fader. Reverb Two, unlike Reverb One, should have output that’s dependent on the channel fader position. Reverb Two is, of course, kept out of the monitor mixes.
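
If it helps, here’s a tiny sketch of the send math (plain Python with generic gain numbers – not any particular console’s API) showing why the two reverbs have to be fed differently:

```python
# Why Reverb One is fed pre-fader and Reverb Two post-fader (toy send math,
# not any particular console's API; gain numbers are arbitrary).

def pre_fader_send(channel_signal, send_gain):
    # Pre-fader: the send ignores the channel fader entirely.
    return channel_signal * send_gain

def post_fader_send(channel_signal, fader, send_gain):
    # Post-fader: the send tracks the fader, so the FOH wet/dry ratio holds.
    return channel_signal * fader * send_gain

vocal = 1.0        # arbitrary channel signal level
foh_fader = 0.0    # faders are down because we're only building monitor mixes

# Reverb One (monitors) still gets signal with the faders down...
monitor_verb_in = pre_fader_send(vocal, send_gain=0.7)

# ...while Reverb Two (FOH) gets nothing until a real FOH mix exists, and will
# later scale with every fader move instead of drifting against the dry vocal.
foh_verb_in = post_fader_send(vocal, fader=foh_fader, send_gain=0.7)

print(monitor_verb_in, foh_verb_in)  # 0.7 vs 0.0
```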

With a setup like this, you don’t need to know the FOH mix in advance in order to dial up FX in the monitors. There is the small downside of having to chew up two FX processors, but that’s not a huge problem if it means getting the players what they need for the best performance.


The Difference Between The Record And The Show

Why is it that the live mix and the album mix end up being done differently?

Jason Knoell runs H2Audio in Utah, and he recently sent me a question that essentially boils down to this: If you have a band with some recordings, you can play those recordings over the same PA, in the same room, as the upcoming show. Why is it that the live mix of that band, in that room, with that PA might not come together in the same way as the recording – the recording you just played over the rig? Why would you NOT end up with the same relationship between the drums and guitars, or the guitars and the vocals, or [insert another sonic relationship here]?

This is one of those questions where trying to address every tiny little detail isn’t practical. I will, however, try to get into the major factors I can readily identify. Please note that I’m ignoring room acoustics, as those are a common factor between a recording and a live performance being played into the same space.

Magnitude

It’s very likely that the recording you just pumped out over FOH (Front Of House) had a very large amount of separation between the various sources. Sure, the band might have recorded the songs in such a way as to all be together in one room, but even then, the “bleed” factor is very likely to be much smaller than what you get in a live environment. For instance, a band that’s in a single-room recording environment can be set up with gobos (go-betweens) screening the amps and drums. The players can also be physically arranged so that any particular mic has everything else approaching the element from off-axis.

They also probably recorded using headphones for monitors, and overdubbed the “keeper” vocals. They may also have gone for extreme separation and overdubbed EVERYTHING after putting down some basics.

Contrast this with a typical stage, where we’re blasting away with wedge loudspeakers, we have no gobos to speak of, and all the backline is pointed at the sensitive angles of the vocal mics. Effectively, everything is getting into everything else. Even if we oversimplify and look only at the relative magnitudes between sounds, it’s possible to recognize that there’s a much smaller degree of source-to-source distinctiveness. The band’s signals have been smashed together, and even if we “get on the gas” with the vocals, we might also be effectively pushing up part of the drumkit, or the guitars.

Time

Along with magnitude, we also have a time problem. With as much bleed as is likely in play, the oh-so-critical transients that help create vocal and musical intelligibility are very, very smeared. We might have a piece of backline, or a vocal, “arriving” at the listener several times over in quick succession. The recording, on the other hand, has far more sharply defined “timing information.” This can very likely lead to a requirement that vocals and lead parts be mixed rather hotter live than they would be otherwise. That is, I’m convinced that a “conservation of factors” situation exists: If we lose separation cues that come from timing, the only way to make up the deficit is through volume separation.

A factor that can make the timing problems even worse is those wedge monitors we’re using, combined with the PA handling reproduction out front. Not only are all the different sources getting into each other at different times, but sources being run at high gain also arrive at their own mics several times at significant level (until the loop decay becomes great enough to render those re-arrivals inaudible). This further “blurs” the timing information we’re working with.

Processing Limits

Because live audio happens in a loop that is partially closed, we can be rather more constrained in what we can do to a signal. For instance, it may be that the optimal choice for vocal separation would simply be a +3 dB, one-octave wide filter at 1 kHz. Unfortunately, that may also be the portion of the loop’s bandwidth that is on the verge of spiraling out of control like a jet with a meth-addicted Pomeranian at the controls. So, again, we can’t get exactly the same mix with the same factors. We might have to actually cut 1 kHz and just give the rest of the signal a big push.

Also, the acoustical contribution of the band limits the effectiveness of our processing. On the recording, a certain amount of compression on the snare might be very effective; All we hear is the playback with that exact dynamics solution applied. With everything live in the room, however, we hear two things: The reproduction with compression, and the original, acoustic sound without any compression at all. In every situation where the in-room sound is a significant factor, what we’re really doing is parallel compression/ EQ/ gating/ etc. Even our mutes are parallel – the band doesn’t simply drop into silence if we close all the channels.
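
A trivial way to picture that (made-up level numbers and simple linear addition – real rooms are messier):

```python
# Made-up levels, simple linear addition: the audience always hears the acoustic
# stage sound PLUS the PA's processed copy, so even a mute is "parallel."

def perceived(acoustic_level, pa_level, channel_on=True):
    return acoustic_level + (pa_level if channel_on else 0.0)

snare_acoustic = 0.5  # what the drum puts into the room on its own
snare_pa = 0.4        # the compressed/EQ'd copy from the loudspeakers

print(perceived(snare_acoustic, snare_pa, channel_on=True))   # 0.9: both copies
print(perceived(snare_acoustic, snare_pa, channel_on=False))  # 0.5: mute removes only the PA copy
```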


Try as we might, live-sound humans can rarely exert the same amount of control over audio reproduction that a studio engineer has. In general, we are far more at the mercy of our environment. It’s very often impractical for us to simply duplicate the album mix and receive the same result (only louder).

But that’s just part of the fun, if you think about it.


Case Study: Creating A Virtual Guitar Rig In An Emergency

Distortion + filtering = something that can pass as a guitar amplifier in an emergency.

The Video

The Script

Imagine the scene: You’re setting up a band that has exactly one player with an electric guitar. They get to the gig, and suddenly discover a problem: The power supply for their setup has been left at home. Nobody has a spare, because it’s a specialized power supply – and nobody else plays an electric guitar anyway. The musician in question has no way to get a guitar sound without their rig.

At all.

As in, what they have that you can work with is a guitar and a cable. That’s it.

So, what do you do?

Well, in the worst-case scenario, you just find a direct box, run the guitar completely dry, and limp through it all as best you can.

But that’s not your only option. If you’re willing to get a little creative, you can do better than just having everybody grit their teeth and suffer. To get creative, you need to be able to take their guitar rig apart and put it back together again.

Metaphorically, I mean. You can put the screwdriver away.

What I’m getting at is this question: If you break the guitar rig into signal-processing blocks, what does each block do?

When it comes right down to it, a super-simple guitar amp amounts to three things: Some amount of distortion (including no distortion at all), tone controls, and an output filter stack.

The first two parts might make sense, but what’s that third bit?

The output filtering is either an actual loudspeaker, or something that simulates a loudspeaker for a direct feed. If you remove a speaker’s conversion of electricity to sound pressure waves, what’s left over is essentially a non-adjustable equalizer. Take a look at this frequency-response plot for a 12″ guitar speaker by Eminence: It’s basically a 100 Hz to 5 kHz bandpass filter with some extra bumps and dips.

It’s a fair point to note that different guitar amps and amp sims may have these different blocks happening in different orders. Some might forget about the tone-control block entirely. Some might have additional processing available.

Now then.

The first thing to do is to find an active DI, if you can. Active DI boxes have very high input impedances, which (in short) means that just about any guitar pickup will drive that input without a problem.

Next, if you’re as lucky as I am, you have at your disposal a digital console with a guitar-amp simulation effect. The simulator puts all the processing I talked about into a handy package that gets inserted into a channel.

What if you’re not so lucky, though?

The first component is distortion. If you can’t get distortion that’s basically agreeable, you should skip it entirely. If you must generate your own clipping, your best bet is to find some analog device that you can drive hard. Overloading a digital device almost always sounds terrible, unless that digital device is meant to simulate some other type of circuit.

For instance, if you can dig up an analog mini-mixer, you can drive the snot out of both the input and output sides to get a good bit of crunch. (You can also use far less gain on either or both ends, if you prefer.)

Of course, the result of that sounds pretty terrible. The distortion products are unfiltered, so there’s a huge amount of information up in the high reaches of the audible spectrum. To fix that, let’s put some guitar-speaker-esque filtering across the whole business. A high-pass and a low-pass filter, plus a parametric boost in the high mids, will help us recreate what a 12″ driver might do.

Now that we’ve done that, we can add another parametric filter to act as our tone control.

And there we go! It may not be the greatest guitar sound ever created, but this is an emergency and it’s better than nothing.
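
If you’d rather see the whole chain as one block of signal processing, here’s a rough offline sketch in Python (numpy/scipy). The tanh “drive,” the 100 Hz to 5 kHz bandpass, and the high-mid bump are illustrative guesses at the blocks described above, not a recipe for any specific amp sim:

```python
# Offline sketch of the emergency "virtual guitar rig":
# distortion -> speaker-style bandpass -> simple high-mid tone bump.
# Filter orders, drive amount, and boost frequency are illustrative guesses.
import numpy as np
from scipy.signal import butter, sosfilt

FS = 48000  # sample rate, Hz

def drive(x, gain=8.0):
    """Stand-in for overdriving an analog input stage: tanh soft clipping."""
    return np.tanh(gain * x)

def speaker_sim(x):
    """Roughly mimic a 12-inch guitar driver: ~100 Hz to 5 kHz bandpass."""
    sos = butter(2, [100, 5000], btype='bandpass', fs=FS, output='sos')
    return sosfilt(sos, x)

def tone_bump(x, boost_db=4.0):
    """Crude 'tone control': mix a band-passed copy back in to lift the high mids."""
    sos = butter(2, [1500, 3500], btype='bandpass', fs=FS, output='sos')
    gain = 10 ** (boost_db / 20.0) - 1.0
    return x + gain * sosfilt(sos, x)

def virtual_rig(di_signal):
    return tone_bump(speaker_sim(drive(di_signal)))

# Demo: a quiet 220 Hz "string" gets crunched, band-limited, and tone-shaped.
t = np.arange(FS) / FS
guitar_di = 0.05 * np.sin(2 * np.pi * 220 * t)
print("peak out:", np.max(np.abs(virtual_rig(guitar_di))))
```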

There is one more wrinkle, though, and that’s monitoring. Under normal circumstances, our personal monitoring network gets its signals just after each channel’s head amp. Usually that’s great, because nothing I do with a channel that’s post the mic pre ends up directly affecting the monitors. In this case, however, it was important for me to switch the “monitor pick point” on the guitar channel to a spot that was post all my channel processing – but still pre-fader.

In your case, this may not be a problem at all.

But what if it is, and you don’t have very much flexibility in picking where your monitor sends come from?

If you’re in a real bind, you could switch the monitor send on the guitar channel to be post-fader. Set the fader at a point you can live with, and then assign the channel output to an otherwise unused subgroup. Put the subgroup through the main mix, and use the subgroup fader as your main-mix level control for the guitar. You’ll still be able to tweak the level of the guitar in the mix, but the monitor mixes won’t be directly affected if you do.