Tag Archives: Signal Flow

Console Questions

A few simple queries can get you going on just about any console.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.


Back when I was in school, we were introduced to “The Four Console Questions.” The idea behind the questions was that, if you walked up to a strange mixer, you could get answers to the questions and be able to get work done. Mixing desks come in many varieties, but there aren’t very many truly different ways to build them that make sense. In any case, all the basic concepts have to essentially stay the same. If a console can’t take some number “a” of audio inputs, and route those inputs to some number “o” of outputs, you don’t have a mixing console anyway.

With the growing commonality of digital mix systems, I feel that the essential “console questions” need some expansion and tweaking. As such, here’s my take on the material that was presented to me over a decade and a half (GEEZE!) ago.

1. Do I Know What I Want To Do?

You might say that this isn’t a console question at all, but in truth, it’s THE most important one. If you don’t know what you want to do with the console, then knowing a bunch of information about the console’s operation won’t help you one iota. The unfortunate reality is that many people try to engage in this whole exercise backwards: They don’t know what they want to accomplish, but they figure that learning the mixer’s whys and wherefores will help them figure it out.

Certainly, learning about a new feature that you haven’t had access to previously can lead you to new techniques. However, at a bedrock level, you have to have some preconceived notion of what you want to accomplish with the tool. Do you want to get a vocal into the FOH PA? Do you want to get three electric guitars, a kazoo, and a capybara playing Tibetan singing-bowls into 12 different monitor mixes?

You have to know your application.

2. How Do I Correctly Clock The Console?

For an analog console, the answer to this is always: “No clock is required.”

For a digital rig, though, it’s very important. I just recently befuddled myself for an agonizing minute over why a digital console wasn’t showing any input. Whoops! It was because I had set it to receive external clock from a master console a few weeks before, and hadn’t returned it to internal clocking now that it was on its own.

You need to know how to indicate to the console which clock source and sample rate are appropriate for the current situation.

3. How Do I Choose What Inputs Are Available To The Channels?

This is particularly important with consoles that support both on-board input and remote stageboxes. You will very likely have to pick and choose which of those options is available to an individual channel or group of channels. What you need to discover is how those selections are accomplished.

4. How Do I Connect A Particular Input To A Particular Channel?

You might think this was covered in the previous question, but it wasn’t. Your global input options aren’t the end of the story. Many consoles will let you do per-channel “soft-patching,” which is the connection of a certain available signal to a certain channel without having to change a physical connection. Whether on a remote stagebox or directly at the desk, input 1 may NOT necessarily be appearing on channel 1. You have to find out how those connections are chosen.
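If a picture helps, here’s a tiny sketch of the idea in Python. The names are hypothetical and don’t correspond to any real console’s patching interface; the point is simply that a soft-patch is a lookup from channels to whatever signals happen to be available.

# Hypothetical soft-patch: which available signal feeds each channel.
# Nothing forces input 1 onto channel 1.
soft_patch = {
    1: "stagebox_in_1",
    2: "local_xlr_3",     # channel 2 is fed from a local jack, not the stagebox
    3: "stagebox_in_2",
    4: "usb_playback_L",  # channel 4 takes a playback return
}

def source_for_channel(channel):
    # Answers question 4: what is actually arriving at this channel?
    return soft_patch.get(channel, "unpatched")

print(source_for_channel(2))  # local_xlr_3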

5. How Do I Insert Channel Processing?

In some situations, this means a physical insert connection that may be automatically enabled…or not. In other cases, this means the enabling and disabling of per-channel dynamics and/ or EQ, and maybe even other DSP processing available onboard in some way. You will need to know how that takes place, and with all the possible variations that might have to do with your particular application, it is CRITICAL that you know what you want to do.

6. How Do I Route A Channel To An Auxiliary, Mix Bus, Or The Main Bus?

Sometimes, this is dead-simple and “locked in.” You might have four auxiliaries and four submix buses implemented in hardware, such that they can only be auxiliary or mix buses, with the same knobs always pushing the same aux and a routing matrix with pan-based bus selection. On the other hand, you might have a pool of buses that can behave in various ways depending on global configuration, per-channel configuration, or both.

So, you’ll need to figure out what you’ve got, and how to connect a given channel to a given bus so that you get the results you want.

7. How Do I Insert Bus Processing?

This might be just like question 5, or wildly different. You will need to sort out which reality is currently in play.

8. How Do I Connect A Given Signal To A Physical Output?

Just because you have a signal running to a bus, there’s no guarantee that the bus is actually going to transfer signal to any other piece of equipment. Especially in the digital world, there may be another layer of patching to assign signals to either digital or analog outputs. Bus 1 might be on output 7, because six matrices might be connected to the first six outputs. Maybe output 16 is a pre-fader direct out from channel 4.

You’ll have to figure out where all that gets specified.


Obviously, there’s more to being a whiz at any particular console than eight basic questions. However, if you can get a given signal into the desk, through some processing, combined with the other signals you need, and then off to the next destination, you can at least make some real noise in the room.


Is The Crossover Leaky?

A lot of low-end can still get into your mains.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.


Every so often, I get to chew on a question that a reader asks me directly. I kinda wish that would happen more often (hint, hint, hint…). Anyway, I was sent a message on The Small Venue Survivalist’s Facebook page, asking if I could render an opinion on why a bass guitar seemed to have a surprising amount of LF information in the main speakers. The mains were being used alongside a subwoofer, with the sub providing a crossover filter at 100 Hz. What could the issue be?

There are a few explanations that would seem reasonable, if one discounts “catastrophic” issues like the crossover filter simply failing to operate as advertised.

1. Crossover filters, especially those implemented in active electronics, have a tendency towards a relatively steep slope. Even so, they usually aren’t brick-wall implementations. Everything below the cutoff doesn’t simply disappear – rather, it’s attenuated at a certain rate. With a filter set to roll off everything “below 100 Hz,” the mid-highs are still being asked to do a fair bit of work at the crossover frequency. The general vicinity of 100 Hz is actually quite bassy (depending on who you ask, of course), so the mains might be perceived as doing more than they should when everything is quite normal.

2. If a sizeable pile of low-frequency energy has been dialed into the bass-guitar channel, or the bass-guitar’s pre-console tone, that big hill-o-bass won’t be tamped down by the crossover. It will be split up proportionately, but following on from the first point, the mid-highs will still be tasked with reproducing their allotted piece of that big LF mound. Consequently, a surprising amount of energy may be present in the tops.

3. I have a suspicion that plenty of modern, two-way boxes receive some degree of “hyping” of their low-end at the factory. This makes them sound more impressive, and the manufacturer can get away with it because of safety limiters placed post-EQ. (The limiter prevents the low-frequency amplifier from supplying more voltage than the woofer can handle, and there may even be a level-dependent high-pass filter in play.) A low-frequency boost that occurs after the crossover reduces the crossover’s apparent effectiveness. Sure, the signal leaving the crossover might be down 12 dB at 75 Hz, but a +6 dB shelving filter put in place by the manufacturer at 100 Hz partially “undoes” that filtering, leaving the signal only 6 dB down. Once again, a potential situation develops where the mid-highs are being asked to reproduce more “boom” than you expected.
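To put rough numbers on that third point, here’s a quick bit of decibel bookkeeping in Python. The values are the same illustrative figures used above, not measurements of any particular box.

# Illustrative dB arithmetic only; these are not measured values.
crossover_at_75hz = -12.0    # what the 100 Hz high-pass alone does at 75 Hz
factory_shelf_boost = 6.0    # LF "voicing" applied after the crossover
net_level = crossover_at_75hz + factory_shelf_boost
print("Net level at 75 Hz relative to the passband: %+.1f dB" % net_level)  # -6.0 dB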

It is entirely possible that an apparent problem isn’t covered by the three possibilities above, but they should catch quite a few scenarios where everything is hooked up properly and configured correctly.


EQ Propagation

The question of where to EQ is, of course, tied inextricably to what to EQ.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.


On occasion, I get the opportunity to guest-lecture to live-sound students. When things go the way I want them to, the students get a chance to experience the dialing up of monitor world (or part of it). One of the inevitable and important questions that arises is, “Why did you reach for the channel EQ when you were solving that one problem, but then use the EQ across the bus for this other problem?”

I’ve been able to give good answers to those questions, but I’ve also wanted to offer better explanations. I think I’ve finally hit upon an elegant way to describe my decision-making process regarding which EQ I use to solve different problems. It turns out that everything comes down to the primary “propagation direction” that I want for a given EQ change:

Effectively speaking, equalization on an input propagates downstream to all outputs. Equalization on an output effectively propagates upstream to all inputs.


What I’ve just said is, admittedly, rather abstract. That being so, let’s take a look at it concretely.

Let’s say we’re in the process of dialing up monitor world. It’s one of those all-too-rare occasions where we get the chance to measure the output of our wedges and apply an appropriate tuning. That equalization is applied across the appropriate bus. What we’re trying to do is equalize the box itself, so we can get acoustical output that follows a “reference curve.” (I advocate for a flat reference curve, myself.)

It might seem counter-intuitive, but if we’re going to tune the wedge electronically, what we actually have to do is transform all of the INPUTS to the box. Changing the loudspeaker itself to get our preferred reference curve would be ideal, but also very difficult. So, we use an EQ across a system output to change all the signals traveling to the wedge, counteracting the filtering that the drivers and enclosure impose on whatever makes it to them. If the monitor is making everything too crisp (for example), the “output” EQ lets us effectively dial high-frequency information out of every input traveling to the wedge.

Now, we put the signal from a microphone into one of our wedges. It starts off sounding generally good, although the channel in question is a vocal and we can tell there’s too much energy in the deep, low-frequency area. To fix the problem, we apply equalization to the microphone’s channel – the input. We want the exact change we’ve made to apply to every monitor that the channel might be sent to, and EQ across an input effectively transforms all the outputs that signal might arrive at.

There’s certainly nothing to stop us from going to each output EQ and pulling down the LF, but:

1) If we have a lot of mixes to work with, that’s pretty tedious, even with copy and paste, and…

2) We’ve now pushed away from our desired reference curve for the wedges, potentially robbing desired low-end information from inputs that would benefit from it. A ton of bottom isn’t necessary for vocals on deck, but what if somebody wants bass guitar? Or kick?

It makes much more sense to make the change at the channel if we can.

This also applies to the mud and midrange feedback weirdness that tends to pile up as one channel gets routed to multiple monitors. The problems aren’t necessarily the result of individual wedges being tuned badly. Rather, they are the result of multiple tunings interacting in a way that’s “wrong” for one particular mic at one particular location. What we need, then, is to EQ our input. The change then propagates to all the outputs, creating an overall solution with relative ease (and, again, we haven’t carved up each individual monitor’s curve into something that sounds weird in the process).

The same idea applies to FOH. If the whole mix seems “out of whack,” then a change to the main EQ effectively tweaks all the inputs to fix the offending frequency range.

So, when it’s time to grab an EQ, think about which way you want your changes to flow. Changes to inputs flow to all the connected outputs. Changes to outputs flow to all connected inputs.
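If you like thinking in code, here’s a toy model of that propagation idea in Python. It tracks gain offsets at only one frequency of interest, and the channel and wedge names are made up, but it shows which way a change flows.

# Toy model: dB offsets at one frequency of interest (say, 250 Hz).
channel_eq = {"vocal": 0.0, "bass": 0.0}     # input (channel) EQ
bus_eq = {"wedge_1": 0.0, "wedge_2": 0.0}    # output (bus) EQ
sends = {("vocal", "wedge_1"), ("vocal", "wedge_2"), ("bass", "wedge_2")}

def change_at(channel, bus):
    # Net change that a channel experiences at a given wedge.
    return channel_eq[channel] + bus_eq[bus] if (channel, bus) in sends else None

channel_eq["vocal"] = -6.0   # cut the INPUT...
print(change_at("vocal", "wedge_1"), change_at("vocal", "wedge_2"))  # -6.0 -6.0: flows to every wedge it feeds

channel_eq["vocal"] = 0.0
bus_eq["wedge_2"] = -6.0     # ...or cut the OUTPUT instead
print(change_at("vocal", "wedge_2"), change_at("bass", "wedge_2"))   # -6.0 -6.0: hits every input arriving there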


A Weird LFE Routing Solution

Getting creative to obtain more bottom end.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.


This is another one of those case studies where you get to see how strange my mind is. As such, be aware that it may not be applicable to you at all. I had a bit of a conundrum, and I solved it in a creative way. Some folks might call it “too creative.”

Maybe those people are boring.

Or they’re reasonable and I’m a little nuts.

Anyway.

I’ve previously mentioned that I handle the audio at my church. We’ve recently added some light percussion to complement our bass-guitar situation, and there was a point where our previous worship leader/ music director wanted more thump. That is, low frequency material that was audible AND a bit “tactile.” In any case, the amount of bass we had happening wasn’t really satisfying.

Part of our problem was how I use system limiting. I’ve long nursed a habit of using a very aggressive limiter across the main mix bus as a “stop the volume here” utility. I decide how loud I want to get (which is really not very loud on Sundays), set the dynamics across the output such that we can’t get any louder, and then smack that processor with a good deal of signal. I’ve gotten to a point where I can get it right most of the time, and “put the band in a box” in terms of volume. Drive the vocals hard and they stay on top, while not jumping out and tearing anyone’s face off when the singers push harder.

At the relatively quiet volume levels that we run things, though, this presents a problem for LF content. To get that extended low-frequency effect that can be oh-so-satisfying, you need to be able to run the bass frequencies rather hotter than everything else. The limiter, though, puts a stop to that. If you’re already hitting the threshold with midrange and high-frequency information, you don’t have anywhere to go.

So, what can you do?

For a while, we took the route of patching into the house system’s subwoofer drive “line.” I would run aux-fed subs to that line with (effectively) no limiting, while keeping the mains in check as normal, and we got what we wanted.

But it was a bit of a pain, as patching to the house system required unpatching some of their frontend, pulling an amp partially out of a cabinet, doing our thing, and then reversing the process at the end. I’m not opposed to work, but I like “easy” when I can get it. I eventually came to the conclusion that I didn’t really need the house subs.

This was because:

1) We were far, far below the maximum output capacity of our main speakers.

2) Our main speakers were entirely capable of producing content between 50 Hz and 100 Hz at the level I needed for people to feel the low end a little bit. (Not a lot, just a touch.)

If we hadn’t had significant headroom, we would have been sunk. Low Frequency Effects (LFE) require significant power, as I said before. If my artificial headroom reduction had been close to the actual maximum output of the system, finding a way around it for bass frequencies wouldn’t have done much. Also, I had to be realistic about what we could get. A full-range, pro-audio box with a 15″ or 12″ LF driver can do the “thump” range at low to moderate volumes without too much trouble. Asking for a bunch of building-rattling boom, which is what you get below about 50 Hz, is not really in line with what such an enclosure can deliver.

With those concerns handled, I simply had to solve a routing problem. For all intents and purposes, I had to create a multiband limiter that was bypassed in the low-frequency band. If you look at the diagram above, that’s what I did.

I now have one bus which is filtered to pass content at 100 Hz and above. It gets the same, super-aggressive limiter as it’s always had.

I also have a separate bus for LFE. That bus is filtered to restrict its information to the range between 50 Hz and 100 Hz, with no limiter included in the path.

Those two buses are then combined into the console’s main output bus.
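For anyone who wants the split in concrete terms, here’s a rough Python sketch of the same routing using SciPy. The filter orders are guesses, and a hard clip stands in for my actual limiter; the point is only that the LFE path never touches the limiting.

import numpy as np
from scipy.signal import butter, sosfilt

fs = 48000
t = np.arange(fs) / fs
mix = 0.5 * np.sin(2 * np.pi * 70 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)  # stand-in program material

# Bus 1: 100 Hz and up, then the aggressive limiter (crudely modeled as a hard clip).
main_band = sosfilt(butter(4, 100, btype="highpass", fs=fs, output="sos"), mix)
main_band = np.clip(main_band, -0.25, 0.25)

# Bus 2 (LFE): 50 Hz to 100 Hz, with no limiter in the path.
lfe_band = sosfilt(butter(4, [50, 100], btype="bandpass", fs=fs, output="sos"), mix)

# Both buses combine into the console's main output.
console_output = main_band + lfe_band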

With this configuration, I can “get on the gas” with low end, while retaining my smashing and smooshing of midrange content. I can have a little bit of fun with percussion and bass, while retaining a small, self-contained system that’s easy to patch. I would certainly not recommend this as a general-purpose solution, but hey – it fits my needs for now.


The Power Of The Solo Bus

It’s very handy to be able to pick part of a signal path and route that sound directly to your head.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.


The Video

The Summary

Need to figure out which channel is making that weird noise in the midst of the chaos of a show? Wondering whether your drum mics have been switched around? Wish you could directly hear the signal running to the monitor mix that’s giving people fits? Your solo bus is here to save the day!


Case Study: FX When FOH Is Also Monitor World

Two reverbs can help you square certain circles.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.


The Video

The Script

Let’s say that a band has a new mixing console – one of those “digital rigs in a box” that have come on the scene. The musicians call you in because they need some help getting their monitors dialed up. At some point, the players ask for effects in the monitors: The vocals are too dry, and some reverb would be nice.

So, you crank up an FX send with a reverb inserted on the appropriate bus – and nothing happens.

You then remember that this is meant to be a basic setup, with one console handling both FOH and monitors. Your inputs from the band use pre-fader sends for monitor world, but post-fader sends for FX. Since you weren’t building a mix for FOH, all your faders were all the way down. You don’t know where they would be for a real FOH mix, anyway. If the faders are down, a post-fader send can’t get any signal to an FX bus.

Now, you typically don’t want the monitors to track every level tweak made for FOH, but you DO want the FX sends to be dependent on fader position – otherwise, the “wet-to-dry” ratio would change with every fader adjustment.

So, what do you do?

You can square the circle if you can change the pre/ post send configuration to the FX buses, AND if you can also have two reverbs.

Reverb One becomes the monitor reverb. The sends to that reverb are configured to be pre-fader, so that you don’t have to guess at a fader level. The sends from the reverb return channel should also be pre-fader, so that the monitor reverb doesn’t end up in the main mix.

Reverb Two is then set up to be the FOH reverb. The sends to this reverb from the channels are configured as post-fader. Reverb Two, unlike Reverb One, should have output that’s dependent on the channel fader position. Reverb Two is, of course, kept out of the monitor mixes.
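Here’s the pre/ post arithmetic behind the trick, sketched in Python with made-up numbers. Gains are simple multipliers.

def pre_fader_send(signal, send_level, fader):
    return signal * send_level            # ignores the fader entirely

def post_fader_send(signal, send_level, fader):
    return signal * fader * send_level    # tracks every fader move

vocal, send = 1.0, 0.5
foh_fader = 0.0  # faders all the way down, because nobody has built an FOH mix yet

print(pre_fader_send(vocal, send, foh_fader))   # 0.5 -> Reverb One (monitors) still gets fed
print(post_fader_send(vocal, send, foh_fader))  # 0.0 -> Reverb Two (FOH) waits for a real mix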

With a setup like this, you don’t need to know the FOH mix in advance in order to dial up FX in the monitors. There is the small downside of having to chew up two FX processors, but that’s not a huge problem if it means getting the players what they need for the best performance.


Case Study: Creating A Virtual Guitar Rig In An Emergency

Distortion + filtering = something that can pass as a guitar amplifier in an emergency.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.


The Video

The Script

Imagine the scene: You’re setting up a band that has exactly one player with an electric guitar. They get to the gig, and suddenly discover a problem: The power supply for their setup has been left at home. Nobody has a spare, because it’s a specialized power supply – and nobody else plays an electric guitar anyway. The musician in question has no way to get a guitar sound without their rig.

At all.

As in, what they have that you can work with is a guitar and a cable. That’s it.

So, what do you do?

Well, in the worst-case scenario, you just find a direct box, run the guitar completely dry, and limp through it all as best you can.

But that’s not your only option. If you’re willing to get a little creative, you can do better than just having everybody grit their teeth and suffer. To get creative, you need to be able to take their guitar rig apart and put it back together again.

Metaphorically, I mean. You can put the screwdriver away.

What I’m getting at is this question: If you break the guitar rig into signal-processing blocks, what does each block do?

When it comes right down to it, a super-simple guitar amp amounts to three things: Some amount of distortion (including no distortion at all), tone controls, and an output filter stack.

The first two parts might make sense, but what’s that third bit?

The output filtering is either an actual loudspeaker, or something that simulates a loudspeaker for a direct feed. If you remove a speaker’s conversion of electricity to sound pressure waves, what’s left over is essentially a non-adjustable equalizer. Take a look at this frequency-response plot for a 12″ guitar speaker by Eminence: It’s basically a 100 Hz to 5 kHz bandpass filter with some extra bumps and dips.

It’s a fair point to note that different guitar amps and amp sims may have these different blocks happening in different orders. Some might forget about the tone-control block entirely. Some might have additional processing available.

Now then.

The first thing to do is to find an active DI, if you can. Active DI boxes have very high input impedances, which (in short) means that just about any guitar pickup will drive that input without a problem.

Next, if you’re as lucky as I am, you have at your disposal a digital console with a guitar-amp simulation effect. The simulator puts all the processing I talked about into a handy package that gets inserted into a channel.

What if you’re not so lucky, though?

The first component is distortion. If you can’t get distortion that’s basically agreeable, you should skip it entirely. If you must generate your own clipping, your best bet is to find some analog device that you can drive hard. Overloading a digital device almost always sounds terrible, unless that digital device is meant to simulate some other type of circuit.

For instance, if you can dig up an analog mini-mixer, you can drive the snot out of both the input and output sides to get a good bit of crunch. (You can also use far less gain on either or both ends, if you prefer.)

Of course, the result of that sounds pretty terrible. The distortion products are unfiltered, so there’s a huge amount of information up in the high reaches of the audible spectrum. To fix that, let’s put some guitar-speaker-esque filtering across the whole business. A high-pass and low-pass filter, plus a parametric boost in the high mids, will help us recreate what a 12″ driver might do.

Now that we’ve done that, we can add another parametric filter to act as our tone control.
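If you’d like to see the whole chain as code, here’s a minimal sketch in Python using SciPy. The drive amount, corner frequencies, and the size of the high-mid bump are guesses for illustration, not a recipe for any particular amp.

import numpy as np
from scipy.signal import butter, sosfilt

fs = 48000

def fake_guitar_rig(di_signal, drive=8.0):
    clipped = np.tanh(drive * di_signal)  # distortion block (soft clipping)
    # Output filter stack: roughly a 100 Hz - 5 kHz band-pass, like a 12" guitar speaker.
    speaker = sosfilt(butter(2, [100, 5000], btype="bandpass", fs=fs, output="sos"), clipped)
    # A broad high-mid bump, standing in for the parametric boost and tone control.
    bump = sosfilt(butter(2, [1500, 4000], btype="bandpass", fs=fs, output="sos"), speaker)
    return speaker + 0.5 * bump

t = np.arange(fs) / fs
guitar_di = 0.3 * np.sin(2 * np.pi * 110 * t)  # stand-in for the dry DI signal
processed = fake_guitar_rig(guitar_di)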

And there we go! It may not be the greatest guitar sound ever created, but this is an emergency and it’s better than nothing.

There is one more wrinkle, though, and that’s monitoring. Under normal circumstances, our personal monitoring network gets its signals just after each channel’s head amp. Usually that’s great, because nothing I do with a channel that’s post the mic pre ends up directly affecting the monitors. In this case, however, it was important for me to switch the “monitor pick point” on the guitar channel to a spot that was post all my channel processing – but still pre-fader.

In your case, this may not be a problem at all.

But what if it is, and you don’t have very much flexibility in picking where your monitor sends come from?

If you’re in a real bind, you could switch the monitor send on the guitar channel to be post-fader. Set the fader at a point you can live with, and then assign the channel output to an otherwise unused subgroup. Put the subgroup through the main mix, and use the subgroup fader as your main-mix level control for the guitar. You’ll still be able to tweak the level of the guitar in the mix, but the monitor mixes won’t be directly affected if you do.
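The arithmetic behind that workaround looks something like this (a little Python sketch with arbitrary numbers):

channel_fader = 0.8     # parked where the post-fader monitor send sounds right
subgroup_fader = 0.5    # becomes your "house level" control for the guitar
send_level = 0.6

monitor_feed = 1.0 * channel_fader * send_level     # post-fader monitor send
main_feed = 1.0 * channel_fader * subgroup_fader    # channel -> subgroup -> mains

subgroup_fader = 0.25                               # turn the guitar down in the house...
main_feed = 1.0 * channel_fader * subgroup_fader
# ...and monitor_feed is untouched, because the channel fader never moved.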


The Effervescent Joy Of Meeting A Knowledgeable Outsider

Some of the best folks to find are those who know the craft, but aren’t invested in your workflow.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.


Last week, I got to spend a few days with students from Broadview Entertainment Arts University. The Live Sound class needs some honest-to-goodness shows to work on, so Bruce (their actual professor) and I worked out a bit of a mechanism: I put a couple of gigs together every quarter, BEAU provides the room, I bring the PA, and we spend three days getting our collective hands dirty with building the thing.

Last week was the first round. As usual, I spent too much time talking and we didn’t get as far as maybe we should have. I also made some hilarious blunders, because everything involved in putting on a live gig is a perishable skill, and I sometimes have sizable gaps between productions. (For several minutes, I couldn’t find the blasted aux-in remap selector for my X32, even though I was on the “Input” routing page and staring right at it. I also absent-mindedly walked off the drum riser while I was mid-sentence. You can’t make this stuff up, folks.)

Anyway.

We had a really solid group of students all around. One of the most solid students was Patrick. Patrick is a guy who’s coming at this whole live-sound thing with a background in telecom. Telecom, like audio for entertainment, is the sort of business where you have to manage and troubleshoot every possible species of signal-transfer problem imaginable. Telecom skills are also becoming increasingly relevant to audio because of our increased reliance on high-speed network infrastructure. When all your audio, control, and clock signaling gets jammed onto a Cat6, it’s important to have some sort of clue as to what’s going on. (I have just enough clues to make things work. Other people have many more clues.)

As the story ended up going, we had a problem with my digi-snake. We got everything plugged together, and…oh dear. The consoles were only seeing one stage box, instead of both cascaded together. I walked over to the deck and started puzzling through things. Did the cascade connection get partially yanked? No. Did the boxes simply need a reset? No. Had I crunched the cascade cable at some point? No. I was on the brink of declaring that we’d just have to muddle through with one box when Patrick got involved.

Had I tried running a signal directly to the second box? Well, actually I hadn’t, because I was used to thinking of the two boxes as a unit.

Click.

Oh, look! The second box illuminated its green light of digital-link happiness.

Had I tried plugging directly into the secondary connection on the first box? Well, actually I hadn’t.

Click.

No happy-light was to be found.

I considered all that very nifty, but still being invested in my way of doing things, I failed to immediately see the obvious. Patrick enlightened me.

“The B-jack on the top box is the problem. Just connect them in reverse order, and you’ll have both. You can always change them around in the rack later.”

Of course, he was exactly right, and he had saved the day. (I was really glad we were working on the problem the night before the show, instead of with 30 minutes to spare.)

The point here is that Patrick’s skillset, while not directly related to what we were doing, was fully transferable. He didn’t know the exact system we were working on, but he had plenty of experience at troubleshooting data-interconnects in general. He also had a distinct advantage over me. He was looking at the problem with a set of totally fresh eyes. Not being locked into a particular set of assumptions about how the system was supposed to work as a whole, he could conceptualize the individual pieces as being modular rather than as a single, static, integrated solution. I was thinking inside the flightcase, while Patrick was thinking outside the flightcase about everything inside that same flightcase. There’s a difference.

The whole situation was the triumph of the knowledgeable outsider. A person with the skills to make your plan work, but who isn’t yet invested in your specific plan, may be just what you need when the whole mess starts to act up. They might be able to take a piece of the whole, reconfigure it, and slot it back in while you’re still getting your mind turned around. It’s really quite impressive.


Pre Or Post EQ?

Stop agonizing and just go with post to start.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.


Oh, the hand-wringing.

Should the audio-human take the pre-EQ split from the amplifier, or the post-EQ split? Isn’t there more control if we choose pre-EQ? If we choose incorrectly, will we ruin the show? HELP!

Actually, I shouldn’t be so dismissive. Shows are important to people – very important, actually – and so taking some time to chew on the many and various decisions involved is a sign of respect and maturity. If you’re actually stopping to think about this, “good on ya.”

What I will not stop rolling my eyes at, though, are live-sound techs who get their underwear mis-configured over not getting a pre-EQ feed from the bass/ keys/ guitar/ whatever. Folks, let’s take a breath. Getting a post-EQ signal is generally unlikely to sink any metaphorical ship, sailboat, or inflatable canoe that we happen to be paddling. In fact, I would say that we should tend to PREFER a post-EQ direct line. Really.


First of all, if this terminology sounds mysterious, it really isn’t. You almost certainly know that “pre” means “before” and “post” means “after.” If you’re deducing, then, that setting a line-out to “pre-EQ” gets you a signal from before the EQ happens, then you’re right. You’re also right in thinking that post-EQ splits happen after all the EQ tweaking has been applied to the signal.

And I think we should generally be comfortable with, and even gravitate toward, getting our feed to the console from a point which has the EQ applied.

1) It’s consistent with lots of other things we do. Have you ever mic’ed a guitar amp? A drum? A vocalist? Of course you have. In all of those cases (and many others), you are effectively getting a post-EQ signal. Whether the tone controls are electronic, related to tuning, or just part of how someone sings, you are still subject to how those tonal choices are playing out. So, why are you willing to cut people the slack to make choices that affect your signal when it’s a mic that’s involved, but not a direct line?

2) There’s no reason to be afraid of letting people dial up an overall sound that they want. In fact, if it makes it easier on you, the audio-human, why would that be a bad thing? I’ve been in situations where a player was trying desperately to get their monitor mix to sound right, but was having to fight with an unfamiliar set of tone controls (a parametric EQ) through an engineer. It very well might have gone much faster to just have given the musician a good amount of level through their send, and then let them turn their own rig’s knobs until they felt happy. You can do that with a post-EQ line.

3) Along the same track, what if the player changes their EQ from song to song? What if there are FX going in and out that appear at the post-EQ split, but not from the pre-EQ option? Why throw all that work out the window, just to have “more control” at the console? That sounds like a huge waste of time and effort to me.

4) In any venue of even somewhat reasonable size, having pre-EQ control over the sound from an amplifier doesn’t mean as much as you think it might. If the player does call up a completely horrific, pants-wettingly terrible tone, the chances are that the amplifier is going to be making a LOT of that odious racket anyway. If the music is even somewhat loud, using your sweetly-tweaked, pre-EQ signal to blast over the caterwauling will just be overwhelming to the audience.

Ladies and gents, as I say over and over, we don’t have to fix everything – especially not by default. If we have the option, let’s trust the musicians and go post-EQ as our first attempt. If things turn out badly, toggling the switch takes seconds. (And even taking the other option might not be enough to fix things, so take some deep breaths.) If things go well, we get to ride the momentum of what the players are doing instead of swimming upstream. I say that’s a win.


A Guided Tour Of Feedback

It’s all about the total gain from the microphone’s reference point.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.


This site is mostly about live audio, and as such, I talk about feedback a lot. I’m used to the idea that everybody here has a pretty good idea of what it is.

But, every so often, I’ll do a consulting gig and be reminded that feedback can be a mysterious and unknown force. So, for those of you who are totally flummoxed by feedback monsters, this article exists for your specific benefit.

All Locations Harbor Dragons

The first thing to say is this: Any PA system with real mics on open channels, and in a real room, is experiencing feedback all the time. Always.

Feedback is not a phenomenon which appears and disappears. It may or may not be a problem at any particular moment in time. You may or may not be able to hear anything like it at a given instant. Even so, any PA system that is doing anything with a microphone is guaranteed to be in a feedback loop.

What matters, then, is the behavior of the signal running through that loop. If the signal is decaying into the noise floor before you can notice it, then you DO have feedback, but you DON’T have a feedback problem. If the signal is dropping slowly enough for you to notice some lingering effects, you are beginning to have a problem. If the signal through the feedback loop isn’t dropping at all, then you are definitely having a problem, and if the looped signal level is growing, you have a big problem that is only getting bigger.

Ouroboros

If every PA system is a dragon consuming its own tail – an ouroboros – then how does that self-consuming action take place?

It works like this:

1) A sound is made in the room.
2) At least one microphone converts that sound into electricity.
3) The electricity is passed through a signal chain.
4) At the end of the chain is the microphone’s counterpart, which is a loudspeaker.
5) The loudspeaker converts the signal into a sound in the room.
6) The sound in the room travels through direct and indirect paths to the same microphone(s) as above.
7) The new sound in the room, which is a reproduction of the original event, is converted into electricity.

The loop continues forever, or until the loop is broken in some way. The PA system continually plays a copy of a copy of a copy (etc) of the original sound.

How Much Is The Dragon Being Fed?

What ultimately determines whether your feedback dragon is manageable is the apparent gain from the microphone’s reference point.

Notice that I did NOT simply say “the gain applied to the microphone.”

The gain applied to the microphone certainly has a direct and immediate influence on the apparent gain from the mic’s frame of reference. If all other variables are held constant, then greater applied gain will reliably move you closer toward an audible feedback issue. Even so, the applied gain is not the final predictor of ringing, howling, screeching, or any other unkind noise.

What really matters is the apparent gain at the capsule(s).


Gain in “absolute” terms is a signal multiplier. A gain of 1, which may be referred to as “unity,” is when the signal level coming out of a system (or system part) is equal to the signal level going in. Multiplying a signal by 1 leaves it unchanged. A gain of less than 1 (but more than zero) means that signal level drops across the in/ out junction, and a gain of greater than 1 indicates an increase in signal strength.

A gain multiplier of zero means a broken audio circuit. Gain multipliers of less than zero invert polarity; for those, whether the absolute value is greater or less than 1 determines whether the signal gets stronger or weaker.

Of course, audio humans are more used to gain expressed in decibels. A gain multiplier of 1 is 0 dB, where the input signal (the reference) is equal to the output. Gain multipliers greater than 1 have positive decibel values, and negative dB values are assigned to multipliers less than 1. “Negative infinity” gain is a multiplier of 0.
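In code form, the conversions just described look like this (a quick Python sketch):

import math

def gain_to_db(multiplier):
    # A multiplier of exactly 0 ("negative infinity" gain) would raise an error here.
    return 20 * math.log10(abs(multiplier))

def db_to_gain(db):
    return 10 ** (db / 20)

print(gain_to_db(1.0))   # 0.0 dB ("unity")
print(gain_to_db(0.5))   # about -6 dB
print(db_to_gain(6.0))   # about 2.0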


The apparent gain as referenced by the pertinent microphone(s) is what can also be referred to as “loop gain.” The more the reproduced sonic event “gets back into” the mic, the higher that loop gain appears to be. The loop gain is applied at every iteration through the loop, with each iteration taking some amount of time to occur. If the time for a sonic event to be reproduced and arrive back at the capsule is short, then feedback will build aggressively when the loop gain is positive, but also drop quickly when the loop gain is negative.
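To get a feel for how quickly that builds or dies, here’s a tiny Python sketch. The 20-millisecond trip time is an arbitrary example, not a measurement of any particular rig.

def level_after(seconds, loop_gain_db, trip_time=0.02):
    # Level in dB, relative to the original event, after repeated trips around the loop.
    trips = seconds / trip_time
    return trips * loop_gain_db

print(level_after(1.0, 1.0))    # +50 dB after one second: runaway howl
print(level_after(1.0, -1.0))   # -50 dB after one second: the ring dies into the noise floor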

Loop gain, as you might expect, increases with greater electronic gain. It also increases as a mic’s polar pattern becomes wider, because the mic has greater sensitivity at any given arrival angle. Closer proximity to a source of reproduced sound also increases apparent gain, due to the apparent intensity of a sound source being higher at shorter distances. Greater room reflectivity is another source of higher loop gain; more of the reproduced sound is being redirected towards the capsule. Lastly, a frequency in phase with itself through the loop will have greater apparent gain than if it’s out of phase.

This is why it’s much, much harder to run monitor world in a small, “live” space than in a large, nicely damped space – or outside. It’s also why a large, reflective object (like a guitar) can suddenly put a system into feedback when all the angles become just right. The sound coming from the monitor hits the guitar, and then gets bounced directly into the most sensitive part of the mic’s polar pattern.

Dragon Taming

With all that on the table, then, how do you get control over such a wild beast?

Obviously, reducing the system’s drive level will help. Pulling the preamp or send level down until the loop gain becomes negative is very effective – and this is a big reason for bands to work WITH each other. Bands that avoid being “too loud for themselves” have fewer incidences of channels being run “hot.” Increasing the distance from the main PA to the microphones is also a good idea (within reason and practicality), as is an overall setup where the low-sensitivity areas of microphone polar patterns are pointed at any and all loudspeakers. In that same vein, using mics with tighter polar patterns can offer a major advantage, as long as the musicians can use those mics effectively. Adding heavy drape to a reflective room may be an option in some cases.

Of course, when all of that’s been done and you still need more level than your feedback monster will let you have, it’s probably time to break out the EQ.

Equalization can be effective with many feedback situations, because loop gain is almost never equal at all frequencies. In almost any situation that you will encounter in real life, one frequency will end up having the highest loop gain at any particular moment. That frequency, then, will be the one that “rings.”

The utility of EQ is that you can reduce a system’s electronic gain in a selected bandwidth. Preamp levels, fader levels, and send levels are all full-bandwidth controls – but if only a small part of the audible spectrum is responsible for your troubles, it’s much better to address that problem specifically. Equalizers offering smaller bandwidths allow you to make cuts in problem areas without wrecking everything else. At the same time, very narrow filters can be hard to place effectively, and a change in phase over time can push a feedback frequency out of the filter’s effective area.
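As an illustration of the narrow-cut idea, here’s a small Python sketch using SciPy’s notch designer. The frequency and Q are arbitrary, and a true notch is a blunter tool than a few-dB parametric cut, but the principle is the same: pull down only the band that’s misbehaving.

import numpy as np
from scipy.signal import iirnotch, lfilter

fs = 48000
ring_freq = 2500.0   # the frequency that's taking off
q = 30.0             # narrow enough to leave the rest of the spectrum mostly alone

b, a = iirnotch(ring_freq, q, fs=fs)
problem_send = np.random.randn(fs)        # stand-in for the troublesome monitor send
notched_send = lfilter(b, a, problem_send)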

EQ as a feedback management device – like everything else – is an exercise in tradeoffs. You might be able to pull off some real “magic” in terms of system stability at high gain, but the mics might sound terrible afterwards. You can easily end up applying so many filters that reducing a full-bandwidth control’s level would do basically the same thing.

In general, doing as much as possible to tame your feedback dragon before the EQ gets involved is a very good idea. You can then use equalization to tamp down a couple of problem spots, and be ready to go.