Tag Archives: EQ

EQ Propagation

The question of where to EQ is, of course, tied inextricably to what to EQ.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.


On occasion, I get the opportunity to guest-lecture to live-sound students. When things go the way I want them to, the students get a chance to experience the dialing up of monitor world (or part of it). One of the inevitable and important questions that arises is, “Why did you reach for the channel EQ when you were solving that one problem, but then use the EQ across the bus for this other problem?”

I’ve been able to give good answers to those questions, but I’ve also wanted to offer better explanations. I think I’ve finally hit upon an elegant way to describe my decision-making process with regard to which EQ I use to solve different problems. It turns out that everything comes down to the primary “propagation direction” that I want for a given EQ change:

Effectively speaking, equalization on an input propagates downstream to all outputs. Equalization on an output effectively propagates upstream to all inputs.


What I’ve just said is, admittedly, rather abstract. That being so, let’s take a look at it concretely.

Let’s say we’re in the process of dialing up monitor world. It’s one of those all-too-rare occasions where we get the chance to measure the output of our wedges and apply an appropriate tuning. That equalization is applied across the appropriate bus. What we’re trying to do is equalize the box itself, so we can get acoustical output that follows a “reference curve.” (I advocate for a flat reference curve, myself.)

It might seem counter-intuitive, but if we’re going to tune the wedge electronically, what we actually have to do is transform all of the INPUTS to the box. Changing the loudspeaker itself to get our preferred reference curve would be ideal, but also very difficult. So, we use an EQ across a system output to change all the signals traveling to the wedge, counteracting the filtering that the drivers and enclosure impose on whatever makes it to them. If the monitor is making everything too crisp (for example), the “output” EQ lets us effectively dial high-frequency information out of every input traveling to the wedge.

Now, we put the signal from a microphone into one of our wedges. It starts off sounding generally good, although the channel in question is a vocal and we can tell there’s too much energy in the deep, low-frequency area. To fix the problem, we apply equalization to the microphone’s channel – the input. We want the exact change we’ve made to apply to every monitor that the channel might be sent to, and EQ across an input effectively transforms all the outputs that signal might arrive at.

There’s certainly nothing to stop us from going to each output EQ and pulling down the LF, but:

1) If we have a lot of mixes to work with, that’s pretty tedious, even with copy and paste, and…

2) We’ve now pushed away from our desired reference curve for the wedges, potentially robbing desired low-end information from inputs that would benefit from it. A ton of bottom isn’t necessary for vocals on deck, but what if somebody wants bass guitar? Or kick?

It makes much more sense to make the change at the channel if we can.

This also applies to the mud and midrange feedback weirdness that tends to pile up as one channel gets routed to multiple monitors. The problems aren’t necessarily the result of individual wedges being tuned badly. Rather, they are the result of multiple tunings interacting in a way that’s “wrong” for one particular mic at one particular location. What we need, then, is to EQ our input. The change then propagates to all the outputs, creating an overall solution with relative ease (and, again, we haven’t carved up each individual monitor’s curve into something that sounds weird in the process).

The same idea applies to FOH. If the whole mix seems “out of whack,” then a change to the main EQ effectively tweaks all the inputs to fix the offending frequency range.

So, when it’s time to grab an EQ, think about which way you want your changes to flow. Changes to inputs flow to all the connected outputs. Changes to outputs flow to all connected inputs.
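If it helps to see the idea in code, here’s a minimal sketch. The mixer model is a toy (scalar per-band gains standing in for real filters, with made-up channel and wedge names), but it shows the two propagation directions:

```python
# Toy model of EQ propagation. Gains are plain multipliers standing in
# for real filters; the channel and wedge names are hypothetical.

channel_eq = {"vocal": 0.5, "kick": 1.0}   # input EQ, e.g. a vocal LF cut
bus_eq = {"wedge_1": 0.8, "wedge_2": 1.2}  # output EQ, e.g. wedge tunings

sends = {  # which inputs feed which outputs, and at what level
    "vocal": {"wedge_1": 1.0, "wedge_2": 1.0},
    "kick":  {"wedge_1": 0.0, "wedge_2": 1.0},
}

def level_at_output(ch, out):
    # Input EQ is applied once, then rides along to every output the
    # channel feeds; output EQ acts on every input arriving at that bus.
    return channel_eq[ch] * sends[ch][out] * bus_eq[out]

for out in bus_eq:
    for ch in channel_eq:
        print(f"{ch} -> {out}: {level_at_output(ch, out):.2f}")
```

Change channel_eq["vocal"] and the vocal moves in every wedge at once; change bus_eq["wedge_2"] and everything arriving at that wedge moves together.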


The Difference Between The Record And The Show

Why is it that the live mix and the album mix end up being done differently?


Jason Knoell runs H2Audio in Utah, and he recently sent me a question that essentially boils down to this: If you have a band with some recordings, you can play those recordings over the same PA in the same room as the upcoming show. Why is it that the live mix of that band, in that room, with that PA might not come together in the same way as the recording? The recording that you just played over the rig? Why would you NOT end up having the same relationship between the drums and guitars, or the guitars and the vocals, or [insert another sonic relationship here]?

This is one of those questions where trying to address every tiny little detail isn’t practical. I will, however, try to get into the major factors I can readily identify. Please note that I’m ignoring room acoustics, as those are a common factor between a recording and a live performance being played into the same space.

Magnitude

It’s very likely that the recording you just pumped out over FOH (Front Of House) had a very large amount of separation between the various sources. Sure, the band might have recorded the songs in such a way as to all be together in one room, but even then, the “bleed” factor is very likely to be much smaller than what you get in a live environment. For instance, a band that’s in a single-room recording environment can be set up with gobos (go-betweens) screening the amps and drums. The players can also be physically arranged so that any particular mic has everything else approaching the element from off-axis.

They also probably recorded using headphones for monitors, and overdubbed the “keeper” vocals. They may also have gone for extreme separation and overdubbed EVERYTHING after putting down some basics.

Contrast this with a typical stage, where we’re blasting away with wedge loudspeakers, we have no gobos to speak of, and all the backline is pointed at the sensitive angles of the vocal mics. Effectively, everything is getting into everything else. Even if we oversimplify and look only at the relative magnitudes between sounds, it’s possible to recognize that there’s a much smaller degree of source-to-source distinctiveness. The band’s signals have been smashed together, and even if we “get on the gas” with the vocals, we might also be effectively pushing up part of the drumkit, or the guitars.

Time

Along with magnitude, we also have a time problem. With as much bleed as is likely in play, the oh-so-critical transients that help create vocal and musical intelligibility are very, very smeared. We might have a piece of backline, or a vocal, “arriving” at the listener several times over in quick succession. The recording, on the other hand, has far more sharply defined “timing information.” This can very likely lead to a requirement that vocals and lead parts be mixed rather hotter live than they would be otherwise. That is, I’m convinced that a “conservation of factors” situation exists: If we lose separation cues that come from timing, the only way to make up the deficit is through volume separation.

A factor that can make the timing problems even worse is those wedge monitors we’re using, combined with the PA handling reproduction out front. Not only are all the different sources getting into each other at different times, but sources being run at high gain also arrive at their own mics several times at significant levels (until the loop decay becomes large enough to render the arrivals inaudible). This further “blurs” the timing information we’re working with.

Processing Limits

Because live audio happens in a loop that is partially closed, we can be rather more constrained in what we can do to a signal. For instance, it may be that the optimal choice for vocal separation would simply be a +3 dB, one-octave wide filter at 1 kHz. Unfortunately, that may also be the portion of the loop’s bandwidth that is on the verge of spiraling out of control like a jet with a meth-addicted Pomeranian at the controls. So, again, we can’t get exactly the same mix with the same factors. We might have to actually cut 1 kHz and just give the rest of the signal a big push.

Also, the acoustical contribution of the band limits the effectiveness of our processing. On the recording, a certain amount of compression on the snare might be very effective; all we hear is the playback with that exact dynamics solution applied. With everything live in the room, however, we hear two things: The reproduction with compression, and the original, acoustic sound without any compression at all. In every situation where the in-room sound is a significant factor, what we’re really doing is parallel compression/ EQ/ gating/ etc. Even our mutes are parallel – the band doesn’t simply drop into silence if we close all the channels.
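To put some rough numbers on that, here’s a quick sketch of how two incoherent sources combine on a power basis. The levels are assumptions picked purely for illustration:

```python
import math

def combine_spl(*levels_db):
    # Incoherent (power-basis) sum of sound pressure levels, in dB.
    return 10 * math.log10(sum(10 ** (db / 10) for db in levels_db))

band_acoustic = 95.0     # dB SPL straight off the backline and drums
pa_reproduction = 100.0  # dB SPL of the processed copy through the PA

print(combine_spl(band_acoustic, pa_reproduction))  # ~101.2 dB: we hear both
print(combine_spl(band_acoustic))  # "mute" the PA: the band is still at 95 dB
```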


Try as we might, live-sound humans can rarely exert the same amount of control over audio reproduction that a studio engineer has. In general, we are far more at the mercy of our environment. It’s very often impractical for us to simply duplicate the album mix and receive the same result (only louder).

But that’s just part of the fun, if you think about it.


Case Study: Creating A Virtual Guitar Rig In An Emergency

Distortion + filtering = something that can pass as a guitar amplifier in an emergency.


The Script

Imagine the scene: You’re setting up a band that has exactly one player with an electric guitar. They get to the gig, and suddenly discover a problem: The power supply for their setup has been left at home. Nobody has a spare, because it’s a specialized power supply – and nobody else plays an electric guitar anyway. The musician in question has no way to get a guitar sound without their rig.

At all.

As in, what they have that you can work with is a guitar and a cable. That’s it.

So, what do you do?

Well, in the worst-case scenario, you just find a direct box, run the guitar completely dry, and limp through it all as best you can.

But that’s not your only option. If you’re willing to get a little creative, you can do better than just having everybody grit their teeth and suffer. To get creative, you need to be able to take their guitar rig apart and put it back together again.

Metaphorically, I mean. You can put the screwdriver away.

What I’m getting at is this question: If you break the guitar rig into signal-processing blocks, what does each block do?

When it comes right down to it, a super-simple guitar amp amounts to three things: Some amount of distortion (including no distortion at all), tone controls, and an output filter stack.
The first two parts might make sense, but what’s that third bit?

The output filtering is either an actual loudspeaker, or something that simulates a loudspeaker for a direct feed. If you remove a speaker’s conversion of electricity to sound pressure waves, what’s left over is essentially a non-adjustable equalizer. Take a look at this frequency-response plot for a 12″ guitar speaker by Eminence: It’s basically a 100 Hz to 5 kHz bandpass filter with some extra bumps and dips.

It’s a fair point to note that different guitar amps and amp sims may have these different blocks happening in different orders. Some might forget about the tone-control block entirely. Some might have additional processing available.

Now then.

The first thing to do is to find an active DI, if you can. Active DI boxes have very high input impedances, which (in short) means that just about any guitar pickup will drive that input without a problem.

Next, if you’re as lucky as I am, you have at your disposal a digital console with a guitar-amp simulation effect. The simulator puts all the processing I talked about into a handy package that gets inserted into a channel.

What if you’re not so lucky, though?

The first component is distortion. If you can’t get distortion that’s basically agreeable, you should skip it entirely. If you must generate your own clipping, your best bet is to find some analog device that you can drive hard. Overloading a digital device almost always sounds terrible, unless that digital device is meant to simulate some other type of circuit.
For instance, if you can dig up an analog mini-mixer, you can drive the snot out of both the input and output sides to get a good bit of crunch. (You can also use far less gain on either or both ends, if you prefer.)

Of course, the result of that sounds pretty terrible. The distortion products are unfiltered, so there’s a huge amount of information up in the high reaches of the audible spectrum. To fix that, let’s put some guitar-speaker-esque filtering across the whole business. A high-pass and a low-pass filter, plus a parametric boost in the high mids, will help us recreate what a 12″ driver might do.
Now that we’ve done that, we can add another parametric filter to act as our tone control.
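For the curious, here’s a rough sketch of that whole chain in code. The drive amount, filter types, and frequencies are assumptions meant to approximate the blocks described above, not a recreation of any particular amp:

```python
import numpy as np
from scipy import signal

FS = 48_000  # sample rate (Hz); an assumption for this sketch

def peaking(f0, gain_db, q, fs=FS):
    # RBJ-cookbook peaking EQ biquad (b, a) coefficients.
    a_lin = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return b / a[0], a / a[0]

def fake_guitar_rig(dry, drive=8.0, tone_db=3.0):
    # 1) Distortion block: analog-ish soft clipping.
    x = np.tanh(drive * dry)
    # 2) "Speaker" block: ~100 Hz to 5 kHz bandpass, plus a high-mid
    #    bump of the sort a 12" guitar driver tends to show.
    b, a = signal.butter(2, [100, 5000], btype="bandpass", fs=FS)
    x = signal.lfilter(b, a, x)
    b, a = peaking(2500, 4.0, 1.5)
    x = signal.lfilter(b, a, x)
    # 3) Tone control: one adjustable parametric band.
    b, a = peaking(800, tone_db, 1.0)
    return signal.lfilter(b, a, x)

t = np.arange(FS) / FS
di_signal = 0.3 * np.sin(2 * np.pi * 110 * t)  # stand-in for a DI'd guitar
wet = fake_guitar_rig(di_signal)
```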

And there we go! It may not be the greatest guitar sound ever created, but this is an emergency and it’s better than nothing.

There is one more wrinkle, though, and that’s monitoring. Under normal circumstances, our personal monitoring network gets its signals just after each channel’s head amp. Usually that’s great, because nothing I do with a channel that’s post the mic pre ends up directly affecting the monitors. In this case, however, it was important for me to switch the “monitor pick point” on the guitar channel to a spot that was post all my channel processing – but still pre-fader.

In your case, this may not be a problem at all.

But what if it is, and you don’t have very much flexibility in picking where your monitor sends come from?

If you’re in a real bind, you could switch the monitor send on the guitar channel to be post-fader. Set the fader at a point you can live with, and then assign the channel output to an otherwise unused subgroup. Put the subgroup through the main mix, and use the subgroup fader as your main-mix level control for the guitar. You’ll still be able to tweak the level of the guitar in the mix, but the monitor mixes won’t be directly affected if you do.


Sounding “Good” Everywhere

This is actually about studio issues, but hey…


My latest article for Schwilly Family Musicians has to do with the recorded side of life. Even so, I thought some of you might be interested:

‘Even before the age of smartphones, “translation” was a big issue for folks making records. The question that was constantly asked was, “How do I make this tune sound good everywhere?”

In my mind, that’s the wrong question.

The real question is, “Does this mix continue to make sense, even if the playback system has major limitations?”’


Read the whole piece here.


EQ: Separating The Problems

You have to know what you’re solving if you want to solve a problem effectively.


There’s a private area on Facebook for “musicpreneurs” to hang out in. I’ve been trying to get more involved, so I’ve asked people to pose their sonic quandaries to me. One person was asking how to set up their system so as to get a certain, desired tone from their instrument.

I won’t rehash the whole answer here, but I will definitely tell you the key to my answer: Separate your problems. Figure out which “domain” a particular issue resides in, and then work within that area to find a solution.

That’s a statement that you can definitely generalize, but the particular discussion was mostly in the context of equalization. Equalization of live-audio signal chains seems to invite unfocused flailing at least as much as anything else. Somebody gets into a jam (not a jam session, but rather a difficult situation), and they start tweaking every tonal control they can get their hands on. Several minutes later, they’ve solved and unsolved several different problems, and might be happy with some part of their fix. Of course, they may have broken something else in the process.

If you’re like me, you’d prefer not to do that.

Not doing that involves being very clear about where your problem actually is.


Lots of people use the “wrong” EQ to address a perceived shortcoming with their sound. I think I’ve mentioned before that a prime place to find this kind of approach is vocal processors. I’ve encountered more than one person who, as far as I could tell, was trying to fix a PA system through the processing of an individual channel. That is, at a regular gig or rehearsal, they were faced with a system that exhibited poor tonality. For instance, for whatever reason, they might have felt that the PA lacked high-end crispness.

So, they reach down to their processor, and throw a truckload of high-frequency boost onto their voice. Problem solved!

Except they just solved the problem everywhere, even if the problem doesn’t exist everywhere. They plug that vocal processor into a rig which has been nicely tuned, and now their voice is a raspy, irritating, sand-paper-esque noise that’s constantly on the verge of hard feedback.

They used channel-specific processing to manage a system-level problem, and the result was a channel that only works with one system – or a system with one channel that sounds right, while everything else is still a mess. They found a fix, but the fix was in the wrong domain.

The converse case of this is also common. An engineer gets into a bind when listening to a channel or two, and reaches for the EQ across the main speakers. Well, no problem…except that any new solution has now been applied to EVERYTHING running through the mains. That might be helpful, or it might mean that a whole new hole has just been dug. If the PA is well-tuned, then the problem isn’t the PA. Rather, the thing to solve is specific to a channel or group of channels, and should be addressed there if possible.

If you find yourself gunning the bottom end on every channel of your console, you’d be better served by changing the main EQ instead. If everything sounds fine except for one channel, leave the main processing alone and build a fix specific to your problem-child.

Obviously, there are “heat of the moment” situations where you just have to grab-n-go. At the same time, taking a minute to figure out which bridge actually has the troll living under it is a big help. Find the actual offender, correct that offender, leave everything else alone, and get better results overall.


Pre Or Post EQ?

Stop agonizing and just go with post to start.


Oh, the hand-wringing.

Should the audio-human take the pre-EQ split from the amplifier, or the post-EQ split? Isn’t there more control if we choose pre-EQ? If we choose incorrectly, will we ruin the show? HELP!

Actually, I shouldn’t be so dismissive. Shows are important to people – very important, actually – and so taking some time to chew on the many and various decisions involved is a sign of respect and maturity. If you’re actually stopping to think about this, “good on ya.”

What I will not stop rolling my eyes at, though, are live-sound techs who get their underwear mis-configured over not getting a pre-EQ feed from the bass/ keys/ guitar/ whatever. Folks, let’s take a breath. Getting a post-EQ signal is generally unlikely to sink any metaphorical ship, sailboat, or inflatable canoe that we happen to be paddling. In fact, I would say that we should tend to PREFER a post-EQ direct line. Really.


First of all, if this terminology sounds mysterious, it really isn’t. You almost certainly know that “pre” means “before” and “post” means “after.” If you’re deducing that setting a line-out to “pre-EQ” gets you a signal from before the EQ happens, you’re right. You’re also right in thinking that post-EQ splits happen after all the EQ tweaking has been applied to the signal.

And I think we should generally be comfortable with, and even gravitate toward, getting our feed to the console from a point which has the EQ applied.

1) It’s consistent with lots of other things we do. Have you ever mic’ed a guitar amp? A drum? A vocalist? Of course you have. In all of those cases (and many others), you are effectively getting a post-EQ signal. Whether the tone controls are electronic, related to tuning, or just part of how someone sings, you are still subject to how those tonal choices are playing out. So, why are you willing to cut people the slack to make choices that affect your signal when it’s a mic that’s involved, but not a direct line?

2) There’s no reason to be afraid of letting people dial up an overall sound that they want. In fact, if it makes it easier on you, the audio-human, why would that be a bad thing? I’ve been in situations where a player was trying desperately to get their monitor mix to sound right, but was having to fight with an unfamiliar set of tone controls (a parametric EQ) through an engineer. It very well might have gone much faster to just have given the musician a good amount of level through their send, and then let them turn their own rig’s knobs until they felt happy. You can do that with a post-EQ line.

3) Along the same track, what if the player changes their EQ from song to song? What if there are FX going in and out that appear at the post-EQ split, but not from the pre-EQ option? Why throw all that work out the window, just to have “more control” at the console? That sounds like a huge waste of time and effort to me.

4) In any venue of even somewhat reasonable size, having pre-EQ control over the sound from an amplifier doesn’t mean as much as you think it might. If the player does call up a completely horrific, pants-wettingly terrible tone, the chances are that the amplifier is going to be making a LOT of that odious racket anyway. If the music is even somewhat loud, using your sweetly-tweaked, pre-EQ signal to blast over the caterwauling will just be overwhelming to the audience.

Ladies and gents, as I say over and over, we don’t have to fix everything – especially not by default. If we have the option, let’s trust the musicians and go post-EQ as our first attempt. If things turn out badly, toggling the switch takes seconds. (And even taking the other option might not be enough to fix things, so take some deep breaths.) If things go well, we get to ride the momentum of what the players are doing instead of swimming upstream. I say that’s a win.


A Guided Tour Of Feedback

It’s all about the total gain from the microphone’s reference point.


This site is mostly about live audio, and as such, I talk about feedback a lot. I’m used to the idea that everybody here has a pretty good idea of what it is.

But, every so often, I’ll do a consulting gig and be reminded that feedback can be a mysterious and unknown force. So, for those of you who are totally flummoxed by feedback monsters, this article exists for your specific benefit.

All Locations Harbor Dragons

The first thing to say is this: Any PA system with real mics on open channels, and in a real room, is experiencing feedback all the time. Always.

Feedback is not a phenomenon which appears and disappears. It may or may not be a problem at any particular moment in time. You may or may not be able to hear anything like it at a given instant. Even so, any PA system that is doing anything with a microphone is guaranteed to be in a feedback loop.

What matters, then, is the behavior of the signal running through that loop. If the signal is decaying into the noise floor before you can notice it, then you DO have feedback, but you DON’T have a feedback problem. If the signal is dropping slowly enough for you to notice some lingering effects, you are beginning to have a problem. If the signal through the feedback loop isn’t dropping at all, then you are definitely having a problem, and if the looped signal level is growing, you have a big problem that is only getting bigger.

Ouroboros

If every PA system is a dragon consuming its own tail – an ouroboros – then how does that self-consuming action take place?

It works like this:

1) A sound is made in the room.
2) At least one microphone converts that sound into electricity.
3) The electricity is passed through a signal chain.
4) At the end of the chain is the microphone’s counterpart, which is a loudspeaker.
5) The loudspeaker converts the signal into a sound in the room.
6) The sound in the room travels through direct and indirect paths to the same microphone(s) as above.
7) The new sound in the room, which is a reproduction of the original event, is converted into electricity.

The loop continues forever, or until the loop is broken in some way. The PA system continually plays a copy of a copy of a copy (etc) of the original sound.

How Much Is The Dragon Being Fed?

What ultimately determines whether or not your feedback dragon is manageable is the apparent gain from the microphone’s reference point.

Notice that I did NOT simply say “the gain applied to the microphone.”

The gain applied to the microphone certainly has a direct and immediate influence on the apparent gain from the mic’s frame of reference. If all other variables are held constant, then greater applied gain will reliably move you closer toward an audible feedback issue. Even so, the applied gain is not the final predictor of ringing, howling, screeching, or any other unkind noise.

What really matters is the apparent gain at the capsule(s).


Gain in “absolute” terms is a signal multiplier. A gain of 1, which may be referred to as “unity,” means that the signal level coming out of a system (or system part) is equal to the signal level going in. A signal level × 1 is the same signal level. A gain of less than 1 (but more than zero) means that signal level drops across the in/ out junction, and a gain of greater than 1 indicates an increase in signal strength.

A gain multiplier of zero means a broken audio circuit. Gain multipliers of less than zero indicate inverted polarity, with the absolute value (relative to 1) determining whether the signal intensity increases or decreases.

Of course, audio humans are more used to gain expressed in decibels. A gain multiplier of 1 is 0 dB, where the input signal (the reference) is equal to the output. Gain multipliers greater than 1 have positive decibel values, and negative dB values are assigned to multipliers less than 1. “Negative infinity” gain is a multiplier of 0.
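In code, the multiplier-to-decibel relationship looks like this (a small sketch; the function name is just for illustration):

```python
import math

def multiplier_to_db(g):
    # Gain in dB = 20 * log10(multiplier). A multiplier of 0 would be
    # negative infinity, so it isn't handled here.
    return 20 * math.log10(abs(g))  # abs(): polarity doesn't change level

print(multiplier_to_db(1.0))   # 0.0 dB (unity)
print(multiplier_to_db(2.0))   # ~ +6.02 dB
print(multiplier_to_db(0.5))   # ~ -6.02 dB
print(multiplier_to_db(-1.0))  # 0.0 dB, but with inverted polarity
```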


The apparent gain as referenced by the pertinent microphone(s) is what can also be referred to as “loop gain.” The more the reproduced sonic event “gets back into” the mic, the higher that loop gain appears to be. The loop gain is applied at every iteration through the loop, with each iteration taking some amount of time to occur. If the time for a sonic event to be reproduced and arrive back at the capsule is short, then feedback will build aggressively when the loop gain is positive (in dB terms), but also drop quickly when the loop gain is negative.
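Here’s a toy simulation of that looping behavior. The loop delay and the gain figures are assumptions picked to keep the arithmetic obvious:

```python
# Toy feedback-loop simulation: one level value per trip around the loop.
loop_delay_ms = 15.0  # assumed time for one pass (mic -> PA -> mic)
loop_gain_db = -3.0   # try +1.0 here to watch the level grow instead

level_db = 0.0  # the original sonic event, as a reference level
for trip in range(1, 11):
    level_db += loop_gain_db  # loop gain is applied on every iteration
    print(f"{trip * loop_delay_ms:5.0f} ms: {level_db:+.1f} dB")
# At -3 dB per 15 ms pass, the event falls 60 dB in 0.3 s: no audible
# problem. At +1 dB per pass, it climbs 60 dB in under a second: a squeal.
```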

Loop gain, as you might expect, increases with greater electronic gain. It also increases as a mic’s polar pattern becomes wider, because the mic has greater sensitivity at any given arrival angle. Closer proximity to a source of reproduced sound also increases apparent gain, due to the apparent intensity of a sound source being higher at shorter distances. Greater room reflectivity is another source of higher loop gain; more of the reproduced sound is being redirected towards the capsule. Lastly, a frequency in phase with itself through the loop will have greater apparent gain than if it’s out of phase.

This is why it’s much, much harder to run monitor world in a small, “live” space than in a large, nicely damped space – or outside. It’s also why a large, reflective object (like a guitar) can suddenly put a system into feedback when all the angles become just right. The sound coming from the monitor hits the guitar, and then gets bounced directly into the most sensitive part of the mic’s polar pattern.

Dragon Taming

With all that on the table, then, how do you get control over such a wild beast?

Obviously, reducing the system’s drive level will help. Pulling the preamp or send level down until the loop gain becomes negative is very effective – and this is a big reason for bands to work WITH each other. Bands that avoid being “too loud for themselves” have fewer incidences of channels being run “hot.” Increasing the distance from the main PA to the microphones is also a good idea (within reason and practicality), as is an overall setup where the low-sensitivity areas of microphone polar patterns are pointed at any and all loudspeakers. In that same vein, using mics with tighter polar patterns can offer a major advantage, as long as the musicians can use those mics effectively. Adding heavy drape to a reflective room may be an option in some cases.

Of course, when all of that’s been done and you still need more level than your feedback monster will let you have, it’s probably time to break out the EQ.

Equalization can be effective with many feedback situations, due to loop gain commonly being notably NOT equal at all frequencies. In almost any situation that you will encounter in real life, one frequency will end up having the highest loop gain at any particular moment. That frequency, then, will be the one that “rings.”

The utility of EQ is that you can reduce a system’s electronic gain in a selected bandwidth. Preamp levels, fader levels, and send levels are all full-bandwidth controls – but if only a small part of the audible spectrum is responsible for your troubles, it’s much better to address that problem specifically. Equalizers offering smaller bandwidths allow you to make cuts in problem areas without wrecking everything else. At the same time, very narrow filters can be hard to place effectively, and a change in phase over time can push a feedback frequency out of the filter’s effective area.
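As a sketch of what that looks like, here’s a narrow peaking cut built from the widely published “RBJ cookbook” biquad formulas. The ring frequency and the filter settings are hypothetical:

```python
import numpy as np
from scipy import signal

def peaking(f0, gain_db, q, fs=48_000):
    # RBJ-cookbook peaking biquad; negative gain_db gives a cut.
    a_lin = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return b / a[0], a / a[0]

# A -9 dB, narrow (Q = 10) cut at a hypothetical 2.1 kHz ring frequency.
b, a = peaking(2100, -9.0, 10)
w, h = signal.freqz(b, a, worN=[1000, 2100, 4000], fs=48_000)
for f, resp in zip(w, np.abs(h)):
    print(f"{f:6.0f} Hz: {20 * np.log10(resp):+.1f} dB")
# The cut lands almost entirely at 2.1 kHz; an octave away, the signal
# is barely touched, unlike pulling down a full-bandwidth fader.
```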

EQ as a feedback management device – like everything else – is an exercise in tradeoffs. You might be able to pull off some real “magic” in terms of system stability at high gain, but the mics might sound terrible afterwards. You can easily end up applying so many filters that reducing a full-bandwidth control’s level would do basically the same thing.

In general, doing as much as possible to tame your feedback dragon before the EQ gets involved is a very good idea. You can then use equalization to tamp down a couple of problem spots, and be ready to go.


Hard Surface Blues

The Acoustical Environment: It matters.


On the 30th of December, 2015, my (previously) regular gig hosted its final show. The building is going away, and we had to be out. We had already done the final mainstage show in the downstairs venue, but we had to have one more blowout to celebrate as many of our musician friends as we possibly could.

The show was six and a half hours in length.

The show was also upstairs.

As the night went on, and also afterwards, I was relieved to hear that people were pleased with the way things sounded. I was relieved because, for most of the night, I was standing at the console with my metaphorical head in my hands, saying, “This is not working. This is NOT working. It’s all a garbled firestorm of reflections. This is not working.”

See, I was very much used to mixing in the mainstage room. That venue had a pretty fair amount of acoustical treatment to the stage area, yielding a “live-end, dead-end” sort of design that worked pretty nicely. With a mainstage show you still had plenty of monitor wash, but a LOT of on-deck audio really was soaked up before it started banging around the room like a rabid ping-pong ball. This was also helpful for the FOH PA, because a lot of its spill was also absorbed. Combine that with the live part of the room having a pretty-darn-okay “short, rock-n-roll reverb,” and it really wasn’t too hard to wrangle audio in the basement.

But the upstairs, oh my goodness…

The picture up there tells quite a bit of the story. No acoustical treatment. A flat, glass, upstage wall. All hard surfaces everywhere. Oh boy.

There are two things I want you to get from this.

Thing 1: The Acoustical Environment Has A Tremendous Impact On Your Show

Decent gear that’s right for the application really does matter, but in the end, the room acoustics make a lot of highly consequential decisions that are – shall we say – “tough to appeal.” The flattest, most sonically neutral PA and monitors in the world are still subject to whatever environment they’re fired into. A beautiful-sounding PA that’s being used in an acoustically hostile room is ALWAYS going to be a beautiful-sounding PA being used in an acoustically hostile room.

Room acoustics are so important, and have such a huge influence that there are pieces of music which were written specifically for certain BUILDINGS. As in, “we can play this organ piece on almost any decent pipe-organ, but it won’t actually sound right unless the organ is in this one church.”

Soviet reverb adjusts YOU, comrade.

Anyway.

When you drop a performance into a different room, the performance is going to sound different. Maybe wildly so. There are all kinds of things that can happen to you, but in general, be aware that more reflectivity tends to play greater havoc with music that’s fast, and/ or built on a “dense” arrangement. In order for that kind of music to work nicely, you have to be able to discern where different sounds begin and end. Reverberation works against that; it lengthens the decay-time of sonic events, causing those events to “smear” across themselves and each other. Don’t be surprised if you have to pull back on “supporting” sounds in order to provide adequate separation for critical, detail-driven audio (like vocals). With the room-sound filling in lots of small spaces that would otherwise provide some contrast, you may have to exaggerate some proportions in order to keep things intelligible.

And notice that I said, “pull back.” In a tough space, the answer is not to start at the usual volume and push the lead parts even higher. You may very well run out of PA, or monitors, or audience tolerance, or all of those before you arrive at a workable destination. Start quietly so you’ll have room to get things separated. “Headroom” is not just a term for electronics, you know.

Thing 2: No, You Can’t Fix Acoustics With EQ

Please realize that you CAN alleviate certain acoustical problems with EQ. Also, please realize that, in many situations, EQ will be the only tool you can meaningfully use to help deal with some room issues.

I’m not saying that EQ has no place, and not to use it.

What I AM saying is that EQ can’t actually fix acoustics.

I used my “flagship” console for the upstairs gig. My flagship console has more powerful and flexible EQ than any other mix-rig I’ve ever used. Believe me, I was making use of that power and flexibility for the show. The show would have been much worse if I hadn’t used the tools at my disposal…

…but the problems weren’t truly fixed. They were just made more tolerable.

Acoustical problems are “time” issues. A sonic event becomes a longer event, whether by audible, discrete repetition, smooth reverberation, or something in between. Equalization is a tool for changing magnitude. Yes, equalizers precipitate those magnitude changes by way of manipulating the time domain, but they are not a tool which is useful for managing time.

The best you can do with an equalizer is to get a troublesome reverberation buildup to drop into the “noise floor” faster. This is not because you’ve managed to fix the acoustical problem, but rather because your input to that acoustical problem has been reduced. The reverb time at, say, 500 Hz has not changed in any way. What you may have succeeded in doing is to make the events at 500 Hz a few dB quieter, such that their decaying intensity becomes less audible against everything else more quickly.
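A bit of rough arithmetic makes the point. Assuming a 2-second reverb time at 500 Hz:

```python
# Rough arithmetic. RT60 is the time a sound takes to decay by 60 dB;
# the 2-second figure is an assumption for the example.
rt60 = 2.0                # reverb time at 500 Hz, in seconds
decay_rate = 60.0 / rt60  # 30 dB of decay per second

cut_db = 6.0  # how far we pulled 500 Hz down with the EQ
print(cut_db / decay_rate)  # 0.2: the decay hits the floor 0.2 s sooner
# The room still rings for the same RT60; we've only started the decay
# 6 dB closer to inaudibility.
```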

In some rooms that can be enough, and it can be done without making the rig sound truly strange. At some point, though, the problems are so bad that getting the reflections to disappear into the noise floor via EQ creates bizarre results. A PA with a giant chasm torn into the midrange may not cause a huge buildup of reverb in that bandwidth, but it will also be painfully obvious that the PA has had all that midrange tossed into the abyss. (Reverberation is not the creation of sonic events at a certain frequency, but rather the lengthening of them. Kill all the bottom end in a PA to deal with a boomy room, and the sound will be very thin. Room acoustics don’t fill in what isn’t there. They combine with what is.)

To truly fix a problematic room, you either have to alter the room’s acoustical characteristics or find a way to avoid interacting with them. All that is beyond the scope of this article, so just walk away with this in mind: Venue acoustics matter a lot, being louder doesn’t help, and electronic processing may or may not be enough for you to reach the sonic destination you want.


Minimum Phase, Maximum Phase

Or…how you can learn to stop worrying and love your EQ.


There’s a famous “Rane Note” out there which discusses the myth of “minimum phase.” You might want to read it. Even if you do read it, though, it’s possible to be confused. The confusion seems to come from a misunderstanding regarding the scope of the conversation – and also from misunderstandings regarding phase.

With phase being a more fundamental issue, it seems best to start there.

Set Phasers To Kill!

Some people hear the word “phase” and come dangerously close to an aneurysm. They panic. They have learned to equate phase exclusively with problems.

Phase issues can indeed cause problems. Yes, phase issues are one of the primary reasons why collecting a whole pile of speakers and spraying sound everywhere in a room is NOT a recipe for success with a PA system.

The thing is, though, that phase shift is a natural occurrence in all sorts of ways, and can be used as a very handy tool with audio systems. It’s like “The Force” from “Star Wars.” It is not, in itself, good or evil. It can be used well or badly, to your benefit or detriment. As stated in the opening sentence of “Myth #6” from the Rane Note I linked: “Phase shift is not a bad word.” I believe I can offer a proof: The equalization that you use with your PA system, from channel EQ on up to system-management filters, is probably NOT phase linear.

That is to say that the large majority of “affordable, user friendly” equalizers that I have encountered are “phase warping.” They are one of two things. The first thing they can be is an analog filter, which creates a resonant circuit using capacitors, inductors, and resistors. The second thing they can be is a digital filter which models the behavior of a resonant circuit. In both cases, phase shift is a natural part of the design. It is, in fact, required by the design. If the design had no phase shift, it wouldn’t work as a filter. Someone would have forgotten all the capacitors and inductors (or their digital equivalents). You would have no EQ at all.

This also has to do with what I mentioned about “the scope of the conversation.”

Your “common, easy to use” EQ is almost certainly an implementation of “IIR” filters. The components “resonate” or “ring,” and in principle they do so until the system is de-energized. (There is, of course, a point where the ringing is so deep in the noise floor that it’s not worth bothering about.) There are such things as internally phase-linear equalizers. These equalizers use FIR filters, which do not rely on feedback for operation and can be constructed to operate without phase-shift. Because they are internally phase-linear, you CAN say that FIR based equalization is able to exhibit “less phase” than an IIR-based system. However, this statement is out of context when discussing the traditional “minimum phase” argument, which is a (pardon my crassness) pissing contest in the marketing realm of equalizers that use IIR filters. FIR implementations are outside of that context.

As an aside, I will also note that there is no such thing as a free lunch. You may have noticed that I used the words “internally phase-linear” when talking about FIR filters. I say that because FIR filters delay the signals they are working on. There is no internal phase shift because the entire bandwidth is subjected to the same delay time. However, if an FIR EQ is used, and delay compensation is neglected for unaffected signals, you will run into phase shift between the processed and unprocessed signals. The FIR-processed information will be “late.” (Whether this actually causes a real problem or not is related to your specific situation and is beyond the scope of this article.)
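The arithmetic for that delay is simple. Assuming a 1024-tap linear-phase filter running at 48 kHz:

```python
# Latency of a linear-phase FIR filter: (taps - 1) / 2 samples.
fs = 48_000  # sample rate, Hz
taps = 1024  # assumed filter length
delay_ms = ((taps - 1) / 2) / fs * 1000
print(f"{delay_ms:.1f} ms")  # ~10.7 ms of delay across the whole band
# Any signal path NOT passing through the FIR EQ must be delayed by the
# same amount, or the two paths will no longer line up in time.
```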

The Minimum Is The Maximum

Once the scope of the argument is sorted out, the second point of confusion seems – as ever – to be rooted in how products are marketed and discussed. For instance, it’s basically in-bounds for someone to say that the channel EQ on a console they use (or build) is “more musical” and “less phasey” than the channel EQ from some other mixer. People do have preferred workflows and equalizer flavors. You may also perceive that setting a channel EQ on console A to a +6 dB boost at 1 kHz sounds “less weird” than doing the same thing on console B.

But here’s the thing. I’m willing to bet that both of those consoles have channel EQs with factory-set bandwidths, and that console A’s bandwidth is wider. Here’s a simulation:

[Simulation image: magnitude and phase plots for the two filters]

Console A is on the left, and console B is on the right. Console B uses a more selective filter, which is produced by a more abrupt phase transition. The magnitude responses (output vs. frequency) of the two filters are NOT the same, even though the center frequency selections and gain settings are the same.

I’m comfortable making my bet because of my current understanding of how IIR equalization works, as evidenced above. Everything I’ve read or experienced leads me to believe a simple, but powerful theory of operation: With an IIR EQ, a particular magnitude response REQUIRES a specific phase response. There is no getting around it. A “musical” filter with a wide bandwidth will seem “less phasey” because its phase transition is, necessarily, gentle when compared with a narrower filter. There is no magic. The manufacturer of console A has not found a way to create a filter with the same magnitude response as what you get with console B, yet also with a milder phase transition. It’s not physically possible for them to do so.
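You can check this with the same “RBJ cookbook” biquad math. This sketch (sample rate and Q values are assumptions) evaluates two filters with identical center-frequency and gain settings but different bandwidths:

```python
import numpy as np
from scipy import signal

def peaking(f0, gain_db, q, fs=48_000):
    # RBJ-cookbook peaking biquad coefficients.
    a_lin = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return b / a[0], a / a[0]

freqs = [900.0, 1000.0, 1100.0]
for label, q in (("console A (wide, Q=0.7)", 0.7),
                 ("console B (narrow, Q=4)", 4.0)):
    b, a = peaking(1000, 6.0, q)  # same +6 dB boost at 1 kHz on both
    w, h = signal.freqz(b, a, worN=freqs, fs=48_000)
    for f, resp in zip(w, h):
        mag = 20 * np.log10(abs(resp))
        deg = np.degrees(np.angle(resp))
        print(f"{label} @ {f:.0f} Hz: {mag:+.1f} dB, {deg:+.1f} deg")
# Both filters read +6 dB and ~0 degrees at the 1 kHz center, but just
# off-center the narrow filter's magnitude falls away faster AND its
# phase swings harder. Pick a magnitude response, and the phase response
# comes with it.
```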

The requirement of a specific phase response to create a specific magnitude response means that the minimum phase to create a certain filter is also the maximum phase. Any other phase means that you have a different filter. You may like the different filter better, and that’s completely acceptable, but it’s not the same filter. If we changed our consoles to have a “Q” control for their equalizers, and you managed to dial up the same magnitude response on both desks, the phase response between the two would become indistinguishable.

The reason to choose any particular EQ is that it gets you where you need to go, does so easily, and does so reliably. If the fundamental operational implementation is the same, then the phase-shift involved is a non-issue. If the EQ does what you need it to, you can be assured that no more or less phase-shift than is required has been introduced.


Zen And The Art Of Dialing Things In

Good instruments through neutral signal paths require very little “dialing in,” if any.


Not long ago, Lazlo and The Dukes paid me a visit at my regular gig. They were coming off a spectacularly difficult show, and were pleased-as-punch to be in a room with manageable acoustics, a reasonably nice audio rig, and a guy to drive it all. We got settled in via a piecemeal sort of approach. At one point, we got Steve on deck and ran his dobro through the system. He and I were both pretty happy within the span of about 30 seconds.

Later, Steve gushed about how I “just got it all ‘dialed up’ so fast.” Grateful for the compliment, and also wanting to be accurate about what occurred, I assured Steve that he was playing a good instrument. I really hadn’t dialed anything in. I pushed up the faders and sends, and by golly, there was a nice-sounding dobro on the end of it all. I did a little experimenting with the channel EQ for FOH, wondering what would happen with a prominent midrange bump, but that was pretty optional.

In terms of “pop-culture Zen,” Steve had gotten dialed in without actually being dialed in.

How?

Step 1: The Instrument Must Be Shaped Like Itself

The finest vocal mics I’ve ever had have been the ones in front of terrific singers. The very best signal chains I’ve ever had for drums have been the ones receiving signals derived from drums that sound killer. I’ve hurriedly hung cheap transducers in front of amazing guitar rigs, and those rigs have always come through nicely.

Whatever the “source” is, it must sound correct in and of itself. If the source uses a pickup system, that system must produce an output which sounds the way the instrument should sound.

That seems reasonable, right? The first rule of Tautology Club is the first rule of Tautology Club.

Especially with modern consoles that have tons of processing available, we can do a lot to patch problems – but that’s all we’re doing. Patching. Covering holes in things that weren’t meant to have holes. Gluing bits down and hoping it all stays together for the duration of the show. Does that sound like a shaky, uncomfortable proposition? It does because it is.

But, if the instrument is making the right noise in the room, by itself, with no extra help, then it can never NOT make the right noise in the room. We can do all kinds of things to overpower and wreck that noise by way of a PA system, but the instrument itself will always be right. In contrast, an instrument which sounds wrong may potentially be beaten into shape with the rest of the rig…but the source still doesn’t sound right. It’s completely dependent on the PA, and if the PA fails to do the job, then you’re just stuck.

An instrument which just plain “sounds good” will require very little (if any) dialing-in, so long as…

Step 2: The Rig Is Shaped Like Everything

Another way to put this is that the instrument must be filled with itself, yet the FOH PA and monitor rig must be emptied of themselves. In technical terms, the transfer function of the PA system’s total acoustical output should ideally be flat “from DC to dog-whistles.”

Let’s say you want to paint a picture. You know that the picture will be very specific, but you don’t know what that picture will be in advance. What color of canvas should you obtain? White, of course. The entire visible spectrum should be reflected by the canvas, with as little emphasis or de-emphasis of any frequency range as possible. This is also the optimal case for a general-purpose audio system. It should impose as little of its own character as is reasonably possible upon the signals passing through.

At a practical level, this means taking the time to tune FOH and monitor world such that they are both “neutral.” Unhyped, that is. Exhibiting as flat a magnitude response as possible. To the extent that this is actually doable, this means that an instrument which is shaped like itself – sonically, I mean – retains that shape when passed through the system. This also means that if there IS a desire to adjust the tonality of the source, the effort necessary to obtain that adjustment is minimized. It is much easier to, say, add midrange to a signal when the basic path for that signal passes the midrange at unity gain. If the midrange is all scooped out (to make the rig sound “crisp, powerful, and aggressive”), then that scoop will have to first be neutralized before anything else can happen. It’s very possible to run out of EQ flexibility before you get your desired result.

Especially when talking about monitor world, this is why I’m a huge advocate for the rig to not sound “good” or “impressive” as much as it sounds “neutral.” If the actual sound of the band in the room is appropriate for the song arrangements, then an uncolored monitor rig will assist in getting everybody what they need without a whole lot of fuss. A monitor rig that’s had a lot of cool-sounding “boom” and “snap” added will, by nature, prioritize sources that emphasize those frequency ranges (and this at the expense of other sources). This can take a good acoustical arrangement and make it poor, or aggravate the heck out of an already not-so-good band configuration. It also tends to lead to feedback problems, because the critical midrange gets lost. Broadband gain is added to compensate, which combines with the effectively positive gain on the low and high-ends, and it all can end with screeching or rumbling as the loop spins out of control.

The ironic thing here is that the “neutral” systems end up sounding much more impressive later on, when the show is a success. The rigs that sound impressive with walkup music, on the other hand, sometimes aren’t so nice for the actual show.

So – an audio-human with a rig that is acoustically shaped like nothing is in command of a system that is actually shaped like everything. Under the right circumstances, this means that a signal through the rig will be dialed in without any specific dialing-in being required.