Unsuckifying Your Monitor Mix

Communicate well, and try not to jam too much into any one mix.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.

Monitors can be a beautiful thing. Handled well, they can elicit bright-eyed, enthusiastic approbations like “I’ve never heard myself so well!” and “That was the best sounding show EVER!” They can very easily be the difference between a mediocre set and a killer show, because of how much they can influence the musicians’ ability to play as a group.

I’ve said it to many people, and I’m pretty sure I’ve said it here: As an audio-human, I spend much more time worrying about monitor world than FOH (Front Of House). If something is wrong out front, I can hear it. If something is wrong in monitor world, I won’t hear it unless it’s REALLY wrong. Or spiraling out of control.

…and there’s the issue. Bad monitor mixes can do a lot of damage. They can make the show less fun for the musicians, or totally un-fun for the musicians, or even cause so much on stage wreckage that the show for the audience becomes a disaster. On top of that, the speed at which the sound on deck can go wrong can be startlingly high. If you’ve ever lost control of monitor world, or have been a musician in a situation where someone else has had monitor world “get away” from them, you know what I mean. When monitors become suckified, so too does life.

So – how does one unsuckify (or, even better, prevent suckification of) monitor world?

Foundational Issues To Prevent Suckification

Know The Inherent Limits On The Engineer’s Perception

At the really high-class gigs, musicians and production techs alike are treated to a dedicated “monitor world” or “monitor beach.” This is an independent or semi-independent audio control rig that is used to mix the show for the musicians. There are even some cases where there are multiple monitor worlds, all run by separate people. These folks are likely to have a setup where they can quickly “solo” a particular monitor mix into their own set of in-ears, or a monitor wedge similar to what the musicians have. Obviously, this is very helpful to them in determining what a particular performer is hearing.

Even so, the monitor engineer is rarely in exactly the same spot as any particular musician. Consequently, if the musicians are on wedges, even listening to a cue wedge doesn’t exactly replicate the total acoustic situation being experienced by the players.

Now, imagine a typical small-venue gig. There’s probably one audio human doing everything, and they’re probably listening mostly to the FOH PA. The way that FOH combines with monitor world can be remarkably different out front versus on deck. If the engineer has a capable console, they can solo up a complete monitor mix, probably through a pair of headphones. (A cue wedge is pretty unlikely to have been set up. They’re expensive and consume space.) A headphone feed is better than nothing, but listening to a wedge mix in a set of cans only tells an operator so much. Especially when working on a drummer’s mix, listening to the feed through a set of headphones has limited utility. A guy or gal might set up a nicely balanced blend, but have no real way of knowing if that mix is even truly audible at the percussionist’s seat.

If you’re not so lucky as to have a flexible console, your audio human will be limited to soloing individual inputs.

The point is that, at most small-venue shows, an audio human at FOH can’t really be expected to know what a particular mix sounds like as a total acoustic event. Remote-controlled consoles can fix this temporarily, of course, but as soon as the operator leaves the deck…all bets are off. If you’re a musician, assume that the engineer does NOT have a thoroughly objective understanding of what you’re hearing. If you’re an audio human, make the same assumption about yourself. Having made those assumptions, be gentle with yourself and others. Recognize that anything “pre set” is just a wild guess, and further, recognize that trying to take a channel from “inaudible in a mix” to “audible” is going to take some work and cooperation.

Use Language That’s As Objective As Possible

Over the course of a career, audio humans create mental mappings between subjective statements and objective measurements. For instance, when I’m working with well-established monitor mixes, I translate requests like “Could I get just a little more guitar?” into “Could I get 3 dB more guitar?” This is a necessary thing for engineers to formulate for themselves, and it’s appropriate to expect that a pro-level operator has some ability to interpret subjective requests.

At the same time, though, it can make life much easier when everybody communicates using objective language. (Heck, it makes it easier if there’s two-way communication at all.)

For instance, let’s say you’re an audio human working with a performer on a monitor mix, and they ask you for “a little more guitar.” I strongly recommend making whatever change you translate “a little more” into, and then stating that change (in objective terms) over the talkback. Saying something like, “Okay, that’s 3 dB more guitar in mix 2” creates a helpful dialogue. If that 3 dB more guitar wasn’t enough, the stating of the change opens a door for the musician to say that they need more. Also, there’s an opportunity for the musician’s perception to become calibrated to an objective scale – meaning that they get an intuitive sense for what a certain dB boost “feels” like. Another opportunity that arises is for you and the musician to become calibrated to each other’s terminology.

Beyond that, a two-way dialogue fosters trust. If you’re working on monitors and are asked for a change, making a change and then stating what you did indicates that you are trying to fulfill the musician’s wishes. This, along with the understanding that gets built as the communication continues, helps to mentally place everybody on the same team.

For musicians, as you’re asking for changes in your monitor mixes, I strongly encourage you to state things in terms of a scale that the engineer can understand. You can often determine that scale by asking questions like, “What level is my vocal set at in my mix?” If the monitor sends are calibrated in decibels, the engineer will probably respond with a decibel number. If they’re calibrated in an arbitrary scale, then the reply will probably be an arbitrary number. Either way, you will have a reference point to use when asking for things, even if that reference point is a bit “coarse.” Even if all you’ve got is to request that something go from, say, “five to three,” that’s still functionally objective if the console is labeled using an arbitrary scale.

For decibels, a useful shorthand to remember is that 3 dB should be a noticeable change in level for something that’s already audible in your mix. “Three decibels” is a 2:1 power ratio, although you might personally feel that “twice as loud” is 6 dB (4:1) or even 10 dB (10:1).
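If you want to sanity-check that shorthand yourself, the math is tiny. Here’s a quick sketch in Python (the language is my pick; nothing about the topic requires it):

    def db_to_power_ratio(db):
        # Decibels express a power ratio: ratio = 10^(dB / 10)
        return 10 ** (db / 10)

    print(db_to_power_ratio(3))   # ~2.0 -> 3 dB is a 2:1 power ratio
    print(db_to_power_ratio(6))   # ~4.0 -> 6 dB is a 4:1 power ratio
    print(db_to_power_ratio(10))  # 10.0 -> 10 dB is a 10:1 power ratio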

Realtime Considerations To Prevent And Undo Suckification

Too Much Loop Gain, Too Much Volume

Any instrument or device that is substantially affected by the sound from a monitor wedge, and is being fed through that same wedge, is part of that mix’s “loop gain.” Microphones, guitars, basses, acoustic drums, and anything else that involves body or airborne resonance is a factor. When their output is put through a monitor speaker, these devices combine with the monitor signal path to form an acoustical, tuned circuit. In tuned circuits, the load impedance determines whether the circuit “rings.” As the load impedance drops, the circuit is more and more likely to ring or resonate for a longer time.

If that last bit made your eyes glaze over, don’t worry. The point is that more gain (turning something up in the mix) REDUCES the impedance, or opposition, to the flow of sound in the loop. As the acoustic impedance drops, the acoustic circuit is more likely to ring. You know, feed back. *SQUEEEEEALLLL* *WHOOOOOwoowooooOOOM*

Anyway.

The thing for everybody to remember – audio humans and musicians alike – is that a monitor mix feeding a wedge becomes progressively more unstable as gain is added. As ringing sets in, the sound quality of the mix drops off. Sounds that should start and then stop quickly begin to “smear,” and with more gain, certain frequency ranges become “peaky” as they ring. Too much gain can sometimes begin to manifest itself as an overall tone that seems harsh and tiring, because sonic energy in an irritating range builds up and sustains itself for too long. Further instability results in audible feedback that, while self-correcting, sounds bad and can be hard for an operator to zero-in on. As instability increases further, the mix finally erupts into “runaway” feedback that’s both distracting and unnerving to everyone.
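If you’re curious just how touchy that relationship is, here’s a rough simulation in Python. It’s a sketch under big simplifying assumptions (a single frequency, and a trip time around the loop that I picked arbitrarily), not a model of any real rig:

    import math

    def ring_time_seconds(loop_gain, trip_time_s=0.01, decay_db=60):
        # Each trip around the mic -> console -> wedge -> mic loop
        # multiplies the signal by loop_gain (linear). This estimates how
        # long a frequency keeps ringing before it decays by decay_db.
        if loop_gain >= 1.0:
            return math.inf  # unity or better: runaway feedback
        db_lost_per_trip = -20 * math.log10(loop_gain)
        return (decay_db / db_lost_per_trip) * trip_time_s

    for gain in (0.5, 0.9, 0.99):
        print(f"loop gain {gain}: rings for ~{ring_time_seconds(gain):.2f} s")

The takeaway: the ring time doesn’t grow gently. It stays tame for a long while, and then shoots toward infinity as the loop gain creeps up on unity.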

The fix, then, is to keep each mix’s loop gain as low as possible. This often translates into keeping things OUT of the monitors.

As an example, there’s a phenomenon I’ve encountered many times where folks start with vocals that work…and then add a ton of other things to their feed. These other sources are often far more feedback-resistant than their vocal mic, and so they can apply enough gain to end up with a rather loud monitor mix. Unfortunately, they fall in love with the sound of that loud mix, except for the vocals, which have just been drowned. As a result, they ask for the vocals to be cranked up to match. The loop gain on the vocal mic increases, which destabilizes the mix, which makes monitor world harder to manage.

As an added “bonus,” that blastingly loud monitor mix is often VERY audible to everybody else on stage, which interferes with their mixes, which can cause everybody else to want their overall mix volume to go up, which increases loop gain, which… (You get the idea.)

The implication is that, if you’re having troubles with monitors, a good thing to do is to start pulling things out of the mixes. If the last thing you did before monitor world went bad was, say, adding gain to a vocal mic, try reversing that change and then rebuilding things to match the lower level.

And not to be harsh or combative, but if you’re a musician and you require high-gain monitors to even play at all, then what you really have is an arrangement, ensemble, ability, or equipment problem that is YOURS to fix. It is not an audio-human problem or a monitor-rig problem. It’s your problem. This doesn’t mean that an engineer won’t help you fix it, it just means that it’s not their ultimate responsibility.

Also, take notice of what I said up there: High-GAIN monitors. It is entirely possible to have a high-gain monitor situation without also having a lot of volume. For example, 80 dB SPL C is hardly “rock and roll” loud, but getting that output from a person who sings at the level of a whisper (50 – 60 dB SPL C) requires 20 – 30 dB of boost. For the acoustical circuits that I’ve encountered in small venues, that is definitely a high-gain situation. Gain is the relative level increase or decrease applied to a signal. Volume is the output level that results once gain has been applied. The two are related, but the relationship isn’t fixed in terms of any particular gain setting.
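As a sketch of that arithmetic (Python again, and the numbers are just the whisper-singer example from above):

    def required_gain_db(source_level_db, target_level_db):
        # Gain is relative: the boost needed to get from the source's
        # own level to the output level you want.
        return target_level_db - source_level_db

    print(required_gain_db(55, 80))   # ~25 dB of boost: high gain, modest volume
    print(required_gain_db(105, 95))  # -10 dB: a loud source, NEGATIVE gain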

Conflicting Frequency Content

Independent of being in a high-gain monitor conundrum, you can also have your day ruined by masking. Masking is what occurs when two sources with similar frequency content become overlaid. One source will tend to dominate the other, and you lose the ability to hear both sources at once. I’ve had this happen to me on numerous occasions with pianists and guitar players. They end up wanting to play at the same time, using substantially the same notes, and the sonic characteristics of the two instruments can be surprisingly close. What you get is either too-loud guitar, too-loud piano, or an indistinguishable mash of both.

In a monitor-mix situation, it’s helpful to identify when multiple sources are all trying to occupy the same sonic space. If sources can’t be distinguished from one another until one sound just gets obliterated, then you may have a frequency-content collision in progress. These collisions can result in volume wars, which can lead to high-gain situations, which result in the issues I talked about in the previous section. (Monitor problems are vicious creatures that breed like rabbits.)

After being identified, frequency-content issues can be solved in a couple of different ways. One way is to use equalization to alter the sonic content of one source or another. For instance, a guitar and a bass might be stepping on each other. It might be decided that the bass sound is fine, but the guitar needs to change. In that case, you might end up rolling down the guitar’s bottom end, and giving the mids a push. Of course, you also have to decide where this change needs to take place. If everything was distinct before the monitor rig got involved, then some equalization change from the audio human is probably in order. If the problem largely existed before any monitor mixes were established, then the issue likely lies in tone choice or song arrangement. In that case, it’s up to the musicians.

One thing to be aware of is that many small-venue mix rigs have monitor sends derived from the same channel that feeds FOH. While this means that the engineer’s channel EQ can probably be used to help fix a frequency collision, it also means that the change will affect the FOH mix as well. If FOH and monitor world sound significantly different from each other, a channel EQ configuration that’s correct for monitor world may not be all that nice out front. Polite communication and compromise are necessary from both the musicians and the engineer in this case. (Certain technical tricks are also possible, like “multing” a problem source into a monitors-only channel.)

Lack Of Localization

Humans have two ears so that we can determine the location and direction of sounds. In music, one way for us to distinguish sources is for us to recognize those instruments as coming from different places. When localization information gets lost, then distinguishing between sources requires more separation in terms of overall volume and frequency content. If that separation isn’t possible to get, then things can become very muddled.

This relates to monitors in more than one way.

One way is a “too many things in one place that’s too loud” issue. In this instance, a monitor mix gets more and more put into it, at a volume high enough that the monitor obscures the other sounds on deck. What the musician originally heard as multiple, individually localized sources is now a single source – the wedge. The loss of localization information may mean that frequency-content collisions become a problem, which may lead to a volume-war problem, which may lead to a loop-gain problem.

Another possible conundrum is “too much volume everywhere.” This happens when a particular source gets put through enough wedges at enough volume for it to feel as though that single source is everywhere. This can ruin localization for that particular source, which can also result in the whole cascade of problems that I’ve already alluded to.

Fixing a localization problem pretty much comes down to having sounds occupy their own spatial point as much as possible. The first thing to do is to figure out if all the volume used for that particular source is actually necessary in each mix. If the volume is basically necessary, then it may be feasible to move that volume to a different (but nearby) monitor mix. For some of the players, that sound will get a little muddier and a touch quieter, but the increase in localization may offset those losses. If the volume really isn’t necessary, then things get much easier. All that’s required is to pull back the monitor feeds from that source until localization becomes established again.

It’s worth noting that “extreme” cases are possible. In those situations, it may be necessary to find a way to generate the necessary volume from a single, localized source that’s audible to everyone on the deck. A well-placed sidefill can do this, and an instrument amplifier in the correct position can take this role if a regular sidefill can’t be conjured up.

Wrapping Up

This can be a lot to take in, and a lot to think about. I will freely confess to not always having each of these concepts “top of mind.” Sometimes, audio turns into a pressure situation where both musicians and techs get chased into corners. It can be very hard for a person who’s not on deck to figure out what particular issue is in effect. For folks without a lot of technical experience who play or sing, identifying a problem beyond “something’s not right” can be too much to ask.

In the heat of the moment, it’s probably best to simply remember that yes, monitors are there to be used – but not to be overused. Effective troubleshooting is often centered around taking things out of a misbehaving equation until the equation begins to behave again. So, if you want to unsuckify your monitors, try getting as much out of them as possible. You may be surprised at what actually ends up working just fine.


Echoes Of Feedback

By accident, I seem to have discovered an effective, alternate method for “ringing out” PA systems and monitor rigs.

Sometimes, the best way to find something is to be looking for something else entirely.

A couple of weeks ago, I got it into my head to do a bit of testing. I wanted to see how much delay time I could introduce into a monitor feed before I noticed that something was amiss. To that end, I took a mic and monitor that were already set up, routed the mic through the speaker, and inserted a delay (with no internal feedback) on the signal path. I walked between FOH (Front Of House) and the stage, each time adding another millisecond of delay and then talking into the mic.

For several go-arounds, everything was pretty nondescript. I finally got to a delay time that was just noticeable, and then I thought, “What the heck. I should put in something crazy to see how it sounds.” I set the delay time to something like a full second, and then barked a few words into the mic.

That’s when it happened.

First, silence. Then, loud and clear, the delayed version of what I had said.

…and then, the delayed version of the delayed version of what I had just said, but rather more quietly.

“Whoops,” I thought, “I must have accidentally set the delay’s feedback to something audible.” I began walking back to FOH, only to suddenly realize that I hadn’t messed up the delay’s settings at all. I had simply failed to take into account the entire (and I do mean the ENTIRE) signal path I was working on.

Hold that thought.

Back In The Day

There was a time when delay effects weren’t the full-featured devices we’re used to. Whether the unit was using a bit of tape or some digital implementation, you didn’t always get a processor with a knob labeled “feedback,” or “regen,” or “echoes,” or whatever. There was a chance that your delay processor did one thing: It made audio late. Anything else was up to you.

Because of this, certain consoles of the day had a feature on their aux returns that allowed for the signal passing through the return to be “multed” (split), and then sent back through the aux send to the processor it came from. (On SSL consoles, this feature was called “spin.”) You used this to get the multiple echoes we usually associate with delay as an effect for vocals or guitar.

At some point, processor manufacturers decided that including this feature inside the actual box they were selling was a good idea, and we got the “feedback” knob. There’s nothing exotic about the control. It just routes some of the output back to the input. So, if you have a delay set for some number of milliseconds, and send a copy of the output back to the input end (at a reduced level), then you get a repeat every time your chosen number of milliseconds ticks by. Each repeat drops in level by the gain reduction applied at the feedback control…and eventually, the echo signal can’t be readily heard anymore.
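If you think in code, here’s a minimal sketch of that “feedback knob” in Python. It’s a toy, not anybody’s actual product, but the routing is the real thing: a copy of the output, reduced in level, goes back to the input.

    import numpy as np

    def delay_with_feedback(x, sr, delay_ms=400, feedback=0.5, tail_s=3.0):
        # Output = dry input + (the OUTPUT from delay_ms ago * feedback).
        # Each repeat is 'feedback' times quieter than the one before it.
        d = int(sr * delay_ms / 1000)
        y = np.zeros(len(x) + int(sr * tail_s))
        for i in range(len(y)):
            dry = x[i] if i < len(x) else 0.0
            wet = y[i - d] * feedback if i >= d else 0.0
            y[i] = dry + wet
        return y

    sr = 48000
    impulse = np.zeros(sr)
    impulse[0] = 1.0  # one sharp "bark" into the processor
    echoes = delay_with_feedback(impulse, sr)
    # Peaks appear every 400 ms at levels 1.0, 0.5, 0.25, 0.125...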

But anyway, the key point here is that whether or not it’s handled “internally,” repeating echoes from a delay line are usually caused by some amount of the processor’s output returning to the front end to be processed again. (I say “usually” because it’s entirely possible to conceive of a digital unit that operates by taking an input sample, delaying the sample, playing the sample back at some volume, and then repeating the process for that sample a certain number of times before stopping. In this case, the device doesn’t need to listen to its own output to get an echo.)

I digress. Sorry.

If the output were to be routed back to the input at “unity gain” (with no reduction or increase in level relative to the original output signal), what would happen? That’s right – you’d get an unlimited number of repeats. If the output is routed back to the front end at greater than unity gain, what would happen? Each repeat would grow in level until the processor’s output was completely saturated in a hellacious storm of distorted echo.

Does that remind you of anything?

Acoustical Circuits

This is where my previous sentence comes into play: “I had simply failed to take into account the entire (and I do mean the ENTIRE) signal path I was working on.” I had temporarily forgotten that the delay line I was using for my tests had not magically started to exist in a vacuum, somehow divorced from the acoustical circuit it was attached to. Quite the opposite was true. The feedback setting on the processor might have been set at “negative infinity,” but that did NOT mean that processor output couldn’t return to the input.

It’s just that the output wasn’t returning to the input by a path that was internal to the delay processor.

I’ve talked about acoustical, resonant circuits before. We get feedback in live-audio rigs because, rather like a delay FX unit, our output from the loudspeakers is acoustically routed back to our input microphones. As the level of this re-entrant signal rises towards being equal with the original input, the hottest parts of the signal begin to “smear” and “ring.” If the level of the re-entrant signal reaches “unity,” then the ringing becomes continuous until we do something to reduce the gain. If the returning signal goes beyond unity gain, we get runaway feedback.

This is not fundamentally different from our delay FX unit. The signal output from the PA or monitor speakers takes some non-zero amount of time to get back into the microphone, just like the feedback to the delay takes a non-zero amount of time to return. We’re just not used to thinking of the microphone loop in that way. We don’t consciously set a delay time on the audio re-entering the mic, and we don’t intentionally set an amount of signal that we want to re-enter the capsule – we would, of course, prefer that ZERO signal re-entered the capsule.

And the “delay time” through the mic-loudspeaker loop is just naturally imposed on us. We don’t dial up “x number of milliseconds” on a display, or anything. However long it takes audio to find its way back through the inputs is however long it takes.
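That naturally imposed delay time is just the path length divided by the speed of sound. A trivial sketch (343 m/s is a fair room-temperature figure):

    SPEED_OF_SOUND_MS = 343.0  # meters per second in room-temperature air

    def loop_delay_ms(path_length_m):
        # How long the loudspeaker's output takes to travel back to the mic.
        return path_length_m / SPEED_OF_SOUND_MS * 1000

    print(loop_delay_ms(1.5))  # a wedge ~1.5 m from the mic: ~4.4 ms per trip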

Even so, feedback through our mics is basically the same creature as our “hellacious storm” of echoes through a delay processor. The mic just squeals, howls, and bellows because of differences in overall gain at different frequencies. Those frequencies continue to echo – usually, so quickly that we don’t discern individual repeats – while the other frequencies die down. That’s why the fighting of feedback so often involves equalization: If we can selectively reduce the gain of the frequencies that are ringing, we can get their “re-entry level” down to the point where they don’t noticeably ring anymore. The echoes decay so far and so fast that we don’t notice them, and we say that the system has stabilized.

All of this is yet another specific case where the patterns of audio behavior mirror and repeat themselves in places you might not expect.

As it turns out, you can put this to very powerful use.

The Application

As I discussed in “Transdimensional Noodle Baking,” we can do some very interesting things with audio when it comes to manipulating it in time. Making light “late” is a pretty unwieldy thing for people to do, but making audio late is almost trivial in comparison.

And making audio events late, or spreading them out in time, allows you to examine them more carefully.

Now, you might not associate careful examination with fighting feedback issues, but being able to slow things down is a big help when you’re trying to squeeze the maximum gain-before-feedback out of something like a monitor rig. It’s an especially big help when you’re like me – that is to say, NOT an audio ninja.

What I mean by not being an audio ninja is that I’m really quite poor at identifying frequencies. Those guys who can hear a frequency start to smear a bit, and instantly know which fader to grab on their graphic EQ? That’s not me. As such, I hate graphic EQs and avoid putting them into systems whenever possible. I suppose that I could dive into some ear-training exercises, but I just can’t seem to be bothered. I have other things to do. As such, I have to replace ability with effort and technology.

Now, couple another issue with that. The other issue is that the traditional method of “ringing out” a PA or monitor rig really isn’t that great.

Don’t get me wrong! Your average ringout technique is certainly useful. It’s a LOT better than nothing. Even so, the method is flawed.

The problem with a traditional ringout procedure is that it doesn’t always simulate all the variables that contribute to feedback. You can ring out a mic on deck, walk up, check it, and feel pretty good…right up until the performer asks for “more me,” and you get a high-pitched squeal as you roll the gain up beyond where you had it. The reason you didn’t find that high-pitched squeal during the ringout was because you didn’t have a person with their face parked in front of the mic. Humans are good absorbers, but we’re also partially reflective. Stick a person in front of the mic, and a certain, somewhat greater portion of the monitor’s output gets deflected back into the capsule.

You can definitely test for this problem if you have an assistant, or a remote for the console, but what if you have neither of those things? What if you’ve got some other weird, phantom ring that’s definitely there, and definitely annoying, but hard to pin down? It might be too quiet to easily catch on a regular RTA (Real Time Analyzer), and you might not be able to accurately whistle or sing the tone while standing where you can easily read your RTA. Even if you can carry an RTA with you (if you have a smartphone, you can carry a basic analyzer with you everywhere – for free) you still might not be able to accurately whistle or sing the offending frequency.

But what if you could spread out the ringing into a series of discrete echoes? What if you could visually record and inspect those echoes? You’d have a very powerful tuning tool at your disposal.

The Implementation

I admit, I’m pretty lucky. Everything I need to implement this super-nifty feedback finding tool lives inside my mixing console. For other folks, there’s going to be more “doing” involved. Nevertheless, you really only need to add two key things to your audio setup to have access to all this:

1) A digital delay that can pass all audio frequencies equally, is capable of long (1 second or more) delays, and can be run with no internal feedback.

2) A spectrograph that will show you a range of 10 seconds or more, and will also show you the frequency under a cursor that you can move around to different points of interest.

A spectrograph is a type of audio analysis system that is specifically meant to show frequency magnitude over a certain amount of time. This is similar to “waterfall” plots that show spectral decay, but a spectrograph is probably much easier to read for this application.

The delay is inserted in the audio path of the microphone, in such a way that the only signal audible in the path is the output of the delay. The delay time should be set to somewhere around 1.5 to 2 seconds, long enough to speak a complete phrase into the mic. The output of the signal path is otherwise routed to the PA or monitors as normal, and the spectrograph is hooked up so that it can directly (that is, via an electrical connection) “listen” to the signal path you’re testing. The spectrograph should be set up so that ambient noise is too low to be visible on the analysis – otherwise, the output will be harder to interpret.
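My console happens to have the spectrograph built in. If yours doesn’t, one workable substitute (and this is an assumption on my part, not part of the setup described above) is to record the electrical output of the tested signal path during the procedure, and render a spectrogram of the recording afterwards. A rough Python sketch, with a made-up file name:

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy import signal
    from scipy.io import wavfile

    # "ringout_test.wav" is a hypothetical recording of the tested path.
    sr, audio = wavfile.read("ringout_test.wav")
    if audio.ndim > 1:
        audio = audio.mean(axis=1)  # fold to mono
    audio = audio / np.max(np.abs(audio))

    f, t, sxx = signal.spectrogram(audio, fs=sr, nperseg=4096, noverlap=2048)
    sxx_db = 10 * np.log10(sxx + 1e-12)

    # Show a 60 dB visible range; everything quieter falls to "black."
    plt.pcolormesh(t, f, sxx_db, vmin=sxx_db.max() - 60)
    plt.xlabel("Time (s)")
    plt.ylabel("Frequency (Hz)")
    plt.show()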

To start, you apply a “best guess” amount of gain to the mic pre and monitor sends. You’ll need to wait several seconds to see if the system starts to ring out of control, because the delay is making everything “late.” If the system does start to ring, the problem frequencies should be very obvious on the spectrograph. Adjust the appropriate EQs accordingly, or pull the gain back a bit.

With the spectrograph still running, walk up to the mic. Stick your face right up on the mic, and clearly but quickly say, “Check, test, one, two.” (“Check, test, one, two” is a phrase that covers most of the audible frequency spectrum, and has consonant sounds that rely on high-mid and high frequency reproduction to sound good.)

DON’T FREAKIN’ MOVE.

See, what you’re effectively doing is finding the “hot spots” in the sound that’s re-entrant to the microphone, and if you move away from the mic you change where those hot spots are. So…

Stay put and listen. The first thing you’ll hear is the actual, unadulterated signal that went through the microphone and got delivered through the loudspeaker. The repeats you will hear subsequently are what is making it back into the microphone and getting re-amplified. If you hear the repeats getting more and more “odd” and “peaky” sounding, that’s actually good – it means that you’re finding problem areas.

After the echoes have decayed mostly into silence, or are just repeating and repeating with no end in sight, walk back to your spectrograph and freeze the display. If everything is set up correctly, you should be able to visually identify sounds that are repeating. The really nifty thing is that the problem areas will repeat more times than the non-problem areas. While other frequencies drop off into black (or whatever color is considered “below the scale” by your spectrograph) the ringy frequencies will still be visible.

You can now use the appropriate EQs to pull your problem frequencies down.

Keep iterating the procedure until you feel like you have a decent amount of monitor level. As much as possible, try to run the tests with gains and mix levels set close to what they’ll be for the show. Lots of open mics going to lots of different places will ring differently than a few mics only going to a single destination each.

Also, make sure to remember to disengage the delay, walk up on deck, and do a “sanity” check to make sure that everything you did was actually helpful.



If you’re having trouble visualizing this, here are some screenshots depicting one of my own trips through this process:

This spectrograph reading clearly shows some big problems in the low-mid area.

Some corrective EQ goes in, and I retest.

That’s better, but we’re not quite there.

More EQ.

That seems to have done the trick.



I can certainly recognize that this might be more involved than what some folks are prepared to do. I also have to acknowledge that this doesn’t work very well in a noisy environment.

Even so, turning feedback problems into a series of discrete, easily examined echoes has been quite a revelation for me. You might want to give it a try yourself.


Audio Processing In Graphical Terms

A guest post for Schwilly Family Musicians.

The “doing of things” to audio can seem pretty abstract, and so I decided to write a piece that uses pictures to demonstrate signal processing. Go on and have a look.


Vocal Processors, And The Most Dangerous Knob On Them

If you were wondering, the most dangerous knob is the one that controls compression.

Not every noiseperson is a fan of vocal processors.

(Vocal processors, if you didn’t know, are devices that are functionally similar to guitar multi-fx units – with the exception that they expect input to come from a vocal mic, and so include a microphone preamp.)

Vocal processors can be deceptively powerful devices, and as such, can end up painting an audio-human into a corner that they can’t get out of. The other side of that coin is that they can allow you to intuitively dial up a sound that you like, without you having to translate your intuitive choices into technical language while at a gig.

What I mean by that last bit is this: Let’s say that you like a certain kind of delay effect on your voice. There’s a specific delay time that just seems perfect, a certain number of repeat echoes that feels exactly right, an exact wet/ dry mix that gives you goosebumps, and an effect tonality that works beautifully for you. With your own vocal processor, you can go into rehearsal and fiddle with the knobs for as long as it takes to get exactly that sound. Further, you don’t have to be fully acquainted with what all the settings mean in a scientific sense. You just try a bit more or less of this or that, and eventually…you arrive. If you then save that sound, and take that vocal processor to a gig, that very exact sound that you love comes with you.

Which is great, because otherwise you have to either go without FX, or (if you’re non-technical) maybe struggle a bit with the sound person. The following are some conversations that you might have.

You: Could I have both reverb and delay on my vocal?

FOH (Front Of House) Engineer: Ummm…we only have reverb.

You: Oh.

You: Gimme a TON of delay in the monitors.

Audio Human: Oh, sorry, my FX returns can only be sent to the main mix.

You: Aw, man…

You: Could I have a touch more mid in my voice?

[Your concept of “a touch more mid” might be +6 dB at 2000 Hz, with a 2-octave-wide filter. The sound-wrangler’s concept of “a touch more mid” might be +3 dB at 750 Hz, with a one-octave-wide filter. Further, you might not be able to put a number on what frequency you want, especially if what I just said sounds like gobbledygook. Heck, the audio human might not even be able to connect a precise number with what they’re doing.]

Sound Wrangler: How’s that?

You: That’s not quite right. Um…

[This one’s directly in line with my original example.]

You: Could I get some delay on my voice?

Audio Human: Sure!

[The audio human dials up their favorite vocal-delay sound.]

You: Actually, it’s more of a slap-delay.

[Your concept of slap-delay might be 50 ms of delay time. The audio-human’s concept of slap-delay might be 75 ms.]

Audio Human: How’s that?

You: That’s…better. It’s not quite it, though. Maybe if there was one less repeat?

[The audio-human’s delay processor doesn’t work in “repeats.” It works in the dB level of the signal that’s fed back into the processor. The audio-human takes a guess, and ends up with what sounds like half a repeat less.]

Audio Human: Is that better?

You: Yeah, but it’s still not quite there. Um…

Having your own vocal processor can spare you from all this. It also spares the engineer from having to manage when the FX should be “in” or bypassed. (This often isn’t a huge issue, but it can become one if you’re really specific about what you want to happen where.) There are real advantages to being self-contained.

There are negative sides, though, as I alluded to earlier. Having lots of power at your disposal feels good, but if you’re not well-acquainted with what that power is actually doing, you can easily sabotage yourself. And your band. And the engineer who’s trying to help you.

EQ Is A Pet Dog

The reason that I say that “EQ is a pet dog” is twofold.

1) EQ is often your friend. Most of the time, it’s fun to play with, and it “likes” to help you out.

2) In certain situations, an EQ setting that was nice and sweet can suddenly turn around and “bite” you. This isn’t because EQ is “a bad dog,” it’s because certain equalization tweaks in certain situations just don’t work acoustically.

What I’ve encountered on more than one occasion are vocal-unit EQ settings that are meant to sound good in either low-volume or studio contexts. I’ve also encountered vocal-unit EQ that seems to have been meant to correct a problem with the rehearsal PA…which then CAUSES a problem in a venue PA that doesn’t need that correction.

To be more specific, I’ve been in various situations where folks had a whole busload of top-end added to their vocal sound. High-frequency boosts often sound good on “bedroom” or “headphone” vocals. Things get nice and crisp. “Breathy.” Even “airy,” if I dare to say so. In a rehearsal situation, this can still work. The rehearsal PA might not be able to get loud enough for the singer to really hear themselves when everybody’s playing, especially if feedback can’t be easily corrected. However, the singer hears that nice, crisp vocal while everybody’s NOT playing, and remembers that sound even when they get swamped.

Anyway.

The problem with having overly hyped high-end in a live vocal (especially with a louder band in a small room) is really multiple problems. First, it tends to focus your feedback issues into the often finicky and unpredictable zone of high-frequency material. If there’s a place where both positionally dependent and positionally independent frequency response for mics, monitors, and FOH speakers is likely to get “weird” and “peaky,” the high-frequency zone is that place. (What I mean by “positionally dependent” is that high-frequency response is pretty easy to focus into a defined area…and what THAT means is that you can be in a physical position where you have no HF feedback problems, and then move a couple of steps and make a quarter turn and SQUEEEEAAALLL!)

The second bugbear associated with cranked high-end is that, when the vocals are no longer isolated, the rest of the band can bleed into the vocal mic LIKE MAD. That HF boost that sounds so nice on vocals by themselves is now a cymbal and guitar-hash louder-ization device. If we get into a high-gain situation (which can happen even with relatively quiet bands), what we then end up doing is making the band sound even louder when compared to your voice. If the band started out a bit loud, we may just have gotten to the audience’s tipping point – especially since high-frequency information at “rock” volume can be downright painful. Further, we’re now spending electrical and acoustical headroom on what we don’t want (more of the band’s top end), instead of what we do want (your vocal’s critical range).

Now, I’m not saying that you can’t touch the EQ in your vocal processor, or that you shouldn’t use your favorite manufacturer preset. What I am saying, though, is that dramatic vocal-processor EQ can really wreck your day at the actual show. You might want to find a way to quickly get the EQ bypassed or “flattened,” if you can.

“Compression” Is The Most Dangerous Knob On That Thing

Now, why would I say that, especially after all my ranting about EQ?

Well, it’s like this.

An experienced audio tech with flexible EQ tools can probably “undo” enough of an unhelpful in-the-box equalization solution, given a bit of time. Compression, on the other hand, really can’t be fully “undone” in a practical sense in most situations. (Yes – there is a process called “companding” which involves compression and complementary expansion, but to make it work you have to have detailed knowledge of the compression parameters.) Like EQ, compression can contribute to feedback problems, but it does so in a “full bandwidth” sense that is also much more weird and hard to tame. It can also cause the “we’re making the band louder via the vocal mic” problem, but in a much more pronounced way. It can prevent the vocalist from actually getting loud enough to separate from the rest of the band – and it can even cause a vocalist to injure themselves.

Let’s pick all that apart by talking about what a compressor does.

A compressor’s purpose is to be an automatic fader that can react at least as quickly as a human (if not a lot more quickly), and just as consistently as a human (if not a lot more consistently). When a signal exceeds a certain set-point, called the threshold, the automatic fader pulls the signal down based on the “ratio” parameter. When the signal falls back towards the threshold, the fader begins to return to its original gain setting. “Attack” is the speed at which the fader reduces gain, and “release” is the speed at which the fader returns to its original gain.
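In code terms, the “automatic fader” part looks something like the sketch below. It’s a bare-bones static gain curve, not any particular unit’s algorithm, and I’ve left the attack/release smoothing out to keep it readable:

    def compressed_output_db(level_db, threshold_db=-20.0, ratio=4.0):
        # Below the threshold, signal passes at unity gain. Above it,
        # every 1 dB of extra input only produces 1/ratio dB more output.
        over = level_db - threshold_db
        if over <= 0:
            return level_db
        return threshold_db + over / ratio

    # A singer hitting 10 dB over the threshold, at a 4:1 ratio:
    print(compressed_output_db(-10.0))  # -17.5, i.e. 7.5 dB of gain reduction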

Now, how can an automatic fader cause problems?

If the compressor threshold is set too low, and the ratio is too high, the vocalist is effectively pulled WAY down whenever they try to deliver any real power. If I were to set a vocalist so that they were comfortably audible when the band was silent, but then pulled that same vocalist down 10 dB when the band was actually playing, the likely result with quite a few singers would be drowned vocals. This is effectively what happens with an over-aggressive compressor. The practical way for the tech to “fight back” is to add, say, 10 dB (or whatever) of gain on their end – which is fine, except that most small-venue live-sound contexts can’t really tolerate that kind of compensating gain boost. In my experience, small room sound tends to be run pretty close to the feedback point, say, 3-6 dB away from the “Zone of Weird Ringing and Other Annoyances.” When that’s the case, going up 10 dB puts you 4-7 dB INTO the “Zone.”

But the thing is, the experience of that trouble area is extra odd, because your degree of being in it varies. When the singer really goes for it, the processor’s compressor reduces the vocal mic’s gain, and your feedback problem disappears. When they back off a bit, though, the compressor releases, which means the gain goes back up, which means that the strange, phantom rings and feedback chirps come back. It’s not like an uncompressed situation, where feedback builds at a consistent rate because the overall gain is also consistent. The feedback becomes the worst kind of problem – an intermittent one. Feedback and ringing that quickly comes and goes is the toughest kind to fight.

Beyond just that, there’s also the problem of bleed. If you have to add 10 dB of gain to a vocal-mic to battle against the compressor, then you’ve also added 10 dB of gain to whatever else the mic is hearing when the vocalist isn’t singing. Depending on the situation, this can lead to a very-markedly extra-loud band, with all kinds of unwanted FX applied, and maybe with ear-grating EQ across the whole mess. There’s also the added artistic issue of losing dynamic “swing” between vocal and instrumental passages. That is, the music is just LOUD, all the time, with no breaks. (An audience wears down very quickly under those conditions.) In the circumstance of a singer who’s not very strong when compared to the band, you can get the even more troublesome issue of the vocal’s intelligibility being wrecked by the bleed, even though the vocal is somewhat audible.

Last, there’s the rare-but-present monster of a vocalist hurting themselves. The beauty of a vocal processor is that the singer essentially hears what’s being presented to the audience. The ugliness behind the beauty is that this isn’t always a good thing. Especially in the contexts of rock and metal, vocal monitors are much less about sounding “hi-fi” and polished, and much more about “barking” at a volume and frequency range that has a fighting chance of telling the singer where they are. Even in non-rock situations, a vital part of the singer knowing where they are is knowing how much volume they’re producing when compared to the band. The most foolproof way for this to happen is for the monitors to “track” the vocalist’s dynamics on a 1:1 basis – if the singer sings 3 dB louder, the monitors get 3 dB louder.

When compression is put across the vocalist immediately after the vocal mic, the monitors suddenly fail to track their volume in a linear fashion. The singer sings with more power, but then the compressor kicks in and holds the monitor sound back. The vocalist, having lost the full volume advantage of their own voice plus the monitors, can feel that they’re too quiet. Thus, they try to sing louder to compensate. If this goes too far, the poor singer just might blow out their voice, and/ or be at risk for long-term health issues. An experienced vocalist with a great band can learn to hear, enjoy, and stop compensating for compression…but a green(er) singer in a pressure situation might not do so well.

(This is also why I advocate against inserting compression on a vocal when your monitor sends are post-insert.)

To be brutally honest, the best setting for a vocal-processor’s compressor is “bypass.” Exceptions can be made, but I think they have to be made on a venue-to-venue, show-to-show basis.

All of this might make it sound like I advocate against the vocal processor. That’s not true. I think they’re great for people in the same way that other powerful tools are great. It’s just that power tools can really hurt you if you’re not careful.


If It Ain’t Broken…

…don’t fix it. If it seems like it’s broken, it may not be.

There certainly can be a point where you have to “fix” a band.

Be warned, however, that even the very experienced can make a DISASTROUSLY BAD call about where that point actually is. When you’re tempted to make that call, start by assuming that you’re wrong and try to figure out what you’ve missed.

Then stop and think about it some more.

Trying to remake a band’s sound into your own sound is almost never the right idea.

Especially if you haven’t been asked to.


My Interview On AMR

I was invited to do a radio show on AMR.fm! Here are some key bits.

About a week ago, I was invited into “The Cat’s Den.” While that might sound like a place where a number of felines reside, it’s actually the show hosted by John, the owner of AMR.fm. We talked about a number of subjects related to local music and small venues. John was kind enough to make the show’s audio available to me, and I thought it would be nifty to chop it all up into topical segments.

The key word up there being “chop.”

That is, what you’re hearing in these files has been significantly edited. The whole thing was about two hours long, and there was a lot of “verbal processing” that occurred. That’s what happens during a live, long-form interview, but it’s not the best way to present the discussion afterwards. Even while tightening up the key points of the show, I’ve taken pains not to misrepresent what either of us was getting at. The meaning of each bit should be fully intact, even if every sentence hasn’t been included.

So…

The Introduction

Supatroy

A quick reference to an earlier show that featured Supatroy Fillmore. (Supatroy has done a lot of work in our local music scene.)

Why The Computerization Of Live-Audio Is A Great Thing

Computerizing live-sound allows guys like me to do things that were previously much harder (or even impossible) to do.

How I Got Started

A little bit about my pro-audio beginnings…way back in high-school.

Building And Breaking Things

I’m not as “deep into the guts” of audio equipment as the folks who came before me. I give a quick shout-out to Tim Hollinger from The Floyd Show in this bit.

Functional Is 95%

A segment about why I’m pretty much satisfied by gear that simply passes signal in a predictable and “clean” way.

The Toughest Shows

The most challenging shows aren’t always the loudest shows. Also, the toughest shows can be the most fun. I use two “big production” bands as examples: Floyd Show and Juana Ghani. The question touches on an interview that I did with Trevor Hale.

I Worry Most About Monitor World

If something’s wrong in FOH, I can probably hear it. If something’s not quite right on the stage, it’s quite possible that I WON’T hear it – and that worries me.

Communication Between Bands And Audio Humans

I’m not as good at communicating with bands as I’d like to be. Also, I’m a big proponent of people politely (but very audibly) asking for what they need.

The Most Important Thing For Bands To Do

If a band doesn’t sound like a cohesive ensemble without the PA, there’s no guarantee that the PA and audio-human will be able to fix that.

Why Talk About Small-Venue Issues?

I believe that small-venue shows are the backbone of the live-music industry. As such, I think it’s worthwhile to talk about how to do those shows well.

Merchant Royal

John asks me about who’s come through Fats Grill and really grabbed my attention. I proceed to pretty much gush about how cool I think Merchant Royal is.

What Makes A Great Cover Tune?

In my opinion, doing a great job with a cover means getting the song to showcase your own band’s strengths. I also briefly mention that Luke Benson’s version of “You Can’t Always Get What You Want” actually gets me to like the song. (I don’t normally like that song.)

The Issues Of A Laser-Focused Audience

I’m convinced that most people only go to shows with their favorite bands in their favorite rooms. Folks that go to a bar or club “just to check out who’s playing” seem to be incredibly rare anymore. (Some of these very rare “scene supporting” people are John McCool and Brian Young of The Daylates, as well as Christian Coleman.) If a band is playing a room that the general public sees as a “venue” as opposed to a “hangout,” then the band isn’t being paid to play music. The band is being paid based on their ability to be an attraction.

Look – it’s complicated. Just listen to the audio.

Everybody Has Due Diligence

Bands and venues both need to promote shows. Venues also need to be a place where people are happy to go. When all that’s been done, pointing fingers and getting mad when the turnout is low isn’t a very productive thing.

Also: “Promoting more” simply doesn’t turn disinterested people into interested people – at least as far as I can tell.

Shout Outs

This bit is the wrap up, where I say thanks to everybody at Fats Grill for making the place happen. John and I also list off some of our favorite local acts.



A Vocal Group Can Be Very Helpful

Microsurgery is great, but sometimes you need a sledgehammer.

Folks tend to get set in their ways, and I’m no exception. For ages, I have resisted doing a lot of “grouping” or “busing” in a live context, leaving such things for the times when I’ve been putting together a studio mix. I think this stems from wanting maximum flexibility, disliking the idea of hacking at an EQ that affects lots of inputs, and just generally being in a small-venue context.

Stems. Ha! Funny, because that’s a term that’s used for submixes that feed a larger mix. Submixes that are derived from grouping/ busing tracks together. SEE WHAT I DID THERE?

I’m in an odd mood today.

Anyway…

See, in a small-venue context, you don’t often get to mix in the same way as you would for a recording. It’s often not much help to, say, bus the guitars and bass together into a “tonal backline” group. It’s not usually useful because getting a proper mix solution so commonly comes down to pushing individual channels – or just bits of those channels – into cohesion with the acoustic contribution that’s already in the room with you. That is, I rarely need to create a bed for the vocals to sit in that I can carefully and subtly re-blend on a moment’s notice. No…what I usually need to do is work on the filling in of individual pieces of a mix in an individual way. One guitar might have its fader down just far enough that the contribution from the PA is inaudible (but not so far down that I can’t quickly push a solo over the top), while the other guitar is very much a part of the FOH mix at all times.

The bass might be another issue entirely.

Anyway, I don’t need to bus things together for that. There’s no point. What I need to do for each channel is so individualized that a subgroup is redundant. Just push ’em all through the main mix, one at a time, and there you go. I don’t have to babysit the overall guitar/ bass backline level – I probably have plenty already, and my main problem is getting the vocals over the whole thing anyway.

The same overall reasoning works if you’ve only got one vocal mic. There’s no reason to chew up a submix bus with one vocal channel – I mean, there’s nothing there to “group.” It’s one channel. However, there are some very good reasons to bus multiple vocal inputs into one signal line, especially if you’re working in a small venue. It’s a little embarrassing that it’s taken me so long to embrace this thinking, but hey…here we are NOW, so let’s go!

The Efficient Killing Of Feedback Monsters

I’m convinced that a big part of the small venue life is the running of vocal mics at relatively high “loop gain.” That is, by virtue of being physically close to the FOH PA (not to mention being in an enclosed and often reflective space), your vocal mics “hear” a lot more of themselves than they might otherwise. As such, you can very quickly find yourself in a situation where the vocal sound is getting “ringy,” “weird,” “squirrely,” or even into full-on sustained feedback.

A great way to fight back is a vocal group with a flexible EQ across the group signal.

As I said, I’ve resisted this for years. Part of the resistance came from not having a console that could readily insert an EQ across a group. (I can’t figure out why the manufacturer didn’t allow for it. It seems like an incredibly bizarre limitation to put on a digital mixer.) Another bit of my resistance came from not wanting to do the whole “hack up the house graph” routine. I’ve prided myself on having a workflow where the channel with the problem gets a surgical fix, and everything else is left untouched. I think it’s actually a pretty good mentality overall, but there’s a point where a guy finally recognizes that he’s sacrificing results on the altar of ideology.

Anyway, the point is that a vocals-only subgroup with an EQ is a pretty good (if not really good) compromise. When you’ve got a bunch of open vocal mics on deck, the ringing in the resonant acoustical circuit that I like to call “real music in a real room” is often a composite problem. If all the mics are relatively close in overall gain, then hunting around for the one vocal channel that’s the biggest problem is just busywork. All of them together are the problem, so you may as well work on a fix that’s all of them together. Ultra-granular control over individual sources is a great thing, and I applaud it, but pulling 4 kHz (or whatever) down a couple of dB on five individual channels is a waste of time.

You might as well just put all those potential problem-children into one signal pipe, pull your offending frequency out of the whole shebang, and be done with the problem in a snap. (Yup, I’m preaching to myself with this one.)
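For the DSP-curious, here’s roughly what that “one fix across the whole shebang” looks like in code. This is a minimal sketch in Python with numpy/scipy – the 4 kHz center, the -3 dB depth, and the five stand-in channels are all made-up numbers, and the filter is the standard “RBJ Audio EQ Cookbook” peaking EQ, not any particular console’s implementation:

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq(fs, f0, gain_db, q):
    """RBJ 'Audio EQ Cookbook' peaking filter coefficients."""
    a_lin = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return b / a[0], a / a[0]

fs = 48000
vocal_channels = [np.random.randn(fs) * 0.1 for _ in range(5)]  # stand-ins for 5 vocal mics

# One bus, one fix: sum the vocals, then pull 4 kHz down a couple of dB once.
vocal_bus = np.sum(vocal_channels, axis=0)
b, a = peaking_eq(fs, f0=4000.0, gain_db=-3.0, q=4.0)
vocal_bus = lfilter(b, a, vocal_bus)
```

One filter instead of five, and one knob to grab when the room starts to ring.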

The Efficient Addition Of FX Seasoning

Now, you don’t always want every single vocal channel to have the same amount of reverb, or delay, or whatever else you might end up using. I definitely get that.

But sometimes you do.

So, instead of setting multiple aux sends to the same level, why not just bus all the vocals together, set a pleasing wet/ dry mix level on the FX processor, and be done? Yes, there are a number of situations where you should NOT do this: If you need FX in FOH and monitor world, then you definitely need a separate, 100% “wet” FX channel. (Even better is having separate FX for monitor world, but that’s a whole other topic.) Also, if you can’t easily bypass the FX chain between songs, you’ll want to go the traditional route of “aux to FX to mutable return channel.”

Even so, if the fast and easy way will work appropriately, you might as well go the fast and easy way.

Compress To Impress

Yet another reason to bus a bunch of vocals together is to deal with the whole issue of “when one guy sings, it’s in the right place, but when they all do a chorus it’s overwhelming.” You can handle the issue manually, of course, but you can also use compression on the vocal group to free your attention for other things. Just set the compressor to hold the big, loud choruses down to a comfortable level, and you’ll be most of the way (if not all the way) there.

In my own case, I have a super-variable brickwall limiter on my full-range output, a limiter that I use as an overall “keep the PA at a sane level” control. A strategy that’s worked very well for me over the last while is to set that limiter’s threshold as low as I can possibly get away with…and then HAMMER the limiter with my vocal channels. The overall level of the PA stays in the smallest box possible, while vocal intelligibility remains pretty decent.

Even if you don’t have the processing flexibility that my mix rig does, you can still achieve essentially the same thing by using compression on your vocal group. Just be aware that setting the threshold too low can cause you to push into feedback territory as you “fight” the compressor. You have to find the happy medium between letting too little and too much level through.
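If it helps to see the gain logic spelled out, here’s a toy limiter in Python. The threshold and time constants are arbitrary, and a real brickwall limiter (like the one in my rig) adds look-ahead and other cleverness that this sketch skips:

```python
import numpy as np

def toy_limiter(x, threshold=0.5, attack_ms=1.0, release_ms=100.0, fs=48000):
    """Feed-forward peak limiter sketch: one-pole envelope follower
    plus a hard-knee gain computer. Expects a float numpy array."""
    atk = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env, out = 0.0, np.zeros_like(x)
    for n, level in enumerate(np.abs(x)):
        coeff = atk if level > env else rel        # grab peaks fast, let go slowly
        env = coeff * env + (1.0 - coeff) * level
        gain = min(1.0, threshold / env) if env > 0 else 1.0
        out[n] = x[n] * gain                       # hold output near the threshold
    return out
```

The “fighting the compressor” problem lives in that `gain` line: the lower the threshold, the more you’re tempted to push upstream, and the closer you creep to feedback.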

Busing your vocals into a subgroup can be a very handy thing for live-audio humans to do. It’s surprising that it’s taken me so long to truly embrace it as a technique, but hey – we’re all learning as we go, right?


Dirty Secrets About Power

The amount of power actually being delivered to your loudspeakers might not be what you think. What power IS getting delivered might not be doing what you think.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.

I’m pretty sure that power – that is, energy delivered to loudspeaker drivers – is one of the most misunderstood topics in live audio. It’s an area of the art that’s often presented in a simplified way for the sake of convenience. Convenience is hardly a bad thing, but simplifying a complex and mission-critical set of concepts can be troublesome. For one, misinformation (or just misinterpretation) starts to be viewed as fact. Going hand-in-hand with that is the phenomenon of folks who mean well, but make bad decisions. These bad decisions lead to the death of loudspeakers, over- and under-spending on amps and speakers, seemingly reckless system operation…the list goes on.

So, with all the potential problems that can be caused by the oversimplification of the topic “Powering Loudspeakers,” why does “reduction for the sake of convenience” continue to occur?

I think the answer to that is ironically simple: The proper powering of loudspeakers is, in truth, maddeningly complex. There are lots of “microfactors” involved that are quite simple, but when they all get stuck together…things get hairy. At some point, educators with limited time, equipment manufacturers with limited space in instruction manuals, and established pros with limited patience have to decide on what to gloss over. (I’ve done it myself. Certain parts of my article on clipping let some intricacies go without complete explanation.)

With that being the case, this article can’t possibly cover every little counter-intuitive detail. What it can do, however, is give you some idea of how many more particulars are actually out there, while also giving you some insight into a few of those particulars.

So, in no particular order…

Dirty Secret #1: Amp And Speaker Manufacturers Assume A Lot

You may have heard the phrase “Assume Nothing.” That saying does NOT apply to the people who build mass-produced loudspeakers and amplifiers. It doesn’t apply because it CAN NOT apply – otherwise, they’d never get anything built, or their instruction manuals would be gigantic.

Amplifier manufacturers, on their part, assume that you’re going to use their product with mostly “musical” signals. They also assume that you can put together a sane system with the “how to make this thing work” information they provide in their documentation. Further, they make suggestions about using amplifiers with continuous power ratings that are greater than the continuous power ratings of your speakers, because they assume that you’re not going to drive the amp up to its clip lights all the time.

Loudspeaker manufacturers also assume that you’re going to drive their boxes with music. They also ship products with the assumption that you’ll use the speaker in accordance with the instructions. They publish power ratings that are contingent on you being sane, especially with your system equalizers.

The upshot of it all is that the folks who make your gear also make VERY powerful assumptions about your ability to use their products within the design limits. They do this (and disclaim a lot of responsibility), because a ton of factors related to actual system use have traditionally been outside their control. Anytime you read an instruction manual – especially the specifications page – take care to remember that the numbers you see are simplifications and averages that reflect a mountain of assumptions.

Dirty Secret #2: Musical Signals Don’t Get You Your Continuous Power Rating

The reason that technical folks distinguish between signals like sine waves, pink noise, and “music” is that they have very different power densities. Sine waves, for instance, have a continuous level that’s 3 dB below their peak level. Pink noise often has to have an accompanying specification of “crest factor” (the ratio between the peak and average level), because different noise generators can give you different results. Some pink noise generators give you a signal with 6 dB between the peak and average levels. Others might give you 12 dB.

Music is all over the map.

Some music signals have peaks that are 20+ dB above the average power. Of course, in our current age of “compress and limit everything,” it’s common to see ratios that are much smaller. I myself use rather aggressive limiting, because I need to keep a pretty tight rein on how loud the PA system can go. Even so, my peak levels tend to be about 10 dB above the average level.

So if you’ve got an amp that’s rated for “x” continuous watts, and you drive the unit all the way to its undistorted peak, music is probably giving you x/10 watts…or less. In my case, the brickwall limit that I set is usually 10 dB below clip, which means that my actual continuous power is something like 5 watts per channel. This calculation is pretty consistent with what I think the speakers are actually doing, because they get about 96 dB @ 1 watt @ 1 meter. Five watts continuous would mean about 103 dB SPL per full-range box, and there are two full-range boxes in the PA, so that’s 106 dB total…yup, that seems about right.
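If you want to sanity-check that arithmetic yourself, it’s a three-line calculation. Here it is in Python, using my numbers (96 dB SPL @ 1 watt @ 1 meter, 5 continuous watts, two boxes):

```python
import math

def spl_at_power(watts, sensitivity_db=96.0):
    """SPL at 1 m for a given continuous power, given dB SPL @ 1 W @ 1 m."""
    return sensitivity_db + 10 * math.log10(watts)

per_box = spl_at_power(5.0)              # ~103 dB SPL per full-range box
total = per_box + 10 * math.log10(2)     # two boxes -> +3 dB, ~106 dB SPL
print(round(per_box, 1), round(total, 1))
```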

Yeah, so, your system? If you’re driving it with actual music that isn’t insanely limited, you can go ahead and divide your amp’s continuous power rating by about 10. Don’t get overconfident, though, because you can still wreck your drivers. It’s all because…

Dirty Secret #3: Power Isn’t Always Evenly Distributed

Remember that bit up there about manufacturers making assumptions? Think about this sentence: “They publish power ratings that are contingent on you being sane, especially with your system equalizers.”

Dirty secret #2 may have you feeling pretty safe. In fact, you may be thinking that secret #2 directly contravenes some of the things that I said about cooking your loudspeakers with an amp that’s too big.

Hold up there, chum!

When a loudspeaker builder says that the system will handle, say, 500 watts, what they actually mean is: “This system will survive 500 watts of continuous input, as long as the input is distributed with roughly equal power per octave.” Not everything in the box will take 500 watts without dying. In particular, the HF driver may be rated for a tenth – or less – of what the total system is advertised to do. Now, if you combine that with a system operator who just loves to emphasize high-frequency material (“I love that top-end snap and sizzle, dude!”), you may just be delivering a LOT of juice to a rather fragile component…

…especially if the operator uses a huge amp, because they’re under the false impression that amp headroom = safety. A 1000 watt amplifier, combined with a tech who drives hard, scoops the mids, and has boxes with passive crossovers, is plenty capable of beating a 50-watt-rated HF driver into the ground.
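To put a rough number on the danger: program material with equal power per octave sends a share of the total power above the crossover point that’s just a ratio of octaves. This sketch assumes a 20 Hz – 20 kHz band and a hypothetical 2 kHz crossover – my illustrative numbers, not anyone’s spec sheet:

```python
import math

def octaves(f_low, f_high):
    return math.log2(f_high / f_low)

total_octaves = octaves(20, 20000)   # ~10 octaves of audible band
hf_octaves = octaves(2000, 20000)    # ~3.3 octaves above the crossover
print(round(hf_octaves / total_octaves, 2))   # ~0.33 of the total power
```

A third of 500 “real” watts is still more than triple what a 50-watt-rated HF driver wants to see – before the operator touches a single EQ band.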

On the flipside, a system without protective filtering on the low-frequency side can get killed in a similar way. Some audio-humans just HAVE to “gun” the low-frequency bands on their system EQ, because “boom and thump are what get the girls dancing, dude!” Well, that’s all fine and good, but most live-sound speakers that are reasonably affordable can’t handle deep bass at high power. Heck, the box that the drivers are in often acts as a filter for material that’s below about 40 Hz.

Of course, there may not be an electronic filter to keep 40 Hz and below out of the amplifier, or out of the LF driver. Thus, our system operator might just be dumping a huge amount of energy into a woofer without actually being able to hear it. The power doesn’t just disappear, of course, which means that “driver failure because of too much power at too low a frequency” might be just around the corner.

Dirty Secret #4: Accidents Aren’t Usually Musical Signals

Building on what I’ve said above, I should be clear that folks do get away with using overpowered amps (for a time) because they feed them “music.” They end up keeping the peaks at a reasonable level, and so the continuous power stays in a safe place as well.

Then, something goes wrong.

Maybe some feedback gets really out of control. Maybe somebody drops a microphone. All of a sudden, you might have a high-frequency sine wave whose peak – and continuous – level is far beyond what a horn driver can live with. In the blink of an eye, you might have a low-frequency peak that can rip a subwoofer cone.

Ouch.

Dirty Secret #5: Squeezing Every Drop Of Performance From Something Is For Either Amateurs Or Rich People

This secret connects pretty directly with #3 and #4. Lots of folks worry about getting every single dollar’s worth of output from a live-audio rig. It’s very understandable, and also very unhealthy. To extract every possible ounce of output from a loudspeaker system requires powerful, expensive amplifiers that have the capability to flat-out murder the speakers. For this reason, “performance enthusiasts” are either people who can’t afford to buy both more power AND more speakers, or they’re people who can afford to buy (and fix, and fix, and fix again) a lot of gear that’s run very hard.

The moral of the story is that your expectation needs to be that – in line with secret #2 – getting continuous output consistent with about 1/10th of a rig’s rated power is actually getting your money’s worth. If you don’t have enough acoustical output at that level, then you either need to upgrade to a system that gets louder with the same number of boxes, or you need to buy more loudspeakers and more amps to expand your system.

Dirty Secret #6: More Power Means More Than Just Buying More Amps

This follows along with secret #5. If you want more power, then you need more gear. That seems simple enough, but I’m convinced that linear PA growth is accompanied by geometric “support” growth.

What I mean by this is that getting ahold of a more powerful PA is more than just getting the amps and speakers together. More power means heavier and more expensive amp racks, or more (and more expensive because of quantity) amp racks. It may mean that you have to construct patch panels to keep everything organized. More PA power also means that you need more AC power “from the wall” in the venue. Past a certain point, you have to start thinking about an actual power distro system – and that can be a major project with huge pitfalls in and of itself. You need more space for storage. You need a bigger vehicle, if you’re going to transport it all.

Getting more power doesn’t just mean more of the “core” gear that creates and uses that power. It means more of everything that’s connected to that gear.

Dirty Secret #7: The Point Of Diminishing Returns Occurs Very Quickly. Immediately, In Fact.

The last secret is also, in some ways, the biggest bummer. Audio is a logarithmic affair, which means that the gains you get from spending more money and providing more power to a system begin decreasing as soon as you even get started. I’m dead serious.

For example, let’s say you’ve got a loudspeaker that averages about 95 dB SPL @ 1 watt @ 1 meter. You put one continuous watt – one measly watt – across the box, and stand roughly three feet away. That 95 dB SPL seems pretty good. Now, you go up to two watts. Did you get 95 dB more? Nope – that would mean that you could get “space shuttle takeoff” levels out of one loudspeaker. Not gonna happen.

So…did you get 20 dB more?

No.

10 dB?

Nope.

You doubled the power, and got three decibels more level out of the speaker. That’s just enough of a difference to definitively notice that things have gotten louder. If you want three more dB, you’ll have to double the power again. So far we’re only at four watts, but I think you can see just how fast the battle for more output starts to go against you. If your system is running at full tilt and you want more output, you’re going to have to find a way to “double” the system – and even when you do, you’ll only get a little more out of it. If you want 10 dB more level – which most listeners judge as merely “twice as loud” – you need 10 times as much total PA.
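The decibel math behind all that is mercifully short. A quick sketch, assuming nothing about your particular gear:

```python
import math

def level_gain_db(power_ratio):
    """Level change from multiplying amplifier power by power_ratio."""
    return 10 * math.log10(power_ratio)

print(level_gain_db(2))    # double the power:     ~3 dB
print(level_gain_db(4))    # four times the power: ~6 dB
print(level_gain_db(10))   # ten times the power:  10 dB
```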

The vast majority of a PA system’s output comes from the first watt going into each box. It’s a fact that’s in plain sight, but it and its ramifications often aren’t talked about very much.

That makes it one of the dirtiest secrets of all.


The Acoustic Crossover

If you don’t need it, don’t spend power (or volume) on it.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.

For loudspeakers, a crossover is used to separate full-range audio into multiple “passbands,” with each passband being appropriate for a certain enclosure or driver. For instance, there’s no need to send a whole bunch of high-frequency information to a large-diameter speaker if you’ve also got a handy device that’s better for top-end. On the flipside, failing to filter low-frequency information is a good way to wreck a “meant for HF” output transducer.

A beautifully implemented crossover creates a smooth transition from box to box and driver to driver. Crossovers can also help with getting the maximum performance out of an amplifier/ loudspeaker chain – again, because pushing material to a driver that can’t reproduce it is a waste of power.

Most of the time, we think of a crossover as an electrical device. Whether the filter network is a bunch of passive components at the end of a speaker cable, or a DSP sitting in front of the amplifiers, the mental image of a crossover is that of a signal processor.

…but remember how I’ve talked about acoustical resonant circuits? The reality of the pro-audio life, especially in small rooms, is that the behaviors of electrical devices show up in acoustical form all the time. In the past few years, I’ve found that creating acoustical crossovers between the stage wash and the FOH (Front of House) PA can be incredibly useful.

Why This Matters In Small Rooms

In a small venue, you don’t always have a lot of power to spare. It’s rarely practical to deploy a PA system that can operate at “nothing more than a brisk walk” for most of the show. Instead, you’re probably using a LOT of the audio rig’s capability at any given time.

Even if you have a good deal of power to spare, you often don’t have very much volume to spare. A small venue gets loud in a big hurry – not only because of acoustics, but because the average audience member is “pretty dang close” to the stage and PA.

Taken together, these issues present hat-explodingly good reasons to avoid chewing up your power and/ or SPL budget with audio that you just don’t need. Traditionally, dealing with this has taken the form of not reinforcing entire sources or channels. (This can oftentimes, and unfortunately, be appropriate. I’ve done several shows where one person was so loud that everyone EXCEPT them was in the PA.) An “all-or-nothing per channel” approach is sometimes a bit too much, though. What can be better is to use powerful and dramatic, yet judiciously applied subtractive EQ.

Aggressive Filtration

A good way to illustrate what I mean by “powerful and dramatic, yet judiciously applied subtractive EQ” is to show you some analysis traces. For instance, here’s my starting point for a vocal HPF (High Pass Filter):

[Analysis trace: my starting point for a vocal high-pass filter]

The filter frequency is 500 Hz. Effectively, I’m chucking out everything at or below about 250 Hz.

“But doesn’t that sound really thin?” you ask.

Indeed, it does sound a bit thin at times. If I don’t have a lot of monitor wash, or the singer doesn’t have a voice that’s rich in low-mid, or if they just don’t want to get right up on the mic, then I need to roll my filter down. On the other hand, in situations where the monitors were loud, the vocalists had strong voices, and they had their lips stuck to the mics, I’ve had HPFs set as high as 1 kHz or more.

The point is that the stage-wash often gives me everything I need for low-mid in the vocals, so why duplicate that energy in the FOH PA? If I create a nice transition between the PA and what’s already in the room, I only have to spend power on what I need for clarity.
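In case you want to experiment with this outside of a console, here’s a sketch of that kind of vocal filter in Python with scipy. The 4th-order (roughly 24 dB/octave) Butterworth shape is my assumption – the trace above doesn’t come with a spec – but it lands in the same place: set the corner at 500 Hz, and one octave down, at 250 Hz, the content is effectively gone:

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 48000
# High-pass at 500 Hz; a 4th-order Butterworth falls ~24 dB/octave,
# so ~250 Hz and below is effectively thrown away.
sos = butter(4, 500.0, btype='highpass', fs=fs, output='sos')

vocal = np.random.randn(fs)          # stand-in for a vocal channel
vocal_filtered = sosfilt(sos, vocal)
```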

Now, here’s a trace for a guitar amp:

[Analysis trace: an aggressive band-pass filter for a guitar amp]

Of course, you don’t necessarily need something as extreme as this all the time. What’s great about filtering a guitar like this, though, is that you’ve thrown away everything except the “soul” of the instrument – 400 Hz to 2 kHz. Especially with “overly scooped” guitar sounds, what you need for the guitar to actually sit in the live mix is more midrange than what you’re getting. Of course, you could turn up the ENTIRE guitar to get what you need – but why? You’ll be killing the audience. It’s much better to “just turn up the mids” without turning up anything else.

…and even if the guitar is only really in the PA during solos, this kind of filter can still be a good thing to implement. If you have to REALLY get on the gas for a lead part, you can avoid tearing people’s heads off with piercing high end – as well as avoid stomping all over the rhythm player and the bassist.

By combining a highly filtered sound with the stage volume, you effectively get to EQ the guitar without having to completely overwhelm the natural sound from the amp. (This is just an acoustical version of what multiband equalizers do anyway. You select a frequency range to work on, and everything else is left alone. Whether this happens purely with electrical signals or in combination with acoustic events is relevant, but ultimately a secondary issue.)
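The same scipy approach sketches the guitar case – band-pass instead of high-pass. Again, the filter order is my guess at recreating the trace; only the 400 Hz and 2 kHz corners come from the article:

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 48000
# Keep roughly 400 Hz to 2 kHz -- the "soul" of the instrument.
sos = butter(4, [400.0, 2000.0], btype='bandpass', fs=fs, output='sos')

guitar = np.random.randn(fs)         # stand-in for a guitar-amp mic
guitar_mids = sosfilt(sos, guitar)
```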

Now, how about a kick drum?

[Analysis trace: an aggressive filter for a kick drum]

Again, this kind of thing isn’t appropriate in all contexts. You wouldn’t do this for a jazz gig…but in a LOT of other situations, what you need from the kick drum is “thump” and an appropriately placed “pop” or “click.”

And that’s it.

In a small venue, reproducing much of a rock or pop kick’s midrange is unhelpful. All you do is run over everything else, which makes you turn up everything else, which makes your whole mix REALLY LOUD.

Instead, you can create an acoustical crossover to sweeten the kick “just enough,” without getting any louder than necessary.

All Wet

Saving power and volume also applies in situations where you want effects to come from the PA. It’s very easy to get too loud when you want to put reverb, delay, or even chorus on something. The reason is that these effects have a “dry” (unprocessed) component that has to be blended properly with the “wet” sound. What can happen, then, is that you end up pushing the entire sound up too far – because you want to hear the effects. The “dry” sound in the signal combines with the “dry” sound in the room, which makes for an acoustical result that isn’t as “wet” as you wanted…so, you push the volume until the “dry” sound through the PA overwhelms the sound in the room.

That can be pretty loud.

Instead of brute force, though, you can just tilt the “wet” ratio much further in favor of the effect.

In fact, I’ve been in some situations where, say, a snare drum was in exactly the right place without any help from the PA. In that case, I set up my routing so that the snare reverb was 100% wet – no “dry” signal at all. I already had all the “dry” sound I needed from the snare in the room, and so I just turned up the “all wet” reverb until the total, acoustical result was what I wanted.

The bottom line with all this is that, in a small space, you can get pretty darn decent sound without a screaming-loud PA. You just have to use the sound that you already have, and very selectively add the bits that need a little help. The more fine-grained you can be with the creation of this acoustic crossover, the more you can bend the total acoustical result to your will…within reason, of course.


Offline Measurement

Accessible recording gear means you don’t have to measure “live” if you don’t want to.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.

I’m not an audio ninja. If you make a subtle change to a system EQ while pink noise is running through the rig, I may not be able to tell that you’ve made a change – and even if I can, I probably can’t tell you how wide or deep a filter you used. At the same time, I very much recognize the value of pink noise as an input to analysis systems.

“Wait! WAIT!,” I can hear you shouting, “What the heck are you talking about?”

I’m talking about measuring things. Objectivity. Using tools to figure out – to whatever extent is possible – exactly what is going on with an audio system. Audio humans use function and noise generators for measurement because of their predictability. For instance, unlike a recording of a song, I know that pink noise has equal power per octave and all audible frequencies present at any given moment. (White noise has equal power PER FREQUENCY, which means that each octave has twice as much power as the previous octave.)

If that paragraph sounded a little foreign to you, then don’t panic. Audio analysis is a GINORMOUS topic, with lots of pitfalls and blind corners. At the same time, I have a special place in my heart for objective measurement of audio devices. I get the “warm-n-fuzzies” for measurement traces because they are, in my mind, a tool for directly opposing a lot of the false mythology and bogus claims encountered in the business of sound.
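If you’d rather convince yourself of the pink-versus-white distinction numerically, the experiment below builds both kinds of noise and measures their octave-band power. (Shaping a flat spectrum with 1/sqrt(f) amplitude is just one convenient way to make pink noise, not the only one.)

```python
import numpy as np

fs, n = 48000, 2 ** 18
rng = np.random.default_rng(0)
freqs = np.fft.rfftfreq(n, 1 / fs)

# White noise: flat spectrum. Pink noise: amplitude falls as 1/sqrt(f),
# built here in the frequency domain with random phases.
white = rng.standard_normal(n)
mag = np.zeros_like(freqs)
mag[1:] = 1 / np.sqrt(freqs[1:])
pink = np.fft.irfft(mag * np.exp(1j * rng.uniform(0, 2 * np.pi, freqs.size)), n)

def octave_power_db(x, f_lo):
    """Total power in the octave starting at f_lo, in dB."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= f_lo) & (freqs < 2 * f_lo)
    return 10 * np.log10(spec[band].sum())

for f_lo in (125, 250, 500, 1000, 2000, 4000):
    print(f_lo,
          round(octave_power_db(white, f_lo) - octave_power_db(white, 125), 1),
          round(octave_power_db(pink, f_lo) - octave_power_db(pink, 125), 1))
```

The white-noise column climbs about 3 dB per octave; the pink column stays roughly flat – equal power per octave, just as advertised.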

Anyway.

Measurement is a great tool for dialing in live-sound rigs of all sorts. Because of its objectivity (assuming you actually use your measurement system correctly), it helps to calibrate your ears. You can look at a trace, listen to what something generating that trace sounds like, and have a reference point to work from. If you have a tendency to carve giant holes in a PA system’s frequency response when tuning by ear, measurement can help tame your overzealousness. If you’re not quite sure where that annoying, harsh, grating, high-mid peak is, measurement can help you find it and fix it.

…and one of the coolest things that I’ve discovered in recent years is that you don’t necessarily have to measure a system “live.” Offline measurement and tuning is much more possible than it ever has been before – mostly because digital tech has made recording so accessible.

How It Used To Be And Often Still Is

Back in the day, it was relatively expensive (as well as rather space-intensive and weight-intensive) to bring recording capabilities along with a PA system. Compact recording devices had limited capabilities, especially in terms of editing. Splicing tape while wrangling a PA wasn’t something that was going to happen.

As a result, if you wanted to tune a PA with the help of some kind of analyzer, you had to actually run a signal through the PA, into a measurement mic, and into the analysis device.

The sound you were measuring had to be audible. Very audible, actually, because test signals have to drown out the ambient noise in the room to be really usable. Any sound other than the test signal that reaches the measurement mic corrupts the accuracy of your measurement.

So, if you were using noise, the upshot was that you and everybody else in the room had to listen to a rather unpleasant blast of sound for as long as it took to get a reference tuning in place. It’s not much fun (unless you’re the person doing the work), and you can’t do it everywhere. Even when using a system that can take inputs other than noise, you still had to measure and make your adjustments “live,” with an audible signal in the room.

Taking A Different Route

The beautiful thing about today’s technology is that we have alternatives. In some cases, you might prefer to do a “fully live” tuning of a PA system or monitor rig – but if you’d prefer a different approach, it’s entirely possible.

It’s all because of how easy recording is, really.

The thing is, an audio-analysis system doesn’t really care where its input comes from. An analyzer isn’t bothered about whether its information is coming from a live measurement mic, or from a recording of what came out of that measurement mic. All the analyzer knows is that some signal is being presented to it.

If you’re working with a single-input analyzer, offline measurement and tuning is basically about getting the “housekeeping” right:

  1. Run your measurement signal to the analyzer, without any intervening EQ or other processing. If that signal is supposed to give you a “flat” measurement trace, then make sure it does. You need a reference point that you can trust.
  2. Now, disconnect the signal from the analyzer and route that same measurement signal through the audio device(s) that you want to test. This includes the measurement mic if you’re working on something that produces acoustical output – like monitor wedges or an FOH (Front Of House) PA. The actual thing that delivers the signal to be captured and analyzed is the “device-under-test.” For the rest of this article, I’m effectively assuming that the device-under-test is a measurement mic.
  3. Connect the output of the device-under-test to something that can record the signal.
  4. Record at least several seconds of your test signal passing through what you want to analyze. I recommend getting at least 30 seconds of recorded audio. Remember that the measurement-signal to ambient-noise ratio needs to be pretty high – ideally, you shouldn’t be able to hear ambient noise when your test signal is running.
  5. If at all possible, find a way to loop the playback of your measurement recording. This will let you work without having to restart the playback all the time.
  6. Run the measurement recording through the signal chain that you will use to process the audio in a live setting.
  7. Send the output of that signal chain to the analyzer, but do NOT actually send the output to the PA or monitor rig.

Because the recorded measurement isn’t being sent to the “acoustical endpoints” (the loudspeakers) of your FOH PA or monitor rig, you don’t have to listen to loud noise while you adjust. As you make changes to, say, your system EQ, you’ll see the analyzer react. Get a curve that you’re comfortable with, and then you can reconnect your amps and speakers for a reality check. (Getting a reality check of what you just did in silence is VERY important – doubly so if you made drastic changes somewhere.)
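For the single-input case, the analysis end of this can be as plain as an averaged power spectrum of your capture. Here’s a sketch with scipy – the file name is hypothetical, and a dedicated analyzer does the same job with more polish:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import welch

# Hypothetical mono capture of the test signal through the device-under-test.
fs, capture = wavfile.read("measurement_capture.wav")
capture = capture.astype(np.float64)

# Averaged power spectral density. Note that with a pink-noise stimulus,
# a "flat" system shows a downward tilt on a narrowband PSD plot --
# compare against the reference trace from step 1, not against a flat line.
f, pxx = welch(capture, fs=fs, nperseg=8192)
pxx_db = 10 * np.log10(pxx)
```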

Dual-FFT

So, all of that up there is fine and good, but…what if you’re not working with a simple, single input analyzer? What if you’re using a dual-FFT system like SMAART, EASERA, or Visual Analyzer?

Well, you can still do offline measurement, but things get a touch more complicated.

A dual-FFT (or “transfer function”) analysis system works by comparing a reference signal to a measurement signal. For offline measurement to work with comparative analysis, you have to be able to play back a copy of the EXACT signal that you’ll be using for measurement. You also have to be able to play that signal in sync with your measurement recording, but on a separate channel.

For me, the easiest way to accomplish this is to have a pre-recorded (as opposed to “live generated”) test signal. I set things up so that I can record the device-under-test while playing back the test signal through that device. For example, I could have the pre-recorded test signal on channel one, connect my measurement device so that it’s set to record on channel two, hit “record,” and be off to the races.

There is an additional wrinkle, though – time-alignment. Dual-FFT analyzers give skewed results if the measurement signal is early or late when compared to the reference signal, because, as far as the analyzer is concerned, the measurement signal is diverging from the reference. Of course, any measured signal is going to diverge from the reference, but you don’t want unnecessary divergence to corrupt the analysis. The problem, though, is that your test signal takes time to travel from the loudspeaker to the measurement microphone. The measurement recording, when compared to the reference recording, is inherently “late” because of this propagation delay.

Systems like SMAART and EASERA have a way of doing automatic delay compensation in a quick and painless way, but Visual Analyzer doesn’t. If your software doesn’t have an internal method for delay compensation, you’ll need to do it manually. This means:

  1. Preparing a test signal that includes an audible click, pop, or other transient that tells you where the signal starts.
  2. After recording the measurement signal, use that click or pop to line up the measurement recording with the test signal in time. The more accurate the “sync,” the more stable your measurement trace will be.

If you’d rather not make your own test signal, you’re welcome to download and use this one. The “click” at the beginning is several cycles of a 2 kHz tone.
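If your toolchain is a DAW plus Python rather than SMAART, the manual version might look like the sketch below: estimate the lag by cross-correlation (the click makes the peak unambiguous), trim, then compute the standard transfer-function estimate. The file names are hypothetical, and a real analyzer adds averaging, coherence weighting, and other safeguards this skips:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import correlate, csd, welch

fs, ref = wavfile.read("reference.wav")        # pre-recorded test signal (mono)
_, meas = wavfile.read("measurement.wav")      # mic capture, inherently "late"
ref = ref.astype(np.float64)
meas = meas.astype(np.float64)

# Find the delay: correlate the first few seconds, where the click lives.
chunk = min(len(ref), len(meas), fs * 5)
corr = correlate(meas[:chunk], ref[:chunk], mode="full", method="fft")
lag = int(corr.argmax()) - (chunk - 1)         # samples the measurement lags

# Line the two recordings up (assumes the measurement is late, not early).
meas = meas[lag:] if lag > 0 else meas
n = min(len(ref), len(meas))

# Dual-FFT comparison: H(f) = Pxy / Pxx, reference in, measurement out.
f, pxy = csd(ref[:n], meas[:n], fs=fs, nperseg=8192)
_, pxx = welch(ref[:n], fs=fs, nperseg=8192)
h_db = 20 * np.log10(np.abs(pxy / pxx))
```

The more accurate the sync, the more stable `h_db` will be from one analysis block to the next – same principle as the manual click-alignment described above.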

The bottom line is that you can certainly do “live” measurements if you want to, but you also have the option of capturing your measurement for “silent” tweaking. It’s ultimately about doing what’s best for your particular application…and remembering to do that “reality check” listen of your modifications, of course.