Tag Archives: SPL

Loud Doesn’t Create Excitement

A guest post for Schwilly Family Musicians.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.


The folks in the audience have to be “amped up” about your songs before the privilege of volume is granted.

The full article is here.


How Much Output Should I Expect?

A calculator for figuring out how much SPL a reasonably-powered rig can develop.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.

Want to use this image for something else? Great! Click it for the link to a high-res or resolution-independent version.

As a follow-on to my article about buying amplifiers, I thought it would be helpful to supply an extra tool. The purpose of this calculator is to give you an idea of the SPL delivered by a “sanely” powered audio rig.

A common mistake made when estimating output is to assume that the continuous power the amp is rated for will be easily applied to a loudspeaker. This leads to inflated estimations of PA performance, because, in reality, actually applying the rated continuous power of the amp is relatively difficult. It’s possible with a signal of narrow bandwidth and narrow dynamic range – like feedback, or sine-wave synth sounds – but most music doesn’t behave that way. Most of the time, the signal peaks are far above the continuous level…

…and, to be brutally honest, continuous output is what really counts.


This Calculator Requires Javascript

This calculator is an “aid” only. You should not rely upon it solely, especially if you are using it to help make decisions that have legal implications or involve large amounts of money. (I’ve checked it for glaring errors, but other bugs may remain.) The calculator assumes that you have the knowledge necessary to connect loudspeakers to amplifiers in such a way that the recommended power is applied.


Enter the sensitivity (SPL @ 1 watt @ 1 meter) of the loudspeakers you wish to use:

Enter the peak power rating of your speakers, if you want slightly higher performance at the expense of some safety. If you prefer greater safety, enter half the peak rating:

Enter the number of loudspeakers you intend to use:

Enter the distance from the loudspeakers to where you will be listening. Indicate whether the measurement is in feet or meters. (Measurements working out to be less than 1 meter will be clamped to 1 meter.)

Click the button to process the above information:

Recommended amplifier continuous power rating at loudspeaker impedance:
0 Watts

Calculated actual continuous power easily deliverable to each loudspeaker:
0 Watts

Calculated maximum continuous output for one loudspeaker at 1 meter:
0 dB SPL

Calculated maximum continuous output for one loudspeaker at the given listening position:
0 dB SPL

Calculated maximum continuous output for entire system at the given listening position:
0 dB SPL

How The Calculator Works

If you want to examine the calculator’s code, you can get it here: Maxoutput.js

This calculator is intentionally designed to give a “lowball” estimate of your total output.

First, the calculator divides your given amplifier rating in half, operating on the assumption that an amp rated with sine-wave input will have a continuous power of roughly half its peak capability. An amp driven into distortion or limiting will have a higher continuous output capability, although the peak output will remain fixed.

The calculator then assumes that it will only be easy for you to drive the amp to a continuous output of -12 dB referenced to the peak output. Driving the amp into distortion or limiting, or driving the amp with heavily compressed material can cause the achievable continuous output to rise.

The calculator takes the above two assumptions and figures the continuous acoustic output of one loudspeaker with a continuous input of -12 dB referenced to the peak wattage available.

The next step is to figure the apparent level drop due to distance. The calculator uses the “worst case scenario” of inverse square, or 6 dB of SPL lost for every doubling of distance. This essentially presumes that the system is being run in an anechoic environment, where sound pressure waves traveling away from the listener are lost forever. This is rarely true, especially indoors, but it’s better to return a more conservative answer than an “overhyped” number.

The final bit is to sum the SPLs of all the loudspeakers specified to be in the system. This is tricky, because the exact deployment of the rig has a large effect – and the calculator can’t know what you’re going to do. The assumption is that all the loudspeakers are audible to the listener, but that half of them appear to be half as loud.
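If it helps to see the arithmetic written out, here is a minimal sketch of the same lowball logic in Python. This is not the actual Maxoutput.js code: the function and parameter names are mine, it only models the “-12 dB below peak” step described above, and it treats “half as loud” as a 10 dB reduction, which may not match the real calculator exactly.

```python
import math

# Illustrative sketch only; NOT the real Maxoutput.js.
def estimate_max_output(sensitivity_db, amp_rating_watts, num_speakers,
                        listening_distance, distance_in_feet=True):
    # Assume you can only easily drive the amp to a continuous level
    # 12 dB below its peak capability (typical for uncompressed music).
    easy_continuous_watts = amp_rating_watts / (10 ** (12 / 10))

    # Continuous output of one box at 1 meter:
    # sensitivity (1 W @ 1 m) plus 10 * log10(watts applied).
    spl_one_box_1m = sensitivity_db + 10 * math.log10(easy_continuous_watts)

    # Inverse-square distance loss: 6 dB per doubling, clamped to 1 meter.
    meters = listening_distance * 0.3048 if distance_in_feet else listening_distance
    meters = max(meters, 1.0)
    spl_one_box_listener = spl_one_box_1m - 20 * math.log10(meters)

    # Sum the boxes: all audible, but half of them treated as "half as loud"
    # (assumed here to mean -10 dB), summed as power.
    louder = math.ceil(num_speakers / 2)
    quieter = num_speakers - louder
    total_power = (louder * 10 ** (spl_one_box_listener / 10)
                   + quieter * 10 ** ((spl_one_box_listener - 10) / 10))
    spl_system = 10 * math.log10(total_power)

    return spl_one_box_1m, spl_one_box_listener, spl_system

# Example: 97 dB sensitivity boxes, 1000 W peak rating, 2 boxes, 30 feet away.
print(estimate_max_output(97, 1000, 2, 30))
```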


Loud Thoughts

“Loud” is a subjective sort of business.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.

Want to use this image for something else? Great! Click it for the link to a high-res or resolution-independent version.

The concept of “loud” is really amorphous, especially when you consider just how important it is to live shows. A show that’s too loud for a given situation will quickly turn into a mess, in one way or another. Getting a desired signal “loud enough” in a certain monitor mix may be key to a great performance.

And yet…”loud” is subjective. Perceived changes in level are highly personalized. People tolerate quite a bit of level when listening to music they like, and tolerate almost no volume at all when hearing something that they hate. One hundred decibels SPL might be a lot of fun when it’s thumping bass, but it can also be downright abrasive when it’s happening at 2500 Hz.

Twice As Loud

Take a look at that heading. Do you realize that nobody actually, truly knows what “twice as loud” means?

People might think they know. You’ll hear statements like “people generally think 6 dB is about twice as loud,” but then later someone else will say, “people perceive a 10 dB difference to be twice as loud.” There’s a range of perception, and it’s pretty sloppy when you actually do the math involved.

What I mean is this. The decibel is a measure of power. (You can convert other things, like voltage and pressure, into power equivalents.) Twice the power is 3 dB, period. It’s a mathematical definition that the industry has embraced for decades. It’s an objective, quantitative measurement of a ratio. Now, think about the range of perception that I presented just now. It’s a little eyebrow raising when you realize that the range for perceiving “twice as loud” is anywhere from 4X to 10X the power of the original signal. If a 1000 watt PA system at full tilt is the baseline, then there are listeners who would consider the output to be doubled at 4000 watts…and other folks who wouldn’t say it was twice as loud until a 10kW system was tickling its clip lights!

It’s because of this uncertainty that I try (and encourage others to seriously consider) communicating in terms of decibels. Especially in the context of dialing up a PA or monitor rig to everybody’s satisfaction, it helps greatly if some sort of quantitative and objective reference point is used. Yes, statements like “I need the guitar to be twice as loud,” or “I think the mix needs 10% more of the backup singers” ARE quantitative – but they aren’t objective. Do you need 3 dB more guitar? Six decibels? Ten? Do you want only 0.4 dB more of the backup singers? (Because that’s what [10 log 1.1] works out to.) Communicating in decibels is far less arbitrary.

(The irony of using a qualitative phrase like “far less” in the context of advocating for objective quantification is not lost on me, by the way.)
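If you want to check those numbers yourself, the conversions are one-liners. (A quick illustrative sketch in Python, nothing more.)

```python
import math

def db_from_power_ratio(ratio):
    return 10 * math.log10(ratio)

def power_ratio_from_db(db):
    return 10 ** (db / 10)

print(db_from_power_ratio(2))    # double the power: ~3.01 dB
print(db_from_power_ratio(1.1))  # "10% more": ~0.41 dB
print(power_ratio_from_db(6))    # "twice as loud" at 6 dB: ~4x the power
print(power_ratio_from_db(10))   # "twice as loud" at 10 dB: 10x the power
```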

The Meter Is Only Partially Valid As An Argument

Even if nobody actually knows what “twice as loud” means, one thing that people do know is when they feel a show is too loud.

For those of us who embrace measurement and objectivity, there’s a tendency that we have. When we hear a subjective statement, we get this urge to fire up a meter and figure out if that statement is true. I’m all for this behavior in many scenarios. Challenging unsubstantiated hoo-ha is, I think, one of the areas of pro-audio that still has some “frontier” left in it. My opinion is that more claims need to be challenged with the question, “Where’s your data?”

But when it comes to the topic of “loud,” especially the problem of “too loud,” whipping out an SPL meter and trying to argue on the basis of objectivity is only narrowly appropriate. In the case of a show that feels too loud for someone, the meter can help you calibrate their perception of loud to an actual number that you can use. You can then decide if trying to achieve a substantially lower reading is feasible or desirable. If a full-on rock band is playing in a room, making 100 dBC at FOH without the PA even contributing, and one person thinks they only ought to be 85 dBC…that one person is probably out of luck. The laws of physics are really unlikely to let you fulfill that expectation. At the same time, you have to realize that your meter reading (which might suggest that the PA is only contributing three more decibels to the show) is irrelevant to that person’s perception.

If something is too loud for someone, the numbers from your meter have limited value. They can help you form a justifying argument for why the show level is where it is, but they’re not a valid argument all by themselves.


It’s Not Actually About The Best Sound

What we really want is the best possible show at the lowest practical gain.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.

Want to use this image for something else? Great! Click it for the link to a high-res or resolution-independent version.

As it happens, there’s a bit of a trilogy forming around my last article – the one about gain vs. stability. In discussions like this, the opening statement tends to be abstract. The “abstractness” is nice in a way, because it doesn’t restrict the application too much. If the concept is purified sufficiently, it should be usable in any applicable context.

At the same time, it’s nice to be able to make the abstract idea more practical. That is, the next step after stating the concept is to talk about ways in which it applies.

In live audio, gain is both a blessing and a curse. We often need gain to get mic-level signals up to line-level. We sometimes need gain to correct for “ensemble imbalances” that the band hasn’t yet fixed. We sometimes need gain to make a quiet act audible against a noisy background. Of course, the more gain we add, the more we destabilize the PA system, and the louder the show gets. The day-to-day challenge is to find the overall gain which lets us get the job done while maintaining acceptable system stability and sound pressure.

If this is the overall task, then there’s a precept which I think can be derived from it. It might only be derivable indirectly, depending on your point of view. Nevertheless:

Live sound is NOT actually about getting the best sound, insofar as “the best sound” is divorced from other considerations. Rather, the goal of live sound is to get the best possible holistic SHOW, at the lowest practical gain.

Fixing Everything Is A Bad Idea

The issue with a phrase like “the best sound” is that it morphs into different meanings for different people. For instance, at this stage in my career, I have basically taken the label saying “The Best Sound” and stuck it firmly on the metaphorical box containing the sound that gets the best show. For that reason alone, the semantics can be a little difficult. That’s why I made the distinction above – the distinction that “the best sound” or “the coolest sound” or “the best sound quality” is sometimes thought of without regard to the show as a whole.

This kind of compartmentalized thinking can be found both in concert audio veterans and greenhorns. My gut feeling is that the veterans who still section off their thinking are the ones who never had their notions challenged when they were new enough.

…and I think it’s quite common among new audio humans to think that the best sound creates the best show. That is, if we get an awesome drum sound, and a killer guitar tone, and a thundering bass timbre, and a “studio ready” vocal reproduction, we will then have a great show.

The problem with this line of thinking is that it tends to create situations where a tech is trying to “fix” almost everything about the band. The audio rig is used as a tool to change the sound of the group into a processed and massaged version of themselves – a larger than life interpretation. The problem with turning a band into a “bigger than real” version of itself is that doing so can easily require the FOH PA to outrun the acoustical output of the band AND monitor world by 10 dB or more. Especially in a small-venue context, this can mean lots and lots of gain, coupled with a great deal of SPL. The PA system may be perched on the edge of feedback for the duration of the show, and it may even tip over into uncontrolled ringing on occasion. Further, the show can easily be so loud that the audience is chased off.

To be blunt, your “super secret” snare-drum mojo is worthless if nobody wants to be in the same room with it. (If you follow me.)

Removed from other factors, the PA does sound great…but with the other factors being considered, that “great” sound is creating a terrible show.

Granularity

The correction for trying to fix everything is to only reinforce what actually needs help. This approach obeys the “lowest possible gain” rule. PA system gain is applied only to the sources that are being acoustically swamped, and only in enough quantity that those sources stop being swamped.

In a sense, you might say that there’s a certain amount of total gain (and total resultant volume) that you can have that is within an acceptable “window.” When you’ve used up your allotted amount of gain and volume, you need to stop there.

At first, the selectivity of what gets gain applied is not very narrow. For newer operators and/ or simplified PA systems, the choice tends to be “reproduce most of the source or none of it.” You might have, say, one guitar that’s in the PA, plus a vocal that’s cranked up, and some kick drum, and that’s all. Since the broadband content of the source is getting reproduced by the PA, adding any particular source into the equation chews up your total allowable gain in a fairly big hurry. This limits the correction (if actually necessary) that the PA system can apply to the total acoustical solution.

The above, by the way, is a big reason why it’s so very important for bands to actually sound like a band without any help from the PA system. That does NOT mean “so loud that the PA is unnecessary,” but rather that everything is audible in the proper proportions.

Anyway.

As an operator learns more and gains more flexible equipment, they can be more selective about what gets a piece of the gain allotment. For instance, let’s consider a situation where one guitar sound is not complementing another. The overall volumes are basically correct, but the guitar tones mask each other…or are masked by something else on stage. An experienced and well-equipped audio human might throw away everything in one guitar’s sound, except for a relatively narrow area that is “out of the way” of the other guitar. The audio human then introduces just enough of that band-limited sound into the PA to change the acoustical “solution” for the appropriate guitar. The stage volume of that guitar rig is still producing the lion’s share of the SPL in the room. The PA is just using that SPL as a foundation for a limited correction, instead of trying to run right past the total onstage SPL. The operator is using granular control to get a better show (where the guitars each have their own space) while adding as little gain and SPL to the experience as possible.

If soloed up, the guitar sound in the PA is terrible, but the use of minimal gain creates a total acoustical solution that is pleasing.

Of course, the holistic experience still needs to be considered. It’s entirely possible to be in a situation that’s so loud that an “on all the time” addition of even band-limited reinforcement is too much. It might be that the band-limited channel should only be added into the PA during a solo. This keeps the total gain of the show as low as is practicable, again, because of granularity. The positive gain is restricted in the frequency domain AND the time domain – as little as possible is added to the signal, and that addition is made as rarely as possible.

An interesting, and perhaps ironic consequence of granularity is that you can put more sources into the PA and apply more correction without breaking your gain/ volume budget. Selective reproduction of narrow frequency ranges can mean that many more channels end up in the PA. The highly selective reproduction lets you tweak the sound of a source without having to mask all of it. You might not be able to turn a given source into the best sound of that type, but granular control just might let you get the best sound practical for that source at that show. (Again, this is where the semantics can get a little weird.)

Especially for the small-venue audio human, the academic version of “the best sound” might not mean the best show. This also goes for the performers. As much as “holy grail” instrument tones can be appreciated, they often involve so much volume that they wreck the holistic experience. Especially when getting a certain sound requires driving a system hard – or “driving” an audience hard – the best show is probably not being delivered. The amount of signal being thrown around needs to be reduced.

Because we want the best possible show at the lowest practical gain.


The Board Feed Problem

Getting a good “board feed” is rarely as simple as just splitting an output.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.

Want to use this image for something else? Great! Click it for the link to a high-res or resolution-independent version.

I’ve lost count of the number of times I’ve been asked for a “board mix.” A board mix or feed is, in theory, a quick and dirty way to get a recording of a show. The idea is that you take either an actual split from the console’s main mix bus, or you construct a “mirror” of what’s going into that bus, and then record that signal. What you’re hoping for is that the engineer will put together a show where everything is audible and has a basically pleasing tonality, and then you’ll do some mastering work to get a usable result.

It’s not a bad idea in general, but the success of the operation relies on a very powerful assumption: That the overwhelming majority of the show’s sound comes from the console’s output signal.

In very large venues – especially if they are open-air – this can be true. The PA does almost all the work of getting the show’s audio out to the audience, so the console output is (for most practical purposes) what the folks in the seats are listening to. Assuming that the processing audible in the feed-affecting path is NOT being used to fix issues with the PA or the room, a good mix should basically translate to a recorded context. That is, if you were to record the mix and then play it back through the PA, the sonic experience would be essentially the same as it was when it was live.

In small venues, on the other hand…

The PA Ain’t All You’re Listening To

The problem with board mixes in small venues is that the total acoustical result is often heavily weighted AWAY from what the FOH PA is producing. This doesn’t mean that the show sounds bad. What it does mean is that the mix you’re hearing is the PA, AND monitor world, AND the instruments’ stage volume, hopefully all blended together into a pleasing, convergent solution. That total acoustic solution is dependent on all of those elements being present. If you record the mix from the board, and then play it back through the PA, you will NOT get the same sonic experience that occurred during the live show. The other acoustical elements, no longer being present, leave you with whatever was put through the console in order to make the acoustical solution converge.

You might get vocals that sound really thin, and are drowning everything else out.

You might not have any electric guitar to speak of.

You might have only a little bit of the drumkit’s bottom end added into the bleed from the vocal mics.

In short, a quick-n-dirty board mix isn’t so great if the console’s output wasn’t the dominant signal (by far) that the audience heard. While this can be a revealing insight as to how the show came together, it’s not so great as a demo or special release.

So, what can you do?

Overwhelm Or Bypass

Probably the most direct solution to the board feed problem is to find a way to make the PA the overwhelmingly dominant acoustic factor in the show. Some ways of doing this are better than others.

An inadvisable solution is to change nothing about the show and just allow FOH to drown everything. This isn’t so good because it has a tendency to create a painfully loud experience for the audience. Especially in a rock context, getting FOH in front of everything else might require a mid-audience continuous sound pressure of 110 dB SPL or more. Getting away with that in a small room is a sketchy proposition at best.

A much better solution is to lose enough volume from monitor world and the backline, such that FOH being dominant brings the total show volume back up to (or below) the original sound level. This requires some planning and experimentation, because achieving that kind of volume loss usually means finding a way of killing off 10 – 20 dB SPL of noise. Finding a way to divide the sonic intensity of your performance by anywhere from 10 to 100(!) isn’t trivial. Shielding drums (or using a different kit setup), blocking or “soaking” instrument amps (or changing them out), and switching to in-ear monitoring solutions are all things that you might have to try.
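To see why those division factors get so big, just convert the decibel figures back into intensity ratios. (A quick sketch; it assumes nothing about your particular rig.)

```python
# How much you have to divide acoustic intensity by to drop a given SPL.
for reduction_db in (10, 15, 20):
    print(reduction_db, "dB quieter = dividing intensity by", round(10 ** (reduction_db / 10), 1))
# 10 dB -> 10x, 15 dB -> ~31.6x, 20 dB -> 100x
```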

Alternatively, you can get a board feed that isn’t actually the FOH mix.

One way of going about this is to give up one pre-fade monitor path to use as a record feed. You might also get lucky and be in a situation where a spare output can be configured this way, requiring you to give up nothing on deck. A workable mix gets built for the send, you record the output, and you hope that nothing too drastic happens. That is, the mix doesn’t follow the engineer’s fader moves, so you want to strenuously avoid large changes in the relative balances of the sources involved. Even with that downside, the nice thing about this solution is that, large acoustical contributions from the stage or not, you can set up any blend you like. (With the restriction that you avoid doing weird things with channel processing, of course. Insane EQ and weird compression will still be problematic, even if the overall level is okay.)

Another method is to use a post-fade path, with the send levels set to compensate for sources being too low or too hot at FOH. As long as the engineer doesn’t yank a fader all the way down to -∞ or mute the channel, you’ll be okay. You’ll also get the benefit of having FOH fader moves being reflected in the mix. This can still be risky, however, if a fader change has to compensate for something being almost totally drowned acoustically. Just as with the pre-fade method, the band still has to work together as an actual ensemble in the room.

If you want to get really fancy, you can split all the show inputs to a separate console and have a mix built there. It grants a lot of independence (even total independence) from the PA console, and even lets you assign your own audio human to the task of mixing the recording in realtime. You can also just arrange to have the FOH mix person run the separate console, but managing the mix for the room and “checking in” with the record mix can be a tough workload. It’s unwise to simply expect that a random tech will be able to pull it off.

Of course, if you’re going to the trouble of patching in a multichannel input split, I would say to just multitrack the show and mix it later “offline” – but that wouldn’t be a board feed anymore.

Board mixes of various sorts are doable, but if you’re playing small rooms you probably won’t be happy with a straight split from FOH. If you truly desire to get something usable, some “homework” is necessary.


The Calculus Of Music

There’s a lot of math behind the sound of a show, but you don’t have to work it out symbolically.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.

Want to use this image for something else? Great! Click it for the link to a high-res or resolution-independent version.

This post is the fault of my high-school education, my dad, and Neil deGrasse Tyson.

In high-school, I was introduced to calculus. I wasn’t particularly interested in the drills and hairy algebra, but I did have an interest in the high-level concepts. I’ve kept my textbook around, and I will sometimes open it up and skim it.

My dad is a lover of cars, and that means he gets magazines about cars and car culture. Every so often, I’ll run across one and see what’s in the pages.

About a month ago, I was on another jaunt through my calculus book when I happened upon a car-mag with an article by Neil deGrasse Tyson. (You know Dr. Tyson. He’s the African-American superstar astrophysicist guy. He hosted and narrated the new version of “Cosmos.”) In that article was a one-line concept that very suddenly connected some dots in my head: Dr. Tyson pointed out that sustained speed isn’t all that exciting – rather, acceleration is where the fun is.

Acceleration.

Change.

The rate of change.

Derivative calculus.

Exciting derivative calculus makes for exciting music.

What?

Let me explain.

Δy/Δx: It’s Where The Fun Is!

The first thing to say here is that there’s no need to be frightened of those symbols in the section heading. The point of all this is not to say that everybody should reduce music to a set of equations. I’m not suggesting that folks should have to “solve” music in a symbolic way, as a math problem. What I am saying is that mathematical concepts of motion and change can be SUPER informative about the sound of a show. (Or a recording, too.)

I mean, gosh, motion and change. That sounds like it’s really important for an art form involving sine waves. And vibrating stuff, like guitar strings and loudspeakers and such.

Anyway.

Those symbols up there (Δy/Δx) reflect the core of what derivative calculus is concerned with. It’s the study of how fast things are changing. Δy is, conventionally, the change in the vertical-axis value, whereas Δx is the change in the horizontal-axis value. If you remember your geometry, you might recall that the slope of a line is “rise over run,” or “how much does the line go up or down in a given horizontal space?” Rise over run IS Δy/Δx. Derivative calculus is nothing more exotic than finding the slopes of lines, but the algebra does get a bit hairy because of people wanting to get the slopes of lines that are tangent to single, instantaneous points on a curve YOUR EYES ARE GLAZING OVER, I KNOW.

Let’s un-abstractify this. (Un-abstractify is totally a word. I just created it. Send me royalties.)

Remember that article I wrote about the importance of transients? Transients are where a change in volume is high, relative to the amount of time that passes. An uncompressed snare-drum note has a big peak that happens quickly. It’s the same for a kick-drum hit. The “thump” or “crack” happens fast, and decays in a hurry. The difference in sound-pressure from “silence” to the peak volume of the note is Δy, and the time that passes is Δx. Think about it – you’ve seen a waveform in an audio-editor, right? The waveform is a graph of audio intensity over time. The vertical axis (y) is the measure of how loud things are, and the horizontal axis (x) is how much time has passed. Like this:

Want to use this image for something else? Great! Click it for the link to a high-res or resolution-independent version.

For music to be really exciting, there has to be dramatic change. For music to be calming, the change has to be restrained. If you want something that’s danceable, or if you want something that has defined, powerful impact regardless of danceability, you’ve got to have room for “big Δy.” There has to be space for volume curves that have steep slopes. The derivative calculus has to be interesting, or all you’ll end up with is a steady-state drone (or crushingly deafening roar, depending on volume) that doesn’t take the audience on much of a ride. (Again, if you want a calming effect, then steady-state at low-volume is probably what you want.) This works across all kinds of timescales, by the way. Your music might not have sharp, high-speed transients that take place over a few milliseconds, but you can still move the audience with swells and decrescendos that develop over the span of minutes.
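If a picture isn’t enough, here’s a small, illustrative sketch that estimates “Δy/Δx” for an audio signal by measuring how fast its short-term level changes. The window size and the test signals are my own choices, made up purely to show the idea.

```python
import numpy as np

def level_slope_db_per_ms(samples, sample_rate, window_ms=5):
    """Rough 'derivative' of loudness: change in short-term RMS level (dB)
    per millisecond. Big values = punchy transients."""
    window = max(1, int(sample_rate * window_ms / 1000))
    n_windows = len(samples) // window
    chunks = samples[:n_windows * window].reshape(n_windows, window)
    rms = np.sqrt(np.mean(chunks ** 2, axis=1) + 1e-12)
    level_db = 20 * np.log10(rms)
    return np.diff(level_db) / window_ms  # dB per millisecond

sr = 48000
t = np.arange(sr) / sr
snare = np.zeros(sr)
snare[sr // 2:] = np.exp(-t[: sr // 2] * 60) * np.random.randn(sr // 2) * 0.7  # silence, then a sharp hit
drone = 0.5 * np.sin(2 * np.pi * 440 * t)  # steady-state tone

print(np.max(level_slope_db_per_ms(snare, sr)))  # large: a steep volume curve
print(np.max(level_slope_db_per_ms(drone, sr)))  # small: nearly flat
```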

Oh, and that graphic at the top of the page? That’s actually a roughly-traced vocal waveform, with some tangent-lines drawn in to show the estimated derivatives at those points. The time represented is relatively small – about one second. Notice the separation between the “hills?” Notice how steep the hills are? It turns out that the vocal in that recording is highly intelligible, and I would strongly argue that a key component in that intelligibility is a high rate of change in the right places. Sharp transitions from sound to sound help to tell you where words begin and end. When it all runs together, what you’ve got is incoherent mumbling. (This even works for text. You can read this, because the whitespace between words creates sharp transitions from word to word. This,ontheotherhand…)

Oh, and from a technical standpoint, headroom is really important for delivering large “Δy” events. If the PA is running at close to full tilt, there’s no room to shove a pronounced peak through it all. If you want to reproduce sonic events involving large derivatives, you have to have a pretty healthy helping of unused power at your disposal.

Now, overall level does matter as well, which leads us into another aspect of calculus.

Integral Volume

Integral calculus contrasts with derivative calculus, in that integration’s concern is with how much area is under the curve. From the perspective of an audio-human, the integral of the “sonic-events curve” tells you a lot about how much power you’re really delivering to those loudspeaker voice-coils. Short peaks don’t do much in terms of heating up coil windings, so loudspeakers can tolerate rather high levels over the short term. Long-term power handling is much lower, because that’s where you can get things hot enough to melt.
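In code terms, you can see both halves of the calculus at once by comparing a signal’s peak level to its long-term RMS level. The gap between them is where the “Δy” excitement lives; the RMS by itself tracks the “area under the curve” that heats up voice coils. (Illustrative sketch only; the test signals are invented.)

```python
import numpy as np

def peak_and_rms_db(samples):
    """Peak level and long-term RMS level, in dB relative to full scale."""
    peak = np.max(np.abs(samples))
    rms = np.sqrt(np.mean(samples ** 2))
    return 20 * np.log10(peak + 1e-12), 20 * np.log10(rms + 1e-12)

sr = 48000
t = np.arange(sr) / sr
thump = np.exp(-t * 30) * np.sin(2 * np.pi * 80 * t)   # punchy, decaying thump
crushed = 0.95 * np.sign(np.sin(2 * np.pi * 80 * t))   # relentless square wave

print(peak_and_rms_db(thump))    # high peak, much lower RMS: exciting, not hot
print(peak_and_rms_db(crushed))  # peak and RMS nearly identical: lots of heat
```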

From a performance perspective, integration has a lot to say about just how loud your show is perceived to be. I’ve been in the presence of bands that had tremendous “derivative calculus” punching power, and yet they didn’t overwhelm the audience with volume. It was all because the total area under the volume curve was well managed. The long-term level of the band was actually fairly low, which meant that people didn’t feel abused by the band’s sound.

This overall concept (which includes the whole discussion of derivatives) is a pretty touchy subject in live audio. That is, it can all be challenging to get right. It’s situationally dependent, and it has to be “just so.” Too much is a problem, and too little is a problem. For example, take this blank graph which represents a hypothetical, bar-like venue where the band hasn’t started yet:

Want to use this image for something else? Great! Click it for the link to a high-res or resolution-independent version.

If the area under the band’s volume curve is too small, they’ll be drowned out by the talking of the crowd. Go too high, though, and the crowd will bail out. It’s a balancing act, and one that isn’t easy to directly define with raw numbers. For instance, here’s an example of what (I think) some reggae bands might look like over the span of several seconds:

Want to use this image for something else? Great! Click it for the link to a high-res or resolution-independent version.

The “large Δy” events reach deep into the really-loud zone, but they’re very brief. Further, there are places where the noise floor peeks through significantly. This ability for the crowd to hear themselves talking helps to send the message that the band isn’t too loud. Overall, the area under the curve is probably halfway to three-quarters into the “comfortable volume” zone. Now, what about a “guitar” band:

Want to use this image for something else? Great! Click it for the link to a high-res or resolution-independent version.

The peaks don’t go up quite as far. In terms of sustained level, the band is probably also halfway to three-quarters into the comfortable zone – and yet some folks will feel like the band is a bit loud. It’s because the sustained roar of the guitars (and everything else) is enough to completely overwhelm the noise floor. The crowd can’t hear themselves talk, which sends the message that the band’s intensity is higher than it is in terms of “pure numbers.”

As an aside, this says a lot about the problems of the volume war. At some point, we started crushing all the exciting, flavorful, “large Δy” material in order to get maximum area under the curve…and eventually, we started to notice just how ridiculous things were sounding.

And then there’s one of my pet peeves, which is the indie-rock idiom of scrubbing away at a single-coil-pickup guitar’s strings with the amp’s tone controls set for “maximum clang.” It creates one of the most sustained, abrasive, yet otherwise boring noises that a person can have the displeasure of hearing. Let me tell you how I really feel…

Anyway.

Excitement, intelligibility, and appropriate volume levels are probably just a few of the things described by the calculus of music. I’ll bet there’s more out there to be discovered. We just have to keep our cross-disciplinary antennae extended.


Speed Fishing

“Festival Style” reinforcement means you have to go fast and trust the musicians.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.

Want to use this image for something else? Great! Click it for the link to a high-res or resolution-independent version.

Last Sunday was the final day of the final iteration of a local music festival called “The Acoustic All-Stars.” It’s a celebration of music made with traditional or neo-traditional instruments – acoustic-electric guitars, fiddles, drums, mandolins, and all that sort of thing. My perception is that the musicians involved have a lot of anticipation wrapped up in playing the festival, because it’s a great opportunity to hear friends, play for friends, and make friends.

Of course, this anticipation can create some pressure. Each act’s set has a lot riding on it, but there isn’t time to take great care with any one setup. The longer it takes to dial up the band, the less time they have to play…and there are no “do overs.” There’s one shot, and it has to be the right shot for both the listeners and the players.

The prime illustrator for all this on Sunday was Jim Fish. Jim wanted to use his slot to the fullest, and so assembled a special team of musicians to accompany his songs. The show was clearly a big deal for him, and he wanted to do it justice. Trying to, in turn, do justice to his desires required that a number of things take place. It turns out that what had to happen for Jim can (I think) be generalized into guidelines for other festival-style situations.

Pre-Identify The Trouble Spots, Then Make The Compromises

The previous night, Jim had handed me a stage plot. The plot showed six musicians, all singing, wielding a variety of acoustic or acoustic-electric instruments. A lineup like that can easily have its show wrecked by feedback problems, because of the number of open mics and highly-resonant instruments on the deck. Further, the mics and instruments are often run at (relatively) high-gain. The PA and monitor rig need to help with getting some more SPL (Sound Pressure Level) for both the players and the audience, because acoustic music isn’t nearly as loud as a rock band…and we’re in a bar.

Also, there would be a banjo on stage right. Getting a banjo to “concert level” can be a tough test for an audio human, depending on the situation.

Now, there’s no way you’re going to get “rock” volume out of a show like this – and frankly, you don’t want to get that kind of volume out of it. Acoustic music isn’t about that. Even so, the priorities were clear:

I needed a setup that was based on being able to run with a total system gain that was high, and that could do so with as little trouble as possible. As such, I ended up deploying my “rock show” mics on the deck, because they’re good for getting the rig barking when in a pinch. The thing with the “rock” mics is that they aren’t really sweet-sounding transducers, which is unfortunate in an acoustic-country situation. A guy would love to have the smoothest possible sound for it all, but pulling that off in a potentially high-gain environment takes time.

And I would not have that time. Sweetness would have to take a back seat to survival.

Be Ready To Abandon Bits Of The Plan

On the day of the show, the lineup ended up not including two people: The bassist and the mandolin player. It was easy to embrace this, because it meant lower “loop gain” for the show.

I also found out that the fiddle player didn’t want to use her acoustic-electric fiddle. She wanted to hang one particular mic over her instrument, and then sing into that as well. We had gone with a similar setup at a previous show, and it had definitely worked. In this case, though, I was concerned about how it would all shake out. In the potentially high-gain environment we were facing, pointing this mic’s not-as-tight polar pattern partially into the monitor wash held the possibility of creating a touchy situation.

Now, there are times to discuss the options, and times to just go for it. This was a time to go for it. I was working with a seasoned player who knew what she wanted and why. Also, I would lose one more vocal mic, which would lower the total loop-gain in the system and maybe help us to get away with a different setup. I knew basically what I was getting into with the mic we chose for the task.

And, let’s be honest, there were only minutes to go before the band’s set-time. Discussing the pros and cons of a sound-reinforcement approach is something you do when you have hours or days of buffer. When a performer wants a simple change in order to feel more comfortable, then you should try to make that change.

That isn’t to say that I didn’t have a bit of a backup plan in mind in case things went sideways. When you’ve got to make things happen in a hurry, you need to be ready to declare a failing option as being unworkable and then execute your alternate. In essence, festival-style audio requires an initial plan, some kind of backup plan, the willingness to partially or completely drop the original plan, and an ability to formulate a backup plan to the new plan.

The fiddle player’s approach ended up working quite nicely, by the way.

Build Monitor World With FOH Open

If there was anything that helped us pull off Jim’s set, it was this. In a detail-oriented situation, it can be good to start with your FOH (Front Of House) channels/ sends/ etc. muted (or pulled back) while you build mixes for the deck. After the monitors are sorted out, then you can carefully fill in just what you need to with FOH. There are times, though, that such an approach is too costly in terms of the minutes that go by while you execute. This was one such situation.

In this kind of environment, you have to start by thinking not in terms of volume, but in terms of proportions. That is, you have to begin with proportions as an abstract sort of thing, and then arrive at a workable volume with all those proportions fully in effect. This works in an acoustic music situation because the PA being heavily involved is unlikely to tear anyone’s head off. As such, you can use the PA as a tool to tell you when the monitor mixes are basically balanced amongst the instruments.

It works like this:

You get all your instrument channels set up so that they have equal send levels in all the monitors, plus a bit of a boost in the wedge that corresponds to that instrument’s player. You also set their FOH channel faders to equal levels – probably around “unity” gain. At this point, the preamp gains should be as far down as possible. (I’m spoiled. I can put my instruments on channels with a two-stage preamp that lets me have a single-knob global volume adjustment from silence to “preamp gain +10 dB.” It’s pretty sweet.)

Now, you start with the instrument that’s likely to have the lowest gain before feedback. You begin the adventure there because everything else is going to have to be built around the maximum appropriate level for that source. If you start with something that can get louder, then you may end up discovering that you can’t get a matching level from the more finicky channel without things starting to ring. Rather than being forced to go back and drop everything else, it’s just better to begin with the instrument that will be your “limiting factor.”

You roll that first channel’s gain up until you’ve got a healthy overall volume for the instrument without feedback. Remember, FOH and monitor world should both be up. If you feel like your initial guess on FOH volume is blowing past the monitors too much (or getting swamped in the wash), make the adjustment now. Set the rest of the instruments’ FOH faders to that new level, if you’ve made a change.

Now, move on to the subsequent instruments. In your mind, remember what the overall volume in the room was for the first instrument. Roll the instruments’ gains up until you get to about that level on each one. Keep in mind that what I’m talking about here is the SPL, not the travel on the gain knob. One instrument might be halfway through the knob sweep, and one might be a lot lower than that. You’re trying to match acoustical volume, not preamp gain.

When you’ve gone through all the instruments this way, you should be pretty close to having a balanced instrument mix in both the house and on deck. Presetting your monitor and FOH sends, and using FOH as an immediate test of when you’re getting the correct proportionality is what lets you do this.

And it lets you do it in a big hurry.

Yes, there might be some adjustments necessary, but this approach can get you very close without having to scratch-build everything. Obviously, you need to have a handle on where the sends for the vocals have to sit, and your channels need to be ready to sound decent through both FOH and monitor-world without a lot of fuss…but that’s homework you should have done beforehand.

Trust The Musicians

This is probably the nail that holds the whole thing together. Festival-style (especially in an acoustic context) does not work if you aren’t willing to let the players do their job, and my “get FOH and monitor world right at the same time” trick does NOT work if you can’t trust the musicians to know their own music. I generally discourage audio humans from trying to reinvent a band’s sound anyway, but in this kind of situation it’s even more of something to avoid. Experienced acoustic music players know what their songs and instruments are supposed to sound like. When you have only a couple of minutes to “throw ‘n go,” you have to be able to put your faith in the music being a thing that happens on stage. The most important work of live-sound does NOT occur behind a console. It happens on deck, and your job is to translate the deck to the audience in the best way possible.

In festival-style acoustic music, you simply can’t “fix” everything. There isn’t time.

And you don’t need to fix it, anyway.

Point a decent mic at whatever needs micing, put a working, active DI on the stuff that plugs in, and then get out of the musicians’ way.

They’ll be happier, you’ll be happier, you’ll be much more likely to stay on schedule…it’s just better to trust the musicians as much as you possibly can.


The Party-Band Setup To Rule Lots Of Them

A guest post for Schwilly Family Musicians.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.

“Party-bands” can make a fair bit of money playing at upmarket events. A setup that lets you pick just about any arbitrary volume to play at can help you secure a wider variety of bookings. For a full article on all this, pay a visit to Schwilly Family Musicians.


What Can You Do For Two People?

Quite a bit, actually, because even the small things have a large effect.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.

Want to use this image for something else? Great! Click it for the link to a high-res or resolution-independent version.

MiNX is a treat to see on the show schedule. They’re not just a high-energy performance, but a high-energy performance delivered by only two people, and without resorting to ear-splitting volume. How could an audio-human not appreciate that?

A MiNX show is hardly an exercise in finding the boundaries of one’s equipment. Their channel count is only slightly larger than a singer-songwriter open mic. It looks something like this:

  1. Raffi Vocal Mic
  2. Ischa Vocal Mic
  3. Guitar Amp Mic
  4. Acoustic Guitar DI
  5. Laptop DI

That’s it. When you compare those five inputs with the unbridled hilarity that is a full rock band with 3+ vocals, two guitars, a bass rig, keys, and full kit of acoustic drums, a bit of temptation creeps in. You get the urge to think that because the quantity of things to potentially manage has gone down, the amount of attention that you have to devote to the show is reduced. This is, of course, an incorrect assumption.

But why?

Low Stage Volume Magnifies FOH

A full-on rock band tends to produce a fair amount of stage volume. In a small room, this stage volume is very much “in parallel” with the contribution from the PA. If you mute the PA, you may very well still have concert-level SPL (Sound Pressure Level) in the seats. There are plenty of situations where, for certain instruments, the contribution from the PA is nothing, or something but hardly audible, or something audible but in a restricted frequency area that just “touches up” the audio from stage.

So, you might have 12 things connected to the console, but only really be using – say – the three vocal channels. Everything else could very well be taking care of itself (or mostly so), and thus the full-band mix is actually LESS complex and subtle than a MiNX-esque production. The PA isn’t overwhelmingly dominant for a lot of the channels, and so changes to those channel volumes or tones are substantially “washed out.”

But that’s not the way it is with MiNX and acts similar to them.

In the case of a production like MiNX, the volume coming off the stage is rather lower than that of a typical rock act. It’s also much more “directive.” With the exception of the guitar amplifier, everything else is basically running through the monitors. Pro-audio monitors – relative to most instruments and instrument amps – are designed to throw audio in a controlled pattern. There’s much less “splatter” from sonic information that’s being thrown rearward and to the sides. What this all means is that even a very healthy monitor volume can be eclipsed by the PA without tearing off the audience’s heads.

That is, unlike a typical small-room rock show, the audience can potentially be hearing a LOT of PA relative to everything else.

And that means that changes to FOH (Front Of House) level and tonality are far less washed out than they would normally be.

And that means that little changes matter much more than they usually do.

You’ve Got To Pay Attention

It’s easy to be taken by surprise by this. Issues that you might normally let go suddenly become fixable, but you might not notice the first few go-arounds because you’re just used to letting those issues slide. Do the show enough times, though, and you start noticing things. For instance, the last time I worked on a MiNX show was when I finally realized that some subtle dips at 2.5 kHz in the acoustic guitar and backing tracks allowed me to run those channels a bit hotter without stomping on Ischa’s vocals. This allows for a mix that sounds less artificially “separated,” but still retains intelligibility.

That’s a highly specific example, but the generalized takeaway is this: An audio-human can be tempted to just handwave a simpler, quieter show, but that really isn’t a good thing to do. Less complexity and lower volume actually means that the details matter more than ever…and beyond that, you actually have the golden opportunity to work on those details in a meaningful way.

The time when the tech REALLY needs to be paying attention to the small details of the mix is when the PA system’s “tool metaphor” changes from a sledgehammer to a precision scalpel.

When you’ve only got a couple of people on deck, try hard to stay sharp. There might be a lot you can do for ’em, and for their audience.


Transdimensional Noodle Baking

When you start messing with the timing and spatial behavior of sound, weird and wonderful things can happen.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.

Want to use this image for something else? Great! Click it for the link to a high-res or resolution-independent version.

One of my favorite lines from the movie “The Matrix” is the bit delivered by The Oracle after Neo accidentally knocks over a vase:

“What’ll really bake your noodle later on is, would you still have broken it if I hadn’t said anything?”

That’s where I get the whole “noodle baking” thing. It’s stuff that really messes with your mind when you consider the implications. Of course, our noodles don’t get baked as often as they used to. We’re a pretty sophisticated lot, not at all astounded by ideas like how looking up at the stars is actually looking back in time. We’re just used to it all. “Yeah, yeah, the light from the stars has been traveling to us for millions of years, we’re seeing the universe as it was before our civilization was really a thing, yadda, yadda…”

Even so, there’s still plenty of room for us audio types to have our minds sauteed – probably because the physics of audio is so accessible to us. Really messing around with light waves is tough, but all kinds of nerdery is possible when it comes to sound. Further, the implications of our messing about with audio are surprisingly weird, whacky, wonderful, and even downright bizarre.

Here, let me demonstrate.

Time And Distance Are Partially Interchangeable

Pretty much every audio human who gets into “the science” can quote you the rule of thumb: Sound pressure waves propagate away from their source at roughly 1 foot/ millisecond, or 1 meter/ 3 milliseconds. (Notice that I said “roughly.” Sound is actually a little bit faster than that, but the error is small enough to be acceptable in cases where inches or centimeters aren’t of concern.) In a way that’s very similar to light, time and distance can be effectively the same measurement. When you hear a loudspeaker that’s 20 feet away, you aren’t actually hearing what the box sounds like now. You’re hearing what the box sounded like 20 milliseconds ago.

Now, we tend to gloss over all that. It’s just science that you get used to remembering. Think about what it means, though: To an extent, you can move a loudspeaker stack relative to the listener without physically changing the position of the boxes. Assuming that other sound sources (which the brain can use for a timing reference) stay the same, adding delay time to a source effectively makes the source farther away to the observer.

…and yet, the physical location of the source hasn’t changed. When you think about it, that’s pretty wild. What’s even wilder is that the loudspeaker’s coverage and acoustical interaction with the room remain unchanged, even though the loudspeaker is now effectively further away. Think about it: If we had physically moved the loudspeaker away from the listener(s), more people would be in the coverage pattern. The coverage area expands with distance, but only when the distance is physical. Similarly, if we had moved the loudspeaker away by actually picking it up and placing it farther away, the ratio of direct sound to reverberant sound would have changed. The reverberant field’s SPL (Sound Pressure Level) would have been higher, relative to the SPL of the pressure wave traveling directly to the listeners. By using a delay line, though, the SPL of the sound that arrives directly is unchanged…even though the sound is effectively farther from the audience.

Using a digital delay line, we can sort of “bend the rules” concerning what happens when the speakers are farther away from the audience.

Whoa.
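Here’s the time-and-distance swap as a couple of lines of arithmetic. (A sketch; the speed-of-sound figure assumes roughly room temperature.)

```python
SPEED_OF_SOUND_FT_PER_S = 1125.0  # assumed: approximate value near room temperature

def arrival_time_ms(distance_ft):
    """How 'late' a source is, purely from physical distance."""
    return distance_ft / SPEED_OF_SOUND_FT_PER_S * 1000

def apparent_distance_ft(physical_distance_ft, added_delay_ms):
    """Where the source seems to sit, time-wise, once a delay line is added."""
    return physical_distance_ft + added_delay_ms / 1000 * SPEED_OF_SOUND_FT_PER_S

print(arrival_time_ms(20))           # a box 20 feet away: ~17.8 ms, i.e. roughly 20 ms
print(apparent_distance_ft(20, 10))  # add 10 ms of delay: time-wise, ~31 feet away
```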

It’s important to note, of course, that the rules are only “bent.” Also, they’re only bent in terms of select portions of how humans perceive sound. Whether or not the loudspeakers are physically moved or simply moved in time, the acoustical interactions with other sound sources DO change. This can be good or bad. Also, a loudspeaker that’s farther from a listener should have lower apparent SPL than the same box with the same power at a closer distance.

But that’s not what happens with a “unity gain” delay line.

And that’s another noodle-baker. The loudspeaker is perceptually farther from the listener, yet it has the same SPL as a nearby source.

Whoa.

(There is no spoon. It is yourself that bends.)

The Strange Case Of Delay And The Reference Frame

That bit above is nifty, but it’s actually pretty basic.

You want something really wild?

When we physically move a loudspeaker, we are most likely reducing its distance to some listeners while increasing its distance to others. (Obvious, right?) However, when we move a loudspeaker using time delay, the loudspeaker’s apparent distance to ALL listeners is increased. No matter where the listener is, the loudspeaker is pushed AWAY from them. It’s like the loudspeaker is in a bubble of space-time that is more distant from all points outside the bubble. Your frame of reference doesn’t matter. The delayed sound source always seems to be more distant, no matter where you’re standing.

Now THERE’S some Star Trek for ya.

If you’re not quite getting it, do this thought experiment: You put a monitor wedge on the floor. You and a friend stand thirty feet apart, with the monitor wedge halfway between the two of you. The distance from each listener to the wedge is 15 feet, right? Now, a delay of 15 ms is applied to the signal feeding the wedge. Remember, both of you are listening to the same wedge. Thus, in the sense of time (that is, ignoring other factors like SPL and direct/ reverberant ratio) each of you perceives the wedge as having moved to where the other person is standing. This is because, again, both of you are hearing the same signal in the same wedge. Both of you are registering a sonic event that has been made 15 ms “late.” It doesn’t matter where one of you is listening from – the wedge always “moves” away from you.

(Cue the theme to “The Twilight Zone.”)

Let me re-emphasize that other, important cues as to the loudspeaker’s location are not present. If you try this experiment in real life, you won’t truly experience the sound of a wedge that’s been moved 15 feet away. The acoustic interaction and SPL factors simply won’t be present. What we’re talking about here is the time component only, divorced from other factors.

You can use this “delay lines let you put your loudspeakers in a weird pocket-dimension” behavior to do cool things like directional subwoofer arrays.

For instance, here’s a subwoofer sitting all by itself. It’s pretty close to being omnidirectional at, say, 63 Hz.

Want to use this image for something else? Great! Click it for the link to a high-res or resolution-independent version.

That’s great, and all, but what if we don’t want to toss all that low end around on the stage behind the sub? Well, first, we can put another sub in front of the first one. We put sub number 2 a quarter-wavelength in front of sub 1. (We do, of course, have to decide which frequency we’re most concerned about. In this case, it’s 63 Hz, so all measurements are relative to that frequency.) For someone standing in front of our sub-array, the front sub is about 4 ms early. By the same token, the folks on stage hear sub 2 as being roughly 4 ms late.

Want to use this image for something else? Great! Click it for the link to a high-res or resolution-independent version.

Here’s where the “delay always pushes away from you” mind-screw becomes useful. If we delay subwoofer 2 by a quarter-wavelength, the folks on stage and the folks in the audience will BOTH get the effect of sub 2 being farther away. Because the sub is a quarter-wave too early for the audience, the delay will precisely line it up with subwoofer 1. However, because the second sub is already “late” for the folks on stage, pushing the subwoofer a quarter-wave away means that it’s now “half a wave” late. Being a half-wave late is 180 degrees out of phase, which means that our problem frequency will cancel when you’re standing behind the array. The folks in front get all the bottom end they want, and the performers on stage aren’t being rattled nearly as much.

Want to use this image for something else? Great! Click it for the link to a high-res or resolution-independent version.

Radical, dude.
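If you want to check the numbers for yourself, the spacing and delay for the two-box trick work out like this. (A sketch; 1125 feet per second is an assumed speed of sound.)

```python
SPEED_OF_SOUND_FT_PER_S = 1125.0  # assumed, roughly room temperature

def quarter_wave_spacing_and_delay(frequency_hz):
    """Spacing and delay for the two-subwoofer directional trick described above."""
    wavelength_ft = SPEED_OF_SOUND_FT_PER_S / frequency_hz
    spacing_ft = wavelength_ft / 4  # put sub 2 this far in front of sub 1
    delay_ms = spacing_ft / SPEED_OF_SOUND_FT_PER_S * 1000  # delay applied to sub 2
    return wavelength_ft, spacing_ft, delay_ms

# At 63 Hz: ~17.9 ft wavelength, ~4.5 ft spacing, ~4 ms of delay on the front box.
# In front: sub 2 was ~4 ms early; the delay lines it up with sub 1 (summation).
# Behind: sub 2 was already ~4 ms late; now it's ~8 ms late = half a wave = cancellation.
print(quarter_wave_spacing_and_delay(63))
```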

The Acoustical Foam TARDIS

For my final trick, let me tell you about how to make a room acoustically larger by filling it up more. It’s very Dr. Who-ish.

What I’m getting at is making the room more absorptive. To make the walls of the room absorb more sound, we have to add something – acoustic foam, for instance. By adding that foam (or drape, or whatever), we reduce the amount of sound that can strike a boundary and re-emit into the space. In a certain way, reducing the SPL and high-frequency content of reflections makes the room larger. Reduced overall reflection level is consistent with audio that has had to travel farther to reach the listener, and high-frequency content is particularly attenuated as sound travels through air.

So, in an acoustic sense, reducing the room’s actual physical volume by adding absorptive material actually makes the room “sound” bigger. Of course, this doesn’t always register, because we’re culturally used to the idea that large spaces have a “loud” reverberant field. We westerners tend to build big buildings that are made of things like stone, metal, glass, and concrete – which makes for a LOT of sound reflectance.

It might be a little bit better to say that increased acoustical absorption makes an interior sound more and more like being outside. Go far enough with your acoustic treatment (whether or not this is a good idea is beyond the scope of this article), and you could acoustically go outdoors by entering the building.

Whoa.