Tag Archives: Listening

Where’s Your Data?

I don’t think audio-humans are skeptical enough.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.


If I’m going to editorialize on this, I first need to be clear about one thing: I’m not against certain things being taken on faith. There are plenty of assumptions in my life that can’t be empirically tested. I don’t have a problem with that in any way. I subscribe quite strongly to that old saw:

You ARE entitled to your opinion. You ARE NOT entitled to your own set of “facts.”

But, of course, that means that I subscribe to both sides of it. As I’ve gotten farther and farther along in the show-production craft, especially the audio part, I’ve gotten more and more dismayed with how opinion is used in place of fact. I’ve found myself getting more and more “riled” with discussions where all kinds of assertions are used as conversational currency, unbacked by any visible, objective defense. People claim something, and I want to shout, “Where’s your data, dude? Back that up. Defend your answer!”

I would say that part of the problem lies in how we describe the job. We have (or at least had) the tendency to say, “It’s a mix of art and science.” Unfortunately, my impression is that this has come to be a sort of handwaving of the science part. “Oh…the nuts and bolts of how things work aren’t all that important. If you’re pleased with the results, then you’re okay.” While this is a fair statement on the grounds of having reached a workable endpoint through unorthodox or uneducated means, I worry about the disservice it does to the craft when it’s overapplied.

To be brutally frank, I wish the “mix of art and science” thing would go away. I would replace it with, “What we’re doing is science in the service of art.”

Everything that an audio human does or encounters is precipitated by physics – and not “exotic” physics, either. We’re talking about Newtonian interactions and well-understood electronics here, not quantum entanglement, subatomic particles, and speeds approaching that of light. The processes that cause sound stuff to happen are entirely understandable, wieldable, and measurable by ordinary humans – and this means that audio is not any sort of arcane magic. A show’s audio coming off well or poorly always has a logical explanation, even if that explanation is obscure at the time.

I Should Be Able To Measure It

Here’s where the rubber truly meets the road on all this.

There seems to be a very small number of audio humans who are willing to do any actual science. That is to say, investigating something in such a way as to get objective, quantitative data. This causes huge problems with troubleshooting, consulting, and system building. All manner of rabbit trails may be followed while trying to fix something, and all manner of moneys are spent in the process, but the problem stays un-fixed. Our enormous pool of myth, legend, and hearsay seems to be great for swatting at symptoms, but it’s not so hot for tracking down the root cause of what’s ailing us.

Part of our problem – I include myself because I AM susceptible – is that listening is easy and measuring is hard. Or, rather, scientific measuring is hard.

Listening tests of all kinds are ubiquitous in this business. They’re easy to do, because they aren’t demanding in terms of setup or parameter control. You try to get your levels matched, set up some fast signal switching, maybe (if you’re very lucky) make it all double-blind so that nobody knows which switch setting corresponds to a particular signal, and go for it.

Direct observation via the senses has been used in science for a long time. It’s not that it’s completely invalid. It’s just that it has problems. The biggest problem is that our senses are interpreted through our brains, an organ which develops strong biases and filters information so that we don’t die. The next problem is that the experimental parameter control actually tends to be quite shoddy. In the worst cases, you get people claiming that, say, console A has a better sound than console B. But…they heard console A in one place, with one band, and console B in a totally different place with a totally different band. There’s no meaningful comparison, because the devices under test AND the test signals were different.

As a result, listening tests produce all kinds of impressions that aren’t actually helpful. Heck, we don’t even know what “sounds better” means. For this person over here, it means lots of high-frequency information. For some other person, it means a slight bass boost. This guy wants a touch of distortion that emphasizes the even-numbered harmonics. That gal wants a device that resembles a “straight wire” as much as possible. Nobody can even agree on what they like! You can’t actually get a rigorous comparison out of that sort of thing.

The flipside is, if we can actually hear it, we should be able to measure it. If a given input signal actually sounds different when listened to through different signal paths, then those signal paths MUST have different transfer functions. A measurement transducer that meets or exceeds the bandwidth and transient response of a human ear should be able to detect that output signal reliably. (A measurement mic that, at the very least, significantly exceeds the bandwidth of human hearing is only about $700.)
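To make “different transfer functions” concrete, here’s a toy sketch. The two “signal paths” are made up for illustration – a perfectly flat path and a gentle one-pole low-pass – but the point stands: if the paths genuinely differ, the difference shows up as a plain, printable number, no golden ears required.

```python
import cmath
import math

def magnitude_db(b, a, f, fs):
    """Evaluate a digital filter's magnitude response, in dB, at frequency f."""
    z = cmath.exp(2j * math.pi * f / fs)
    num = sum(bk * z**-k for k, bk in enumerate(b))
    den = sum(ak * z**-k for k, ak in enumerate(a))
    return 20 * math.log10(abs(num / den))

fs = 48000
flat = ([1.0], [1.0])  # "straight wire" path A
# Path B: a hypothetical one-pole lowpass (flat at DC, rolling off up high)
alpha = 0.3
lowpass = ([alpha], [1.0, -(1.0 - alpha)])

for f in (100, 1000, 10000):
    diff = magnitude_db(*lowpass, f, fs) - magnitude_db(*flat, f, fs)
    print(f"{f:>5} Hz: path B differs from path A by {diff:+.2f} dB")
```

Both paths measure essentially identical at 100 Hz, but the difference at 10 kHz is on the order of 11 dB – easily detected by any competent analyzer, no subjective impressions needed.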

As I said, measuring – real measuring – is hard. If the analysis rig is set up incorrectly, we get unusable results, and it’s frighteningly easy to screw up an experimental procedure. Also, we have to be very, very specific about what we’re trying to measure. We have to start with an input signal that is EXACTLY the same for all measurements. None of this “we’ll set up the drums in this room, play them, then tear them down and set them up in this other room” can be tolerated as valid. Then, we have to make every other parameter agree for each device being tested. No fair running one preamp closer to clipping than the other! (For example.)

Question Everything

So…what to do now?

If I had to propose an initial solution to the problems I see (which may not be seen by others, because this is my own opinion – oh, the IRONY), I would NOT say that the solution is for everyone to graph everything. I don’t see that as being necessary. What I DO see as being necessary is for more production craftspersons to embrace their inner skeptic. The less coherent explanation is attached to an assertion, the more we should doubt that assertion. We can even develop a “hierarchy of dubiousness.”

If something can be backed up with an actual experiment that produces quantitative data, that something is probably true until disproved by someone else running the same experiment. Failure to disclose the experimental procedure makes the measurement suspect, however – how exactly did they arrive at the conclusion that the loudspeaker will tolerate 1 kW of continuous input? No details? Hmmm…

If a statement is made and backed up with an accepted scientific model, the statement is probably true…but should be examined to make sure the model was applied correctly. There are lots of people who know audio words, but not what those words really mean. Also, the model might change, though that’s unlikely in basic physics.

Experience and anecdotes (“I heard this thing, and I liked it better”) are individually valid, but only in the very limited context of the person relating them. A large set of similar experiences across a diverse range of people expands the validity of the declaration, however.

You get the idea.

The point is that a growing lack of desire to just accept any old statement about audio will, hopefully, start to weed out some of the mythological monsters that periodically stomp through the production-tech village. If the myths can’t propagate, they stand a chance of dying off. Maybe. A guy can hope.

So, question your peers. Question yourself. Especially if there’s a problem, and the proposed fix involves a significant amount of money, question the fix.

A group of us were once troubleshooting an issue. A producer wasn’t liking the sound quality he was getting from his mic. The discussion quickly turned to preamps, and whether he should save up to buy a whole new audio interface for his computer. It finally dawned on me that we hadn’t bothered to ask anything about how he was using the mic, and when I did ask, he stated that he was standing several feet from the unit. If that’s not a recipe for sound that can be described as “thin,” I don’t know what is. His problem had everything to do with the acoustic physics of using a microphone, and nothing substantial AT ALL to do with the preamp he was using.

A little bit of critical thinking can save you a good pile of cash, it would seem.

(By the way, I am biased like MAD against the crowd that craves expensive mic pres, so be aware of that when I’m making assertions. Just to be fair. Question everything. Question EVERYTHING. Ask where the data is. Verify.)


EQ Or Off-Axis?

A case-study in fixing a monitor mix.



I’m really interested in monitors. They contribute immensely to the success (or crushing failure) of a show, affect musicians in ways that are often inaudible to me, and tend to require a fair bit of management. I wrote a whole article on the topic of unsuckifying them. Some of the most interesting problems to solve involve monitor mixes, because those problems are a confluence of multiple factors that combine to smash your face in.

You know, like Devastator, the Decepticon super-robot formed by the Constructicons. The GREEN (and purple) super-robot. From the 1980s. It was kind of a pain to put him together, if I remember correctly.

Sorry, what were we talking about again?

Monitors.

So, my regular gig picked up a “rescue” show, because another venue shut down unexpectedly. A group called The StrangeHers was on deck, with Amanda in to play some fiddle. (Amanda is a fiddle player in high demand. If she’s not playing with a band, she is being recruited by that band. I expect that her thrash-metal debut will come shortly.) We were rushing around, trying to get monitor world sorted out. When we got to Amanda, she jumped in with a short, but highly astute question:

“The vocals are loud, but I can’t really make them out. They sound all muddy. Is there a problem with the EQ, or is it something else?”

Indirect

Amanda’s monitor was equalized correctly. The lead vocal was equalized correctly. Well…that is…ELECTRONICALLY. The signal processing software acting as EQ was doing exactly what it should have been doing. Amanda’s problem had to do with effective EQ: The total, acoustical solution for her was incorrect.

In other words, yes, we had an EQ problem, but it wasn’t a problem that would be appropriately fixed with an equalizer.

One of the lessons that live-sound tries to teach – over and over again, with swift and brutal force – is that actually resolving an issue requires addressing whatever is truly precipitating that issue. You can “patch” things by addressing the symptoms, but you won’t have a fix until you get to the true, root cause.

What was precipitating the inappropriate, total EQ for Amanda could be boiled down to one fundamental factor: She wasn’t getting enough “direct” sound.

To start with, she was “off-axis” from all the other monitors she was hearing. Modern loudspeakers for live-sound applications do tend to have nice, tight, pattern control at higher frequencies. As the frequency of the reproduced content decreases, though, the output has more and more of a tendency to just “go everywhere.” Real directivity at low frequencies requires big “boxes,” as the wavelengths involved are quite large. Big boxes, however, are generally not what we want on deck, so we have to deal with what we’ve got. What we’ve got, then, is a reality where standing to the side of a monitor gets you very little in the way of frequency content that contributes to vocal intelligibility (roughly 1 kHz and above), and quite a lot of sound that contributes to vocal “mud.”
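To put numbers on “the wavelengths involved are quite large”: wavelength is just the speed of sound divided by frequency. A quick sketch (assuming roughly room-temperature air; the frequencies are arbitrary examples):

```python
# Wavelength = speed of sound / frequency. A 100 Hz wave is over 3 m long,
# which is why a small box can't steer it; an 8 kHz wave is only a few cm.
c = 343.0  # m/s, approximate speed of sound at ~20 degrees C
for f in (100, 1000, 8000):
    print(f"{f:>4} Hz -> wavelength {c / f:.2f} m")
```

A typical monitor wedge is a small fraction of that 3.4 m low-frequency wavelength, so pattern control down there simply isn’t on the menu.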

Another major factor was that the rest of what Amanda was hearing had been bounced off a boundary at least once. Any “intelligibility zone” material that made it to Amanda’s ears was significantly late when compared to everything else, and probably smeared badly from containing multiple reflections of itself. Compounding that was the issue of a room that contained both people and acoustical treatment. Most anything that was reflected back to the deck was probably missing a lot of high-frequency information. It had been heavily absorbed on the way out and the way back.
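How “significantly late” is a reflected arrival? Divide the extra path length by the speed of sound. The 10 m round trip below is purely hypothetical, just to show the scale involved:

```python
c = 343.0            # m/s, approximate speed of sound in air
extra_path_m = 10.0  # hypothetical extra distance: monitor -> boundary -> player's ears
delay_ms = extra_path_m / c * 1000
print(f"An extra {extra_path_m:.0f} m of path arrives about {delay_ms:.1f} ms late")
```

Roughly 29 ms of lateness is well into the territory where speech-intelligibility content smears badly against the direct sound.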

Figuring It Out

This is not to say that all of the above snapped instantly into my head when Amanda asked what was wrong. I had to have other clues in order to chase down a fix. Those clues were:

1) Before the show, I had put the mics through the monitors, walked up on deck, and listened to what it all sounded like. For the test, I had a very healthy send level from each vocal mic to the monitors that were directly behind that microphone. Vocal intelligibility was certainly happening at that time, and although things would definitely change as the room changed, the total acoustical solution wouldn’t become unrecognizably different.

2) Nobody else had complained. Although this is hardly the most reliable factor, it does figure in. If the vocals were a muddy mess everywhere, I’m betting that I would have gotten more agreement from the other band members. This suggested that the problem was local to Amanda, and by extension, that a global change (EQ on the vocal channel) would potentially create an incorrect solution for the other folks.

3) On the vocal channel, the send level to the other monitors was high in comparison to the send level to Amanda’s monitor. This was probably my biggest and most immediate clue. When other monitors are getting sends that are +9 dB in relation to another box, the performer is probably hearing mostly the garbled wash from everything OTHER than their own monitor. If the send level to Amanda’s wedge had been high, I might have concluded that the overall EQ for that particular wedge was wrong – although my encouraging, pre-show experience would have suggested that the horn had died at some point. (Ya cain’t fix THAT with an equalizer, pilgrim…)

So, with the clues that I had, I decided to try increasing the send level to Amanda’s monitor to match the send levels to the other monitors. Just like that, Amanda had a LOT more direct sound, everything was copacetic, and off we went.


Regarding Arrangements and Audio Humans – A Letter

A guest-post for Schwilly Family Musicians.



Yes, that’s me in the picture up there.

Anyway…


‘It has come to my attention that some of you have, often by accident, placed more responsibility in my hands than might be prudent. This may have come from many things: A misunderstanding of how our roles intersect, an overestimation of what physics will allow me to get away with, misplaced hero-worship, or other such thoughts.

What I am referring to specifically is the idea that your song arrangements are best managed by way of a sound person wielding a tremendous chain of signal transduction and processing equipment. You’ve seen and heard concert setups that have impressed you, and you’ve thought, “This gear, in the hands of a competent tech, will make us sound good.”

My dear Bands, I don’t wish to be combative or contradictory, but I cannot agree with you on that concept.’


Read the rest (for free!) at Schwilly Family Musicians.


Does It Have To Be This Loud?

A love-letter to patrons of live music.



Dear Live-Music Patron,

Occasionally, you have a question to ask an audio human. That question isn’t often posed to me personally, although, in aggregate, the query is probably made multiple times in every city on every night. The question isn’t always direct, and it can morph into different forms – some of which are statements:

“I can’t hear anything except the drums.”

“The guitar on the right is hurting my ears.”

“It’s hard to talk to my girlfriend/boyfriend in here.”

“Can you keep everything the same, but turn the mains down?”

“Can you make it so the mic doesn’t make that screech again?”

And so on.

Whenever the conversation goes this way, there’s a singular question lying at the heart of the matter:

“Does it have to be this loud?”

There are a number of things I want to say to you regarding that question, but the most important bit has to come first. It’s the one thing that I want you to realize above everything else.

You’re asking a question that is 100% legitimate.

You may have asked it in one way or another, only to be brushed off. You may have had an exasperated expression pointed your way. You may have been given a brusque “Yes” in response. You may have encountered shrugging, swearing, eye-rolling, sneering, or any number of other responses that were rude, unhelpful, or downright mean.

But that doesn’t mean that your question is wrong or stupid. You’re right to ask it. It’s one of the minor tragedies in this business that production people and music players talk amongst themselves so much, and yet almost never have a real conversation with you. Another minor tragedy is that us folks who run the shows are usually not in a position to have a nuanced discussion with you when it would actually be helpful.

It’s hard to explain why it’s so loud when it’s so loud that you have to ask if “it has to be this loud.”

So, I want to try to answer your question. I can’t speak to every individual circumstance, but I can talk about some general cases.

Sometimes No

I am convinced that, at some time in their career, every audio tech has made a show unnecessarily loud. I’ve certainly done it.

As “music people,” we get excited about sonic experiences as an end in themselves. We’re known for endlessly chasing after tiny improvements in some miniscule slice of the audible spectrum. We can spend hours debating the best way to make the bass (“kick”) drum sound like a device capable of extinguishing all multicellular life on the planet. The sheer number of words dedicated to the construction of “massive” rock and roll guitar noises is stunning. The amount of equipment and trickery that can be dedicated to, say, getting a bass guitar to sound “just so” might boggle your mind.

It’s entirely possible for us to become so enraptured in making a show – or even just a small portion of a show – sound a certain way that we don’t realize how much level we’re shoveling into the equation. We get the drums cookin’, and then we realize that the guitars are a little low, and then the bass comes up to balance that out, and then the vocals are buried, so we crank up the vocals and WHAT? I CAN’T HEAR YOU!

It does happen. Sometimes it’s accidental, and sometimes it’s deliberate. Some techs just don’t feel like a rock show is a rock show until they “feel” a certain amount of sound pressure level.

In these cases, when the audio human’s mix choices are the overwhelming factor in a show being too loud, the PA really should be pulled back. It doesn’t have to be that loud. The problem and the solution are simple creatures.

But Sometimes Yes

The thing with live audio is that the problems and the solutions are often not so simple as what I just got into. It’s very possible, especially in a small room, for the sound craftsperson’s decisions to NOT be the overwhelming factor in determining the volume of a gig. I – and others like me – have spent lots of time in situations where we’ve had to deal with an unfortunate consequence of the laws of physics:

The loudest thing in the room is as quiet as we can possibly be, and quite often, a balanced mix requires something else to be much louder than that thing.

If the instrumentalists (drums, bass, guitars, etc) are blasting away at 110 dB without any help from the sound system, then the vocals will have to be in that same neighborhood in order to compete. It’s a conundrum of either being too loud with a flat-out awful mix, or too loud with a mix that’s basically okay. In a case like that, an audio human just has to get on the gas and wait to go home. Someone’s going to be mad at us, and it might as well not be the folks who are into the music.

There’s another overarching situation, though, and that’s the toughest one to talk about. It’s a difficult subject because it has to do with subjectivity and incompatible expectations. What I’m getting at is when some folks want background music, and the show is not – and cannot be – presented as such.

There ARE bands that specialize in playing “dinner” music. They’re great at performing inoffensive selections that provide a bed for conversation at a comfortable volume. What I hope can be understood is that this is indeed a specialization. It’s a carefully cultivated, specific skill that is not universally pursued by musicians. It’s not universally pursued because it’s not universally applicable.

Throughout human history, a great many musical events, musical instruments, and musical artisans have had a singular purpose: To be noticed, front and center. For thousands of years, humans have used instruments like drums and horns as acoustic “force multipliers” – sonic levers, if you will. We have used them to call to each other over long distances, or send signals in the midst of battle. Fanfares have been sounded at the arrivals of kings. On a parallel track, most musicians that I know do not simply play to be involved in the activity of playing. They play so as to be listened to.

Put all that together, and what you have is a presentation of art that simply is not meant to be talked over. In the cases where it’s meant to coexist with a rambunctious audience, it’s even more meant to not be talked over. From the mindset of the players to the technology in use, the experience is designed specifically to stand out from the background. It can’t be reduced to a low rumble. That isn’t what it is. There’s no reason that it has to be painfully loud, but there are many good reasons why a conversation in close proximity might not be practical.

So.

Does it have to be this loud?

Maybe.


What Just Changed?

If an acoustical environment changes significantly, you may start to have mysterious problems.



Just recently, I was working a show for Citizen Hypocrisy. I did my usual prep, which includes tamping down the “hotspots” in vocal mics – gain before feedback being important, and all that.

Everything seemed copacetic, even the mic for the drumkit vocal. There was clarity, decent volume, and no ringing. The band got in the room and set up their gear. This time around, an experiment was conducted: A psychedelic background video was set to play in a loop as a backdrop. We eventually did a quick, “just in time” soundcheck, and we were off.

As things kicked into gear, I noticed something. I was getting high-frequency feedback from one of the mics. It wasn’t running away by any means, but it was audible and annoying. I thought to myself, “We did just give DJ’s microphone a push in monitor world. I probably need to pull the top-end back a bit.” I did just that…but the feedback continued. I started soloing things into my headphones.

“Huh, guitar-Gary’s mic seems okay. DJ’s mic is picking up the feedback, but it doesn’t seem to be actually part of that loop. That leaves…*unsolos DJ’s mic, solos drum-Gary’s mic* Well, THERE’S the problem.”

The mic for vocals at the drumkit? It was squeaking like a pissed-off mouse. I hammered the offending frequency with a notch filter, and that was it.
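For the curious, the kind of narrow notch I reached for can be sketched with the widely used RBJ “Audio EQ Cookbook” biquad formulas. This is a toy illustration – the 3150 Hz center frequency and the Q below are made-up values, not what was actually dialed in on the console:

```python
import cmath
import math

def notch_coeffs(f0, q, fs):
    """RBJ-cookbook notch biquad centered at f0, normalized so a0 = 1."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    a0 = 1 + alpha
    b = [1 / a0, -2 * math.cos(w0) / a0, 1 / a0]
    a = [1.0, -2 * math.cos(w0) / a0, (1 - alpha) / a0]
    return b, a

def mag_db(b, a, f, fs):
    """Magnitude response in dB at frequency f (floored to avoid log(0))."""
    z = cmath.exp(2j * math.pi * f / fs)
    h = sum(c * z**-k for k, c in enumerate(b)) / sum(c * z**-k for k, c in enumerate(a))
    return 20 * math.log10(max(abs(h), 1e-9))

fs = 48000
b, a = notch_coeffs(3150, q=12, fs=fs)  # hypothetical ring frequency
print(f"at the notch center: {mag_db(b, a, 3150, fs):.1f} dB")  # very deep cut
print(f"one octave below:    {mag_db(b, a, 1575, fs):.1f} dB")  # nearly untouched
```

The appeal of a high-Q notch is visible in the numbers: the ringing frequency gets hammered, while material an octave away passes essentially unchanged.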

But why hadn’t I noticed the problem when I was getting things baselined for the night? Gary hadn’t changed the orientation of the mic so that it was pointing at the drumfill, and neither the input gain nor send gains had changed, so why had the problem cropped up?

The answer: Between my setup and actually getting the show moving in earnest, we had changed the venue’s acoustical characteristics, especially as they concerned the offending microphone. We had deployed the screen behind the drums.

Rolling With The Changes

Citizen Hypocrisy was playing at my regular gig. Under conditions where we are NOT running video, the upstage wall is a mass of acoustical wedge foam. For most purposes, high-mid and high frequency content is soaked up by the treatment, never to be heard again. However, when we are running video, the screen drops in front of the foam. For high frequencies, the screen is basically a giant, relatively efficient reflector. My initial monitor-EQ solution was built for the situation where the upstage wall was an absorber. When the screen came down, that solution was partially invalidated. Luckily, what had to be addressed was merely the effective gain of a very narrow frequency range. We hadn’t run into a “showstopper bug,” but we had still encountered a problem.

The upshot of all this is:

Any change to a show’s acoustical environment, whether by way of changes in surface absorption, diffusion, or reflection, or by way of changes in atmospheric conditions, can invalidate a mix solution to some degree.

Now, you don’t have to panic. My feeling is that we sometimes overstate the level of vigilance required in regards to acoustical changes at a show. You just have to keep listening, and keep your brain turned on. If the acoustical environment changes, and you hear something you don’t like, then try to do something about it. If you don’t hear anything you don’t like, there’s no reason to find something to do.

For instance, at my regular gig, putting more people into the room is almost always an automatic improvement. I don’t have to change much (if anything at all), because the added absorption makes the mix sound better.

On the reverse side, I once ran a summer show for Puddlestone where I suddenly had a “feedback monster” where one hadn’t existed for a couple of hours. The feedback problem coincided with the air conditioning finally getting a real handle on the temperature in the room. My guess is that some sort of acoustical refraction was occurring, where it was actually hotter near the floor where all the humans were. For the first couple of hours, some amount of sound was bending up and away from us. When the AC really took hold, it might have been that the refraction “flattened out” enough to get a significant amount of energy back into the mics. (My explanation could also be totally incorrect, but it seems plausible.) Obviously, I had to make a modification in accordance with the problem, which I did.
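The refraction guess rests on a real relationship: the speed of sound in air rises with temperature, roughly c ≈ 331.3 + 0.606·T (T in °C), and sound bends toward the layer where it travels more slowly. A quick sketch (the temperatures are hypothetical):

```python
def speed_of_sound(temp_c):
    """Approximate speed of sound in dry air, in m/s, for a temperature in Celsius."""
    return 331.3 + 0.606 * temp_c

# Sound refracts toward the slower (cooler) layer, so a hot layer near the
# floor bends energy upward -- consistent with the "bending up and away" guess.
for t in (18, 24, 30):
    print(f"{t} degrees C -> {speed_of_sound(t):.1f} m/s")
```

The per-degree difference looks tiny, but a sustained gradient over the whole room is enough to bend propagation paths noticeably.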

In all cases, if things were working before, and suddenly are no longer working as well, a good question to ask yourself is: “What changed between when the mix solution was correct, and now, when it seems incorrect?” It’s science! You identify the variable(s) that got tweaked, and then manage the variables under your control in order to bring things back into equilibrium. If you have to re-solve your mix equation, then that’s what you do.

And then you go back to enjoying the show.

Until something else changes.


Loud Doesn’t Create Excitement

A guest post for Schwilly Family Musicians.



The folks in the audience have to be “amped up” about your songs before the privilege of volume is granted.

The full article is here.


Loud Thoughts

“Loud” is a subjective sort of business.



The concept of “loud” is really amorphous, especially when you consider just how important it is to live shows. A show that’s too loud for a given situation will quickly turn into a mess, in one way or another. Getting a desired signal “loud enough” in a certain monitor mix may be key to a great performance.

And yet…”loud” is subjective. Perceived changes in level are highly personalized. People tolerate quite a bit of level when listening to music they like, and tolerate almost no volume at all when hearing something that they hate. One hundred decibels SPL might be a lot of fun when it’s thumping bass, but it can also be downright abrasive when it’s happening at 2500 Hz.

Twice As Loud

Take a look at that heading. Do you realize that nobody actually, truly knows what “twice as loud” means?

People might think they know. You’ll hear statements like “people generally think 6 dB is about twice as loud,” but then later someone else will say, “people perceive a 10 dB difference to be twice as loud.” There’s a range of perception, and it’s pretty sloppy when you actually do the math involved.

What I mean is this. The decibel is a measure of a power ratio. (You can convert other things, like voltage and pressure, into power equivalents.) Twice the power is 3 dB, period. It’s a mathematical definition that the industry has embraced for decades. It’s an objective, quantitative measurement of a ratio. Now, think about the range of perception that I presented just now. It’s a little eyebrow-raising when you realize that the range for perceiving “twice as loud” is anywhere from 4X to 10X the power of the original signal. If a 1000 watt PA system at full tilt is the baseline, then there are listeners who would consider the output to be doubled at 4000 watts…and other folks who wouldn’t say it was twice as loud until a 10 kW system was tickling its clip lights!

It’s because of this uncertainty that I try (and encourage others to seriously consider) communicating in terms of decibels. Especially in the context of dialing up a PA or monitor rig to everybody’s satisfaction, it helps greatly if some sort of quantitative and objective reference point is used. Yes, statements like “I need the guitar to be twice as loud,” or “I think the mix needs 10% more of the backup singers” ARE quantitative – but they aren’t objective. Do you need 3dB more guitar? Six decibels? Ten? Do you want only 0.4 dB more of the backup singers? (Because that’s what [10 log 1.1] works out to.) Communicating in decibels is far less arbitrary.
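The arithmetic behind those figures is easy to check. A power ratio in decibels is 10·log₁₀(ratio), and going the other way is 10^(dB/10):

```python
import math

def db_from_power_ratio(ratio):
    """Decibels corresponding to a power ratio."""
    return 10 * math.log10(ratio)

def power_ratio_from_db(db):
    """Power ratio corresponding to a decibel figure."""
    return 10 ** (db / 10)

print(f"{db_from_power_ratio(2):.2f} dB")         # doubling the power: ~3.01 dB
print(f"{power_ratio_from_db(6):.2f}x power")     # +6 dB: ~4x the power
print(f"{power_ratio_from_db(10):.2f}x power")    # +10 dB: exactly 10x the power
print(f"{1000 * power_ratio_from_db(10):.0f} W")  # 1 kW baseline, +10 dB: 10000 W
print(f"{db_from_power_ratio(1.1):.2f} dB")       # "10% more": ~0.41 dB
```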

(The irony of using a qualitative phrase like “far less” in the context of advocating for objective quantification is not lost on me, by the way.)
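To put numbers on those example requests, here’s a sketch (function names are mine) that converts a percentage request into decibels, and common decibel requests into the amplitude multipliers you’d actually apply at a fader. The 20·log10 form applies when scaling voltage or amplitude rather than power:

```python
import math

def db_from_power_ratio(ratio):
    """dB = 10 * log10(power ratio)."""
    return 10 * math.log10(ratio)

def amplitude_ratio_from_db(db):
    """For voltage/amplitude (e.g., a fader), the ratio is 10^(dB/20)."""
    return 10 ** (db / 20)

# "10% more of the backup singers" is a tiny move in decibel terms:
print(round(db_from_power_ratio(1.1), 2))        # 0.41

# Conversely, common dB requests as fader (amplitude) multipliers:
for db in (3, 6, 10):
    print(db, "->", round(amplitude_ratio_from_db(db), 2))
# +3 dB is about 1.41x, +6 dB about 2x, +10 dB about 3.16x amplitude
```

The asymmetry is worth noticing: a request that sounds substantial (“10% more”) is under half a decibel, while a modest-sounding “+10 dB” more than triples the signal amplitude.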

The Meter Is Only Partially Valid As An Argument

Even if nobody actually knows what “twice as loud” means, one thing that people do know is when they feel a show is too loud.

For those of us who embrace measurement and objectivity, there’s a tendency we share. When we hear a subjective statement, we get the urge to fire up a meter and figure out whether that statement is true. I’m all for this behavior in many scenarios. Challenging unsubstantiated hoo-ha is, I think, one of the areas of pro-audio that still has some “frontier” left in it. My opinion is that more claims need to be challenged with the question, “Where’s your data?”

But when it comes to the topic of “loud,” especially the problem of “too loud,” whipping out an SPL meter and trying to argue from objectivity has only narrow applicability. In the case of a show that feels too loud for someone, the meter can help you calibrate their perception of “loud” to an actual number you can work with. You can then decide whether trying to achieve a substantially lower reading is feasible or desirable. If a full-on rock band is playing in a room, making 100 dBC at FOH without the PA even contributing, and one person thinks they ought to be at 85 dBC…that one person is probably out of luck. The laws of physics are very unlikely to let you fulfill that expectation. At the same time, you have to realize that your meter reading (which might suggest that the PA is only contributing three more decibels to the show) is irrelevant to that person’s perception.
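That “only three more decibels” figure follows from the usual incoherent-source sum. Here’s a sketch (assuming the band and PA behave as uncorrelated sources; correlated signals can sum higher):

```python
import math

def sum_spl(*levels_db):
    """Sum incoherent sources: convert each SPL to relative power,
    add the powers, then convert back to decibels."""
    total_power = sum(10 ** (level / 10) for level in levels_db)
    return 10 * math.log10(total_power)

# Band alone at 100 dBC; PA alone also at 100 dBC at the mix position:
print(round(sum_spl(100, 100), 2))   # 103.01 -- "doubling" adds only 3 dB

# A PA running 10 dB below the band barely moves the meter:
print(round(sum_spl(100, 90), 2))    # 100.41
```

This is exactly why the meter can justify the show level without changing how loud the show feels: the PA may be measurably almost irrelevant while still being perceptually decisive.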

If something is too loud for someone, the numbers from your meter have limited value. They can help you form a justifying argument for why the show level is where it is, but they’re not a valid argument all by themselves.


It’s Not Actually About The Best Sound

What we really want is the best possible show at the lowest practical gain.


As it happens, there’s a bit of a trilogy forming around my last article – the one about gain vs. stability. In discussions like this, the opening statement tends to be abstract. The “abstractness” is nice in a way, because it doesn’t restrict the application too much. If the concept is purified sufficiently, it should be usable in any applicable context.

At the same time, it’s nice to be able to make the abstract idea more practical. That is, the next step after stating the concept is to talk about ways in which it applies.

In live audio, gain is both a blessing and a curse. We often need gain to get mic-level signals up to line-level. We sometimes need gain to correct for “ensemble imbalances” that the band hasn’t yet fixed. We sometimes need gain to make a quiet act audible against a noisy background. Of course, the more gain we add, the more we destabilize the PA system, and the louder the show gets. The day-to-day challenge is to find the overall gain which lets us get the job done while maintaining acceptable system stability and sound pressure.

If this is the overall task, then there’s a precept which I think can be derived from it. It might only be derivable indirectly, depending on your point of view. Nevertheless:

Live sound is NOT actually about getting the best sound, insofar as “the best sound” is divorced from other considerations. Rather, the goal of live sound is to get the best possible holistic SHOW, at the lowest practical gain.

Fixing Everything Is A Bad Idea

The issue with a phrase like “the best sound” is that it morphs into different meanings for different people. For instance, at this stage in my career, I have basically taken the label saying “The Best Sound” and stuck it firmly on the metaphorical box containing the sound that gets the best show. For that reason alone, the semantics can be a little difficult. That’s why I made the distinction above – the distinction that “the best sound” or “the coolest sound” or “the best sound quality” is sometimes thought of without regard to the show as a whole.

This kind of compartmentalized thinking can be found both in concert audio veterans and greenhorns. My gut feeling is that the veterans who still section off their thinking are the ones who never had their notions challenged when they were new enough.

…and I think it’s quite common among new audio humans to think that the best sound creates the best show. That is, if we get an awesome drum sound, and a killer guitar tone, and a thundering bass timbre, and a “studio ready” vocal reproduction, we will then have a great show.

The problem with this line of thinking is that it tends to create situations where a tech is trying to “fix” almost everything about the band. The audio rig is used as a tool to change the sound of the group into a processed and massaged version of themselves – a larger than life interpretation. The problem with turning a band into a “bigger than real” version of itself is that doing so can easily require the FOH PA to outrun the acoustical output of the band AND monitor world by 10 dB or more. Especially in a small-venue context, this can mean lots and lots of gain, coupled with a great deal of SPL. The PA system may be perched on the edge of feedback for the duration of the show, and it may even tip over into uncontrolled ringing on occasion. Further, the show can easily be so loud that the audience is chased off.

To be blunt, your “super secret” snare-drum mojo is worthless if nobody wants to be in the same room with it. (If you follow me.)

Removed from other factors, the PA does sound great…but with the other factors being considered, that “great” sound is creating a terrible show.

Granularity

The correction for trying to fix everything is to reinforce only what actually needs help. This approach obeys the “lowest practical gain” rule. PA-system gain is applied only to the sources that are being acoustically swamped, and only in enough quantity that those sources stop being swamped.

In a sense, you might say that there’s a certain amount of total gain (and total resultant volume) that you can have that is within an acceptable “window.” When you’ve used up your allotted amount of gain and volume, you need to stop there.

At first, the selectivity of what gets gain applied is not very narrow. For newer operators and/or simplified PA systems, the choice tends to be “reproduce most of the source or none of it.” You might have, say, one guitar that’s in the PA, plus a vocal that’s cranked up, and some kick drum, and that’s all. Since the broadband content of the source is getting reproduced by the PA, adding any particular source into the equation chews up your total allowable gain in a fairly big hurry. This limits the correction (if actually necessary) that the PA system can apply to the total acoustical solution.

The above, by the way, is a big reason why it’s so very important for bands to actually sound like a band without any help from the PA system. That does NOT mean “so loud that the PA is unnecessary,” but rather that everything is audible in the proper proportions.

Anyway.

As an operator learns more and gains more flexible equipment, they can be more selective about what gets a piece of the gain allotment. For instance, let’s consider a situation where one guitar sound is not complementing another. The overall volumes are basically correct, but the guitar tones mask each other…or are masked by something else on stage. An experienced and well-equipped audio human might throw away everything in one guitar’s sound, except for a relatively narrow area that is “out of the way” of the other guitar. The audio human then introduces just enough of that band-limited sound into the PA to change the acoustical “solution” for the appropriate guitar. The stage volume of that guitar rig is still producing the lion’s share of the SPL in the room. The PA is just using that SPL as a foundation for a limited correction, instead of trying to run right past the total onstage SPL. The operator is using granular control to get a better show (where the guitars each have their own space) while adding as little gain and SPL to the experience as possible.

If soloed up, the guitar sound in the PA is terrible, but the use of minimal gain creates a total acoustical solution that is pleasing.

Of course, the holistic experience still needs to be considered. It’s entirely possible to be in a situation that’s so loud that an “on all the time” addition of even band-limited reinforcement is too much. It might be that the band-limited channel should only be added into the PA during a solo. This keeps the total gain of the show as low as is practicable, again, because of granularity. The positive gain is restricted in the frequency domain AND the time domain – as little as possible is added to the signal, and that addition is made as rarely as possible.

An interesting, and perhaps ironic, consequence of granularity is that you can put more sources into the PA and apply more correction without breaking your gain/volume budget. Selective reproduction of narrow frequency ranges can mean that many more channels end up in the PA. The highly selective reproduction lets you tweak the sound of a source without having to mask all of it. You might not be able to turn a given source into the best sound of that type, but granular control just might let you get the best sound practical for that source at that show. (Again, this is where the semantics can get a little weird.)

Especially for the small-venue audio human, the academic version of “the best sound” might not mean the best show. This also goes for the performers. As much as “holy grail” instrument tones can be appreciated, they often involve so much volume that they wreck the holistic experience. Especially when getting a certain sound requires driving a system hard – or “driving” an audience hard – the best show is probably not being delivered. The amount of signal being thrown around needs to be reduced.

Because we want the best possible show at the lowest practical gain.


“Shine On You Crazy Diamond:” The Best Soundcheck Song EVER

Everything takes its place at an unhurried pace.


Back in the days when I got to work with Floyd Show, I always preferred it when the night would start with “Shine On You Crazy Diamond.” Even when we’d had time to do extensive work on the show’s audio during the day, the luxury of “easing in” to the actual performance was something I savored.

Sure, nothing quite compares with the rush of, say, having the first set be “Dark Side Of The Moon.” The intro plays, building to a feverish peak, and then “shwOOM!” The guitars take you into “Breathe.” It’s really cool when it works, but there’s always that nagging fear in the back of your mind: “What if something doesn’t quite happen correctly?” Anything short of a catastrophic failure is insufficient to allow the show to stop, so a problem means that the impact of the show-open is wrecked…AND you’re going to have to fix things in a big hurry.

Anyway.

“Shine On You Crazy Diamond” is, in my opinion, THE template for a “soundcheck” song. Soundcheck songs are fantastic tools to have handy, because (let’s face it), the small-venue world is full of scenarios where the only option is grab-n-go. Combining an actual first song with a soundcheck lets you keep the show moving, maximizing your play time and audience engagement while getting things sorted out on deck. Not all soundcheck tunes are created equal, though, so learning the lessons available from “Shine On” is a Very Good Idea™ when it comes time to craft your own “multitasker’s minuet.”

Take Your Time

Because soundcheck songs naturally happen at the top of a set, the instinct is to start off with a bang. This is unhelpful. A fast tune means that the time available for an audio-human to catch, analyze, act on, and re-evaluate any particular problem is hugely compressed. Several musical phrases can go by while a tech tries to get sorted out during a lively song. The more phrases that go by without a fix, the more “wrong” the show seems to sound. A fast song, then, tends to push the show opening towards sounding bad. (You don’t want your show-open to sound bad.)

“Shine On,” of course, answers this issue in exactly the right way. It’s a leisurely piece – downright dreamy, actually – which means that the person managing the PA noises doesn’t have to rush around. They can focus, listen, and act deliberately. If something is wrong, it’s entirely possible to get a handle on the issue within a couple of musical phrases. Even very sticky problems can usually be wrangled by the end of the song, which allows the show to continue smoothly without stopping.

If you want to come out of the gate like a racecar, you need a proper soundcheck. If you’re going to do things on the fly, please fly slowly.

Everything Has Its Own Space

Another excellent feature of “Shine On You Crazy Diamond” is that it spends a long time being a series of solos. You get to hear the keys, then a guitar chimes in up front, then the other guitar plays for a bit, then you get some more keys, and then everything fires together with the drums. After that, you get some uncluttered drums along with another guitar solo, and then some vocals that happen over some subdued backing from the band. Next, you get a chance to hear the vocals against the higher-intensity version of the band, and finally, you get some saxophone over both gentle and more “wound-up” backgrounds.

Everything has a time (and quite a lot of it, due to the song being slow) where it is the front-and-center element. For an audio-human, this is tremendous. It gives a very clear indication of whether or not the basic level of the part is in a reasonable place, and it also still manages to say a lot about whether the part’s tonality is going to work in context. Further, this kind of song structure allows us to get as close as possible to a “check everything individually” situation without actually having that option available. The audio human gets time to think about each instrument separately, even though other parts are still playing in the background.

The antithesis of this is the soundcheck song where everything starts playing at once, usually with everybody trying to be louder than everybody else. The tech ends up losing precious time while trying to simply make sense of the howling vortex of noise that just hit them in the face. With nothing “presorted,” the only option is to struggle to pick things out either by luck or by force.

Again, if you want to start at a full roar, you should do that at the shows where you have the opportunity to get the roar figured out in advance. If you don’t have time to take turns getting sorted before the show, then you have to use the show to do that.

Waste Nothing

Some folks treat their soundcheck song as a bit of worthless rubbish. They toss it out to the audience as though it has no value, seemingly in the hopes that the showgoers will ignore it. It’s as though the band is saying “this isn’t real, so don’t pay attention yet.”

But it IS real, and the audience IS paying attention. A soundcheck tune is part of the actual show, and should NOT be a throwaway. It should be a “first-class” song that’s done as well as is possible.

Of course, because it is a soundcheck song, it probably shouldn’t be the tune that relies most on everything going perfectly. Songs used to get around production issues are tools, and you have to use the correct tool for any given job.

“Shine On” is a real song. It’s a very important part of Pink Floyd’s catalog, and was crafted with care. Floyd Show never (when I worked with them) played the tune with the idea of taking a mulligan afterwards, which is also what I would expect from the actual Pink Floyd. If the show was opened with “Shine On You Crazy Diamond,” the show was OPENED. We were not casually testing anything; we were going for it, even as remaining technical issues got sorted out.

You should care about your soundcheck song. It’s a real part of your show, a part that should be intentionally crafted to meet a specific need: Connecting with your audience while a mix comes together.


Engin-eyes

Trust your ears – but verify.


It might be that I just don’t want to remember who it was, but a famous engineer once became rather peeved. His occasion to be irritated arose when a forum participant had the temerity to load one of the famous engineer’s tracks into a DAW and look at the waveform. The forum participant (not me) was actually rather complimentary, saying that the track LOOKED very compressed, but didn’t SOUND crushed at all.

This ignited a mini-rant from the famous guy, where he pointedly claimed that the sound was all that mattered, and he wasn’t interested in criticism from “engin-eyes.” (You know, because audio humans are supposed to be “engine-ears.”)

To be fair, the famous engineer hadn’t flown into anything that would pass as a “vicious, violent rage,” but the relative ferocity of his response was a bit stunning to me. I was also rather put off by his apparent philosophy that the craft of audio has no need of being informed by senses other than hearing.

Now, let’s be fair. The famous engineer in question is known for a reason. He’s had a much more monetarily successful career than I have. He’s done excellent work, and is probably still continuing to do excellent work at the very moment of this writing. He’s entitled to his opinions and philosophies.

But I am also entitled to mine, and in regards to this topic, here’s what I think:

The idea that an audio professional must rely solely upon their sense of hearing when performing their craft is, quite simply, a bogus “purity standard.” It gets in the way of people’s best work being done, and is therefore an inappropriate restriction in an environment that DEMANDS that the best work be done.

Ears Are Truthful. Brains Are Liars.

Your hearing mechanism, insofar as it works properly, is entirely trustworthy. A sound pressure wave enters your ear, bounces your tympanic membrane around, and ultimately causes hair cells deep in your cochlea to fire electrical signals down your auditory nerve. To the extent that I understand it all, this process is functionally deterministic – for any given input, you will get the same output until the system changes. Ears are dispassionate detectors of aural events.

The problem with ears is that they are hooked up to a computer (your brain) which can perform very sophisticated pattern matching and pattern synthesis.

That’s actually incredibly neat. It’s why you can hear a conversation in a noisy room. Your brain receives all the sound, performs realtime, high-fidelity pattern matching, tries to figure out what events correlate only to your conversation, and then passes only those events to the language center. Everything else is labeled “noise,” and left unprocessed. On the synthesis side, this remarkable ability is one reason why you can enjoy a song, even against noise or compression artifacts. You can remember enough of the hi-fi version to mentally reconstruct what’s missing, based on the pattern suggested by the input received. Your emotional connection to the tune is triggered, and it matters very little that the particular playback doesn’t sound all that great.

As I said, all that is incredibly neat.

But it’s not necessarily deterministic, because it doesn’t have to be. Your brain’s pattern matching and synthesis operations don’t have to be perfect, or 100% objective, or 100% consistent. They just have to be good enough to get by. In the end, what this means is that your brain’s interpretation of the signals sent by your ears can easily be false. Whether that falsehood is great or minor is a whole other issue, very personalized, and beyond the scope of this article.

Hearing What You See

It’s very interesting to consider what occurs when your hearing correlates with your other senses. Vision, for instance.

As an example, I’ll recall an “archetype” story from Pro Sound Web’s LAB: A system tech for a large-scale show works to fulfill the requests of the band’s live-audio engineer. The band engineer has asked that the digital console be externally “clocked” to a high-quality time reference. (In a digital system, the time reference or “wordclock” is what determines exactly when a sample is supposed to occur. A more consistent timing reference should result in more accurate audio.) The system tech dutifully connects a cable from the wordclock generator to the console. The band engineer gets some audio flowing through the system, and remarks at how much better the rig sounds now that the change had been made.

The system tech, being diplomatic, keeps quiet about the fact that the console has not yet been switched over from its internal reference. The external clock was merely attached. The console wasn’t listening to it yet. The band engineer expected to hear something different, and so his brain synthesized it for him.

(Again, this is an “archetype” story. It’s not a description of a singular event, but an overview of the functional nature of multiple events that have occurred.)

When your other senses correlate with your hearing, they influence it. When the correlation involves something subjective, such as “this cable will make everything sound better,” your brain will attempt to fulfill your expectations – especially when no “disproving” input is presented.

But what if the correlating input is objective? What then?

Calibration

What I mean by “an objective, correlated input” is an unambiguously labeled measurement of an event, presented in the abstract. A waveform in a DAW (like I mentioned in the intro) fits this description. The timescale, “zero point,” and maximum levels are clearly identifiable. The waveform is a depiction of audio events over time, in a visual medium. It’s abstract.
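To make the “abstract, unambiguous measurement” idea concrete, here’s a sketch that computes peak and RMS levels in dBFS from a list of samples. (The function names are mine, not any particular DAW’s API.) A “looks very compressed” waveform is precisely one where the gap between these two numbers is small:

```python
import math

def peak_dbfs(samples):
    """Peak level relative to digital full scale (1.0), in dB."""
    peak = max(abs(s) for s in samples)
    return 20 * math.log10(peak)

def rms_dbfs(samples):
    """RMS (average-energy) level relative to full scale, in dB."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

# A full-scale sine wave: peak = 0 dBFS, RMS is about -3.01 dBFS.
n = 1000
sine = [math.sin(2 * math.pi * i / n) for i in range(n)]
print(round(peak_dbfs(sine), 2))   # 0.0
print(round(rms_dbfs(sine), 2))    # -3.01
```

Nothing subjective enters this calculation, which is the point: it gives your brain a fixed reference that it cannot quietly reinterpret.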

In the same way, audio analyzers of various types can act as objective, correlated inputs. To the extent that their accuracy allows, they show the relative intensities of audio frequencies on an unambiguous scale. They’re also abstract. An analyzer depicts sonic information in a visual way.

When used alongside your ears, these objective measurements cause a very powerful effect: They calibrate your hearing. They allow you to attach objective, numerical information to your brain’s perception of the output from your ears.

And this makes it harder for your brain to lie to you. Not impossible, but harder.

Using measurement to confirm or deny what you think you hear is critical to doing your best work. Yes, audio-humans are involved in art, and yes, art has subjective results. However, all art is created in a universe governed by the laws of physics. The physical processes involved are objective, even if our usage of the processes is influenced by taste and preference. Measurement tools help us to better understand how our subjective decisions intersect with the objective universe, and to me, that’s really important.

If you’re wondering if this is a bit of a personal “apologetic,” you’re correct. If there’s anything I’m not, it’s a “sound ninja.” There are audio-humans who can hear a tiny bit of ringing in a system, and can instantly pinpoint that ring with 1/3rd octave accuracy – just by ear. I am not that guy. I’m very slowly getting better, but my brain lies to me like the guy who “hired” me right out of school to be an engineer for his record label. (It’s a doozy of a story…when I’m all fired up and can remember the best details, anyway.) This being the case, I will gladly correlate ANY sense with my hearing if it helps me create a better show. I will use objective analysis of audio signals whenever I think it’s appropriate, if it helps me deliver good work.

Of course the sound is the ultimate arbiter. If the objective measurement looks weird, but that’s the sound that’s right for the occasion, then the sound wins.

But aside from that, the goal is the best possible show. Denying oneself useful tools for creating that show, based on the bogus purity standards of a few people in the industry who AREN’T EVEN THERE…well, that’s ludicrous. It’s not their show. It’s YOURS. Do what works for YOU.

Call me an “engin-eyes” if you like, but if looking at a meter or analyzer helps me get a better show (and maybe learn something as well), then I will do it without offering any apology.