Tag Archives: Subjectivity

Where’s Your Data?

I don’t think audio-humans are skeptical enough.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.

Want to use this image for something else? Great! Click it for the link to a high-res or resolution-independent version.

If I’m going to editorialize on this, I first need to be clear about one thing: I’m not against certain things being taken on faith. There are plenty of assumptions in my life that can’t be empirically tested. I don’t have a problem with that in any way. I subscribe quite strongly to that old saw:

You ARE entitled to your opinion. You ARE NOT entitled to your own set of “facts.”

But, of course, that means that I subscribe to both sides of it. As I’ve gotten farther and farther along in the show-production craft, especially the audio part, I’ve gotten more and more dismayed with how opinion is used in place of fact. I’ve found myself getting more and more “riled” with discussions where all kinds of assertions are used as conversational currency, unbacked by any visible, objective defense. People claim something, and I want to shout, “Where’s your data, dude? Back that up. Defend your answer!”

I would say that part of the problem lies in how we describe the job. We have (or at least had) the tendency to say, “It’s a mix of art and science.” Unfortunately, my impression is that this has come to be a sort of handwaving of the science part. “Oh…the nuts and bolts of how things work aren’t all that important. If you’re pleased with the results, then you’re okay.” While this is a fair statement on the grounds of having reached a workable endpoint through unorthodox or uneducated means, I worry about the disservice it does to the craft when it’s overapplied.

To be brutally frank, I wish the “mix of art and science” thing would go away. I would replace it with, “What we’re doing is science in the service of art.”

Everything that an audio human does or encounters is precipitated by physics – and not “exotic” physics, either. We’re talking about Newtonian interactions and well-understood electronics here, not quantum entanglement, subatomic particles, and speeds approaching that of light. The processes that cause sound stuff to happen are entirely understandable, wieldable, and measurable by ordinary humans – and this means that audio is not any sort of arcane magic. A show’s audio coming off well or poorly always has a logical explanation, even if that explanation is obscure at the time.

I Should Be Able To Measure It

Here’s where the rubber truly meets the road on all this.

There seems to be a very small number of audio humans who are willing to do any actual science. That is to say, investigating something in such a way as to get objective, quantitative data. This causes huge problems with troubleshooting, consulting, and system building. All manner of rabbit trails may be followed while trying to fix something, and all manner of moneys are spent in the process, but the problem stays un-fixed. Our enormous pool of myth, legend, and hearsay seems to be great for swatting at symptoms, but it’s not so hot for tracking down the root cause of what’s ailing us.

Part of our problem – I include myself because I AM susceptible – is that listening is easy and measuring is hard. Or, rather, scientific measuring is hard.

Listening tests of all kinds are ubiquitous in this business. They’re easy to do, because they aren’t demanding in terms of setup or parameter control. You try to get your levels matched, set up some fast signal switching, maybe (if you’re very lucky) make it all double-blind so that nobody knows which switch setting corresponds to a particular signal, and go for it.
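When a blind test like that is done formally, it’s often scored as an ABX trial: the listener repeatedly guesses whether a hidden signal “X” is A or B, and you check whether their hit rate beats coin-flipping. Here’s a minimal Python sketch of the scoring math (the 14-of-16 trial count is an invented example, not data from any real test):

```python
import math

def abx_p_value(correct, trials):
    """Chance of getting at least `correct` answers right by pure
    guessing (binomial tail, p = 0.5 per trial). A small value
    suggests the listener really can hear a difference."""
    return sum(math.comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 14 right out of 16 blind trials would be hard to do by luck alone:
print(round(abx_p_value(14, 16), 4))  # 0.0021
```

The point of the exercise is that “I could totally hear it” becomes a number you can argue about.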

Direct observation via the senses has been used in science for a long time. It’s not that it’s completely invalid. It’s just that it has problems. The biggest problem is that our senses are interpreted through our brains, an organ which develops strong biases and filters information so that we don’t die. The next problem is that the experimental parameter control actually tends to be quite shoddy. In the worst cases, you get people claiming that, say, console A has a better sound than console B. But…they heard console A in one place, with one band, and console B in a totally different place with a totally different band. There’s no meaningful comparison, because the devices under test AND the test signals were different.

As a result, listening tests produce all kinds of impressions that aren’t actually helpful. Heck, we don’t even know what “sounds better” means. For this person over here, it means lots of high-frequency information. For some other person, it means a slight bass boost. This guy wants a touch of distortion that emphasizes the even-numbered harmonics. That gal wants a device that resembles a “straight wire” as much as possible. Nobody can even agree on what they like! You can’t actually get a rigorous comparison out of that sort of thing.

The flipside is, if we can actually hear it, we should be able to measure it. If a given input signal actually sounds different when listened to through different signal paths, then those signal paths MUST have different transfer functions. A measurement transducer that meets or exceeds the bandwidth and transient response of a human ear should be able to detect that output signal reliably. (A measurement mic that, at the very least, significantly exceeds the bandwidth of human hearing is only about $700.)
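To make that concrete, here’s a bare-bones Python sketch of the idea: feed one identical test signal through two signal paths and compare the measured transfer functions. The “paths” here are simulated (a straight wire and a crude high-frequency rolloff), not real hardware, and a serious measurement rig would average many frames and check coherence. This is just the skeleton of the claim:

```python
import numpy as np

def transfer_function(x, y, n_fft=8192):
    """Estimate H(f) = Y(f) / X(f) from one input/output pair.
    Zero-padding to n_fft keeps the linear convolution intact."""
    return np.fft.rfft(y, n_fft) / np.fft.rfft(x, n_fft)

rng = np.random.default_rng(0)
x = rng.standard_normal(4096)            # identical broadband test signal
path_a = x                               # hypothetical path A: straight wire
path_b = np.convolve(x, [0.5, 0.5])      # hypothetical path B: crude HF rolloff

Ha = transfer_function(x, path_a)
Hb = transfer_function(x, path_b)

# If the two paths sound different, the magnitude responses must differ:
hf_loss_db = 20 * np.log10(np.abs(Hb[4000]) / np.abs(Ha[4000]))
print(f"Path B vs. A near Nyquist: {hf_loss_db:.1f} dB")
```

Same input, different output, measurably different transfer function. No magic required.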

As I said, measuring – real measuring – is hard. If the analysis rig is set up incorrectly, we get unusable results, and it’s frighteningly easy to screw up an experimental procedure. Also, we have to be very, very specific about what we’re trying to measure. We have to start with an input signal that is EXACTLY the same for all measurements. None of this “we’ll set up the drums in this room, play them, then tear them down and set them up in this other room,” can be tolerated as valid. Then, we have to make every other parameter agree for each device being tested. No fair running one preamp closer to clipping than the other! (For example.)

Question Everything

So…what to do now?

If I had to propose an initial solution to the problems I see (which may not be seen by others, because this is my own opinion – oh, the IRONY), I would NOT say that the solution is for everyone to graph everything. I don’t see that as being necessary. What I DO see as being necessary is for more production craftspersons to embrace their inner skeptic. The less coherent explanation that’s attached to an assertion, the more we should doubt that assertion. We can even develop a “hierarchy of dubiousness.”

If something can be backed up with an actual experiment that produces quantitative data, that something is probably true until disproved by someone else running the same experiment. Failure to disclose the experimental procedure makes the measurement suspect, however – how exactly did they arrive at the conclusion that the loudspeaker will tolerate 1 kW of continuous input? No details? Hmmm…

If a statement is made and backed up with an accepted scientific model, the statement is probably true…but should be examined to make sure the model was applied correctly. There are lots of people who know audio words, but not what those words really mean. Also, the model might change, though that’s unlikely in basic physics.

Experience and anecdotes (“I heard this thing, and I liked it better”) are individually valid, but only in the very limited context of the person relating them. A large set of similar experiences across a diverse range of people expands the validity of the declaration, however.

You get the idea.

The point is that a growing lack of desire to just accept any old statement about audio will, hopefully, start to weed out some of the mythological monsters that periodically stomp through the production-tech village. If the myths can’t propagate, they stand a chance of dying off. Maybe. A guy can hope.

So, question your peers. Question yourself. Especially if there’s a problem, and the proposed fix involves a significant amount of money, question the fix.

A group of us were once troubleshooting an issue. A producer wasn’t liking the sound quality he was getting from his mic. The discussion quickly turned to preamps, and whether he should save up to buy a whole new audio interface for his computer. It finally dawned on me that we hadn’t bothered to ask anything about how he was using the mic, and when I did ask, he stated that he was standing several feet from the unit. If that’s not a recipe for sound that can be described as “thin,” I don’t know what is. His problem had everything to do with the acoustic physics of using a microphone, and nothing substantial AT ALL to do with the preamp he was using.

A little bit of critical thinking can save you a good pile of cash, it would seem.

(By the way, I am biased like MAD against the crowd that craves expensive mic pres, so be aware of that when I’m making assertions. Just to be fair. Question everything. Question EVERYTHING. Ask where the data is. Verify.)


Let ‘Em Get Away From It

Maximum coverage isn’t always appropriate for small venues.



I love the idea of a high-end, concert-centric install.

It excites me to think of a music venue where the coverage is so even that every patron is getting the same mix, +/- 3 dB. Creating audio rigs where “there isn’t a bad seat in the house” is a point of pride for concert-system installers, as well it should be.

Maximum coverage isn’t always appropriate, though. It can sometimes even be harmful. The good news is that an educated guess at the truly necessary coverage for live audio isn’t all that hard. It starts with audience behavior.

What Is The Audience Trying To Do?

Another way to put that question is, “What is the audience’s purpose?” At my regular gig, the answer is that they want to hang out, listen pretty informally, and socialize. This is an “averaged” assessment, by the way: Some folks want to focus entirely on the music. Some people barely want to focus on the tunes at all. Some folks would hate to be stuck in their seat. Some folks wouldn’t care.

The point is that there’s a mix of objectives in play.

This differs from going to a show at, say, The State Room or, even more so, at Red Butte Garden. My perception of those events is that people go to them – paying a bit of a premium – with the intent to focus on the music.

At my regular gig, where there’s such a diversity of audience intent, perfectly even coverage of all areas in the room is counterproductive to that diversity. It forces a singular decision on everyone in the room. It essentially requires that everybody in attendance has the goal of being primarily focused on the music as a foreground element. This is a bad thing, because denying a large section of the audience their intended enjoyment is likely to encourage them to leave.

If they leave, that hurts us, and it hurts the band. As much as possible, we should avoid doing things that encourage folks to vamoose.

So, I’m perfectly happy to NOT cover everything. The FOH PA is slightly “toed in” to focus its output primarily on the area nearest the stage. The sound intensity is allowed to drop off naturally towards the back of the room, and there’s no attempt at all to fill the coverage gap off to the stage-left side. People often seem to congregate there, and my perception is that many of them do it to take a break from being in the direct fire of the PA. They can still hear the show, but the high-frequency content is significantly rolled off (at least for whatever is actually “in” the audio rig).
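For the curious, that “natural drop-off” is easy to put rough numbers on. In a free field, sound level falls 6 dB per doubling of distance from the source; real rooms are louder than this at the back because of reflections, so treat the sketch below as a lower bound. The starting figures are invented, not measurements from my room:

```python
import math

def spl_at_distance(spl_ref, d_ref, d):
    """Free-field inverse-square estimate: -6 dB per doubling
    of distance. Room reflections will add level on top of this."""
    return spl_ref - 20 * math.log10(d / d_ref)

# Hypothetical: 102 dB SPL measured at 2 m from the boxes.
for d in (2, 4, 8, 16):
    print(f"{d:2d} m: {spl_at_distance(102, 2, d):.0f} dB SPL")
```

By the back wall, physics has already done a fair bit of the “turning it down” for you.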

If I knew that almost everybody in the room was primarily focused on the music, I would take steps to cover the room more evenly. That’s not the case, though, so there are “hot” and “cool” coverage zones.

Cost/ Choice Parametrization

Another way to view the question of how much coverage is appropriate is to try to define the value that an attendee places on being at a show, and how much choice they have in terms of their position at the show. This is another sort of thing that has to be averaged. Not all events (or people) in a certain venue are the same, so you have to look at what’s most likely to happen.

When you state the problem in terms of those parameters, you get something like this:

(Chart: coverage necessity rising as the cost of attending goes up and the audience’s choice of position goes down.)

If the cost of being at the show is high (in terms of money, effort spent, overall commitment required, etc.) and the choice of precisely where to take in the show is low (say, assigned seating), then it’s very important to have consistent audio coverage for everyone. If people are paying hundreds of dollars and traveling long distances to see a huge band’s farewell or reunion, and they’re stuck in one seat at a theater, there had better be good sound at that seat!

On the other hand, it’s not necessary to cover every square inch of an inexpensive, “in town” show, where folks are free to move around. If the coverage isn’t what someone wants, they can move to where it is what they want – and, if they can’t get into the exact coverage area they desire, it’s not a huge loss. For a lot of small venues, this is probably what’s encountered most often.
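If you wanted to make that tradeoff concrete, you could express it as a toy formula. To be clear, this is my own made-up heuristic for illustration, not anything rigorous:

```python
def coverage_necessity(cost, choice):
    """Toy heuristic: the need for even coverage rises with the cost
    of attending and falls with the audience's freedom to move.
    Both inputs are normalized 0-1. (Invented formula, not a standard.)"""
    return cost * (1.0 - choice)

# High-dollar, assigned-seat farewell tour:
print(round(coverage_necessity(cost=0.9, choice=0.1), 2))  # 0.81
# Cheap local show where folks can wander:
print(round(coverage_necessity(cost=0.2, choice=0.9), 2))  # 0.02
```

The exact numbers don’t matter; the shape of the tradeoff does.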

Now, please don’t misconstrue what I’m saying. What I’m definitely NOT saying is that we should just “punt” on some gigs.

No.

As much as possible, we should assume that the most important show of our careers is the one we’re doing now.

What I’m saying is that we need to spend our effort on things that matter. We have to have a priorities list. If people want (and also have) options available for how they experience a show, then there’s no reason for us to agonize about perfect coverage. As I said above, academically perfect PA deployment might even be bad for us. Some patrons might not even want to be in the direct throw of our boxes, so why force them to be? In the world of audio, we have finite resources and rapidly diminishing returns. We have to focus on the primary issues, and if our primary issue is something OTHER than completely homogeneous sound throughout the venue, then we need to direct our efforts appropriately.


Does It Have To Be This Loud?

A love-letter to patrons of live music.



Dear Live-Music Patron,

Occasionally, you have a question to ask an audio human. That question isn’t often posed to me personally, although, in aggregate, the query is probably made multiple times in every city on every night. The question isn’t always direct, and it can morph into different forms – some of which are statements:

“I can’t hear anything except the drums.”

“The guitar on the right is hurting my ears.”

“It’s hard to talk to my girlfriend/ boyfriend in here.”

“Can you keep everything the same, but turn the mains down?”

“Can you make it so the mic doesn’t make that screech again?”

And so on.

Whenever the conversation goes this way, there’s a singular question lying at the heart of the matter:

“Does it have to be this loud?”

There are a number of things I want to say to you regarding that question, but the most important bit has to come first. It’s the one thing that I want you to realize above everything else.

You’re asking a question that is 100% legitimate.

You may have asked it in one way or another, only to be brushed off. You may have had an exasperated expression pointed your way. You may have been given a brusque “Yes” in response. You may have encountered shrugging, swearing, eye-rolling, sneering, or any number of other responses that were rude, unhelpful, or downright mean.

But that doesn’t mean that your question is wrong or stupid. You’re right to ask it. It’s one of the minor tragedies in this business that production people and music players talk amongst themselves so much, and yet almost never have a real conversation with you. Another minor tragedy is that us folks who run the shows are usually not in a position to have a nuanced discussion with you when it would actually be helpful.

It’s hard to explain why it’s so loud when it’s so loud that you have to ask if “it has to be this loud.”

So, I want to try to answer your question. I can’t speak to every individual circumstance, but I can talk about some general cases.

Sometimes No

I am convinced that, at some time in their career, every audio tech has made a show unnecessarily loud. I’ve certainly done it.

As “music people,” we get excited about sonic experiences as an end in themselves. We’re known for endlessly chasing after tiny improvements in some minuscule slice of the audible spectrum. We can spend hours debating the best way to make the bass (“kick”) drum sound like a device capable of extinguishing all multicellular life on the planet. The sheer number of words dedicated to the construction of “massive” rock and roll guitar noises is stunning. The amount of equipment and trickery that can be dedicated to, say, getting a bass guitar to sound “just so” might boggle your mind.

It’s entirely possible for us to become so enraptured in making a show – or even just a small portion of a show – sound a certain way that we don’t realize how much level we’re shoveling into the equation. We get the drums cookin’, and then we realize that the guitars are a little low, and then the bass comes up to balance that out, and then the vocals are buried, so we crank up the vocals and WHAT? I CAN’T HEAR YOU!

It does happen. Sometimes it’s accidental, and sometimes it’s deliberate. Some techs just don’t feel like a rock show is a rock show until they “feel” a certain amount of sound pressure level.

In these cases, when the audio human’s mix choices are the overwhelming factor in a show being too loud, the PA really should be pulled back. It doesn’t have to be that loud. The problem and the solution are simple creatures.

But Sometimes Yes

The thing with live audio is that the problems and the solutions are often not so simple as what I just got into. It’s very possible, especially in a small room, for the sound craftsperson’s decisions to NOT be the overwhelming factor in determining the volume of a gig. I – and others like me – have spent lots of time in situations where we’ve had to deal with an unfortunate consequence of the laws of physics:

The loudest thing in the room is as quiet as we can possibly be, and quite often, a balanced mix requires something else to be much louder than that thing.

If the instrumentalists (drums, bass, guitars, etc) are blasting away at 110 dB without any help from the sound system, then the vocals will have to be in that same neighborhood in order to compete. It’s a conundrum of either being too loud with a flat-out awful mix, or too loud with a mix that’s basically okay. In a case like that, an audio human just has to get on the gas and wait to go home. Someone’s going to be mad at us, and it might as well not be the folks who are into the music.
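You can see why the vocal has to land in the same neighborhood by power-summing the levels. Incoherent sources add as power, not as decibels, so a vocal running 10 dB under the backline barely nudges the total while being buried in the mix. The numbers below are hypothetical:

```python
import math

def db_sum(levels_db):
    """Combined SPL of incoherent sources: sum the powers, not the dB."""
    return 10 * math.log10(sum(10 ** (l / 10) for l in levels_db))

backline = 110.0                           # hypothetical stage volume, no PA help
buried = db_sum([backline, 100.0])         # vocal 10 dB under: inaudible, total ~110.4
matched = db_sum([backline, 110.0])        # vocal competing: total ~113.0
print(round(buried, 1), round(matched, 1))
```

Either way, the room is loud; the only real choice is whether the mix is balanced while it’s loud.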

There’s another overarching situation, though, and that’s the toughest one to talk about. It’s a difficult subject because it has to do with subjectivity and incompatible expectations. What I’m getting at is when some folks want background music, and the show is not…cannot be…presented as such.

There ARE bands that specialize in playing “dinner” music. They’re great at performing inoffensive selections that provide a bed for conversation at a comfortable volume. What I hope can be understood is that this is indeed a specialization. It’s a carefully cultivated, specific skill that is not universally pursued by musicians. It’s not universally pursued because it’s not universally applicable.

Throughout human history, a great many musical events, musical instruments, and musical artisans have had a singular purpose: To be noticed, front and center. For thousands of years, humans have used instruments like drums and horns as acoustic “force multipliers” – sonic levers, if you will. We have used them to call to each other over long distances, or send signals in the midst of battle. Fanfares have been sounded at the arrivals of kings. On a parallel track, most musicians that I know do not simply play to be involved in the activity of playing. They play so as to be listened to.

Put all that together, and what you have is a presentation of art that simply is not meant to be talked over. In the cases where it’s meant to coexist with a rambunctious audience, it’s even more meant to not be talked over. From the mindset of the players to the technology in use, the experience is designed specifically to stand out from the background. It can’t be reduced to a low rumble. That isn’t what it is. There’s no reason that it has to be painfully loud, but there are many good reasons why a conversation in close proximity might not be practical.

So.

Does it have to be this loud?

Maybe.


Interface Importance

Packing lots of control into a small space is possible, but there’s a tradeoff.



Let me tell you a story.

Last Sunday, I was running the audio for my church. The building we’re in has a brand new AV system that we tie into, and lots of people can get their hands on that system during the week. That being the case, every service is a small adventure in “how much gain is applied to the signal, post our mixer?” Some weeks it’s +30 dB, some weeks it’s zero.

Anyway.

The rig doesn’t necessarily stay substantially the same from service to service, so every time I’m there I tend to “futz.” I sit there and go, “does the wireless headset really need to sound like that?” and start making subtle changes. I’m always trying to get that little bit of resonance to go away, or maybe squeak out one more dB of gain before feedback.

The key word up there is subtle. Doing all kinds of weird and wild finagling during a church service (or any “presentation AV” style gig) is a serious no-no. The goal is to marry excellent production values with invisible production process.

Well, something happened that made me not very invisible.

The insert EQ on the pastor’s headset is an old Feedback Destroyer by Behringer. It’s one of their best products. Ironically, it’s incredibly mediocre at automatically killing feedback, but it’s stupenfuciously (I stole that word from Penny Arcade) good at being an insanely flexible parametric EQ. I haven’t found anything else like it for the money. It does, however, live in a sort of odd world, interface-wise. It comes from a time before high-res, miniature displays were a practical and affordable sort of creature. You communicate with the thing via a single knob/ jogwheel dealio and an array of toggle buttons that connect that single knob to various parameters. The thing communicates with you through lights in the toggle buttons, and also with a delightfully “1980’s vintage” sort of calculator-esque LED display. The display has two numerical characters, a special character to display plus or minus signs, and a set of on/ off indicators that tell you what the number you’re looking at means. Press a button, and you’re looking at numbers that mean decibels. Press another, and the display is indicating a certain number of 60ths of an octave. (Bandwidth, in other words.)

This is all delightfully campy, to an extent. Where it can bite you, though, is when it’s not clear what the display is showing you. It’s entirely possible to be in the mode where the wheel selects a different filter, then make an absent-minded button press, and now be in the mode where the wheel selects an entirely different device-wide preset. The hilarity becomes even more unbridled when the filter you had selected and the preset have the same number.

Maybe you can see where this is going.

So, the pastor is talking to the kids, and I’m working through the filters to see where they are and maybe fix some low-mid that I don’t like. I get to filter one. I take a look at the frequency it’s set to, and then accidentally press the “Filter Select” button twice. This puts me in the mode where the wheel selects a complete preset, and I’m already on preset one. The display looks the same, and I don’t notice the absence of an indicator light on “Filter Select.” A fraction of a second after I roll the wheel and “2” appears on the display, I realize my mistake – but it’s too late. I watch with mute horror as the EQ de-instantiates all the filters standing between me and hard feedback.

I yank the pastor’s fader down just as the system starts to take off, knocking about 10 dB away from the level of his speech in the room. I quickly recall the first preset on the Feedback Destroyer, and push the fader back up. Exactly what happened might not have been obvious to anyone else, but the fact that SOMETHING weird had occurred was glaringly obvious.

So…what does all that distill into? Well:

The more abstract an interface, the more likely it is to be confusing.

Less Interface Doesn’t Necessarily Mean “Easy”

When you’re buying gear, it can be tempting to fall into the trap of believing that fewer buttons and knobs means simpler to use. This isn’t necessarily true. It CAN be true, if fewer buttons and knobs means that fewer operational parameters are user-controllable. For instance, there are classic dynamics processors (like the LA-2A) that have most of their operational parameters in a fixed state. An average user can’t change the attack and release times. Only two compression ratios are available. Control over the audio parameters of the device comes down to a toggle switch and two knobs, and each one of those controls does exactly one thing at all times.

An LA-2A is very simple to use. Inflexible, but simple.

You can contrast that with the difference between something like an MG166CX and an X32 Producer. That analog Yamaha has a lot more knobs than the X32. Its control surface is pretty dang crowded.

But the 166CX is a far less complicated animal than Behringer’s digital machine. If we’re talking about using a significant and comparable fraction of each console’s capabilities, I can assure you that driving an X32 is much more demanding of an operator. Even for some simple things, the X32 requires a greater level of awareness. For instance, the Yamaha has lots of preamp gain knobs. One for each preamp. The first preamp gain knob shows you the gain being applied by the first preamp, the second one shows preamp number two, and so on. The Behringer, on the other hand, has exactly one control dedicated to preamp gain – but that single control can relate to any one of 16 channels (or 32 if you connect a digital stagebox). What that gain control is showing you is dependent upon what channel you have selected, so you have to keep that straight in your head while you’re working.

Then, there’s the matter of those knobs below the Behringer’s display. They’re “soft” knobs, because what they control changes based upon what channel you have selected…AND what the screen is displaying. The second knob from the left might control an EQ filter’s center frequency one moment, and a compressor’s threshold just two seconds later. This is how interface abstraction can cause a lot of confusion. The more things that a single interface element can control, the greater the possibility that you may lose a handle on exactly what that element is controlling at a particular time. If you’re used to the idea that one knob does one thing, or even just a class of similar things, you can get flustered.

“Whaddya mean that’s not the compressor’s output gain? That knob is the gain for EQ band #2! It should be a gain control on this screen, too.”

“It’s the gain for EQ band #2 on the EQ screen. This is the compressor screen, so the knob controls the threshold now. That’s how the console designer set things up.”

“You people live in a world without logic or reason!”
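If you like, that little exchange can be modeled in a few lines of code. This is a toy model of a “soft” control, not anything resembling Behringer’s actual firmware:

```python
class SoftKnob:
    """Minimal model of an abstracted control: what the knob edits
    depends entirely on console state (hypothetical, for illustration)."""

    def __init__(self):
        self.state = {"screen": "eq", "channel": 1}
        self.params = {}

    def target(self):
        # The knob's meaning is a function of the current state.
        return (self.state["channel"], self.state["screen"])

    def turn(self, value):
        self.params[self.target()] = value

knob = SoftKnob()
knob.turn(-3.0)                   # edits channel 1's EQ
knob.state["screen"] = "comp"     # one button press later…
knob.turn(-3.0)                   # same gesture, totally different parameter
print(sorted(knob.params))
```

One physical gesture, two different results, and the only difference is invisible state. That’s the whole hazard in a nutshell.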

Anyway.

While an X32 Producer’s layout is rather more sparse than an MG166CX, the amount of control available is actually incredibly dense. Furthermore, you have to pay attention to the state that the console is in if you want to work on the correct thing. It’s not just a matter of having your finger on the right control. That control has to be ready to talk to the correct parameter.

And this is a GOOD THING. The amount of audio control available in an X32 Producer is, when compared to the Yamaha, immense. It’s almost on the order of the difference between holding a power-drill battery and a thunderbolt. No, you may not trade me a 166CX for my X32, thanks.

Interface abstraction is not bad. It lets us build compact, relatively inexpensive devices that have functionality which rivals what you find on enormous, spendy pieces of gear. I am a great lover of the “capability explosion” that has engulfed the world of small-time production. We’re at the point where the limiting factors on what we can do have mostly been relegated to what will physically fit in limited venue space. I love it, and I do not want to go back. I personally have no need for “one knob = one function on one channel” sorts of control systems. The abstraction doesn’t bother me, even if I do have a hilarious-in-hindsight brain fart every so often.

(By the way: A development that’s helping to keep interface abstraction in check is that of informative, high-resolution displays. They help a lot in keeping changing control states unambiguous, because they can display status information clearly and in natural language.)

However, an abstract interface may not work for you. If you’re new to this whole thing, or just aren’t experienced in the kind of device management required, you might need to start off in “the forest of dedicated knobs and switches.” There’s no shame in it – heck, some of the industry’s top production craftspeople wouldn’t be caught dead without a large-frame control surface for sound or lights. There are folks who could handle a great deal of abstraction, and simply choose not to. If they’re getting results that make bands and fans happy, that’s what really matters.

So, make whatever choice of gear that you want. As you’re making that choice, simply be aware that what looks simple may not be. A reduced number of visible, physical controls is not a guaranteed indicator of device simplicity. You have to dig deeper, and find out what’s hidden under the hood.


Loud Doesn’t Create Excitement

A guest post for Schwilly Family Musicians.



The folks in the audience have to be “amped up” about your songs before the privilege of volume is granted.

The full article is here.


Why I Am (Not) Interested In The Industry Standard

Industry standards are helpful reference points, but are not necessarily the best possible approach.


Want to use this image for something else? Great! Click it for the link to a high-res or resolution-independent version.

Remember my article about a patch-scheme for a “festival style” show? It actually raised an eyebrow or two. A fellow audio-human (who works on much, much, much larger shows than I do) asked me why my patch list was backwards from what everybody else does. His concern was that, in the festival situations he finds himself in, my “upside down” patch would monkeywrench things if accommodated. It would just be so much easier for everyone if I followed the industry standard of (I guess?) starting with the drums – “kick is channel 1,” in other words.

My response was that, if I had things laid out one way, and a guest engineer came in who wanted them to be another way, then I would be happy to set up any softpatch desired. What I neglected to add at the time was that, if I was “that one guy” where everyone else wanted a different order, I would be happy to just use the standard patch. It wouldn’t ruin my day at all, and it would make things easier for everybody else.

To be open and frank, though, there was something else I wanted to say. I censored myself because I think there’s a place for diplomacy and courtesy, especially when the conversation venue (Facebook comments) isn’t really good for nuance.

What I wanted to say was, “Because my way is better. Why would you put the drums first? They’re the bottom of the priority list.” (The drums are important, but in a small-venue context they usually need the least help from the PA to be in the right spot.)

What was said and unsaid in that conversation is a microcosm of how I feel about industry standards. There are industry standard mics, techniques, PA styles, stage layouts, and whatever else, and they exist for good reasons. Knowing what those reasons are is a good thing, because it’s part of understanding the craft. At the same time, though, industry standards rarely equate to “the best.” They tend to equate to “works acceptably in a wide range of situations.”

58, 57, IBM

Back when Apple Computer was struggling for acceptance, there was a saying: “Nobody ever got fired for buying IBM.” IBM was the industry standard for machines used in an office environment, and even though the Macintosh computers at the time were leaps and bounds ahead in terms of user-friendliness, people kept buying IBM and compatible devices.

Why?

Because IBM was known. Large numbers of people, from the users to the admins, had experience with them. Everybody knew what to expect. They knew that appropriate software would be available, or could be developed by folks who were easy to find. They knew the parts would be there. They knew they could get work done with IBM, even if the computers weren’t revolutionary. They knew that IBM was readily respected by everyone they wanted to impress.

In the same way, you could say that “Nobody ever got fired for buying SM-58s and SM-57s.” They’re industry standard mics because they’re built to withstand live shows, basically sound like what they’re pointed at, and literally everybody can get them to work in a reasonable way. They’ve been around forever, and have been used by everybody, their dog, and their dog’s fleas. Even if somebody doesn’t know the model numbers, asking them to draw a picture of a vocal mic and an instrument mic will probably get you an SM-58 and an SM-57.

But they’re not the best at all times. I’ve heard a lot of 58s that imparted far too much low-mid garble to a singer’s voice, and I’ve never once easily gotten as much gain-before-feedback out of a 58 as I have from an ND767a. I’ve miked up tons of amplifiers with all kinds of mics that weren’t SM-57s, and I’ve been perfectly happy about 99% of the time. I’ve done the same with drums. If “sounds decent” is the main priority, then I have a bunch of mics that do that AND take up less space than a big ol’ 57. There are other mics out there that work better for me, in terms of the total solution offered.

This isn’t to say that great things can’t happen with the SM series! I once heard an artist in a coffee shop with a keyboard amp and a 58-style mic. It was the most perfect setup for her voice that you could imagine. I wasn’t expecting what I heard, but she made it work beautifully. Sometimes, “industry standard” and “perfect for this particular application” DO line up.

My point is, though, that in a broad sense the “hidden secret” of being industry standard means being “extraordinarily average.” Thoroughly inoffensive. Safe. Something people won’t be fired for specifying and purchasing.

There’s nothing wrong with that, but for people like me…well, it’s kinda boring.

Sometimes You Need To Be Bored

That last sentence might seem a bit incendiary, depending on who you are. It’s very important to note that being un-boring is a luxury that’s unavailable to many in this business.

A good example is what happens when a venue wants to spend time working with acts that regularly tour at the regional level or above. To be acceptable to those acts (especially if they bring production techs but only minimal gear) requires that the PA and lighting rigs be easy to handle by most folks. The personnel working for the house might be excited about the new mixing consoles that lack a physical control surface, but that’s not something that everybody is prepared to accept. There are plenty of audio humans who just aren’t ready for the idea of having no physical controls at all, whereas probably every sound tech is fine with a console that has a control surface. That’s why control surfaces are still the industry standard. The new surfaceless consoles are nifty, but not for everybody, so a bit of “boring-ness” is required in order for the venue to play well with others.

Industry standards are accepted everywhere, which makes them a safe bet. Non-standards are “risky,” because they tend to conform to the desires of a smaller number of people. Risky is often exciting, however, because that’s where innovation occurs. Iterating on the standard makes the standard more refined, but it rarely produces breakthroughs. It’s entirely possible to, say, “bend the rules” on mixing console cost vs. functionality if you’re willing to do weird things (like dispense with a control surface). Some people will get it, and some people will think you’re crazy. Catering to your own brand of crazy is acceptable if, as in my case, a guest engineer’s even being in the room happens only about 0.8% of the time. It’s not acceptable at all if a band tech is going to be “driving” on a regular basis.

Why I’m Not Particularly Interested In The Industry Standard

I personally tend to shrug my shoulders at industry standards for the same reason that people shrug their shoulders in general: There’s almost nothing exciting about what’s been done a million times. Since I currently don’t have to meet riders or provide an easy environment for other techs to work in, I have the luxury of basically doing whatever I want as long as it works.

I love giving “upstarts” and bargain items a chance, because it’s fun to see just how far a piece of gear can go if you spend some time with it.

I don’t fight feedback with per-mix graphic EQs, because the idea of hacking up a whole mix to solve a problem with one input seems crazy to me.

I use a homebrew console because I wanted to have a virtual, independent monitor-world, and nobody made a traditional console I could afford that would do that in the way I wanted.

I don’t use a control surface for mixing because I’ve never cared about moving a whole bunch of faders at once.

I’ve never personally owned an SM-58 or 57, because they just aren’t interesting to me.

I’ve stuffed a cheap measurement mic inside a kick drum on several occasions, because I wanted to see how it would work. (It was actually pretty okay.)

And I just generally roll my eyes at how so much of show production, which used to be a kind of “outlaw” business that pushed boundaries and did things for the fun of it, has become a beige, corporatized affair of trying to basically be like everybody else. It’s like cars, you know? They used to be cool, distinctive works of art, and now every car company is essentially making the same three boring-as-dirt sedans, three bland SUVs, and three unremarkable pickup trucks, because it’s all run by “money” people now who are terrified of not being more profitable next quarter and thus will never do anything interesting YOU GUYS LET ME KNOW IF I’M RAMBLING, ‘KAY?

Now, you can bet that, if I ever went to work at an AV company or production provider, I would be willing to conform to industry standards. In that environment, that would be the appropriate thing to do.

But right now, I have the freedom to be weird and have fun – so I intend to enjoy myself.

I’ll say it again. “Industry standard” doesn’t necessarily mean “the best.” It just means “people will accept this about 95% of the time.”


The Calculus Of Music

There’s a lot of math behind the sound of a show, but you don’t have to work it out symbolically.


Want to use this image for something else? Great! Click it for the link to a high-res or resolution-independent version.

This post is the fault of my high-school education, my dad, and Neil deGrasse Tyson.

In high school, I was introduced to calculus. I wasn’t particularly interested in the drills and hairy algebra, but I did have an interest in the high-level concepts. I’ve kept my textbook around, and I will sometimes open it up and skim it.

My dad is a lover of cars, and that means he gets magazines about cars and car culture. Every so often, I’ll run across one and see what’s in the pages.

About a month ago, I was on another jaunt through my calculus book when I happened upon a car-mag with an article by Neil deGrasse Tyson. (You know Dr. Tyson. He’s the African-American superstar astrophysicist guy. He hosted and narrated the new version of “Cosmos.”) In that article was a one-line concept that very suddenly connected some dots in my head: Dr. Tyson pointed out that sustained speed isn’t all that exciting – rather, acceleration is where the fun is.

Acceleration.

Change.

The rate of change.

Derivative calculus.

Exciting derivative calculus makes for exciting music.

What?

Let me explain.

Δy/Δx: It’s Where The Fun Is!

The first thing to say here is that there’s no need to be frightened of those symbols in the section heading. The point of all this is not to say that everybody should reduce music to a set of equations. I’m not suggesting that folks should have to “solve” music in a symbolic way, as a math problem. What I am saying is that mathematical concepts of motion and change can be SUPER informative about the sound of a show. (Or a recording, too.)

I mean, gosh, motion and change. That sounds like it’s really important for an art form involving sine waves. And vibrating stuff, like guitar strings and loudspeakers and such.

Anyway.

Those symbols up there (Δy/Δx) reflect the core of what derivative calculus is concerned with. It’s the study of how fast things are changing. Δy is, conventionally, the change in the vertical-axis value, whereas Δx is the change in the horizontal-axis value. If you remember your geometry, you might recall that the slope of a line is “rise over run,” or “how much does the line go up or down in a given horizontal space?” Rise over run IS Δy/Δx. Derivative calculus is nothing more exotic than finding the slopes of lines, but the algebra does get a bit hairy because of people wanting to get the slopes of lines that are tangent to single, instantaneous points on a curve YOUR EYES ARE GLAZING OVER, I KNOW.

Let’s un-abstractify this. (Un-abstractify is totally a word. I just created it. Send me royalties.)

Remember that article I wrote about the importance of transients? Transients are where a change in volume is high, relative to the amount of time that passes. An uncompressed snare-drum note has a big peak that happens quickly. It’s the same for a kick-drum hit. The “thump” or “crack” happens fast, and decays in a hurry. The difference in sound-pressure from “silence” to the peak volume of the note is Δy, and the time that passes is Δx. Think about it – you’ve seen a waveform in an audio-editor, right? The waveform is a graph of audio intensity over time. The vertical axis (y) is the measure of how loud things are, and the horizontal axis (x) is how much time has passed. Like this:

Want to use this image for something else? Great! Click it for the link to a high-res or resolution-independent version.
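The rise-over-run idea can be sketched numerically. Here’s a minimal Python example (the signal shapes, sample rate, and 10 ms window size are my own illustrative choices, not any standard): it builds a percussive-style decaying tone and a steady tone, collapses each into a coarse volume curve, and finds the steepest slope on that curve.

```python
import numpy as np

SR = 48_000                      # sample rate: samples per second
t = np.arange(SR) / SR           # one second of time values (the x axis)

# Two hypothetical signals with similar peak levels: a 200 Hz tone
# with a fast exponential decay (a "transient"), and the same tone
# sustained at a constant level.
transient = np.sin(2 * np.pi * 200 * t) * np.exp(-40 * t)
sustained = np.sin(2 * np.pi * 200 * t) * 0.5

def volume_curve(signal, sr, window_s=0.01):
    """Collapse a waveform into one RMS level per 10 ms frame (the y axis)."""
    n = int(sr * window_s)
    frames = signal[: len(signal) // n * n].reshape(-1, n)
    return np.sqrt(np.mean(frames ** 2, axis=1))

def steepest_slope(signal, sr, window_s=0.01):
    """Max |delta-y / delta-x| of the volume curve: how fast the level ever changes."""
    curve = volume_curve(signal, sr, window_s)
    return np.max(np.abs(np.diff(curve))) / window_s

print(steepest_slope(transient, SR))  # big delta-y in a small delta-x
print(steepest_slope(sustained, SR))  # nearly zero: a steady-state drone
```

The transient’s volume curve falls off a cliff, so its steepest slope is large; the sustained tone’s curve is essentially flat.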

For music to be really exciting, there has to be dramatic change. For music to be calming, the change has to be restrained. If you want something that’s danceable, or if you want something that has defined, powerful impact regardless of danceability, you’ve got to have room for “big Δy.” There has to be space for volume curves that have steep slopes. The derivative calculus has to be interesting, or all you’ll end up with is a steady-state drone (or crushingly deafening roar, depending on volume) that doesn’t take the audience on much of a ride. (Again, if you want a calming effect, then steady-state at low-volume is probably what you want.) This works across all kinds of timescales, by the way. Your music might not have sharp, high-speed transients that take place over a few milliseconds, but you can still move the audience with swells and decrescendos that develop over the span of minutes.

Oh, and that graphic at the top of the page? That’s actually a roughly-traced vocal waveform, with some tangent-lines drawn in to show the estimated derivatives at those points. The time represented is relatively small – about one second. Notice the separation between the “hills?” Notice how steep the hills are? It turns out that the vocal in that recording is highly intelligible, and I would strongly argue that a key component in that intelligibility is a high rate of change in the right places. Sharp transitions from sound to sound help to tell you where words begin and end. When it all runs together, what you’ve got is incoherent mumbling. (This even works for text. You can read this, because the whitespace between words creates sharp transitions from word to word. This,ontheotherhand…)

Oh, and from a technical standpoint, headroom is really important for delivering large “Δy” events. If the PA is running at close to full tilt, there’s no room to shove a pronounced peak through it all. If you want to reproduce sonic events involving large derivatives, you have to have a pretty healthy helping of unused power at your disposal.
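As a back-of-the-envelope illustration of why that headroom is so expensive (the 12 dB figure is just an example, not a universal spec), the power cost follows directly from the decibel definition:

```python
def headroom_power_ratio(peak_db_above_average: float) -> float:
    """Power multiple needed to pass a peak sitting this many dB above the average level."""
    return 10 ** (peak_db_above_average / 10)

# Peaks 12 dB above the average level demand roughly 16x the average power;
# a rig already running near full tilt simply has nowhere to put that.
print(round(headroom_power_ratio(12), 1))
```

Even a modest 3 dB peak doubles the instantaneous power demand, which is why “large Δy” reproduction eats amplifier capacity so quickly.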

Now, overall level does matter as well, which leads us into another aspect of calculus.

Integral Volume

Integral calculus contrasts with derivative calculus, in that integration’s concern is with how much area is under the curve. From the perspective of an audio-human, the integral of the “sonic-events curve” tells you a lot about how much power you’re really delivering to those loudspeaker voice-coils. Short peaks don’t do much in terms of heating up coil windings, so loudspeakers can tolerate rather high levels over the short term. Long-term power handling is much lower, because that’s where you can get things hot enough to melt.

From a performance perspective, integration has a lot to say about just how loud your show is perceived to be. I’ve been in the presence of bands that had tremendous “derivative calculus” punching power, and yet they didn’t overwhelm the audience with volume. It was all because the total area under the volume curve was well managed. The long-term level of the band was actually fairly low, which meant that people didn’t feel abused by the band’s sound.
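That “punchy but not loud” effect can be sketched with a couple of invented signals (everything here is illustrative, not a measurement of any real band): two one-second waveforms with nearly the same peak level, where the sparse one carries far less area under its volume curve.

```python
import numpy as np

SR = 48_000
t = np.arange(SR) / SR

# Roughly equal peak levels: a couple of sharp, decaying hits with
# space between them, versus a constant wall of sound.
hits = np.sin(2 * np.pi * 100 * t) * np.exp(-30 * (t % 0.5))
wall = np.sin(2 * np.pi * 100 * t) * 0.9

def long_term_level_db(signal):
    """Whole-signal RMS in dB: a stand-in for 'area under the volume curve.'"""
    return 20 * np.log10(np.sqrt(np.mean(signal ** 2)))

# The peaks are within a whisker of each other...
print(np.max(np.abs(hits)), np.max(np.abs(wall)))
# ...but the sustained signal's long-term level is far higher.
print(long_term_level_db(wall) - long_term_level_db(hits))
```

Both signals “punch” to about the same height, yet the sustained one delivers well over 10 dB more long-term level, which is what heats voice coils and what the audience reads as “loud.”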

This overall concept (which includes the whole discussion of derivatives) is a pretty touchy subject in live audio. That is, it can all be challenging to get right. It’s situationally dependent, and it has to be “just so.” Too much is a problem, and too little is a problem. For example, take this blank graph which represents a hypothetical, bar-like venue where the band hasn’t started yet:

Want to use this image for something else? Great! Click it for the link to a high-res or resolution-independent version.

If the area under the band’s volume curve is too small, they’ll be drowned out by the talking of the crowd. Go too high, though, and the crowd will bail out. It’s a balancing act, and one that isn’t easy to directly define with raw numbers. For instance, here’s an example of what (I think) some reggae bands might look like over the span of several seconds:

Want to use this image for something else? Great! Click it for the link to a high-res or resolution-independent version.

The “large Δy” events reach deep into the really-loud zone, but they’re very brief. Further, there are places where the noise floor peeks through significantly. This ability for the crowd to hear themselves talking helps to send the message that the band isn’t too loud. Overall, the area under the curve is probably halfway to three-quarters into the “comfortable volume” zone. Now, what about a “guitar” band:

Want to use this image for something else? Great! Click it for the link to a high-res or resolution-independent version.

The peaks don’t go up quite as far. In terms of sustained level, the band is probably also halfway to three-quarters into the comfortable zone – and yet some folks will feel like the band is a bit loud. It’s because the sustained roar of the guitars (and everything else) is enough to completely overwhelm the noise floor. The crowd can’t hear themselves talk, which sends the message that the band’s intensity is higher than it is in terms of “pure numbers.”

As an aside, this says a lot about the problems of the volume war. At some point, we started crushing all the exciting, flavorful, “large Δy” material in order to get maximum area under the curve…and eventually, we started to notice just how ridiculous things were sounding.

And then there’s one of my pet peeves, which is the indie-rock idiom of scrubbing away at a single-coil-pickup guitar’s strings with the amp’s tone controls set for “maximum clang.” It creates one of the most sustained, abrasive, yet otherwise boring noises that a person can have the displeasure of hearing. Let me tell you how I really feel…

Anyway.

Excitement, intelligibility, and appropriate volume levels are probably just a few of the things described by the calculus of music. I’ll bet there’s more out there to be discovered. We just have to keep our cross-disciplinary antennae extended.


Tuesday Thoughts

Just some ideas to chew on.


Want to use this image for something else? Great! Click it for the link to a high-res or resolution-independent version.

If the singer is being drowned it is better to partially drain the bathtub than to buy flippers and a snorkel.

Meditate upon this.

Want to use this image for something else? Great! Click it for the link to a high-res or resolution-independent version.

It’s not a “binary” choice. There are plenty of grey shades. Even so…

At some point, you will probably have to figure out what means more to you: The craft, or the money.

Only the very lucky get all they want of both.


If It Ain’t Broken…

…don’t fix it. If it seems like it’s broken, it may not be.


Want to use this image for something else? Great! Click it for the link to a high-res or resolution-independent version.

There certainly can be a point where you have to “fix” a band.

Be warned, however, that even the very experienced can make a DISASTROUSLY BAD call about where that point actually is. When you’re tempted to make that call, start by assuming that you’re wrong and try to figure out what you’ve missed.

Then stop and think about it some more.

Trying to remake a band’s sound into your own sound is almost never the right idea.

Especially if you haven’t been asked to.


Yes, You Can Master In A Car

The primary person that your workflow needs to work for, and needs to be approved by, is you.


Want to use this image for something else? Great! Click it for the link to a high-res or resolution-independent version.

This article has almost nothing to do with live music. The general concept of your own workflow having to work for you, first and foremost, applies universally – but mastering is really a “recorded music” thing.

So, why talk about it?

First, it’s my site and I can do as I please on it, even if doing as I please isn’t 100% connected with the main theme of the site. 🙂 (Hey, it’s a superficial reason, but I can be honest about it.)

Second, a friend of mine was getting some criticism over mastering in a car. I felt that the criticism was unwarranted.

Third, I get riled up when people say “you can’t do ‘x’ in way ‘z’.” I’m not against all dogma by any means, but my blood pressure spikes when people hand down dogmatic statements that just aren’t true.

Last, I think it’s very healthy to have a multidisciplinary understanding of audio. I’ve been lucky enough to have both live and studio experience, and taking note of how those worlds interact has been very helpful to me.

Now that we’ve got those bits on the table, let me open by saying:

Mastering Is Not Arcane Wizardry

I have to admit that I’ve done a disservice to numerous people whom I’ve discussed mastering with. In the name of being respectful to mastering engineers, I’ve presented the work of mastering as a kind of magic. On several occasions, I have said that mastering is a sort of “black art” that’s hard to understand and requires an audio-human of supernatural power to perform correctly. As I mentioned, I did this out of a desire to defer to people who specialize in that craft.

I realize now that this was “A Very Wrong Thing To Do,” and I apologize for it. It’s not at all wrong to have a sense of wonder and appreciation at good work, but the problem with presenting that work as an arcane practice is that the true appreciation of it is stifled. When anything in audio is seriously described as magic, there’s a strong tendency for that thing to be either put on a pedestal or dismissed. Disciplines and sub-disciplines are no longer viewed as skills and understanding that can be honed and understood rationally. Instead, they are seen as innate abilities that can’t be explained, or are handwaved away as “you have to buy a very expensive piece of equipment to do that.”

Thus, let me say it again: Mastering is not arcane wizardry. I don’t mean any disrespect to mastering engineers by saying that. In fact, I mean great respect to practitioners of mastering, because their discipline is very important to the success of recorded music.

If mastering isn’t magical, then what is it? Well, as I have come to believe:

Mastering is the process where a sound recording receives final preparation for its consumer-delivery medium.

…and that’s all.

Don’t get me wrong! It’s not that mastering is always simple and straightforward. It’s not that there aren’t significant artistic decisions involved. It’s just that the point of mastering – the purpose of doing it – is quite easy to describe. A mastering engineer’s job is to put the final layer of polish and adjustment on a recording so that it’s as enjoyable as possible for the listener (or, that the recording takes the listener on the ride intended by the artist).

As with pretty much all kinds of craft, the final “fit and finish” is a precision step. It’s critical to the success of the thing being made, and has to be taken seriously. It’s for this reason that mastering-specific gear is often specialized and high-dollar. It’s for this reason that dedicated “mastering rooms” are apt to sound so beautiful and feature the high-performance loudspeakers that they do. In mastering, small adjustments matter in a big way. It’s a tremendous boon to the engineer when his or her equipment and listening environment are fully trustworthy, able to make and reveal small changes, and can do so in a predictable, repeatable manner.

Here’s the thing:

If you can get all that to happen in a car, then you’ve gotten it to happen. If a laptop, headphones, and car-stereo auxiliary input are hitting all those aforementioned marks, then there’s nothing to argue about.

Translation And Trustworthiness

In both live-sound and recorded audio, the issue of “translation” is paramount. What’s happening in the studio or on stage has to be presented to an audience in a way that’s in line with the artist’s goals. The musicians are trying to deliver a certain kind of experience to the listeners, and the engineer’s job is to package and deliver the experience. Whether the experience is consumed via playback or realtime performance is irrelevant to that. (It’s VERY relevant to the equipment and techniques used, of course.)

In recorded music, the problem of translation is primarily that of the unknown playback system. The person that’s listening to a song could be using a staggeringly wide range of things to actually hear the recorded audio. The recording might be playing back in a high-end listening room. It might be playing back on earbuds that cost $5. The listener might have cranked the bass. The listener might have nothing below 100 Hz.

…and you just don’t know.

If that’s the case, then mastering is the last opportunity to ensure that, as much as possible, the critical parts of the song’s experience are preserved for all those different listeners. In order for that to be doable in a reasonable amount of time, a mastering engineer has to have a playback system that they can trust. They have to be in a situation where, when their gear says “this sounds right,” the music actually DOES sound right. In the “classic” mastering room, this means the neutral canvas of carefully tuned acoustics and precision loudspeakers – but those things are just means to an end: Reliable production of music that translates well.

So, what if a car, a laptop, and a favorite pair of headphones are what do that for a particular engineer? What if that setup is what reliably tells you that the music sounds right? I’m not talking about taking a famous mastering guru, yanking them out of their favorite room, and seeing if they can work in the car. That would be an invalid test, because their room and workflow have been crafted to work for THEM. I’m talking about YOU. Yes, an “academically acceptable” workflow and setup are helpful. The process does indeed matter.

But the results matter more than the process.

And here’s where I go on a bit of a rant.

What If Trying To Translate Everywhere Is A Waste Of Time?

See, if you really want something to translate beautifully, your best chance at that is to master directly to the target playback system. It’s just logical. If the vast majority of the song’s listeners are going to hear the tune in a car, it makes a lot of sense to make the song sound good in…you know…a car. If the primary audience is a bunch of on-the-go, young, earbud-wearing folks, why not focus the primary effort on making the song shoe-rippingly awesome in earbuds? Yes, it’s worth making sure that the record is okay in other situations, but why take an outside-in approach? Why not start from the majority audience and go from there?

I’m NOT advocating mastering around every tiny peak and dip in a specific car stereo or set of headphones. That’s a classic cause of problems with translation. What I am saying is that insisting on a traditional mastering room for the sake of itself isn’t as rational as it might seem. I’m also saying that trying to make a “neutral canvas” playback situation work everywhere may not be the best workflow for everyone. (For the folks who have learned to be adept at it, it’s really good. No argument there.)

I mean, my understanding is that the movie industry has this all figured out. They certainly benefit from a “closed door” business, where consumers experience their product through a limited number of exhibitors. Therein, though, lies the beauty: They can set standards for what a theater is supposed to sound like, and how loud the sound system can go, and how it should all be tuned, and then work on their dubbing mixes in rooms that meet those very same standards. They focus on working for the exhibitors that play by their rules, maybe give a cursory nod to everybody else (or not, I’m not really sure), and basically eliminate all the vagaries that haunt music production. It’s kinda brilliant, actually.

Of course, recorded music is anything but a “closed door” industry, so we aren’t going to be reducing our number of unknowns anytime soon.

Anyway.

The point is that your workflow has to work for you. If it lets you get results that you and your clients are proud of, then you’re fine. If the guy at the big production facility sneers at you, so what? If you can get work, and do it justice by mastering in a car, then yes…

…you can master in a car.