Tag Archives: Acoustics

ITRD

It’s the room, dude.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.

Want to use this image for something else? Great! Click it for the link to a high-res or resolution-independent version.

It’s the room, dude.

You’ve mixed this band before. They were great. Now you’re somewhere else, and it’s just awful. You can’t make anything out; the intelligibility is somewhere south of “I can understand every fourth word, sort of.”

It’s the room, dude.

You’ve used every EQ you have in a manner that could best be described as “neurosurgery with an artillery weapon.” The input channel EQs are carved up. The output channel curves look like the Himalayas. You’ve also inserted graphics on the outputs. The settings are not something you will share on Instagram. The show will NOT behave.

It’s the room, dude.

You’ve moved the speakers. You’ve tilted and twisted them, trying to miss the walls and ceiling just a little more. You could get a job as a civil engineer who designs bridges, because of your working knowledge of bizarre, load-bearing constructions. The system still sounds like the entirety of World War II being fought in an airplane hangar.

It’s the room, dude.

You’ve bought every toy and tweaker that the good folks at the gear retailer could sell you. You’ve got automatic feedback filters, frequency-dependent compression, wild-donkeyed gating, and a rack full of boutique, 500-series thingamabobs. It still sounds like you can’t mix your way out of a paper bag that’s been sitting outside in the rain for a month.

It’s the room, dude.

The lead singer gets your attention as soundcheck draws to a close. “Could you please pull down the reverb?” they ask. Nothing is going to any reverb processor that you have available.

It’s the room, dude.

The musicians are pretty happy. You have the monitors wound up to a level that frightens small children. You have the FOH mid-highs high-passed at 1 kHz. (I have done this in real life.) The sound in the seats is still a sort of indistinct, muddy garble.

It’s the room, dude.

Once you have tens or hundreds of arrivals of a single sonic event, you will never get the transients unsmeared. Once the low-mid builds into a seconds-long reverberant mash, you will never dig your way out. Once monitor-world hits that nice, huge, flat backstop behind the players, you will never get monitor-world out of FOH. Once the vocalist’s smashing crescendo slaps that back wall and starts racing home to their face, you will never stop them from getting walloped right in the chops with the world as it was 200ms ago.

It’s the room, dude. It’s the room.


How To Spend A Ton Of Money

Really loading up your credit cards is easily done. Just keep trying to solve problems by modifying variables unrelated to those problems.



The room was an acoustically hostile firestorm of reflections and standing waves.

The band’s backline was barely functional.

The guitar amps had all the midrange dialed out.

A really expensive console with different mic pres would have TOTALLY fixed all that.

Right?


What Just Changed?

If an acoustical environment changes significantly, you may start to have mysterious problems.



Just recently, I was working a show for Citizen Hypocrisy. I did my usual prep, which includes tamping down the “hotspots” in vocal mics – gain before feedback being important, and all that.

Everything seemed copacetic, even the mic for the drumkit vocal. There was clarity, decent volume, and no ringing. The band got in the room and set up their gear. This time around, an experiment was conducted: A psychedelic background video was set to play in a loop as a backdrop. We eventually did a quick, “just in time” soundcheck, and we were off.

As things kicked into gear, I noticed something: high-frequency feedback from one of the mics. It wasn’t running away by any means, but it was audible and annoying. I thought to myself, “We did just give DJ’s microphone a push in monitor world. I probably need to pull the top end back a bit.” I did just that…but the feedback continued. I started soloing things into my headphones.

“Huh, guitar-Gary’s mic seems okay. DJ’s mic is picking up the feedback, but it doesn’t seem to be actually part of that loop. That leaves…*unsolos DJ’s mic, solos drum-Gary’s mic* Well, THERE’S the problem.”

The mic for vocals at the drumkit? It was squeaking like a pissed-off mouse. I hammered the offending frequency with a notch filter, and that was it.

But why hadn’t I noticed the problem when I was getting things baselined for the night? Gary hadn’t reoriented the mic so that it was pointing at the drumfill, and neither the input gain nor the send gains had changed, so why had the problem cropped up?

The answer: Between my setup and actually getting the show moving in earnest, we had changed the venue’s acoustical characteristics, especially as they concerned the offending microphone. We had deployed the screen behind the drums.

Rolling With The Changes

Citizen Hypocrisy was playing at my regular gig. Under conditions where we are NOT running video, the upstage wall is a mass of acoustical wedge foam. For most purposes, high-mid and high frequency content is soaked up by the treatment, never to be heard again. However, when we are running video, the screen drops in front of the foam. For high frequencies, the screen is basically a giant, relatively efficient reflector. My initial monitor-EQ solution was built for the situation where the upstage wall was an absorber. When the screen came down, that solution was partially invalidated. Luckily, what had to be addressed was merely the effective gain of a very narrow frequency range. We hadn’t run into a “showstopper bug,” but we had still encountered a problem.

The upshot of all this is:

Any change to a show’s acoustical environment, whether by way of surface absorption, diffusion, or reflectance, or by way of changes in atmospheric conditions, can invalidate a mix solution to some degree.

Now, you don’t have to panic. My feeling is that we sometimes overstate the level of vigilance required with regard to acoustical changes at a show. You just have to keep listening, and keep your brain turned on. If the acoustical environment changes, and you hear something you don’t like, then try to do something about it. If you don’t hear anything you don’t like, there’s no reason to find something to do.

For instance, at my regular gig, putting more people into the room is almost always an automatic improvement. I don’t have to change much (if anything at all), because the added absorption makes the mix sound better.

On the reverse side, I once ran a summer show for Puddlestone where I suddenly had a “feedback monster” where one hadn’t existed for a couple of hours. The feedback problem coincided with the air conditioning finally getting a real handle on the temperature in the room. My guess is that some sort of acoustical refraction was occurring, where it was actually hotter near the floor where all the humans were. For the first couple of hours, some amount of sound was bending up and away from us. When the AC really took hold, it might have been that the refraction “flattened out” enough to get a significant amount of energy back into the mics. (My explanation could also be totally incorrect, but it seems plausible.) Obviously, I had to make a modification in accordance with the problem, which I did.

In all cases, if things were working before, and suddenly are no longer working as well, a good question to ask yourself is: “What changed between when the mix solution was correct, and now, when it seems incorrect?” It’s science! You identify the variable(s) that got tweaked, and then manage the variables under your control in order to bring things back into equilibrium. If you have to re-solve your mix equation, then that’s what you do.

And then you go back to enjoying the show.

Until something else changes.


Mysteriously Clean

“Clean sound” has to do with more than just volume. Where that volume goes is also important.



So – you might be wondering what that picture of V-drum cymbals has to do with all this. I’ll gladly tell you.

Just a couple of weeks ago, the band Sake Shot was playing at my regular gig. They were the opening act, and the drummer decided that the changeover would be facilitated by the simplicity and speed of just pulling his E-kit off the deck.

During Sake Shot’s set, Brian from The Daylates walked up to FOH (Front Of House) control. After saying hello, he made a single comment that caused me to do some thinking. What he said was: “The drums sound great. It’s so clean!”

He was absolutely correct, of course. The drums were very clear, and highly separated from the other sources on stage. If the sound of the drums had been a photograph, the image would have been razor sharp. The question was, “Why?” It wasn’t just volume. The mix was somewhat quieter than some other rock bands I’ve done, but we were definitely louder than a jazz trio playing a hotel lobby (if you get my drift). No…there were other factors in play besides how much SPL (Sound Pressure Level) was involved.

I’ll start out by putting it this way: It’s not just how much volume there is. It’s also about where that volume goes.

Let me explain.

Drums, Drums, Everywhere

If you were to take a measurement microphone and walk around an acoustic drumkit, I’m reasonably sure that the overall plot of SPL levels would look something like this:

[Figure: rough polar plot of a drumkit’s SPL – nearly equal level in all directions]

Behind the drummer, you might lose about 6 dB (or maybe not even that much), but overall, the drums just go everywhere. Sound POURS from the kit in all directions. In other words, the drumkit is NOT directional in any real way. This has a number of consequences:

1) Sound (and LOTS of it) travels forward from the kit, into the most sensitive part of the downstage vocal mics’ polar patterns. What’s wanted in those vocal mics is, of course, vocals. Anything that isn’t vocals that makes it into the mic is “noise,” which partially washes out the desired vocal signal.

2) The same sound that just hit the vocal mics continues forward to arrive at the ears of the audience.

3) That same sound also travels through the PA, courtesy of the vocal mics. Especially in a system that uses digital processing of some kind, latency is introduced. The sonic event being reproduced by the PA arrives slightly later than the acoustical event.

4) The sound traveling in directions other than straight towards the audience is – in a small venue – extremely likely to meet some sort of boundary. Some of these boundaries may have significant acoustical absorption qualities, and some of them may have almost no absorption at all. The boundaries that mostly act as reflectors (hard walls, hard ceilings, hard floors, etc) cause the sound to re-emit into the room, and that re-emitted sound can travel into the audience’s ears. These reflections also arrive later than the direct acoustical radiation from the kit. The reflections may exist in the closely packed, smooth wash of reverberation, or they might manifest as distinct “slaps” or “flutter.”

The upshot is that you have sonic events with multiple arrivals. One particular snare hit makes several journeys to the ears of the audience members, and what would otherwise be a nice, clean “crack” becomes smeared in time to some extent. Each drum transient gets sonically blurred, which means inter- and intra-drum events become harder to discern from each other. (Inter-drum events are hits on different drums, whereas intra-drum events are the beginnings and ends of sounds produced by one hit on one drum.)
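To put rough numbers on the smearing, here’s a back-of-the-envelope sketch using the “1 foot per millisecond” rule of thumb. The path lengths are invented purely for illustration; your room will differ.

```python
# Rough arrival times for ONE snare hit reaching a listener by several
# paths, using the ~1 ft/ms rule of thumb. All distances are invented
# for illustration.
FT_PER_MS = 1.0  # approximate; sound is actually slightly faster

def arrival_ms(path_ft):
    """Travel time (ms) for a sound path of the given length in feet."""
    return path_ft / FT_PER_MS

paths = {
    "direct":           30,        # kit straight to a listener 30 ft away
    "side-wall bounce": 18 + 26,   # kit -> side wall -> listener
    "back-wall bounce": 45 + 40,   # kit -> back wall -> listener
}

direct = arrival_ms(paths["direct"])
for name, length in paths.items():
    t = arrival_ms(length)
    print(f"{name}: {t:.0f} ms ({t - direct:+.0f} ms vs. direct)")
```

One hit becomes three arrivals spread across roughly 55 ms in this made-up example – and that’s with only two reflections considered, in a room that actually produces dozens.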

In short, the reflected sound of the drumkit partially garbles the direct sound of the kit. On top of that, the drum sound is now partially garbling the vocals.

This isn’t necessarily a disaster. Bands and techs deal with it all the time, and it’s possible to get perfectly acceptable sonics with an acoustic drumkit in a small venue. The point of this article isn’t to sell electronic drums to everybody. Even so, the effects of an acoustic kit’s sound careening around a room can’t be ignored.

Directivity Matters

Now then.

What was different enough about Sake Shot’s set to make Brian say that the sound was really clean?

It really wasn’t the SPL involved. When it came right down to it, the monitor rig and PA system were creating enough level to make the V-drums sound reasonably like a regular kit. The key was where that SPL was going…directivity, in other words.

Most pro-audio loudspeakers are far more directional than a drumkit. Sure, if you walk around the back of a PA speaker, you’ll still hear something. Even so, the amount of “spill” is enormously reduced. Here’s my estimate of what the average SPL coverage of an “affordable, garden-variety” pro-audio box looks like.

[Figure: estimated SPL coverage of an affordable, garden-variety pro-audio loudspeaker – strongly biased toward the front]

This is exceptionally important in the context of my regular gig, because the upstage and stage-right walls, along with a portion of the stage ceiling, are acoustically treated. Not only do the downstage monitors fire into the parts of the vocal mic patterns that are LEAST sensitive, they also fire into a boundary which is highly absorptive. Further, the drum monitors fire into the drummer’s ears, and partially into the absorptive back wall. There’s a lot less spill that can hit the reflective boundaries in the room.

What this means is that the non-direct arrivals of the E-kit’s sounds were – relative to an acoustic kit – very low in relation to the direct arrivals from the FOH PA. Further, there was very little “wash” in the vocal mics. All this added up to a sound that was very clean and defined, because each transient from the drums had a sharply defined beginning and end. This makes it much easier for a listener to figure out where drum sounds stop, and where other things (like vocal consonants) begin. Further, the vocal mics were generally delivering a rather higher signal-to-noise ratio than they otherwise might have been, which cleaned up the vocals AND the sound of the drums.

All the different sounds from the show were doing a lot less “running into each other.”

As such, the mysteriously clean sound of the show wasn’t so mysterious after all.


Transdimensional Noodle Baking

When you start messing with the timing and spatial behavior of sound, weird and wonderful things can happen.



One of my favorite lines from the movie “The Matrix” is the bit delivered by The Oracle after Neo accidentally knocks over a vase:

“What’ll really bake your noodle later on is, would you still have broken it if I hadn’t said anything?”

That’s where I get the whole “noodle baking” thing. It’s stuff that really messes with your mind when you consider the implications. Of course, our noodles don’t get baked as often as they used to. We’re a pretty sophisticated lot, not at all astounded by ideas like how looking up at the stars is actually looking back in time. We’re just used to it all. “Yeah, yeah, the light from the stars has been traveling to us for millions of years, we’re seeing the universe as it was before our civilization was really a thing, yadda, yadda…”

Even so, there’s still plenty of room for us audio types to have our minds sautéed – probably because the physics of audio is so accessible to us. Really messing around with light waves is tough, but all kinds of nerdery is possible when it comes to sound. Further, the implications of our messing about with audio are surprisingly weird, wacky, wonderful, and even downright bizarre.

Here, let me demonstrate.

Time And Distance Are Partially Interchangeable

Pretty much every audio human who gets into “the science” can quote you the rule of thumb: Sound pressure waves propagate away from their source at roughly 1 foot per millisecond, or 1 meter per 3 milliseconds. (Notice that I said “roughly.” Sound is actually a little bit faster than that, but the error is small enough to be acceptable in cases where inches or centimeters aren’t of concern.) In a way that’s very similar to light, time and distance can be effectively the same measurement. When you hear a loudspeaker that’s 20 feet away, you aren’t actually hearing what the box sounds like now. You’re hearing what the box sounded like 20 milliseconds ago.
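You can check the rule of thumb against the actual speed of sound with a couple of lines of arithmetic. A quick sketch, assuming roughly 343 m/s at room temperature:

```python
# Compare the "1 ft/ms" rule of thumb against the actual speed of sound.
SPEED_OF_SOUND_M_S = 343.0  # ~20 degrees C; varies with temperature
FT_PER_M = 3.28084

def travel_ms(distance_ft):
    """Actual travel time (ms) for a distance given in feet."""
    return 1000.0 * (distance_ft / FT_PER_M) / SPEED_OF_SOUND_M_S

# The loudspeaker 20 feet away from the example above:
print(f"rule of thumb: 20 ms, actual: {travel_ms(20):.1f} ms")
```

The real figure comes out closer to 17.8 ms – sound is indeed a bit faster than the rule of thumb suggests, but the error is small enough to live with for most live-sound work.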

Now, we tend to gloss over all that. It’s just science that you get used to remembering. Think about what it means, though: To an extent, you can move a loudspeaker stack relative to the listener without physically changing the position of the boxes. Assuming that other sound sources (which the brain can use for a timing reference) stay the same, adding delay time to a source effectively makes the source farther away to the observer.

…and yet, the physical location of the source hasn’t changed. When you think about it, that’s pretty wild. What’s even wilder is that the loudspeaker’s coverage and acoustical interaction with the room remain unchanged, even though the loudspeaker is now effectively further away. Think about it: If we had physically moved the loudspeaker away from the listener(s), more people would be in the coverage pattern. The coverage area expands with distance, but only when the distance is physical. Similarly, if we had moved the loudspeaker away by actually picking it up and placing it farther away, the ratio of direct sound to reverberant sound would have changed. The reverberant field’s SPL (Sound Pressure Level) would have been higher, relative to the SPL of the pressure wave traveling directly to the listeners. By using a delay line, though, the SPL of the sound that arrives directly is unchanged…even though the sound is effectively farther from the audience.

Using a digital delay line, we can sort of “bend the rules” concerning what happens when the speakers are farther away from the audience.

Whoa.

It’s important to note, of course, that the rules are only “bent.” Also, they’re only bent in terms of select portions of how humans perceive sound. Whether the loudspeakers are physically moved or simply moved in time, the acoustical interactions with other sound sources DO change. This can be good or bad. Also, a loudspeaker that’s farther from a listener should have lower apparent SPL than the same box with the same power at a closer distance.

But that’s not what happens with a “unity gain” delay line.

And that’s another noodle-baker. The loudspeaker is perceptually farther from the listener, yet it has the same SPL as a nearby source.

Whoa.

(There is no spoon. It is yourself that bends.)

The Strange Case Of Delay And The Reference Frame

That bit above is nifty, but it’s actually pretty basic.

You want something really wild?

When we physically move a loudspeaker, we are most likely reducing its distance to some listeners while increasing its distance to others. (Obvious, right?) However, when we move a loudspeaker using time delay, the loudspeaker’s apparent distance to ALL listeners is increased. No matter where the listener is, the loudspeaker is pushed AWAY from them. It’s like the loudspeaker is in a bubble of space-time that is more distant from all points outside the bubble. Your frame of reference doesn’t matter. The delayed sound source always seems to be more distant, no matter where you’re standing.

Now THERE’S some Star Trek for ya.

If you’re not quite getting it, do this thought experiment: You put a monitor wedge on the floor. You and a friend stand thirty feet apart, with the monitor wedge halfway between the two of you. The distance from each listener to the wedge is 15 feet, right? Now, a delay of 15 ms is applied to the signal feeding the wedge. Remember, both of you are listening to the same wedge. Thus, in the sense of time (that is, ignoring other factors like SPL and direct/reverberant ratio) each of you perceives the wedge as having moved to where the other person is standing. This is because, again, both of you are hearing the same signal in the same wedge. Both of you are registering a sonic event that has been made 15 ms “late.” It doesn’t matter where one of you is listening from – the wedge always “moves” away from you.

(Cue the theme to “The Twilight Zone.”)

Let me re-emphasize that other, important cues as to the loudspeaker’s location are not present. If you try this experiment in real life, you won’t truly experience the sound of a wedge that’s been moved 15 feet away. The acoustic interaction and SPL factors simply won’t be present. What we’re talking about here is the time component only, divorced from other factors.

You can use this “delay lines let you put your loudspeakers in a weird pocket-dimension” behavior to do cool things like directional subwoofer arrays.

For instance, here’s a subwoofer sitting all by itself. It’s pretty close to being omnidirectional at, say, 63 Hz.

[Figure: a single subwoofer – essentially omnidirectional at 63 Hz]

That’s great, and all, but what if we don’t want to toss all that low end around on the stage behind the sub? Well, first, we can put another sub in front of the first one. We put sub number 2 a quarter-wavelength in front of sub 1. (We do, of course, have to decide which frequency we’re most concerned about. In this case, it’s 63 Hz, so all measurements are relative to that frequency.) For someone standing in front of our sub-array, the front sub is about 4 ms early. By the same token, the folks on stage hear sub 2 as being roughly 4 ms late.

[Figure: two subwoofers, with sub 2 placed a quarter-wavelength downstage of sub 1]

Here’s where the “delay always pushes away from you” mind-screw becomes useful. If we delay subwoofer 2 by a quarter-wavelength, the folks on stage and the folks in the audience will BOTH get the effect of sub 2 being farther away. Because the sub is a quarter-wave too early for the audience, the delay will precisely line it up with subwoofer 1. However, because the second sub is already “late” for the folks on stage, pushing the subwoofer a quarter-wave away means that it’s now “half a wave” late. Being a half-wave late is 180 degrees out of phase, which means that our problem frequency will cancel when you’re standing behind the array. The folks in front get all the bottom end they want, and the performers on stage aren’t being rattled nearly as much.

[Figure: sub 2 delayed by a quarter-wave – output sums out front and cancels behind]

Radical, dude.
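The quarter-wave recipe above boils down to two numbers – a physical spacing and an electronic delay – both set by the design frequency. A minimal sketch, assuming roughly 343 m/s; a real array gets tuned and measured across a whole band, not just one frequency:

```python
# Two-element gradient ("end-fire" style) sub array: sub 2 sits a
# quarter-wavelength downstage of sub 1 and is delayed by a quarter of
# one period. Output sums out front and cancels behind -- at the design
# frequency.
SPEED_OF_SOUND_M_S = 343.0  # assumed; varies with temperature

def gradient_sub_setup(design_freq_hz):
    """Return (spacing in meters, delay in ms) for the design frequency."""
    wavelength_m = SPEED_OF_SOUND_M_S / design_freq_hz
    spacing_m = wavelength_m / 4.0               # physical offset of sub 2
    delay_ms = 1000.0 / (4.0 * design_freq_hz)   # quarter of one period
    return spacing_m, delay_ms

spacing, delay = gradient_sub_setup(63.0)
print(f"spacing: {spacing:.2f} m, delay: {delay:.2f} ms")
# spacing: 1.36 m, delay: 3.97 ms -- the "about 4 ms" from the text
```

Out front, the delay exactly undoes sub 2’s quarter-wave head start, so the two boxes sum; behind, the travel-time lag and the electronic delay stack up to a half-wave, so they cancel.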

The Acoustical Foam TARDIS

For my final trick, let me tell you about how to make a room acoustically larger by filling it up more. It’s very Doctor Who-ish.

What I’m getting at is making the room more absorptive. To make the walls of the room absorb more sound, we have to add something – acoustic foam, for instance. By adding that foam (or drape, or whatever), we reduce the amount of sound that can strike a boundary and re-emit into the space. In a certain way, reducing the SPL and high-frequency content of reflections makes the room larger. Reduced overall reflection level is consistent with audio that has had to travel farther to reach the listener, and high-frequency content is particularly attenuated as sound travels through air.

So, in an acoustic sense, reducing the room’s actual physical volume by adding absorptive material actually makes the room “sound” bigger. Of course, this doesn’t always register, because we’re culturally used to the idea that large spaces have a “loud” reverberant field. We westerners tend to build big buildings that are made of things like stone, metal, glass, and concrete – which makes for a LOT of sound reflectance.

It might be a little bit better to say that increased acoustical absorption makes an interior sound more and more like being outside. Go far enough with your acoustic treatment (whether or not this is a good idea is beyond the scope of this article), and you could acoustically go outdoors by entering the building.

Whoa.