Tag Archives: Experimentation

The Puddle Mountain Arc

If you have the space and technical flexibility, a semicircular stage layout can be pretty neat.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.


Just last week, my regular gig hosted a show for The Puddle Mountain Ramblers. During the show advance, Amanda proposed an idea.

What if we set up the stage so that the layout was an arc, instead of a straight line?

I thought that was a pretty fine idea, so we went with it. The way it all came together was that fiddle, bass, and banjo were on the stage-right side, the drums were upstage center, and guitar plus another fiddle were on the stage-left side. The setup seemed very effective overall.

Why?

Visibility, Separation, and Such

The main reason for the setup was really to facilitate communication. PMR is a band that derives a good deal of comfort and confidence from the members being able to see what every other player is doing. Also, it’s just generally nice to be able to make eye contact with someone to let them know that it’s their turn for a solo. Setting up in an arc makes this much easier, because you get essentially unobstructed sightlines from each player to every other player. An added benefit is that all the players are closer together on average, which reduces the difficulty of reading faces, identifying hand movements, and keeping time. (An arc is geometrically more compact than a line. In a linear configuration, the farthest that any two players can be from each other is the entire length of the line. Bend that same line into a full circle, and the farthest that any two players can be from each other is the line length divided by pi; bend it into a semicircle, and the worst case is twice that. Either way, that’s a pretty significant “packing.”)
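If you want to check the parenthetical’s geometry, a few lines of arithmetic will do it. The 12-foot stage width here is a made-up example, not a measurement from the actual show:

```python
import math

stage_line = 12.0  # hypothetical stage width in feet if the band set up in a straight line

# Straight line: the worst case is the two end players, a full stage-width apart.
line_max = stage_line

# Bend the same length into a full circle (circumference = stage_line);
# the worst case is now the diameter.
circle_max = stage_line / math.pi

# Bend it into a semicircular arc (arc length = pi * r); the worst case is
# the distance between the two endpoints, i.e. the diameter 2r.
semicircle_max = 2 * stage_line / math.pi

print(f"line: {line_max:.1f} ft, full circle: {circle_max:.1f} ft, "
      f"semicircle: {semicircle_max:.1f} ft")
```

Even a semicircle cuts the worst-case player-to-player distance by more than a third; a full circle cuts it by more than two-thirds.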

Another benefit of the configuration is (potentially) reduced drum bleed. In a traditional setup, an upstage drumkit is pretty much “firing” into the most sensitive part of all the vocal and instrument mics’ pickup patterns. In an arc layout, with the drums at the center, the direct sound from the kit enters any particular mic at some significant off-axis angle. This bleed reduction can also extend to other vocals and instruments, especially because the mics can easily be at angles greater than 90 degrees relative to other sources.

Of course, it’s important to note that – especially with wide-pattern mics, like SM58s and other cardioids – compacting the band may undo the “off-axis benefit” significantly. This is especially true for bleed from whatever source sits at the midpoint of the arc, like a drumkit probably would be. For the best chance of bleed reduction, you need tighter-patterned transducers, like an ND767a, a Beta 58, an e845, an OM2, or [insert your favorite, selectively patterned mic here]. Even so, the folks closest to (and at the smallest angle from) the drumkit should be the strongest singers in the ensemble, and their miked instruments should be the most able to compete with whatever is loud on deck.
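To get a feel for how much pattern choice matters, here’s a sketch of idealized, first-order polar patterns. The coefficients are standard textbook values; real microphones deviate from these, especially at the frequency extremes:

```python
import math

# Idealized first-order polar patterns, p(theta) = A + B*cos(theta) with A + B = 1.
# Textbook coefficients; real mics deviate, especially at frequency extremes.
PATTERNS = {
    "cardioid":      (0.50, 0.50),
    "supercardioid": (0.37, 0.63),
    "hypercardioid": (0.25, 0.75),
}

def off_axis_db(pattern, theta_deg):
    """Level relative to on-axis (0 degrees), in dB, for an ideal pattern."""
    a, b = PATTERNS[pattern]
    p = a + b * math.cos(math.radians(theta_deg))
    return 20 * math.log10(max(abs(p), 1e-6))  # clamp the ideal null

for name in PATTERNS:
    levels = ", ".join(f"{t} deg: {off_axis_db(name, t):+.1f} dB" for t in (90, 135, 180))
    print(f"{name:>13}: {levels}")
```

At 90 degrees, the idealized hypercardioid rejects about 6 dB more than the cardioid; the trade-off is that the tighter patterns pick up a rear lobe at 180 degrees, where the cardioid has its null.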

A third “bit of nifty” that comes from an arc setup is that of reduced acoustical crosstalk from monitor wedge to monitor wedge. With all the wedges firing away from each other, instead of in parallel paths, the tendency for any one performer to hear the wedges adjacent to them is reduced. Each monitor mix therefore has more separation than it otherwise might, which can keep things “cleaner” overall. It may also reduce gain-hungry volume wars on the deck.

Downsides

There are some caveats to putting a band on stage in a circle-segment.

The first thing to be aware of is that you tend to lose “down center” as a focal point. It’s not that you can’t put someone in there, but you have to realize that the person you’ve put down-center will no longer get the visibility and communication benefits of the arc. Also, a down-center wedge will probably be very audible to the performers standing up-center from that monitor, so you’ll have to take that into account.

The more isolated that monitor-mix sources become from one another, the more important it becomes that each monitor mix can be customized for individual performers. If you were on in-ears, for instance (the ultimate in isolated monitor feeds), separate mixes for each individual would be almost – if not entirely – mandatory. Increasing the mix-to-mix acoustical isolation pushes you towards that kind of situation. It’s not that shared mixes can’t be done in an arc, it’s just that folks have to be inclined to agree and cooperate.

A corollary to the above is that the show complexity actually tends to go up. More monitor mixes means more to manage, and an arc layout requires more thinking and cable management than a linear setup. You have to have time for a real soundcheck with careful tweaking of mixes. Throw-n-go really isn’t what you want to do when attempting this kind of layout, especially if you haven’t done it before.

Another factor to consider is that “backline” shouldn’t actually be in the back…unless you can afford to waste the space inside the arc. If at all possible, amps and instrument processing setups should utilize the empty space in front of everybody, and “fire” towards the performers (unless it’s absolutely necessary for the amps to combine with or replace the acoustical output of the PA).

If these considerations are factors you can manage, then an arc setup may be a pretty cool thing to try. For some bands, it can help “square the circle” of how to arrange the stage for the best sonic and logistical results, even if pulling it all off isn’t quite as easy as “pi.”

I’ll stop now.


Gain Vs. Bandwidth

Some preamps do pass audio differently when cranked up, but you probably don’t need to worry about it.


After my article on making monitor mixes not suck, a fellow audio human asked me to address the issue of how bandwidth changes with gain. Op-amps, which are very common in modern audio gear, have a finite bandwidth. This bandwidth decreases as gain increases.

A real question, then, is how much an audio tech needs to worry about this issue – especially in the context of microphone preamps. Mic pres have to apply a great deal of gain to signals, because microphones don’t exactly spit out a ton of voltage. Your average, dynamic vocal mic probably delivers something like two millivolts RMS with 94 dB SPL occurring at the capsule. Getting that level up to 0 dBu (0.775 Vrms) is a nearly 52 decibel proposition.
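To sanity-check that arithmetic, and to see why the question matters at all, here’s a quick sketch. The 10 MHz gain-bandwidth product is an assumed, ballpark figure for a generic audio op-amp, not a spec for any preamp discussed here:

```python
import math

# Back-of-envelope check on the numbers above.
mic_v_rms = 0.002      # ~2 mV RMS at 94 dB SPL, typical dynamic vocal mic
target_v_rms = 0.775   # 0 dBu

gain_linear = target_v_rms / mic_v_rms   # ~388x
gain_db = 20 * math.log10(gain_linear)   # ~51.8 dB

# Assumed gain-bandwidth product for a generic audio op-amp (hypothetical figure).
gbw_hz = 10e6
single_stage_bw = gbw_hz / gain_linear   # closed-loop bandwidth if one stage did all the work

print(f"required gain: {gain_db:.1f} dB ({gain_linear:.0f}x)")
print(f"single-stage bandwidth: {single_stage_bw / 1000:.1f} kHz")
```

About 25 kHz of single-stage bandwidth is uncomfortably close to the audible band, which is part of why the question is worth asking. In practice, designers usually split large gains across multiple stages, multiplying the available bandwidth.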

That’s not a trivial amount of gain, at least as far as audio is concerned. For instance, if we could get 52 dB of gain over the 1 watt @ 1 meter sensitivity of a 95 dB SPL loudspeaker, that speaker could produce 147 dB SPL! (That’s REALLY LOUD, if you didn’t know.) While there are loudspeaker systems that can produce that kind of final output, they have to start at a much higher sensitivity. A Danley Labs J3-64 is claimed to be able to produce 150 dB SPL continuous, but its sensitivity is 112 dB. The “gain beyond sensitivity” is a mere 38 dB. (“Mere” when compared to what mic pres can do. Getting 38 dB above sensitivity is definitely “varsity level” performance for a loudspeaker.)

Anyway.

In the face of a question like this, my response of late has become that we should try to measure something. There are, of course, gajillions of people willing to offer anecdotes, theories, and mythology, but I don’t find that to be very satisfying. I much prefer to actually see real data from real testing.

As such, I decided to grab a couple of mic-pre examples, and “put them on the bench.”

Setting Up The Experiment

The first thing I do is to set up a DAW session with the interface running at 96 kHz. I also set up an analyzer session with the same sampling rate.

The purpose of this is to – hopefully – be able to clearly “see” beyond the audible spectrum. Although my opinion is that audio humans don’t have to worry about anything beyond the audible range (20 Hz to 20 kHz) in practice, part of this experiment’s purpose is to figure out how close to audible signals any particular bandwidth issue gets. Even if frequencies we can hear remain unaffected, it’s still good to have as complete a picture as possible.

The next thing I do is generate a precisely 15 second long sample of pink noise. The point of having a sample of precisely known length is to make compensating for time delays easier. The choice of a 15 second length is just to have a reasonably long “loop” for the analyzer to chew on.
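My generator handled the noise creation, but if you wanted to roll your own test signal, one common approach is to shape white noise in the frequency domain: pink noise falls at -3 dB per octave, so the amplitude spectrum scales as 1/sqrt(f). This sketch assumes NumPy, and the seed is arbitrary:

```python
import numpy as np

FS = 96_000   # sample rate, matching the session
SECONDS = 15
n = FS * SECONDS

rng = np.random.default_rng(42)
white = rng.standard_normal(n)

# Pink noise: power ~ 1/f, so scale the amplitude spectrum by 1/sqrt(f).
spectrum = np.fft.rfft(white)
freqs = np.fft.rfftfreq(n, d=1 / FS)
shaping = np.ones_like(freqs)
shaping[1:] = 1.0 / np.sqrt(freqs[1:])   # leave the DC bin alone
pink = np.fft.irfft(spectrum * shaping, n)
pink /= np.abs(pink).max()               # normalize to full scale

print(f"{len(pink)} samples = {len(pink) / FS:.0f} s of pink noise")
```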

At this point, it’s time to take a look at how the analyzer handles a transfer-function calculation where I know that both “sides” are the same. The trace I get is a touch jumpy, so I bump up the averaging to “2.” This settles the trace nicely.

[Image: steady transfer-function trace after averaging]
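For the curious, the analyzer’s transfer-function math boils down to the classic “H1” estimate: the cross-spectrum of reference and measurement, divided by the auto-spectrum of the reference. Here’s a minimal SciPy sketch, with a made-up “device under test” that just attenuates and delays the reference:

```python
import numpy as np
from scipy.signal import csd, welch

FS = 96_000
rng = np.random.default_rng(7)
reference = rng.standard_normal(FS * 2)

# Fake device under test: -0.9 dB of "gain error" plus a 10-sample delay.
measured = 0.9 * np.roll(reference, 10)

f, pxy = csd(reference, measured, fs=FS, nperseg=8192)
_, pxx = welch(reference, fs=FS, nperseg=8192)

h = pxy / pxx                      # H1 transfer-function estimate
mag_db = 20 * np.log10(np.abs(h))  # (the delay shows up only in the phase of h)

print(f"mid-band magnitude: {mag_db[len(mag_db) // 2]:.2f} dB")
```

When both “sides” really are the same signal, the magnitude trace sits at exactly 0 dB, which is the sanity check described above.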

At this point, it’s time to connect the noise to a mic pre. I do this from my Fast Track interface’s headphone amp through an active DI, because I want to be absolutely sure that I’m ultimately running through high-gain circuitry. Yes – it’s true that the DI might corrupt the measurement to some degree, but I think I have a partial solution: My reference point for all measurements will be the test noise played through the DI, with the mic pre at the minimum gain setting. Each test will use the original noise, so that any “error factors” associated with the signal path under test don’t stack up.

Preamp 1: Fast Track Ultra 8R

My M-audio Fast Track Ultra 8R is what I would call a reasonably solid piece of pro-sumer equipment. My guess is that the preamps in the box are basically decent pieces of engineering.

The first thing to do is to get my low-gain reference. I set the noise output level so that the input through the preamp registers about -20 dBFS RMS, and record the result. I’m now ready to proceed further.

My next order of business is to put my test noise through at a higher gain. I set the gain knob to the middle of its travel, which is about +10 dB of gain from the lowest setting. I roll down the level going to the pre to compensate.

The next test will be with the gain at the “three-o-clock” position. This is about +25 dB of gain from the reference.

The final test is at maximum gain. This causes an issue, because so much gain is applied that the output compensation is extreme. In the end, I opt to find a compromise by engaging the mic preamp’s pad. This allows me to keep the rest of the gain structure in a basically “sane” area.

At this point, I check the alignment on the recorded signals. What’s rather odd is that the signal recorded through the pad seems to have arrived a few samples earlier than the signals recorded straight through. (This is curious, because I would assume that a pad would INCREASE group delay rather than reduce it.)

[Image: timing of the padded recording vs. the straight-through recordings]

No matter what’s going on, though, the fix is as simple as nudging the max-gain measurement over by 10 samples (about 0.1 ms at 96 kHz).

Preamp 2: SL2442-FX

The first round of testing involved a preamp that I expect is pretty good. A more interesting case comes about when we test a device with a not-so-stellar reputation: A mic pre from an inexpensive Behringer console. My old Behringer SL2442-FX cost only a bit more than the Fast Track did, and the Behringer has a LOT more analog circuitry in it (as far as I can tell). My guess is that if I want to test a not-too-great mic pre, the Behringer is a good candidate.

(To be fair, in the situations where I’ve used the Behringer, I haven’t been unhappy with the preamps at all.)

I use the same DI to get signal to the Behringer. On the output side, I tap the console’s insert point so as to avoid the rest of the internal signal path. I want to test the preamp, not the whole console. The insert connection is fed to the line input of the Fast Track, which appears to bypass the high-gain circuitry in the Fast Track mic pre.

In basically the same way as I did the Fast Track, I get a reference by putting the test noise through the preamp at its lowest setting, aiming for an RMS level of -20 dBFS. My next test is with the gain at “half travel,” which on the Behringer is a difference of about 18 dB. The “three-o-clock” position on the Behringer preamp corresponds to a gain of about +30 dB from the lowest point. The final test is, as you might expect, the Behringer at maximum gain.

A quick check of the files reveals that everything appears to be perfectly time-aligned across all tests.

The Traces

Getting audio into the analyzer is as simple as running the Fast Track’s headphone out back to the first two inputs. Before I really get going, though, I need to verify that I’m measuring what I think I’m measuring. To do that, I mute the test noise, and push up the levels on the Fast Track Reference and Fast Track +10 dB tracks. I pan them out so that the reference is hard left, and the +10 dB measurement is hard right. I then put a very obvious EQ on the +10 measurement:

[Image: the obvious EQ applied to the +10 dB measurement]

If the test rig is set up correctly, I should see a transfer function with a similarly obvious curve. It appears that my setup is correct:

[Image: transfer function showing the verification EQ curve]

Now it’s time to actually look at things. The Fast Track +10 test shows a curve that’s basically flat, albeit with some jumpiness below 100 Hz. (The jumpiness makes me expect that what we’re seeing is “experimental error” of some kind.)

[Image: Fast Track +10 dB transfer function]

The +25 dB test looks very much the same.

[Image: Fast Track +25 dB transfer function]

The maximum gain test is also about as flat as flat can be.

[Image: Fast Track maximum-gain transfer function]

I was, quite frankly, surprised by this. I thought I would see something happening, even if it was above 20 kHz. I decide to insert an EQ to see if the test system is just blind to what’s going on above 20 kHz, despite my best efforts. The answer, to my relief, is that if the test were actually missing something outside the audible range, we would see it:

[Image: transfer function with an artificial rolloff inserted above 20 kHz]

So – in the case of the Fast Track, we can conclude that any gain vs. bandwidth issues are taking place far beyond the audible range. In fact, they’re beyond the measurable range, which at this sampling rate tops out at 48 kHz.

What about the Behringer?

The +18 dB transfer function looks like this, compared to the minimum gain reference:

[Image: Behringer +18 dB transfer function]

What about the +30 dB test?

Maybe I missed something similar on the Fast Track, but the Behringer does seem to be noisier up beyond 30 kHz. The level isn’t actually dropping off, though. It’s possible that the phase gets “weird” up there when the Behringer is run hard – even so, you can’t hear 30 kHz, so this shouldn’t be a problem in real life.

[Image: Behringer +30 dB transfer function]

Now, for the Behringer max gain trace.

This is interesting indeed. The Behringer’s trace is now visibly curved, with some apparent dropoff below 50 Hz. On the high side, the Behringer is dropping down after 20 kHz, with obvious noise and what I think is some pretty gnarly distortion at around 37 kHz. The trace also shows a bit of noise overall, indicating that the Behringer pre isn’t too quiet when “cranked.”

[Image: Behringer maximum-gain transfer function]

At the same time, though, it has to be acknowledged that these deficiencies are easy to see when graphed, but probably hard to actually hear. The distortion is occurring far above what humans can perceive, and picking out a loss of 0.3 dB from 10 kHz to 20 kHz isn’t something you’re likely to do casually. A small dip under 50 Hz is fixable with EQ (if you can even hear that), and let’s be honest – how often do you actually have to run a preamp at full throttle? I haven’t had to in ages.

Conclusion

This is not a be-all, end-all test. It was definitely informal, and two different preamps are not exactly a large sample size. I’m sure that my methodology could be tweaked to be more pure. At the very least, getting precisely comparable gain values between preamps would be a better bit of science.

At the same time, though, I think these results suggest that losing sleep over gain vs. bandwidth isn’t worthwhile. A good, yet not-boutique-at-all preamp run at full throttle was essentially laser-flat “from DC to dog whistles.” The el-cheapo preamp looked a little scary when running at maximum gain, but that’s the key – it LOOKED scary. That the graphed issues would actually cause a problem with a show seems unlikely to me, and again, there’s the whole issue of whether or not you actually have to run the preamp wide open on a regular basis.

If I had my guess, I’d say that gain vs. bandwidth is worth being aware of at an academic level, but not something to obsess about in the field.


If It Doesn’t Work, I Don’t Want To Do It

Not doing things that are pointless seems like an obvious idea, but…


This is going to sound off-topic, but be assured that you haven’t wandered onto the wrong site.

I promise.

Just hear me out. It’s going to take a bit, but I think you’ll get it by the end.

**********

I used to have a day-job at an SEO (Search Engine Optimization) company. If you don’t know what SEO is, then the name might lead you to believe that it’s all about making search engines work better. It isn’t. SEO should really be called “Optimizing Websites FOR Search Engines,” but I guess OWFSE wasn’t as catchy as SEO. It’s the business of figuring out what helps websites to turn up earlier in search results, and then doing those things.

It’s probably one of the most bull[censored] businesses on the entire planet, as far as I can tell.

Anyway.

Things started out well, but after just a few months I realized that our product was crap. (Not to put too fine a point on it.) It wasn’t that anyone in the company wanted to produce crap and sell it. Pretty much everybody that I worked with was a “stand up” sort of person. You know – decent folks who wanted to do right by other folks.

The product was crap because the company’s business model was constrained such that we couldn’t do things for our customers that would actually matter. Our customers needed websites and marketing campaigns that set them apart from the crowd and made spending money with them as easy as possible. Those things are spendy, and require lots of time to implement well. The business model we were constrained to was “cheap and quick” – which we could have gotten away with if it was the time before the dotcom bubble popped. Unfortunately, the bubble had exploded into a slimy mess about 12 years earlier.

So, our product was crap. I spent most of my time at the company participating in the making of crap. When I truly realized just how much crap was involved, things got relatively awful and I planned my escape. (It was even worse because a number of us had ideas for fixes, ideas that were supported by our own management. However, our parent company had no real interest in letting us “pivot,” and that was that.)

But I learned a lot, and there were bright spots. One of the brightest spots was working with a product manager who was impervious to industry stupidity, had an analytical and reasonable mind, and who once uttered a sentence which has become a catchphrase for me:

“If it doesn’t work, I don’t want to do it.”

Is that not one of the most refreshing things you’ve ever heard? Seriously, it’s beautiful. Even with all the crap that was produced at that company, that phrase saved me from wading through some of the worst of it.

…and for any industry that suffers from an abundance of dung excreted from male cows, horses, or other work animals, it’s probably the thing that most needs to be said.

…and when it comes to dung, muck, crap, turds, manure, or just plain ca-ca, the music business is at least chest-deep. Heck, we might even be submerged, with the marketing and promo end of the industry about ten feet down. We need a flotation device, and being able to say “If it doesn’t work, I don’t want to do it,” is at least as good as a pair of water-wings.

The thing is, we’re reluctant to say (and embrace) something so honest, so brutally gentle and edifice-detonatingly kind.

We’ve Got To Do Stuff! Even If It’s Stupid!

I think this problem is probably at its worst in the US, although my guess is that it’s somehow rooted in the European cultures that form most of America’s behavioral bedrock. There’s this unspoken notion (that nobody would openly admit to embracing, even though we constantly embrace it by reflex) that the raw time and effort expended on something is what matters.

I’ll say that again.

We unconsciously believe that the raw time and effort expended on an endeavor is what matters.

We say that we love results, and we kinda do, but what we WORSHIP is effort – or the illusion thereof. The doing of stuff. The act of “being at work.”

In comparison, it barely matters if the end results are good for us, or anyone else. We tolerate the wasting of life, and the erosion of souls, and all manner of Sisyphean rock-pushing and sand-shoveling, because WE PUNCHED THE CLOCK TODAY, DANGIT!

If you need proof of this, look at what has become a defining factor in the ideological rock-throwing that is currently occurring in our culture. Notice a pattern? It’s all about work, and who’s doing enough of it. It’s figuring out how some people are better than other people, because of how much effort they supposedly expend. The guy who sits at the office for 12 hours a day is superior to you, you who only spend 8 hours a day in that cube. If you want to be the most important person in this culture, you need to be an active-duty Marine with two full-time jobs, who is going to college and raising three children by themselves. Your entire existence should be a grind of “doing stuff.” If you’re unhappy with your existence, or it doesn’t measure up to someone else’s, you obviously didn’t do enough stuff. Your expenditure of effort must be lacking.

I mean, do you remember school? People would do poorly on a test, and lament that they had spent [x] hours studying. Hours of their lives had been wasted on studying in a way that had just been empirically proven to be ineffective in some major aspect…yet, they would very likely do exactly the same thing again in a week or so. The issue goes deeper than this, but at just one level: Instead of spending [x] hours on an ineffective grind, why not spend, say, [.25x] hours on what actually works, and just be done?

Because, for all our love of results, we are CULTURALLY DESPERATE to justify ourselves in terms of effort.

I could go on and on and on, but I think you get it at this point.

What in blue blazes does this (and its antithesis) have to do with the music business?

Plenty.

Not Doing Worthless Crap Is The Most Practical Idea Ever

For the sake of an example, let’s take one tiny little aspect of promo: Flyering.

Markets differ, but I’m convinced that flyers (in the way bands are used to them) are generally a waste of time and trees. Even so, bands continue to arm themselves with stacks of cheap posters and tape/ staples/ whatever, and spend WAY too much time on putting up a bunch of promo that is going to be ignored.

The cure is to say, “If it doesn’t work, I don’t want to do it,” and to be granular about the whole thing.

What I mean by “granular” is that you figure out what bit of flyering does work in some way, and do that while gleefully forgetting about the rest. Getting flyers to the actual venue usually has some value. Even if none of the actual show-goers give two hoots about your night, getting that promo to the room sends a critical message to the venue operators – the message that you care about your show. In that way, those three or four posters that would go to the theater/ bar/ hall/ etc. do, in fact, work. As such, they’re worth doing for “political” reasons. The 100 or so other flyers that would go up in various places and may as well be invisible? They obviously don’t work, so why trouble yourself? Hang the four posters that actually matter, and then go rehearse (or just relax).

Also, you can take the time and money that would have been spent on 100+ cheap flyers, and pour some of it into improving the handful of posters that actually matter. Or into buying some spare guitar picks, if that’s more important.

I’ll also point out that if traditional flyering does work in your locale, you should definitely do it – because it’s working.

In a larger sense, all promo obeys the rule of not doing it if it doesn’t work. Once a band or venue figures out what marketing the general public responds to (if any), it doesn’t make sense to spend money on doing more. If a few Facebook and Twitter posts have all the effect, and a bunch of spendy ads in traditional media don’t seem to do anything, why spend the money? Do the free stuff, and don’t feel like you have to justify wearing yourself (or your bank account) down to a nub. You may have to be prepared to defend yourself in some rational way, but that’s better than being broke, tired, and frustrated for no necessary reason.

It works for gear, too. People love to buy big, expensive amplification rigs, but they haven’t been truly necessary for years. If you’re not playing to large, packed theaters and arenas with vocals-only PA systems – which is unlikely – then a huge and heavy amp isn’t getting you anything. It’s a bunch of potential that never gets used. Paying for it and lugging it around isn’t working, so you shouldn’t want to do it. Spend the money on a compact rig that sounds fantastic in context, and is cased up so it lasts forever. (And if you would need a huge rig to keep up with some other player who’s insanely loud, then at least consider doing the sensible, cheap, and effective thing…which is to fire the idiot who can’t play with the rest of the team.)

To reiterate what I mentioned about flyering, there’s always a caveat somewhere. Some things work for some people and not for others. The point is to figure out what works for YOU, and then do as much of that as is effective. Doing stuff that works for someone else (but not you) so you can get not-actually-existent “effort expenditure points” is just a waste of life.

There are examples to be had in every area of show production. To try and identify them all isn’t necessary. The point is that this is a generally applicable philosophy.

If it works, you should want to do it.

If you don’t yet know if it works, you should want to give it a try.

But…

If it doesn’t work, I don’t want to do it, and neither do you (even if you don’t realize it yet).


Echoes Of Feedback

By accident, I seem to have discovered an effective, alternate method for “ringing out” PA systems and monitor rigs.


Sometimes, the best way to find something is to be looking for something else entirely.

A couple of weeks ago, I got it into my head to do a bit of testing. I wanted to see how much delay time I could introduce into a monitor feed before I noticed that something was amiss. To that end, I took a mic and monitor that were already set up, routed the mic through the speaker, and inserted a delay (with no internal feedback) on the signal path. I walked between FOH (Front Of House) and the stage, each time adding another millisecond of delay and then talking into the mic.

For several go-arounds, everything was pretty nondescript. I finally got to a delay time that was just noticeable, and then I thought, “What the heck. I should put in something crazy to see how it sounds.” I set the delay time to something like a full second, and then barked a few words into the mic.

That’s when it happened.

First, silence. Then, loud and clear, the delayed version of what I had said.

…and then, the delayed version of the delayed version of what I had just said, but rather more quietly.

“Whoops,” I thought, “I must have accidentally set the delay’s feedback to something audible.” I began walking back to FOH, only to suddenly realize that I hadn’t messed up the delay’s settings at all. I had simply failed to take into account the entire (and I do mean the ENTIRE) signal path I was working on.

Hold that thought.

Back In The Day

There was a time when delay effects weren’t the full-featured devices we’re used to. Whether the unit was using a bit of tape or some digital implementation, you didn’t always get a processor with a knob labeled “feedback,” or “regen,” or “echoes,” or whatever. There was a chance that your delay processor did one thing: It made audio late. Anything else was up to you.

Because of this, certain consoles of the day had a feature on their aux returns that allowed for the signal passing through the return to be “multed” (split), and then sent back through the aux send to the processor it came from. (On SSL consoles, this feature was called “spin.”) You used this to get the multiple echoes we usually associate with delay as an effect for vocals or guitar.

At some point, processor manufacturers decided that including this feature inside the actual box they were selling was a good idea, and we got the “feedback” knob. There’s nothing exotic about the control. It just routes some of the output back to the input. So, if you have a delay set for some number of milliseconds, and send a copy of the output back to the input end (at a reduced level), then you get a repeat every time your chosen number of milliseconds ticks by. Each repeat drops in level by the gain reduction applied at the feedback control…and eventually, the echo signal can’t be readily heard anymore.

But anyway, the key point here is that whether or not it’s handled “internally,” repeating echoes from a delay line are usually caused by some amount of the processor’s output returning to the front end to be processed again. (I say “usually” because it’s entirely possible to conceive of a digital unit that operates by taking an input sample, delaying it, playing it back at some volume, and then repeating that process a certain number of times before stopping. In that case, the device doesn’t need to listen to its own output to get an echo.)

I digress. Sorry.

If the output were to be routed back to the input at “unity gain” (with no reduction or increase in level relative to the original output signal), what would happen? That’s right – you’d get an unlimited number of repeats. If the output is routed back to the front end at greater than unity gain, what would happen? Each repeat would grow in level until the processor’s output was completely saturated in a hellacious storm of distorted echo.
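The three regimes (decaying, endless, and runaway) fall out of simple arithmetic: every trip around the loop multiplies the signal by the feedback gain, so the Nth repeat sits at N times the per-trip gain in decibels. A quick sketch:

```python
import math

def repeat_levels_db(feedback_gain, repeats=5):
    """dB level of each successive echo, relative to the first one."""
    per_trip_db = 20 * math.log10(feedback_gain)
    return [per_trip_db * n for n in range(repeats)]

# Below unity: echoes decay. At unity: they repeat forever. Above unity: runaway.
for g in (0.5, 1.0, 1.1):
    levels = ", ".join(f"{lvl:+5.1f}" for lvl in repeat_levels_db(g))
    print(f"feedback gain {g}: {levels} dB")
```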

Does that remind you of anything?

Acoustical Circuits

This is where my previous sentence comes into play: “I had simply failed to take into account the entire (and I do mean the ENTIRE) signal path I was working on.” I had temporarily forgotten that the delay line I was using for my tests had not magically started to exist in a vacuum, somehow divorced from the acoustical circuit it was attached to. Quite the opposite was true. The feedback setting on the processor might have been set at “negative infinity,” but that did NOT mean that processor output couldn’t return to the input.

It’s just that the output wasn’t returning to the input by a path that was internal to the delay processor.

I’ve talked about acoustical, resonant circuits before. We get feedback in live-audio rigs because, rather like a delay FX unit, our output from the loudspeakers is acoustically routed back to our input microphones. As the level of this re-entrant signal rises towards being equal with the original input, the hottest parts of the signal begin to “smear” and “ring.” If the level of the re-entrant signal reaches “unity,” then the ringing becomes continuous until we do something to reduce the gain. If the returning signal goes beyond unity gain, we get runaway feedback.

This is not fundamentally different from our delay FX unit. The signal output from the PA or monitor speakers takes some non-zero amount of time to get back into the microphone, just like the feedback to the delay takes a non-zero amount of time to return. We’re just not used to thinking of the microphone loop in that way. We don’t consciously set a delay time on the audio re-entering the mic, and we don’t intentionally set an amount of signal that we want to re-enter the capsule – we would, of course, prefer that ZERO signal re-entered the capsule.

And the “delay time” through the mic-loudspeaker loop is just naturally imposed on us. We don’t dial up “x number of milliseconds” on a display, or anything. However long it takes audio to find its way back through the inputs is however long it takes.
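As a back-of-envelope sketch (the distance here is made up, and ~343 m/s assumes room temperature), the loop’s “delay time” is just geometry:

```python
# The acoustic loop's "delay time" is imposed by the room: it's simply
# the loudspeaker-to-microphone distance over the speed of sound.
speed_of_sound = 343.0   # m/s, assumed room-temperature value
distance_m = 2.0         # hypothetical wedge-to-capsule distance
loop_delay_ms = distance_m / speed_of_sound * 1000.0
print(f"{loop_delay_ms:.1f} ms")   # about 5.8 ms for a 2 m path
```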

Even so, feedback through our mics is basically the same creature as our “hellacious storm” of echoes through a delay processor. The mic just squeals, howls, and bellows because of differences in overall gain at different frequencies. Those frequencies continue to echo – usually, so quickly that we don’t discern individual repeats – while the other frequencies die down. That’s why fighting feedback so often involves equalization: If we can selectively reduce the gain of the frequencies that are ringing, we can get their “re-entry level” down to the point where they don’t noticeably ring anymore. The echoes decay so far and so fast that we don’t notice them, and we say that the system has stabilized.
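The frequency-selective nature of the ringing can be sketched numerically. The loop gains below are entirely hypothetical, but they show why one frequency howls while the rest die off, and why a small EQ cut at the right spot stabilizes the system:

```python
import numpy as np

# Hypothetical per-round-trip loop gains for a mic/monitor path. Most
# frequencies lose level on every trip; one sits right at unity.
freqs_hz   = np.array([250.0, 1000.0, 2500.0, 4000.0])
loop_gains = np.array([0.6,   0.8,    1.0,    0.7])

trips = 20
levels_db = 20 * np.log10(loop_gains ** trips)
for f, db in zip(freqs_hz, levels_db):
    print(f"{f:6.0f} Hz after {trips} trips: {db:7.1f} dB")
# 2500 Hz is still at 0 dB (continuous ringing); everything else has
# decayed far below audibility. Cutting 2500 Hz so its loop gain drops
# to, say, 0.9 makes it decay too -- that's what the EQ is doing.
```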

All of this is yet another specific case where the patterns of audio behavior mirror and repeat themselves in places you might not expect.

As it turns out, you can put this to very powerful use.

The Application

As I discussed in “Transdimensional Noodle Baking,” we can do some very interesting things with audio when it comes to manipulating it in time. Making light “late” is a pretty unwieldy thing for people to do, but making audio late is almost trivial in comparison.

And making audio events late, or spreading them out in time, allows you to examine them more carefully.

Now, you might not associate careful examination with fighting feedback issues, but being able to slow things down is a big help when you’re trying to squeeze the maximum gain-before-feedback out of something like a monitor rig. It’s an especially big help when you’re like me – that is to say, NOT an audio ninja.

What I mean by not being an audio ninja is that I’m really quite poor at identifying frequencies. Those guys who can hear a frequency start to smear a bit, and instantly know which fader to grab on their graphic EQ? That’s not me. As such, I hate graphic EQs and avoid putting them into systems whenever possible. I suppose that I could dive into some ear-training exercises, but I just can’t seem to be bothered. I have other things to do. As such, I have to replace ability with effort and technology.

Now, couple another issue with that. The other issue is that the traditional method of “ringing out” a PA or monitor rig really isn’t that great.

Don’t get me wrong! Your average ringout technique is certainly useful. It’s a LOT better than nothing. Even so, the method is flawed.

The problem with a traditional ringout procedure is that it doesn’t always simulate all the variables that contribute to feedback. You can ring out a mic on deck, walk up, check it, and feel pretty good…right up until the performer asks for “more me,” and you get a high-pitched squeal as you roll the gain up beyond where you had it. The reason you didn’t find that high-pitched squeal during the ringout was because you didn’t have a person with their face parked in front of the mic. Humans are good absorbers, but we’re also partially reflective. Stick a person in front of the mic, and a certain, somewhat greater portion of the monitor’s output gets deflected back into the capsule.

You can definitely test for this problem if you have an assistant, or a remote for the console, but what if you have neither of those things? What if you’ve got some other weird, phantom ring that’s definitely there, and definitely annoying, but hard to pin down? It might be too quiet to easily catch on a regular RTA (Real Time Analyzer), and even if you can carry an RTA with you (if you have a smartphone, you can carry a basic analyzer with you everywhere – for free), you still might not be able to accurately whistle or sing the offending frequency while standing where you can easily read the display.

But what if you could spread out the ringing into a series of discrete echoes? What if you could visually record and inspect those echoes? You’d have a very powerful tuning tool at your disposal.

The Implementation

I admit, I’m pretty lucky. Everything I need to implement this super-nifty feedback finding tool lives inside my mixing console. For other folks, there’s going to be more “doing” involved. Nevertheless, you really only need to add two key things to your audio setup to have access to all this:

1) A digital delay that can pass all audio frequencies equally, is capable of long (1 second or more) delays, and can be run with no internal feedback.

2) A spectrograph that will show you a range of 10 seconds or more, and will also show you the frequency under a cursor that you can move around to different points of interest.

A spectrograph is a type of audio analysis system that is specifically meant to show frequency magnitude over a certain amount of time. This is similar to “waterfall” plots that show spectral decay, but a spectrograph is probably much easier to read for this application.

The delay is inserted in the audio path of the microphone, in such a way that the only signal audible in the path is the output of the delay. The delay time should be set to somewhere around 1.5 to 2 seconds, long enough to speak a complete phrase into the mic. The output of the signal path is otherwise routed to the PA or monitors as normal, and the spectrograph is hooked up so that it can directly (that is, via an electrical connection) “listen” to the signal path you’re testing. The spectrograph should be set up so that ambient noise is too low to be visible on the analysis – otherwise, the output will be harder to interpret.
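Here’s a rough Python sketch of what the spectrograph ends up showing (a synthetic signal stands in for the real mic loop, and the windowed-FFT “spectrograph” is deliberately crude): a slowly decaying “problem” frequency is the only thing left in the final frames.

```python
import numpy as np

fs = 48000
t = np.arange(4 * fs) / fs

# Synthetic "echo decay": a 2 kHz ring that dies slowly (the problem
# frequency) plus a 500 Hz component that decays quickly.
sig = (np.exp(-t / 2.0) * np.sin(2 * np.pi * 2000 * t)
       + np.exp(-t / 0.2) * np.sin(2 * np.pi * 500 * t))

# Crude spectrograph: magnitude spectra of successive 100 ms windows.
win = int(fs * 0.1)
frames = sig[: len(sig) // win * win].reshape(-1, win)
spectra = np.abs(np.fft.rfft(frames * np.hanning(win), axis=1))
freqs = np.fft.rfftfreq(win, 1 / fs)

# By the last frame, only the slow-decaying frequency is still visible:
peak_hz = freqs[np.argmax(spectra[-1])]
print(f"still ringing at ~{peak_hz:.0f} Hz")
```

On a real rig the decay constants come from the room and the EQ, not from a formula, but the read-out idea is the same: whatever survives into the late frames is what needs to be pulled down.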

To start, you apply a “best guess” amount of gain to the mic pre and monitor sends. You’ll need to wait several seconds to see if the system starts to ring out of control, because the delay is making everything “late.” If the system does start to ring, the problem frequencies should be very obvious on the spectrograph. Adjust the appropriate EQs accordingly, or pull the gain back a bit.

With the spectrograph still running, walk up to the mic. Stick your face right up on the mic, and clearly but quickly say, “Check, test, one, two.” (“Check, test, one, two” is a phrase that covers most of the audible frequency spectrum, and has consonant sounds that rely on high-mid and high frequency reproduction to sound good.)

DON’T FREAKIN’ MOVE.

See, what you’re effectively doing is finding the “hot spots” in the sound that’s re-entrant to the microphone, and if you move away from the mic you change where those hot spots are. So…

Stay put and listen. The first thing you’ll hear is the actual, unadulterated signal that went through the microphone and got delivered through the loudspeaker. The repeats you will hear subsequently are what is making it back into the microphone and getting re-amplified. If you hear the repeats getting more and more “odd” and “peaky” sounding, that’s actually good – it means that you’re finding problem areas.

After the echoes have decayed mostly into silence, or are just repeating and repeating with no end in sight, walk back to your spectrograph and freeze the display. If everything is set up correctly, you should be able to visually identify sounds that are repeating. The really nifty thing is that the problem areas will repeat more times than the non-problem areas. While other frequencies drop off into black (or whatever color is considered “below the scale” by your spectrograph), the ringy frequencies will still be visible.

You can now use the appropriate EQs to pull your problem frequencies down.

Keep iterating the procedure until you feel like you have a decent amount of monitor level. As much as possible, try to run the tests with gains and mix levels set as close as possible to what they’ll be for the show. Lots of open mics going to lots of different places will ring differently than a few mics only going to a single destination each.

Also, make sure to remember to disengage the delay, walk up on deck, and do a “sanity” check to make sure that everything you did was actually helpful.



If you’re having trouble visualizing this, here are some screenshots depicting one of my own trips through this process:

This spectrograph reading clearly shows some big problems in the low-mid area.

Some corrective EQ goes in, and I retest.

That’s better, but we’re not quite there.

More EQ.

That seems to have done the trick.



I can certainly recognize that this might be more involved than what some folks are prepared to do. I also have to acknowledge that this doesn’t work very well in a noisy environment.

Even so, turning feedback problems into a series of discrete, easily examined echoes has been quite a revelation for me. You might want to give it a try yourself.


Digital Audio – Bold Claims, Experimental Testing

Digital audio does benefit from higher bit-depth and sample rate – but only in terms of a better noise floor and higher frequency-capture bandwidth.


When I get into a dispute about how digital audio works, I’ll often be the guy making the bizarre and counter-intuitive statements:

“Bit depth affects the noise floor, not the ability to reproduce a wave-shape accurately.”

“Sample rates beyond 44.1 k don’t make material below 21 kHz more accurate.”

The thing is, I drop these bombshells without any experimental proof. It’s no wonder that I encounter a fair bit of pushback when I spout off about digital audio. The purpose of this article is to change that, because it simply isn’t fair for me to say something that runs counter to many people’s understanding…and then just walk away. Audio work, as artistic and subjective as it can be, is still governed by science – and good science demands experiments with reproducible results.

I Want You To Be Able To Participate

When I devised the tests that you’ll find in this article, one thing that I had in mind was that it should be easy for people to “try them at home.” For this reason, every experiment is conducted inside Reaper (a digital audio workstation), using audio processing that is available with the basic Reaper download.

Reaper isn’t free software, but you can download a completely un-crippled evaluation copy at reaper.fm. The 30-day trial period should be plenty of time for you to run these experiments yourself, and maybe even extend them.

To ensure that you can run everything, you will need to have audio hardware capable of a 96 kHz sampling rate. If you don’t, and you open one of the projects that specifies 96 kHz sampling, I’m not sure what will happen.

With the exception of Reaper itself, this ZIP file should contain everything you need to perform the experiments in this article.

IMPORTANT: You should NOT open any of these project files until you have turned the send level to your monitors or headphones down as far as possible. Otherwise, if I have forgotten to set the initial level of the master fader to “-inf,” you may get a VERY LOUD and unpleasant surprise. If you wreck your hearing or your gear in the process of experimenting, I am NOT responsible.

Weak Points In These Tests

The validity of these experiments is by no means unassailable. It’s very important that I say that. For instance, the measurement “devices” that will be used are not independent of Reaper. They run inside the software itself, and so are subject to both their own flaws and the flaws of the host software.

Because I did not want to burden you with having to provide external hardware and software for testing, the only appeal to external measurement that is available is listening with the human ear. Human hearing, as an objective measurement, is highly fallible. Our ability to hear is not necessarily consistent across individuals, and what we hear can be altered by all manner of environmental factors that are directly or indirectly related to the digital signals being presented.

I personally hold these experiments as strong proof of my assertions, but I have no illusions of them being incontestable.

Getting Started – Why We Won’t Be Using Many “Null” Tests

A very common procedure for testing digital audio assumptions is the “null” test. This experiment involves taking two signals, inverting the polarity of one signal, and then summing the signals. If the signals are a perfect match, then the resulting summation will be digital silence. If they are not a perfect match, then some sort of differential signal will remain.

Null tests are great for proving some things, and not so great for proving others. The problem with appealing to a null test is that ANYTHING which changes anything about the signal will cause a differential “remainder” to appear. Because of this, you can’t use null testing to completely prove that, say, a 24-bit file and a 16-bit file contain the same desired signal, independent of noise. You CAN prove that the difference between the files is located at some level below 0 dBFS (decibels referenced to full scale), and you can make inferences as to the audibility of that difference. Still, the total signal (including unwanted noise) contained within a 16-bit file really is different from the total signal contained within a 24-bit file, and that difference is non-trivial.
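A null test is simple to sketch. This is my own illustration (a synthetic sine, not one of the Reaper projects from the ZIP): a perfect copy nulls to digital silence, while even a tiny change – here, 16-bit-style rounding of one copy – leaves a measurable remainder.

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs
a = np.sin(2 * np.pi * 1000 * t)

# Perfect match: invert one copy's polarity and sum -> digital silence.
residual = a + (-a)
assert np.max(np.abs(residual)) == 0.0

# Any change at all leaves a differential signal. Here, one copy gets
# 16-bit-style rounding; the remainder is tiny but decidedly nonzero.
b = np.round(a * 32767) / 32767
peak_dbfs = 20 * np.log10(np.max(np.abs(a - b)))
print(f"residual peaks around {peak_dbfs:.0f} dBFS")   # roughly -96 dBFS
```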

There are other issues as well, which the first few experiments will reveal.

Open the project file that starts with “01,” verify that the master fader and your listening system are all the way down, and then begin playback. The analysis window should – within its own experimental error – show you a perfect, undistorted tone occurring at 10 kHz. As the file loops, there will be brief periods where noise becomes visible in the trace. Other than that, though, you should see something like this:

[Screenshot: test01]

When Reaper is playing a file that is at the same sample rate as the project, nothing odd happens.

Now, open the file that starts with “02.” When you begin playback, something strange occurs. The file being played is the same one that was just being used, but this time the project should invoke a 96 kHz sample rate. As a result, Reaper will attempt to resample the file on the fly. This resampling results in some artifacts that are difficult (if not entirely impossible) to hear, but easy to see in the analyzer.

[Screenshot: test02]

Reaper is incapable of realtime (or even non-realtime) resampling without artifacts, which means that we can’t use a null test to incontestably prove that a 10 kHz tone, sampled at 44.1 kHz, is exactly the same as a 10 kHz tone sampled at 96 kHz.

What is at least somewhat encouraging, though, is that the artifacts produced by Reaper’s resampling are consistent in time, and from channel to channel. Opening and playing the project starting with “03” confirms this, in a case where a null test is actually quite helpful. The same resampled file, played in two channels (with one channel polarity-inverted) creates a perfect null between itself and its counterpart.

[Screenshot: test03]

Test 04 demonstrates the problem that I talked about above. With one file played at the project’s “native” sample rate, and the other file being resampled, the inverted-polarity signal doesn’t null perfectly with the other channel. The differential signal IS a long way down, at about -90 dBFS. That’s probably impossible to hear under most normal circumstances, but it’s not digital silence.

[Screenshot: test04]

Does A Higher Sampling Rate Render A Particular Tone More Accurately?

With the above experiments out of the way, we can now turn our attention to one of the major questions regarding digital audio: Does a higher rate of sampling, and thus, a more finely spaced “time grid,” result in a more accurate rendition of the source material?

Hypothesis 1: A higher rate of sampling does result in a more accurate rendition of a particular tone, as long as the tone in question is a frequency unaffected by input or output filtering. This hypothesis assumes that digital audio is an EXPLICIT representation of the signal – that is, that each sample point is reproduced “as is,” and so more samples per unit time create a more faithful reproduction of the material.

Hypothesis 2: A higher rate of sampling does not result in a more accurate rendition of a particular tone, as long as the tone in question is a frequency unaffected by input or output filtering. This hypothesis assumes that digital audio is an IMPLICIT representation of the signal, where the sample data is used to mathematically reconstruct a perfect copy of the stored event.

The experiment begins with the “05” project. The project generates a 10 kHz tone, with a 44.1 kHz sampling rate. If you listen to the output (and aren’t clipping anything) you should hear what the analyzer displays: A perfect, 10 kHz sine wave with no audible distortion, harmonics, undertones, or anything else.

[Screenshot: test05]

Project “06” generates the same tone, but in the context of a 96 kHz sampling rate. The analyzer shifts the trace to the left, because 96 kHz sampling can accommodate a wider frequency range. However, the signal content stays the same: We have a perfect, 10 kHz tone with no audible artifacts, and nothing else visible on the analyzer (within experimental error).

[Screenshot: test06]

Project “07” also generates a 10 kHz tone, but it does so within a 22.05 kHz sampling rate. There is still no audible signal degradation, and the tone displays as “perfect” in the analyzer. The trace is shifted to the right, because 10 kHz is very near the limit of what 22.05 kHz sampling can handle.

[Screenshot: test07]

Conclusion: Hypothesis 2 is correct. At 22,050 samples per second, any given cycle of a 10 kHz wave only has about two samples available to represent the signal. At 44.1 kHz sampling, any given cycle still only has about four samples assigned. Even at 96 kHz, a 10 kHz wave has fewer than 10 samples assigned to it. If digital audio were an explicit representation of the wave, then such small numbers of samples being used to represent a signal should result in artifacts that are obvious either to the ear or to an analyzer. Any such artifacts are not observable via the above experiments, at any of the sampling rates used. The inference from this observation is that digital audio is an implicit representation of the signals being stored, and that sample rate does not affect the ability to accurately store information – as long as that information can be captured and stored at the sample rate in the first place.
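The “implicit representation” in Hypothesis 2 has concrete math behind it: the Whittaker–Shannon interpolation formula, which describes what an ideal reconstruction filter does. This brute-force sketch (nothing like a real converter’s implementation) rebuilds a 10 kHz tone from its 44.1 kHz samples at instants between the samples:

```python
import numpy as np

def sinc_reconstruct(samples, fs, t):
    """Whittaker-Shannon reconstruction: the sample values implicitly
    define one unique band-limited signal, rebuilt as a sum of sincs."""
    n = np.arange(len(samples))
    return np.sum(samples[None, :] *
                  np.sinc(fs * t[:, None] - n[None, :]), axis=1)

fs = 44100.0
f0 = 10000.0                       # only ~4.4 samples per cycle
n = np.arange(2000)
samples = np.sin(2 * np.pi * f0 * n / fs)

# Evaluate the rebuilt wave BETWEEN the original sample instants,
# staying away from the edges (truncating the sinc sum costs accuracy):
t = (np.arange(400) + 0.25 + 500) / fs
rebuilt = sinc_reconstruct(samples, fs, t)
truth = np.sin(2 * np.pi * f0 * t)
print("max reconstruction error:", np.max(np.abs(rebuilt - truth)))
```

Despite fewer than five samples per cycle, the rebuilt waveform matches the original sine to a small fraction of a percent – there’s no “stair-stepping” left once reconstruction is applied.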

Does A Higher Sample Rate Render Complex Material More Accurately?

Some people take major issue with the above experiment, because musical signals are not “naked” sine waves. Thus, we need an experiment which addresses the question of whether or not complex signals are represented more accurately by higher sample rates.

Hypothesis 1: A higher sampling rate does create a more faithful representation of complex waves, because complex waves are more difficult to represent than sine waves.

Hypothesis 2: A higher sampling rate does not create a more faithful representation of complex waves, because any complex wave is simply a number of sine waves modulating each other to varying degrees.

This test opens with the “08” project, which generates a complex sound at a 96 kHz sample rate. To make any artifacts easy to hear, the sound still uses pure tones, but the tones are spread out across the audible spectrum. Accordingly, the analyzer shows us 11 tones that read as “pure,” within experimental error. (Lower frequencies are less accurately depicted by the analyzer than high frequencies.)

[Screenshot: test08]

If we now load project “09,” we get a tone which is audibly and visibly the same, even though the project is now restricted to 44.1 kHz sampling. Although the analyzer’s trace has shifted to the right, we can still easily see 11, “pure” tones, free of artifacts beyond experimental error.

[Screenshot: test09]

Conclusion: Hypothesis 2 is correct. A complex signal was observed as being faithfully reproduced, even with half the sampling data being available. An inference that can be made from this observation is that, as long as the highest frequency in a signal can be faithfully represented by a sampling rate, any additional material of lower frequency can be represented with the same degree of faithfulness.

Do Higher Bit Depths Better Represent A Given Signal, Independent Of Noise?

This question is at the heart of current debates about bit depth in consumer formats. The issue is whether or not larger bit-depths (and the consequently larger file sizes) result in greater signal fidelity. This question is made more difficult because lower bit depths inevitably result in more noise. The presence of increasing noise makes a partial answer possible without experimentation: Yes, greater fidelity is afforded by higher bit-depth, because the noise level related to quantization error is inversely proportional to the number of bits available for quantization. The real question that remains is whether or not the signal, independent of the quantization noise, is more faithfully represented by having more bits available to assign sample values to.

Hypothesis 1: If noise is ignored, greater bit-depth results in greater accuracy. Again, the assumption is that digital is an explicit representation of the source material, and so more possible values per unit of voltage or pressure are advantageous.

Hypothesis 2: If noise is ignored, greater bit-depth does not result in greater accuracy. As before, the assumption is that we are using data in an implicit way, so as to reconstruct a signal at the output (and not directly represent it).

For this test, a 100 Hz tone was chosen. The reasoning behind this was because, at a constant 44.1 kHz sample rate, a single cycle of a 100 Hz tone has 441 sample values assigned to it. This relatively high number of sample positions should ensure that sample rate is not a factor in the signal being well represented, and so the accuracy of each sample value should be much closer to being an isolated variable in the experiment.
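Here’s a companion sketch of the quantization itself (my own code, using simple TPDF dither, which may differ from whatever dither Reaper applies): the tone stays a clean sine at every word length, and only the noise floor moves.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize_with_dither(x, bits):
    """Round to `bits` of resolution after adding TPDF dither, the
    standard trick for decorrelating quantization error from the signal."""
    steps = 2 ** (bits - 1)
    dither = (rng.uniform(-0.5, 0.5, len(x))
              + rng.uniform(-0.5, 0.5, len(x))) / steps
    return np.round((x + dither) * steps) / steps

fs = 44100
t = np.arange(fs) / fs
tone = 0.5 * np.sin(2 * np.pi * 100 * t)   # 441 samples per cycle

noise_floor_dbfs = {}
for bits in (24, 16, 8):
    err = quantize_with_dither(tone, bits) - tone
    noise_floor_dbfs[bits] = 20 * np.log10(np.sqrt(np.mean(err ** 2)))
    print(f"{bits:2d} bits: noise floor ~ {noise_floor_dbfs[bits]:.1f} dBFS")
# The tone itself stays a clean sine at every depth; only the noise floor
# moves, by roughly 6 dB per bit.
```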

Project “10” generates a 100 Hz tone, with 24 bits of resolution. Dither is applied. Within experimental error, the tone appears “pure” on the analyzer. Any noise is below the measurement floor of -144 dBFS. (This measurement floor is convenient, because any real chance of hearing the noise would require listening at a level where the tone was producing 144 dB SPL, which is above the threshold of pain for humans.)

[Screenshot: test10]

Project “11” generates the same tone, but at 16 bits. Noise is visible on the analyzer, but is inaudible when listening. No obvious harmonics or undertones are visible on the analyzer or audible to an observer.

[Screenshot: test11]

Project “12” restricts the signal to an 8-bit sample word. Noise is clearly visible on the analyzer, and easily audible to an observer. There are still no obvious harmonics or undertones.

[Screenshot: test12]

Project “13” offers only 4 bits of resolution. The noise is very prominent. The analyzer displays “spikes” which seem to suggest some kind of distortion, and an observer may hear something that sounds like harmonic distortion.

[Screenshot: test13]

Conclusion: Hypothesis 2 is partially correct and partially incorrect (but only in a functional sense). For bit depths likely to be encountered by most listeners, a greater number of possible sample values does not produce demonstrably less distortion when noise is ignored. A pure tone remains observationally pure, and perfectly represented. However, it is important to note that, at some point, the harmonic distortion caused by quantization error appears to be able to “defeat” the applied dither. Even so, the original tone does not become “stair-stepped” or “rough.” It does, however, have unwanted tones superimposed upon it.


No Sale

A Small Venue Survivalist Saturday Suggestion


Shopping for a personal vocal mic?

Forget about how it sounds in a pair of headphones.

Find some monitor wedges, and crank up the mic until it sounds like what you’ll need for your band. If the mic sounds bad, or you’re struggling with feedback, then it’s “no sale.”


The Uncanny Valley Of Freedom

If you want to experiment, small venues are a great place to be.


“The Uncanny Valley” is something that you might not be familiar with. It’s the idea that, as humans, we become more and more comfortable with creatures and machines as they look more and more like us – until a certain point. Rather suddenly, a machine or animal reaches a point where it’s very human-like, and yet not human enough. We unconsciously interpret the thing as being a physically or mentally damaged person, and so we’re repulsed by it. Climbing out of the uncanny valley requires that the object of our horror become so lifelike that we can’t readily tell the difference between it and an actual human.

The reason The Uncanny Valley has its name is that, when you graph the above phenomenon, the “person is comfortable with the object” line climbs steadily, and then drops into something of a chasm before recovering.

What I’ve started to notice over the past few years is that Uncanny Valleys are actually pretty easy to find. One of these valleys exists in the area of an audio human’s freedom to experiment and try new things. You might think that an audio tech’s ability to explore new techniques increases in a simple, linear fashion as venue size increases – but that’s not actually true.

There’s actually a huge downward slope in the “freedom” graph for sound persons, and it happens right after the curve leaves small-venue territory. As far as I can tell, anyway.

No Riders Means More Leeway

But why would this be the case? Why would small venues actually have the potential for more experimentation by a noise wrangler?

By my estimation, it all has to do with being in a spot that’s just right for doing things that are a bit out of the mainstream. To be in that spot, the venue has to be big enough for expanded system functionality and (or) advanced applications to actually matter, while being small enough that acceptability to the widest range of acts isn’t a major factor.

…yeah, that was kinda unwieldy. Let me make this a little more concrete.

Most mixing consoles that are widely used are either analog units – where the control surface is directly tied to the circuitry – or digital mixers that simulate this behavior to some degree. In defiance of these conventions, I use a semi-homebrew console that has no traditional control surface at all.

The industry-standard stage-vocal mic is the Shure SM58. I don’t particularly care for the SM58, and I’m not really wild about any other offering from Shure, and so my go-to mics for onstage singers are models from EV and Sennheiser. They’re mics that I’m interested in, at a personal level.

I could never get away with this if I worked at a mid-size venue.

The reason that I CAN experiment and do what I want in the small-venue environment is because I don’t have to conform to the expectations of acts that need wide compatibility and high predictability. As a guy that works primarily with local musicians, I don’t have to contend with concert riders that make demands for industry standard gear. I also don’t have to worry about ensuring the productivity of a large number of visiting audio humans.

Most of my locals don’t even HAVE a written list of production requirements. If they did, though, it would probably read:

“It would be great if you have two vocal mics available. If you have 3 vocal mics, that would be even better. We don’t care what the vocal mics are, as long as they basically sound like what they’re pointed at and don’t smell like the hippo enclosure at the zoo. If you’ve got mics, then we hope you also have a couple of speakers to point at the audience, and one or two to point back at us. Hopefully they’re okay to listen to and not ready to spontaneously combust. See you on Friday night.”

There’s a lot of freedom in there, because there aren’t a lot of specifics. When you get into bigger venues whose bread and butter is hosting regionals and smaller nationals, things suddenly change. The riders start to say things like:

“Must have 4 Beta-series Shure mics available.”

“Console must be functionally equivalent to a Soundcraft GB4-32.”

“No Peavey, Behringer, A&H, or Mackie.”

…and so on.

To do your job properly, you simply can’t be scratching your own itches and trying oddball solutions. You have to be ready to cater to a lot of people who need to walk up to something they can predict and be comfortable with immediately, and that means lots of “industry standard midgrade-pro” gear.

Let me be clear: There’s nothing wrong with working for a venue or provider whose target market is the regional and national act. It’s a career path that can be very exciting and enjoyable. It’s just good to know what the expectations are.

Climbing Out Of The Valley

The audio-dude freedom curve does come back up eventually. For proof of this, see Dave Rat. When you get to the level of Dave Rat and his peers, your freedom for experimentation returns. The reason for this is twofold:

1) You are now trusted enough as a craftsperson, and are regarded as enough of a leader for people to put their faith in your experiments. You’re hired specifically to be you, as opposed to being hired because you can make audio gear work. (There is a difference.)

2) You have the resources necessary to execute your experiments in a nicely crafted way, where the fit, finish, and performance are at the caliber necessary for the acts that hire you.

There’s a certain level that you can get to where YOU are the one who writes up all the requirements. When you get to that stage, you can have all the crazy-cool notions you want. It becomes your job to have those notions and bring them to fruition, and so your “freedom curve” climbs higher and higher.

It’s important to note that “the curve” isn’t the same for everyone. For in-house audio humans, the freedom curve drops off after the small-venue scale, and then never recovers. For guys and gals that provide complete rigs for acts, the curve can have all kinds of peaks and dips. For folks that mix with their own front ends on other people’s systems, the curve can be pretty flat.

The bottom line is to figure out what excites you as an audio tech, and find a groove that works for you. If you love to do your own thing, buck the trends, and push the envelope, you can have a lot of fun in small venues.


The Trouble With Mister Floyd

A proposal for a show that combines a live Pink Floyd tribute act with dance.


A while back, a friend paid me a compliment. She said that she would love to bring me out to where her ballet company performs, so that I could assist with the audio. She was sure it would be a great show.

(Thanks, Gina! I definitely think it would be cool to work with Ballet Ariel.)

One day, I was in desperate need of a project to do, and I hit upon the idea of melding a full-tilt, Rock and Roll presentation of Pink Floyd’s “The Wall” with a similarly full-tilt, dramatic interpretation of the story through ballet.

In the end, I couldn’t quite get the results I wanted by just using “The Wall,” so I pulled in other Pink Floyd songs to introduce themes that would motivate the characters in ways I found interesting. I’m still basically ripping off “The Wall” as it was presented in cinematic form, just with certain tweaks and a different ending.

Here’s what I ended up with. My guess is that the show could be pulled off for about $250,000 – anybody have any rich uncles who love Pink Floyd?

The Set

The idea for the set is to have a raised area for “Pink” and “Floyd” to perform, which is backed by a large platform for the band. The band area is roughly halfway enclosed by plexiglass sound barriers, which keep the band mostly visible while reducing their stage volume’s contribution to sound in the house.

It is critical that the stage be well braced. Resonance from the platforms could be a huge sonic problem otherwise. The band portion of the stage should be carpeted, to help absorb sound.

The cost to construct the stage will probably be around $10,000.


The Lighting Rig

The lighting rig is a huge piece of the show’s “soul,” and is also the show’s largest technological element. It is meant to be a primary driver of the show’s emotion and pacing, at a level equal to the physical movement by the performers and the music provided by the band.

A certain amount of restraint will be necessary, because the temptation will probably be to overuse the rig. We do want it to do some exciting things, and to do those things fairly frequently – but not so frequently that the audience simply filters the light show from their mind.

The experiment inherent in the rig is that there is no traditional front-lighting. Everything is from the side and/or above. This is something of a risk, but the risk can be mitigated by performing the show in a space where front-lighting is already installed. The key luminaires, FX devices, rigging, and video gear are as follows:

  • 2 Haze Generators
  • 4 Geyser RGB Fog FX Units
  • 42 SlimPAR 12 IRC Sidelights
  • 48 SlimBANK Over-side lights
  • 28 BeamBAR Beam FX Units
  • 24 Intimidator Duo Moving Heads
  • 1 Rear projection screen
  • 1 5000+ ANSI lumen projector
  • 95 4′ sections of Featherlite Truss
  • 20 Featherlite square truss connectors

These pieces, along with their associated control gear, cabling, and miscellaneous items, are estimated to cost $67,000.

The Complete Stage, With Figures For Scale

The FOH Audio Rig

Like the lighting rig, the audio system needs to be extensive enough to be “big,” but the temptation to overuse it will have to be resisted. To that end, it seems reasonable to set a goal of having 50% of the audience experience an average level of no more than 100 dB SPLZ, slow. (Ideally, 94 dB SPLZ, slow would be the upper limit.)
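
To put that 100 dB target in perspective: in a free field, SPL falls off roughly 6 dB per doubling of distance from the source, so a quick inverse-square sketch shows what a mid-house target implies closer to the stacks. The specific numbers below (a stack producing 112 dB SPL at 4 m) are purely hypothetical, just to illustrate the arithmetic:

```python
import math

def spl_at_distance(spl_ref_db, ref_m, dist_m):
    """Free-field (inverse-square) estimate: level drops ~6 dB per doubling of distance."""
    return spl_ref_db - 20 * math.log10(dist_m / ref_m)

# Hypothetical stack level: 112 dB SPL measured at 4 m (roughly the front row).
front_row = spl_at_distance(112, 4, 4)    # 112.0 dB at the reference distance
mid_house = spl_at_distance(112, 4, 16)   # ~100 dB at 16 m, right at the stated target
print(round(front_row, 1), round(mid_house, 1))
```

Real rooms add reflections and the rig adds fills, so this is only a back-of-napkin starting point, but it shows the target is achievable without running the mains anywhere near their limits.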

Specifics in terms of the audio rig are not as important as those of the lighting rig. Many different kinds of subwoofer could be suitable, for example. In general, the audio system should include:

  • 8, 18-inch subwoofers
    • 4 amplifiers
  • 8, 15-inch subwoofers
    • 4 amplifiers
  • 8, 15-inch LF full-range enclosures, biamped, for the main stacks.
    • 8 amplifiers
  • 8, 12-inch LF full-range enclosures, biamped, for the main stacks.
    • 8 amplifiers
  • 4, 15-inch LF full-range enclosures, single-amped, for surround FX.
    • 2 amplifiers
  • 10, 12-inch LF full-range enclosures, single-amped, for various fills.
    • 5 amplifiers

This FOH audio rig, along with its associated control and processing gear, could be built at a low-end cost of $25,000. The high-end cost, of course, is unlimited. The cost does increase considerably when a monitor rig, mics, and accessories are added, but these have been left out for brevity.
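
For what it’s worth, the arithmetic behind the $250,000 guess can be tallied quickly. Only the stage, lighting, and FOH audio figures are itemized above; how the remainder splits across cast, crew, venue, monitors, costumes, and so on is an assumption on my part, not something specified anywhere:

```python
# Only these three hardware figures are itemized in the proposal above.
line_items = {
    "stage construction": 10_000,
    "lighting rig": 67_000,
    "FOH audio rig (low-end estimate)": 25_000,
}
hardware_total = sum(line_items.values())

# Everything else (cast, crew, venue, monitor rig, costumes...) is assumed
# to absorb the rest of the $250,000 guess.
everything_else = 250_000 - hardware_total
print(hardware_total)    # 102000
print(everything_else)   # 148000
```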

The Cast

The overall imperative for any cast member is to be able, and indeed, delighted, to perform in a “full-tilt rock and roll” show with a live band, atmospheric effects like haze and fog, as well as lights that move and change rapidly at times.

Floyd: The main character.

It’s absolutely imperative that Floyd be VERY strong at duet and solo work, and also able to emote in ways that will seem very concrete and natural to the audience.

Mother: Floyd’s Mom

She will need to be a good soloist, but even more important is her ability to work well in a duet. Like Floyd, she needs to be able to project emotions in a very obvious and relatable way.

Daddy: Floyd’s Dad

The most important thing for this cast member is his ability to act in a vaguely menacing (but still palpably unsettling) way towards Floyd in several scenes. He only ever appears as a ghost. Some competence as a soloist and in duets will be required, but deep experience is probably not necessary – unless the choreographer decides to create some technically challenging moments for him, of course.

Pink: Floyd’s best friend.

He mostly needs to be able to be convincing as a young person who is “partners in crime” with Floyd. However, there is one key moment, late in the show, where he will need to deliver on some key emotions as a ghost. This may be a good part for a dancer who is just ready to transition into duets and solos.

The Groupies: Two “hangers-on” who get close to Floyd and Pink, briefly.

Both will need to be able to project an obvious (but NOT overdone) sort of “average intelligence rock girlfriend” persona. The twist is that, in one scene, they must be able to project a marked prowess as they dance, sensually, with Floyd and Pink. The Groupie who ends up with Floyd will need just that much more emotional ability than The Groupie who ends up with Pink.

The Company: Everybody else. Certain characters may be drawn from the company pool, if necessary.

The company plays concertgoers, teachers, schoolkids, regular folks, and so on. The cast members who are the strongest technical and emotional performers should be selected to fill the roles of Pink and Floyd’s teacher, the “undesirables” singled out during In The Flesh, and so on.

The Show

Note: This section is not consistent in terms of details. The really important things are specified, but there is quite a bit that will have to be determined later.

The audience is seated with the main curtain down. House-light flashes and aural tones should signal 5 minutes, 2 minutes, and 1 minute to show.

The show actually begins with the house-lights up. This is to promote safety for The Company, because they enter through the house. As they walk through the audience seating, they should chatter excitedly about being able to get into the “Pink and Floyd Concert,” amongst other things.

The house-lights dim slowly. The Company should offer the appropriate banter like, “Oh, wow!” and “It’s starting!”

The house goes black, as completely black as possible without compromising safety. The Company goes silent.

After a few seconds…

Prologue – In The Flesh?

The stage explodes with color, light, and sound. Pink and Floyd have started their show. The Company goes wild (silently, as they’re now in full “dance” mode) and goes up to the stage to give their rapt attention to Floyd.

[Important – after this point, unless otherwise stated, all cast members are always silent. References to saying things, shouting, narration, etc, are to be mimed or danced and not actually vocalized.]

Although what Floyd sings might be a little confusing, lyrically, The Company is completely enthralled and joyful.

At the ending and plane crash, The Company erupts in celebration…and then freezes at the climax of light and sound.

The screen reads: “Bomber Shot Down – Crew Missing, Presumed Dead”

The Thin Ice

Daddy, as a ghost, stands a bit upstage. Downstage, Mother comforts Floyd, “singing” the song to him.

The Company “ice skates” around Mother and Floyd. When the music rises, Floyd tries to move away from Mother and interact with the “skaters,” but Mother, frightened, clings to him.

Another Brick In The Wall, Part 1

Mother and The Company exit the stage. Floyd attempts to reach Daddy, but he keeps retreating from Floyd’s touch.

The Happiest Days Of Our Lives

The screen reads: “School begins at 8:00 sharp! Tardiness will not be tolerated.”

Floyd finds Pink, and they “go hide” somewhere to have an illicit smoke. They are, of course, found by their teacher, who “shouts” at them to “STAND STILL LADDIE!”

The teacher catches Pink, but Floyd gets away and comes downstage to “narrate” to the audience.

As the music rises, The Company (some as teachers, some as students) enter.

Another Brick In The Wall, Part 2

This entire scene is Pink, Floyd, and the students having a passive-aggressive battle. The teachers should turn their backs to give the students the opportunity to “shout.” (“HEY TEACHER! LEAVE THEM KIDS ALONE!”) The students should be just barely restrained when the teachers are looking at them.

As the scene closes, Floyd goes home and goes to bed. There is silence. Floyd falls asleep, and then Daddy appears as a ghost.

Floyd jolts upright.

Welcome To The Machine

Daddy shows Floyd a vision of a possible future life. In this life, everyone has been “good girls and boys,” and are now productive (but somewhat lifeless) workers in a factory. The work is boring and mechanical.

The screen reads: “Work begins at 8:00 sharp. Late arrivals must be pre-approved with form 86-T, and you must contact your supervisor, undersupervisor, and supersupervisor three weeks in advance.”

There are blasts of steam (actually fog) throughout the scene.

As Daddy “talks” to Floyd, he should whisper in his ears, move around him in an almost predatory fashion, and invade Floyd’s personal space often. However, at no point should Daddy and Floyd actually touch.

There should be a sense of mounting horror (on Floyd’s part) at the prospect of being put to work in the factory.

Time

Suddenly, the clocks strike. Horrified, Floyd watches as the factory workers fall over, lifeless.

The factory workers slowly rise, and help Daddy by acting out his “narration” of the song. At first, they move as carefree youths, but then seem to panic as the guitar solo comes in. An unseen terror is chasing them.

At the mention of the sun, they run out of energy and start collapsing. Less and less able to move. They seem to be dying off. Things seem to be falling out of their hands.

As the song ends, Daddy walks away.

Mother

Floyd wakes up, and finds his mother for support.

Mother tries to reassure Floyd. At first she seems successful, but as her parts of the song progress, it’s made clear that all she’s capable of is clinging to Floyd, preventing him from getting out of her sight.

At the end of the song, Floyd becomes abruptly repulsed. He runs off.

On The Run

Mother pursues Floyd, but can’t seem to catch up to him. Floyd links up with Pink, and they start to “write songs,” and “play shows” to The Company. Whenever Mother gets close, Pink, Floyd, and The Company always move on, looking cheery.

At the “boom,” the screen reads: “Pink and Floyd Song a Smash Hit!” The foggers let out a large, sustained blast.

Learning To Fly

Pink and Floyd perform their hit song to their adoring fans (The Company). Floyd seems free and happy, and The Company is ecstatic. The only one not seeming to enjoy things is Mother, who is unintentionally overwhelmed by the crowd and unable to get close to her son. She is essentially invisible to everyone.

(Mother’s part shouldn’t be too big in this scene – it has a dampening effect on the emotional tone, and this scene is meant to be one of the few really happy ones.)

Have A Cigar

The screen reads: “Pink and Floyd Continue Topping The Charts!”

Pink and Floyd are being wined, dined, congratulated, back-slapped, and buttered up by The Company as recording industry execs. In terms of formation, there are three areas:

The center, where Pink and Floyd spend most of their time.

The inner circle, where the execs are fawning over Pink and Floyd.

The outer circle, where the execs “talk” amongst themselves, count their money, and anticipate a very profitable future.

At the end of the scene, Pink and Floyd walk off, and are met by The Groupies.

Money

Pink and Floyd take The Groupies out for a night of partying. This scene should be very unambiguously about conspicuous consumption, and (at least) heavily imply that the characters fall into using alcohol and hard drugs. These are young people caught up in an imaginary-yet-real world where they can have anything they want. Pink should be noticeably more affected by his drinking and drug use than Floyd. The Groupies mostly act as starry-eyed hangers-on.

Young Lust

The screen reads: “Pink and Floyd – Are These The Girlfriends? Exclusive Photos Inside!”

The Groupies definitely want to hang on to Pink and Floyd, and so now they reveal their true prowess – sensuality. This scene should provide a great opportunity for The Groupies to show off movement that is an order of magnitude more fluid and technically impressive than what they’ve done before.

At the guitar solo, Pink and his Groupie run off, leaving Floyd and his Groupie to do a short, but intense duet.

At the end of the scene, the song ends and silence falls. Suddenly, the phone rings. Floyd picks it up, and reacts with disbelief, then shock and grief. He and his Groupie exit.

The screen reads: “Pink Dead in Auto Accident. Substance Abuse a Factor?”

The Great Gig In The Sky

Pink starts out bewildered. The Groupie is lying lifeless nearby. As the vocal part comes in, The Company enters as angels. They “wake” The Groupie, and escort both her and Pink to heaven. They both look apprehensive as they arrive, but it’s soon clear that they’re both pardoned. They go off happily, trailed by The Company.

Wish You Were Here

Floyd is alone and dejected. All he has to express is his grief in a lengthy solo. He is alternately lit dimly and in silhouette.

At the end of the song, Floyd sits down and switches on a TV. He becomes cold and distant.

One Of My Turns

The Groupie enters, and, oblivious to Floyd’s feelings at the start, does her routine of being fantastically impressed by the house. She tries to get Floyd’s attention, but becomes crestfallen as all her strategies fail.

Floyd begins his part in a self-absorbed way, seemingly oblivious to The Groupie. However, as the song’s intensity rises, he begins interacting with her.

The key thing for this part is that The Groupie does feel threatened by Floyd, but not in the same way as Floyd was threatened by Daddy earlier. Floyd is not a creeping, psychological menace. In fact, he doesn’t mean to threaten her at all – he’s dangerous because he’s suddenly gone manic.

At the end of the scene, The Groupie runs off in terror.

Don’t Leave Me Now

Floyd is now alone, and not by choice. The Company enters, but stands in a semi-circle upstage, their backs turned to Floyd.

At the guitar solo, The Company suddenly turns and tries to get Floyd’s attention. They are now fans, people who desperately want attention from the semi-mythical figure they’ve constructed for themselves.

At the end of the scene, Floyd becomes enraged.

Another Brick In The Wall, Part 3

Floyd angrily chases The Company away, rejecting everyone and everything. He is, briefly, a very intentional menace.

Goodbye Cruel World

Floyd, with very muted movement, expresses his alienation.

Sorrow

Floyd spends this entire scene down center, brightly lit, with his head down. He moves very little throughout the lengthy song.

In turns, everyone who Floyd has hurt enters and “has their say.” Mother first, then The Groupie, then members of The Company as fans.

Daddy enters as a ghost, and moves close to Floyd accusingly. Pink also enters as a ghost, and is clearly unhappy with what’s going on. It should be clear that he’s not really upset with Floyd. Concerned would be more accurate.

Near the end of the scene, one or two members of The Company (as recording execs) come on stage and force Floyd to his feet. They are demanding that he keep playing.

The screen reads: “Can The Show Go On?”

In The Flesh

Floyd is still alienated, but gets onstage to do the show. The fans are less animated this time. They’re even a little confused – especially as Floyd says “Pink isn’t well, he’s stayed back at the hotel.” (Pink is unambiguously dead, and they know it – but Floyd is in denial. If this can’t be readily expressed through movement, that’s fine. Sometimes a few unanswered questions in an audience’s mind are perfectly acceptable.)

As Floyd starts suggesting that people who don’t fit be put “up against the wall,” the fans very quickly (and frighteningly) go along with him. They reject, threaten, and throw out anyone that Floyd points out.

At the climax of the song (which is the end), Floyd runs off by himself. He doses himself with drugs, and falls asleep in the silence.

Two Suns In The Sunset

Pink enters as a ghost. He presents Floyd with a vision of the future, much like Daddy did. In this future, the UK is destroyed in a nuclear attack. Pink is much more sympathetic than Daddy, although Floyd is a little frightened of him.

Although the presentation of this piece is concrete, the intention is that the nuclear attack is a metaphor for the self-destructive behavior that Pink and Floyd have engaged in. The trouble is that expressing a complex and non-concrete concept like that is probably impossible, so we just have to leave things ambiguous.

Comfortably Numb

The screen reads: “The Show Must Go On”

In the silence, Floyd wakes up. He doses himself again, and then, in a daze, goes onstage to do a show.

The Company enters as fans. They are facing Floyd, and interact with him, but they are strangely distant and move slowly. Pink and Daddy enter as ghosts and observe. Daddy is disapproving. Pink is worried.

Near the end, Mother enters. She has finally found Floyd, and manages to get close. As the song ends, in the silence, Floyd waits for the crowd’s adulation. However, he hasn’t done what they want. They become angry, and try to get their hands on him. Daddy is egging them on.

Run Like Hell

Floyd is now the target of The Company. Daddy chases Pink away. Mother is pushed down and out of the way. Floyd’s star has now fallen completely, and the crowd wants vengeance.

Floyd is finally cornered, and roughly pulled to center stage.

The Trial

Daddy stands up-center. The Company enters and flanks him. They join hands, and “speak” with one voice during the trial, becoming a composite character. They slowly close in on Floyd.

The final pronouncement of the court belongs to Daddy. He suddenly separates from The Company and gets right in Floyd’s face. At the order to “Tear down the wall!” The Company sets upon Floyd.

There are flashes as the explosion sounds.

Outside The Wall

As the lights come up we see Floyd cowering. Downstage, we see Mother, who has fallen. Daddy’s ghost enters, and angrily tries to get his hands on Floyd. Before he can get there, though, Pink’s ghost heads him off. Pink gently beckons to Daddy, and they move upstage right.

The Groupie enters, and tries to help Mother to her feet. Floyd looks up and sees them. He approaches, and takes their hands.

A change comes over Daddy, and he follows Pink into a strong light coming from offstage up-right.

Fade to blackout.

Bows

The band begins playing an instrumental version of “In The Flesh?” They vamp the middle part as necessary to extend the piece.

If at all possible, each member of the cast should be given the opportunity to bow as an individual. After the cast has finished, they part to allow a good look at the band, who takes their bow by way of playing the ending to the song.

Immediate blackout – main curtain, house lights.


Mixing A Live Album: Bass

Making the bass guitar work is as much about the midrange as the low end.



Experiments Are For Discovery

Don’t do experiments to save money. Do experiments to learn things and get maximum ownership.


If you yourself aren’t crazy enough to want to build your own amplifier, or construct your own loudspeaker, I’m betting that you know somebody who does. Hey, you know me, and I built my own digital mixing console. That’s pretty “out there” for most audio folks.

The reason people get these bats in their belfries is that building things is fascinating. You get to figure out what actually makes audio gear work – you get a hands-on trip through the actual tradeoffs that industry designers have to handle.

That’s the point of doing experiments: Learning something.

I’ve seen something unfortunate surrounding these endeavors, though. There’s a tendency for people to get into these projects solely for the purpose of trying to save money. When they discover (in one way or another) that doing an experiment is highly likely to actually cost more than buying a finished product, they bail out. Any excitement they had is completely wrecked.

It’s sad, really.

Makin’ Sawdust

It’s pretty easy for folks to get taken in by websites promising that you can build a superior loudspeaker for less than what it costs to buy one outright. The problem with the assertion is that it forces a lot of assumptions onto both the builder and the project:

  • It assumes that the builder knows how to use the necessary tools.
  • It assumes that the builder has the tools handy, or can obtain them for little cost.
  • It assumes that the tradeoffs made in the project design to allow for inexpensive components are well-understood by the builder.

On that last point, there’s one site for speaker enclosure plans that repeatedly touts how the designs outperform far more expensive models. The thing is that the supplied designs DO outperform their commercial counterparts – but only in one area. The DIY speakers are great if you want to get the maximum per-watt output available from inexpensive drivers, but not so great if you want deep LF (low frequency) extension and consistent overall response.
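
That tradeoff isn’t unique to one site’s designs – it’s baked into loudspeaker physics, often summarized as Hofmann’s Iron Law: efficiency, deep bass extension, and small box size – pick two. One way to see it is through the standard Thiele-Small reference-efficiency approximation, which rises with the cube of the driver’s resonant frequency. The driver parameters below are hypothetical, chosen only to illustrate the relationship:

```python
import math

def reference_efficiency(fs_hz, vas_liters, qes):
    """Thiele-Small half-space reference efficiency (standard approximation, Vas in liters)."""
    return 9.64e-10 * fs_hz**3 * vas_liters / qes

def sensitivity_db(eta0):
    """Approximate 1 W / 1 m sensitivity from reference efficiency."""
    return 112 + 10 * math.log10(eta0)

# Hypothetical drivers, identical except for resonant frequency:
# a high-fs pro-sound woofer vs. a low-fs "deep bass" woofer.
pa_woofer = sensitivity_db(reference_efficiency(55, 150, 0.35))   # ~100.4 dB / 1 W / 1 m
deep_woofer = sensitivity_db(reference_efficiency(25, 150, 0.35)) # ~90.1 dB / 1 W / 1 m
print(round(pa_woofer, 1), round(deep_woofer, 1))
```

The driver built for deep extension gives up roughly 10 dB of output per watt – exactly the “maximum output from inexpensive drivers, but shallow LF” tradeoff described above.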

Once you couple the above with having to buy your own tools and deal with your own construction mistakes, you’ve pretty much burned any monetary advantage you might have had. There’s also the whole problem of how amplification and processing costs have dropped like a rock…as long as those components have been engineered into the actual speaker enclosure. If not, you have to provide that externally, which further drives up the cost of your homebrew project.

Now, sure, you might be able to find a sweet-spot where you can build a box with higher-end parts at a good price. If you’re not trying to maximize profit, and you’re willing to ignore the effective cost of your own labor, then you just might manage to save a few bucks in some way. It’s all just a game of moving the numbers around, though, where you can conveniently sweep certain costs under the perceptual rug.

That’s why “doing it cheaper” shouldn’t be the goal. The goal should be to have fun, learn something about woodworking, get a feel for what works and doesn’t in loudspeaker design, and ultimately have something in your hands where you can say, “I MADE this.” That’s where the real value is – and that value is far in excess of the few bucks you might save if you get lucky.

Console Yourself

Get it? “Console” yourself? It’s a play on…anyway.

In a purely “cash” sense, I did effectively save some money by building my own mixing system. To get fundamentally equivalent functionality and I/O, I would have had to spend about $1000 more than what the build cost. However, it’s important to point out that other, no less important investments had already been made.

I already knew about the construction, care, and feeding of DAW computers.

I already knew enough about computers in general to be my own tech support.

I already knew enough about signal flow that I could effectively set up my own console configuration.

I already had enough overall experience to know what I wanted, and be able to actually leverage the advantages of the system.

I already had a spare console if something went wrong.

The value of all that goes beyond $1000. Several times over.

Again, though, that’s not the point of building your own digital console. The point is that you get to have a rig that’s truly yours – that you’re responsible for. You get to pick the compromises that you’re willing or not willing to make. You get to be the “proud parent.” You get to discover what it’s actually like to run a system with a custom front-end.

There was a time when pro-audio gear was something that you essentially had to construct yourself. It wasn’t a commoditized industry like it is now. These days, though, economies of scale make it vastly cheaper to buy things off the shelf when compared to doing your own build.

As a result, you shouldn’t do DIY experiments to save money. You should do them because they’re awesome.