Tag Archives: Experimentation

Buzzkill

Ridding yourself of hum and buzz is like all other troubleshooting: You have to isolate the problem to fix it.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.


Not all hums and buzzes are equally bad. Honeybees hum and buzz, but they’re super-helpful creatures that are generally interested in being left alone and making honey. Wasps, like the one pictured above, are aggressive jerks.

Of course, this site isn’t about insects. It’s about audio, where hum and buzz mean problems. Unwanted noise. Blech.

I recently got an email from a friend who wanted to know how to de-buzzify (I just made that word up) a powered mixer. When you mercilessly distill what I told him, you come up with a basic truth that covers all of troubleshooting:

The probability of an effective fix for a problem is directly proportional to your ability to isolate the problem.

Solitude

The importance of finding the exact location of a fault is something that I don’t believe I can overemphasize. It’s the key to all the problem-solving I’ve ever had to do. It doesn’t matter if the problem is related to audio signal flow, car trouble, or computer programming; if you can actually nail down the location of the problem, you’ve got a real shot at an effective (and elegant) fix.

The reverse is also true. The less able you are to pinpoint your conundrum’s place of residence, the more likely you are to end up doing surgery with a sledgehammer. If you can’t zero-in on a root cause, you end up “fixing” a certain amount of things that aren’t actually being troublesome. The good news is that you can usually take an iterative approach. All problems begin with “this system isn’t working as I expected,” which is a completely non-specific view – but they don’t have to end there. The key is to progressively determine whether each interrelated part of the system is contributing to the issue or not. There are lots of ways to do this, but all the possible methods are essentially an expression of one question:

“Is the output of this part of the system what I expect it to be?”
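That one question is really just a loop over the parts of the system. Here's a minimal sketch in Python; the stage names and the checker function are hypothetical, purely for illustration.

```python
def isolate_fault(stages, output_ok):
    """Walk the chain, asking at each stage: 'Is the output of this
    part of the system what I expect it to be?'"""
    for stage in stages:
        if not output_ok(stage):
            return stage           # first stage whose output is wrong
    return None                    # everything checked out

# Hypothetical signal chain; pretend the cable is the bad actor.
chain = ["mic", "cable", "preamp", "eq", "amp", "speaker"]
print(isolate_fault(chain, lambda stage: stage != "cable"))  # → cable
```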

So…here’s a way to apply this to buzz and hum problems.

Desperately Seeking Silence

Talking in depth about the exact electrical whys and wherefores surrounding strange and unwanted noises is a little bit beyond my experience. At a general level, though, the terminology of “ground loop” provides a major clue. Voltage that should be taking a direct path to ground is instead taking a “looping” or “circuitous” path. A common cause of this is equipment receiving mains (“wall”) power from two different circuits, where each path to mains ground has a significantly different impedance. There is now a voltage potential between the two pieces of gear.
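A back-of-the-envelope sketch of that "unequal impedance to ground" situation, using nothing fancier than Ohm's law. All the numbers here are invented for illustration, not measurements of real gear.

```python
def chassis_potential(leakage_current_a, ground_impedance_ohms):
    """Ohm's law: the chassis floats above true earth by V = I * Z."""
    return leakage_current_a * ground_impedance_ohms

# Two devices with equal leakage current, but very different paths to earth.
device_a = chassis_potential(0.001, 0.5)    # 1 mA through a 0.5-ohm path
device_b = chassis_potential(0.001, 10.0)   # 1 mA through a 10-ohm path

# The difference in chassis potential appears across the audio connection.
loop_voltage = abs(device_a - device_b)
print(f"{loop_voltage * 1000:.1f} mV driving the hum loop")  # → 9.5 mV
```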

Bzzzzzzzz….

You can also have a situation where two devices’ audio grounds are interconnected such that there is a potential between the two devices.

Hmmmmmmzzzzzzzz…

Anyway.

The first thing to do is to decide what piece of equipment you’re testing against. Maybe it’s a mixing console. Maybe it’s an amplifier. Whatever it is, you are asking the question from before:

“Is the output of this part of the system what I expect it to be?”

Or, more specifically…

“I expect this device’s output to be quiet, unless an audio signal is present. Is that the case?”

To answer that question, you need isolation.


WARNING: At NO point should you do anything to disconnect the mains-power/ safety ground from your equipment. That ground is there to prevent you from dying if the equipment chassis should become energized. In fact, as a start, try to verify that the mains-power sockets you are using actually DO provide a connection to “earth.” If they don’t, stop using them until they’re fixed. You may even find that your noise problem goes away.


To get isolation, start by disconnecting as much as you possibly can from the DUT (the Device Under Test). Of course, you’ve got to have some kind of way to monitor the output, so that might mean that you can’t disconnect everything. As much as possible, try to ensure that all mains-power grounds offer the same impedance – if it must stay connected, and it requires mains power, get all the power to connect to the same socket. A multi-outlet power tap can come in handy for this.

Is the output what you expect?

If yes, then something which was connected to your DUT’s input has a good chance of being the problem. At this point, if possible, treat each potential culprit as a secondary DUT in turn. If feasible, connect each suspect directly to your monitoring solution. If the ground loop manifests itself, and the suspect device requires mains power, try getting power from the same tap that the primary DUT is on. If the loop goes away, you’ve established that the two devices in play were likely having an “unequal impedance to ground” problem. If the loop stays in effect, you can jump back up to the beginning of this process and try again, but with the gear you had just plugged in as the new, primary DUT. You can keep doing this, “moving up the stack” of things to test until you finally isolate the piece of gear that’s being evil. (IMPORTANT: Any piece of the chain could be your problem source. This includes cables. You may need to pack a lunch if you have a lot of potential loop-causers to go through.)

If you can’t get the buzz to manifest when adding things back one at a time, then you might have a multi-device interaction. If possible, work through every possible combination of input connections until you get your noise to happen.

But what if the output on the original DUT was NOT what you expected, even with everything pulled off the output side?

At that point, you know that an input device isn’t the source of your trouble with this particular DUT. This is good – your problem is becoming isolated to a smaller and smaller pool of possibilities.

Try to find an alternate way to connect to your monitoring solution, like a different cable. If the problem goes away, that locates the cable as the menace. If you’re switching the connection, and the noise remains with no audio path, then the monitoring system has the problem and you need to restart with a new DUT. (If you’ve got a mixer connected to an amp and a speaker, and a ground loop stays audible when the mixer-to-amp connection is broken, then the amp is your noise source.)

If you’ve tried all that and you still have the buzz, it’s time to try a different circuit. Get as far away from the original mains-power socket as you can, and reproduce the minimal setup. If the ground-loop goes away, then you may have a site-wiring issue that’s local to the original socket(s). If the problem doesn’t go away, it’s time to take a field-trip to another building. It’s possible to have a site-wide electrical problem.

If the loop still won’t resolve, it’s very likely that your DUT has an internal fault that needs attention. Whether that means repair or replace is an exercise left to the reader.
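The whole procedure condenses into a decision sketch. Each flag below stands in for the result of one of the physical tests described above; the names are mine, not any kind of standard.

```python
def diagnose(quiet_when_isolated, quiet_on_shared_power,
             quiet_with_new_cable, quiet_on_distant_circuit):
    """Each flag is the outcome of a physical isolation test on the DUT."""
    if quiet_when_isolated:
        # An input device is the likely culprit.
        if quiet_on_shared_power:
            return "unequal impedance to ground between devices"
        return "re-run the procedure with the suspect gear as the new DUT"
    if quiet_with_new_cable:
        return "bad cable"
    if quiet_on_distant_circuit:
        return "site wiring local to the original socket(s)"
    return "internal fault in the DUT: repair or replace"

print(diagnose(False, False, True, False))  # → bad cable
```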

Hopefully, you don’t get to that point – but you won’t figure out if you ARE at that point unless you can isolate your problem.


The Sublime Beauty Of Cheap, Old, Dinged-Up Gear

Some things can be used, and used hard, without worry.



I really do think that classy gear is a good idea in the general case. I think it sends a very important signal when a band walks into a room, and their overwhelming impression is that of equipment which is well-maintained and worth a couple of dollars. When a room is filled with boxes and bits that all look like they’re about to fail, the gigs in that room stand a good chance of being trouble-filled. In that case, musician anxiety is completely justified.

In the past, I have made updates to gear almost purely for the sake of “politics.” I don’t regret it.

At the same time, though, “new n’ shiny” equipment isn’t a guarantee of success. I’ve had new gear that developed problems very quickly, but more than that, new and spendy gear tends to make you ginger (in the timid sense). You can end up being so worried about something getting scratched up or de-spec’d that you forget the purpose of the device: It’s there to be used.

And that’s where the sublime beauty of inexpensive, well-worn equipment comes in. You’ve found a hidden gem, used it successfully in the past, will probably keep using it successfully in the future, and you can even abuse it a bit in the name of experimentation.

Case Study: Regular Kick Mics Are Boring

I’ve used spendy kick mics, and I’ve used cheap kick mics. They’ve all sounded pretty okay. The spendy ones are pre-tuned to sound more impressive, and that’s cool enough.

…but, you know, I find the whole “kick mic” thing to be kinda boring. It’s all just a bunch of iteration or imitation on making a large-diaphragm dynamic. Different mics do, of course, exhibit different flavors, but there’s a point where it all seems pretty generic. It doesn’t help that folks are so “conditioned” by that generic-ness – that is, if it doesn’t LOOK like a kick mic, it can’t be any good. (And, if it doesn’t COST like a kick mic, it can’t be any good.)

I once had a player inquire after a transducer I used on his bass drum. He seemed pretty interested in it based on how it worked during the show, and wanted to know how expensive it was. I told him, and he was totally turned OFF…by the mic NOT costing $200. He stated, “I’m only interested in expensive mics,” and in my head, I’m going, “Why? This one did a good enough job that you started asking questions about it. Doesn’t that tell you something?”

Anyway, the homogeneity of contemporary kick mic-ery is just getting dull for me. It’s like how modern car manufacturers are terrified to “color outside the lines” with any consumer model.

To get un-bored, I’ve started doing things that expose the greatness of “cheap, old, and dinged up.” In the past, I tried (and generally enjoyed) using a Behringer ECM8000 for bass drum duty. Mine was from back when they were only $40, had been used quite a bit, and had been dropped a few times. This was not a pristine, hardwood-cased, ultra-precision measurement mic that would be a real bear to replace. It was a knock-around unit that I had gotten my money out of, so if my experiment killed it I would not be enduring a tragedy.

And it really worked. Its small diameter made it easy to maneuver inside kick ports, and its long body made it easy to get a good ways inside those same kick ports. The omni pattern had its downsides, certainly. Getting the drum to the point of being “stupid loud” in FOH or the drumfill wasn’t going to happen, but that’s pretty rare for me. At an academic level, I’m sure the tiny diaphragm had no trouble reacting quickly to transients, although it’s not like I noticed anything dramatic. Mostly, the mic “sounded like a drum to me” without having to be exactly like every other bass-drum mic you’re likely to find. The point was to see if it could work, and it definitely did.

My current “thing” bears a certain similarity, only on the other end of the condenser spectrum. I have an old, very beat-up MXL 990 LDC, which I got when they were $20 cheaper. I thought to myself, “I wonder what happens if I get a bar-towel and toss this in a kick drum?” What I found out is that it works very nicely. The mic does seem to lightly distort, but the distortion is sorta nifty. I’m also freed from being required to use a stand. The 990 might die from this someday, but it’s held up well so far. Plus, again, it was cheap, already well used, and definitely not in pristine condition. I don’t have to worry about it.

Inoculation Against Worry Makes You Nicer

Obviously, an unworried relationship with your gear is good for you, but it’s also good in a political sense. Consternation over having a precious and unblemished item potentially damaged can make you jumpy and unpleasant to be around. There are folks who are so touchy about their rigs that you wonder how they can get any work done.

Of course, an overall attitude of “this stuff is meant to be used” is needed. Live-audio is a rough and tumble affair, and some things that you’ve invested in just aren’t going to make it out alive. Knowing this about everything, from the really expensive bits to the $20 mic that’s surprisingly brilliant, helps you to maintain perspective and calmness.

The thing with affordable equipment (that you’ve managed to hold on to and really use) is that it feeds this attitude. You don’t have to panic about it being scuffed up, dropped, misplaced, or finally going out with a bang. As such, you can be calm with people. You don’t have to jump down someone’s throat if they’re careless, or if there’s a genuine accident. It’s easy to see that the stuff is just stuff, and while recklessness isn’t a great idea, everything that has a beginning also has an end. If you got your money out of a piece of equipment, you can just shrug and say that it had a good life.

Have some nice gear around, especially for the purpose of public-relations, but don’t forget to keep some toys that you can “leave out in the rain.” Those can be the most fun.


Why I Think Steam Machines Are Cool

My audio-human mind races when thinking of high-performance, compact, affordable machines.



“Wait,” you’re thinking, “I thought this site was about live shows. Steam Machines are gaming devices.”

You’re right about that. What you have to remember (or just become aware of), is that I have a strange sort of DIY streak. It’s why I assembled my own live-audio console from “off the shelf” products. I really, really, REALLY like the idea of doing powerful things with concert sound via unorthodox means. An unorthodox idea that keeps bubbling up in my head is that of a hyper-customizable, hyper-expandable audio mix rig. It could be pretty much any size a user wanted, using pretty much whatever audio hardware a user wanted, and grow as needed. Also, it wouldn’t be too expensive. (About $900 per 16X16 channel “block.”)

When I look at the basic idea of the Valve Steam Machine, I see a device that has the potential to be a core part of the implementation.

But let’s be careful: I’m not saying that Steam Machines can do what I want right now. I’m not saying that there aren’t major pitfalls, or even dealbreakers to be encountered. I fully expect that there are enormous problems to solve. Just the question of how each machine’s audio processing could be conveniently user-controlled is definitely non-trivial. I’m just saying that a possibility is there.

Why is that possibility there?

The Box Is Prebuilt

The thing with prebuilt devices is that it’s easier for them to be small. A manufacturer building a large number of units can get custom parts that support a compact form factor, put it all together, and then ship it to you.

Of course, when it comes to PCs, you can certainly assemble a small-box rig by hand. However, when we’re talking about using multiple machines, the appeal of hand-building each box drops rapidly. So, it’s a pretty nice idea that a compact but high(er) performance computing device can be had with little effort.

The System Is Meant For Gaming

Gaming might seem like mere frivolity, but these days, it’s a high-performance activity. We normally think of that high-performance as being located primarily in the graphics subsystem – and for good reason. However, I also think a game-capable system could be great for audio. I have this notion because games are so reliant on audio behaving well.

Take a game like a modern shooter. A lot of stuff is going on: Enemy AI, calculation of where bullets should go, tracking of who’s shooting at who, collision detection, input management, the knowing of where all the players are and where they’re going, and so on. Along with that, the sound has to work correctly. When anybody pulls a trigger, a sound with appropriate gain and filtering has to play. That sound also has to play at exactly the right time. It’s not enough for it to just happen arbitrarily after the “calling” event occurs. Well-timed sounds have to play for almost anything that happens. A player walks around, or a projectile strikes an object, or a vehicle moves, or a player contacts some physics-enabled entity, or…

You get the idea.

My notion is that, if the hardware and OS of a Steam Machine are already geared specifically to make this kind of thing happen, then getting pro-audio to work similarly isn’t a totally alien application. It might not be directly supported, of course, but at least the basic device itself isn’t in the way.

The System Is Customizable

My understanding of Steam Machines is that they’re meant to be pretty open and “user hackable.” This excites me because of the potential for re-purposing. Maybe an off-the-shelf Steam Machine doesn’t play nicely with pro-audio hardware? Okay…maybe there’s a way to take the box’s good foundation and rebuild the upper layers. In theory, a whole other OS could be runnable on one of these computers, and a troublesome piece of hardware might be replaceable (or just plain removable).


I acknowledge that all of this is off in the “weird and theoretical” range. My wider goal in pointing it out is to say that, sometimes, you can grab a thing that was intended for a different application and put it to work on an interesting task. The most necessary component seems to be imagination.


Where’s Your Data?

I don’t think audio-humans are skeptical enough.



If I’m going to editorialize on this, I first need to be clear about one thing: I’m not against certain things being taken on faith. There are plenty of assumptions in my life that can’t be empirically tested. I don’t have a problem with that in any way. I subscribe quite strongly to that old saw:

You ARE entitled to your opinion. You ARE NOT entitled to your own set of “facts.”

But, of course, that means that I subscribe to both sides of it. As I’ve gotten farther and farther along in the show-production craft, especially the audio part, I’ve gotten more and more dismayed with how opinion is used in place of fact. I’ve found myself getting more and more “riled” with discussions where all kinds of assertions are used as conversational currency, unbacked by any visible, objective defense. People claim something, and I want to shout, “Where’s your data, dude? Back that up. Defend your answer!”

I would say that part of the problem lies in how we describe the job. We have (or at least had) the tendency to say, “It’s a mix of art and science.” Unfortunately, my impression is that this has come to be a sort of handwaving of the science part. “Oh…the nuts and bolts of how things work aren’t all that important. If you’re pleased with the results, then you’re okay.” While this is a fair statement on the grounds of having reached a workable endpoint through unorthodox or uneducated means, I worry about the disservice it does to the craft when it’s overapplied.

To be brutally frank, I wish the “mix of art and science” thing would go away. I would replace it with, “What we’re doing is science in the service of art.”

Everything that an audio human does or encounters is precipitated by physics – and not “exotic” physics, either. We’re talking about Newtonian interactions and well-understood electronics here, not quantum entanglement, subatomic particles, and speeds approaching that of light. The processes that cause sound stuff to happen are entirely understandable, wieldable, and measurable by ordinary humans – and this means that audio is not any sort of arcane magic. A show’s audio coming off well or poorly always has a logical explanation, even if that explanation is obscure at the time.

I Should Be Able To Measure It

Here’s where the rubber truly meets the road on all this.

There seems to be a very small number of audio humans who are willing to do any actual science. That is to say, investigating something in such a way as to get objective, quantitative data. This causes huge problems with troubleshooting, consulting, and system building. All manner of rabbit trails may be followed while trying to fix something, and all manner of moneys are spent in the process, but the problem stays un-fixed. Our enormous pool of myth, legend, and hearsay seems to be great for swatting at symptoms, but it’s not so hot for tracking down the root cause of what’s ailing us.

Part of our problem – I include myself because I AM susceptible – is that listening is easy and measuring is hard. Or, rather, scientific measuring is hard.

Listening tests of all kinds are ubiquitous in this business. They’re easy to do, because they aren’t demanding in terms of setup or parameter control. You try to get your levels matched, set up some fast signal switching, maybe (if you’re very lucky) make it all double-blind so that nobody knows what switch setting corresponds to a particular signal, and go for it.

Direct observation via the senses has been used in science for a long time. It’s not that it’s completely invalid. It’s just that it has problems. The biggest problem is that our senses are interpreted through our brains, an organ which develops strong biases and filters information so that we don’t die. The next problem is that the experimental parameter control actually tends to be quite shoddy. In the worst cases, you get people claiming that, say, console A has a better sound than console B. But…they heard console A in one place, with one band, and console B in a totally different place with a totally different band. There’s no meaningful comparison, because the devices under test AND the test signals were different.

As a result, listening tests produce all kinds of impressions that aren’t actually helpful. Heck, we don’t even know what “sounds better” means. For this person over here, it means lots of high-frequency information. For some other person, it means a slight bass boost. This guy wants a touch of distortion that emphasizes the even-numbered harmonics. That gal wants a device that resembles a “straight wire” as much as possible. Nobody can even agree on what they like! You can’t actually get a rigorous comparison out of that sort of thing.

The flipside is, if we can actually hear it, we should be able to measure it. If a given input signal actually sounds different when listened to through different signal paths, then those signal paths MUST have different transfer functions. A measurement transducer that meets or exceeds the bandwidth and transient response of a human ear should be able to detect that output signal reliably. (A measurement mic that, at the very least, significantly exceeds the bandwidth of human hearing is only about $700.)
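That claim can be shown in miniature: feed an identical input to two signal paths, and if the paths really behave differently, the outputs must differ measurably. The two "paths" below are toy functions I made up (a straight wire and a small gain error), not models of any real device.

```python
import math

def tone(freq, n=256, rate=48000):
    """A pure sine: the identical input signal fed to both paths."""
    return [math.sin(2 * math.pi * freq * i / rate) for i in range(n)]

def path_a(samples):
    """Stand-in for one signal path: a 'straight wire'."""
    return samples

def path_b(samples):
    """Stand-in for a second path with a small gain error."""
    return [0.95 * s for s in samples]

test_signal = tone(1000)
residual = max(abs(a - b) for a, b in zip(path_a(test_signal), path_b(test_signal)))
print(f"max sample difference: {residual:.3f}")  # nonzero: measurably different
```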

As I said, measuring – real measuring – is hard. If the analysis rig is set up incorrectly, we get unusable results, and it’s frighteningly easy to screw up an experimental procedure. Also, we have to be very, very specific about what we’re trying to measure. We have to start with an input signal that is EXACTLY the same for all measurements. None of this “we’ll set up the drums in this room, play them, then tear them down and set them up in this other room,” can be tolerated as valid. Then, we have to make every other parameter agree for each device being tested. No fair running one preamp closer to clipping than the other! (For example.)

Question Everything

So…what to do now?

If I had to propose an initial solution to the problems I see (which may not be seen by others, because this is my own opinion – oh, the IRONY), I would NOT say that the solution is for everyone to graph everything. I don’t see that as being necessary. What I DO see as being necessary is for more production craftspersons to embrace their inner skeptic. The less coherent explanation that’s attached to an assertion, the more we should doubt that assertion. We can even develop a “hierarchy of dubiousness.”

If something can be backed up with an actual experiment that produces quantitative data, that something is probably true until disproved by someone else running the same experiment. Failure to disclose the experimental procedure makes the measurement suspect, however – how exactly did they arrive at the conclusion that the loudspeaker will tolerate 1 kW of continuous input? No details? Hmmm…

If a statement is made and backed up with an accepted scientific model, the statement is probably true…but should be examined to make sure the model was applied correctly. There are lots of people who know audio words, but not what those words really mean. Also, the model might change, though that’s unlikely in basic physics.

Experience and anecdotes (“I heard this thing, and I liked it better”) are individually valid, but only in the very limited context of the person relating them. A large set of similar experiences across a diverse range of people expands the validity of the declaration, however.

You get the idea.

The point is that a growing lack of desire to just accept any old statement about audio will, hopefully, start to weed out some of the mythological monsters that periodically stomp through the production-tech village. If the myths can’t propagate, they stand a chance of dying off. Maybe. A guy can hope.

So, question your peers. Question yourself. Especially if there’s a problem, and the proposed fix involves a significant amount of money, question the fix.

A group of us were once troubleshooting an issue. A producer wasn’t liking the sound quality he was getting from his mic. The discussion quickly turned to preamps, and whether he should save up to buy a whole new audio interface for his computer. It finally dawned on me that we hadn’t bothered to ask anything about how he was using the mic, and when I did ask, he stated that he was standing several feet from the unit. If that’s not a recipe for sound that can be described as “thin,” I don’t know what is. His problem had everything to do with the acoustic physics of using a microphone, and nothing substantial AT ALL to do with the preamp he was using.

A little bit of critical thinking can save you a good pile of cash, it would seem.

(By the way, I am biased like MAD against the crowd that craves expensive mic pres, so be aware of that when I’m making assertions. Just to be fair. Question everything. Question EVERYTHING. Ask where the data is. Verify.)


The Puddle Mountain Arc

If you have the space and technical flexibility, a semicircular stage layout can be pretty neat.



Just last week, my regular gig hosted a show for The Puddle Mountain Ramblers. During the show advance, Amanda proposed an idea.

What if we set up the stage so that the layout was an arc, instead of a straight line?

I thought that was a pretty fine idea, so we went with it. The way it all came together was that fiddle, bass, and banjo were on the stage-right side, the drums were upstage center, and guitar plus another fiddle were on the stage-left side. The setup seemed very effective overall.

Why?

Visibility, Separation, and Such

The main reason for the setup was really to facilitate communication. PMR is a band that derives a good deal of comfort and confidence from the members being able to see what each other player is doing. Also, it’s just generally nice to be able to make eye contact with someone to let them know that it’s their turn for a solo. Setting up in an arc makes this much easier, because you can get essentially unobstructed sightlines from each player to every other player. An added benefit is that all the players are closer together on average, which reduces the difficulty of reading faces, identifying hand movements, and keeping time. (An arc is geometrically more compact than a line. In a linear configuration, the farthest that any two players can be from each other is the entire length of the line. Bend that same line into a circle or circle-segment, and the farthest that any two players can be from each other is the line length divided by pi. That’s a pretty significant “packing.”)
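The packing claim in that parenthetical checks out numerically. The stage-front length here is a made-up figure, just to put units on it.

```python
import math

line_length = 12.0                       # hypothetical meters of stage front
linear_max = line_length                 # farthest pair on a straight line
circle_max = line_length / math.pi       # farthest pair when bent into a circle
                                         # (circumference L -> diameter L / pi)

print(f"{linear_max:.1f} m apart vs {circle_max:.1f} m apart")  # 12.0 vs 3.8
```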

Another benefit of the configuration is (potentially) reduced drum bleed. In a traditional setup, an upstage drumkit is pretty much “firing” into the most sensitive part of all the vocal and instrument mics’ pickup patterns. In an arc layout, with the drums at the center, the direct sound from the kit enters any particular mic at some significant off-axis angle. This bleed reduction can also extend to other vocals and instruments, especially because the mics can easily be at angles greater than 90 degrees relative to other sources.

Of course, it’s important to note that – especially with wide-pattern mics, like SM58s and other cardioids – compacting the band may undo the “off-axis benefit” significantly. This is especially true for bleed from whatever source is commonly at the midpoint of the arc’s circumference, like a drumkit probably would be. For the best chance of bleed reduction, you need tighter-patterned transducers, like an ND767a, or Beta 58, or e845, or OM2, or [insert your favorite, selectively patterned mic here]. Even so, the folks closest to, and at the smallest angle from the drumkit should be the strongest singers in the ensemble, and their miked instruments should be the most able to compete with whatever is loud on deck.
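To put rough numbers on the off-axis argument, here's the textbook cardioid polar equation, sensitivity = (1 + cos θ) / 2, expressed in dB. Real mics deviate from the ideal pattern (especially across frequency), so treat these as ballpark figures only.

```python
import math

def cardioid_db(angle_deg):
    """Ideal cardioid sensitivity (1 + cos theta) / 2, in dB re: on-axis."""
    s = (1 + math.cos(math.radians(angle_deg))) / 2
    return 20 * math.log10(s) if s > 0 else float("-inf")

# On-axis is 0 dB; 90 degrees off is only about -6 dB for a cardioid,
# which is why a compact arc can undo much of the angular benefit.
for angle in (0, 45, 90, 135):
    print(f"{angle:>3} deg: {cardioid_db(angle):6.1f} dB")
```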

A third “bit of nifty” that comes from an arc setup is that of reduced acoustical crosstalk from monitor wedge to monitor wedge. With all the wedges firing away from each other, instead of in parallel paths, the tendency for any one performer to hear the wedges adjacent to them is reduced. Each monitor mix therefore has more separation than it otherwise might, which can keep things “cleaner” overall. It may also reduce gain-hungry volume wars on the deck.

Downsides

There are some caveats to putting a band on stage in a circle-segment.

The first thing to be aware of is that you tend to lose “down center” as a focal point. It’s not that you can’t put someone in there, but you have to realize that the person you’ve put down-center will no longer get the visibility and communication benefits of the arc. Also, a down-center wedge will probably be very audible to the performers standing up-center from that monitor, so you’ll have to take that into account.

The more isolated that monitor-mix sources become from one another, the more important it becomes that each monitor mix can be customized for individual performers. If you were on in-ears, for instance (the ultimate in isolated monitor feeds), separate mixes for each individual would be almost – if not entirely – mandatory. Increasing the mix-to-mix acoustical isolation pushes you towards that kind of situation. It’s not that shared mixes can’t be done in an arc, it’s just that folks have to be inclined to agree and cooperate.

A corollary to the above is that show complexity actually tends to go up. More monitor mixes mean more to manage, and an arc layout requires more thinking and cable management than a linear setup. You have to have time for a real soundcheck, with careful tweaking of mixes. Throw-n-go really isn’t what you want to do when attempting this kind of layout, especially if you haven’t done it before.

Another factor to consider is that “backline” shouldn’t actually be in the back…unless you can afford to waste the space inside the arc. If at all possible, amps and instrument processing setups should utilize the empty space in front of everybody, and “fire” towards the performers (unless it’s absolutely necessary for the amps to combine with or replace the acoustical output of the PA).

If these considerations are factors you can manage, then an arc setup may be a pretty cool thing to try. For some bands, it can help “square the circle” of how to arrange the stage for the best sonic and logistical results, even if pulling it all off isn’t quite as easy as “pi.”

I’ll stop now.


Gain Vs. Bandwidth

Some preamps do pass audio differently when cranked up, but you probably don’t need to worry about it.


gainvbandwidth

After my article on making monitor mixes not suck, a fellow audio human asked me to address the issue of how bandwidth changes with gain. Op-amps, which are very common in modern audio gear, have a finite bandwidth. This bandwidth decreases as gain increases.

A real question, then, is how much an audio tech needs to worry about this issue – especially in the context of microphone preamps. Mic pres have to apply a great deal of gain to signals, because microphones don’t exactly spit out a ton of voltage. Your average dynamic vocal mic probably delivers something like two millivolts RMS with 94 dB SPL occurring at the capsule. Getting that level up to 0 dBu (0.775 V RMS) is a nearly 52 decibel proposition.

That’s not a trivial amount of gain, at least as far as audio is concerned. For instance, if we could get 52 dB of gain over the 1 watt @ 1 meter sensitivity of a 95 dB SPL loudspeaker, that speaker could produce 147 dB SPL! (That’s REALLY LOUD, if you didn’t know.) While there are loudspeaker systems that can produce that kind of final output, they have to start at a much higher sensitivity. A Danley Labs J3-64 is claimed to be able to produce 150 dB SPL continuous, but its sensitivity is 112 dB. The “gain beyond sensitivity” is a mere 38 dB. (“Mere” when compared to what mic pres can do. Getting 38 dB above sensitivity is definitely “varsity level” performance for a loudspeaker.)
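You can double-check all of that arithmetic in a few lines. This is just decibel math, nothing specific to any piece of gear:

```python
import math

# Gain needed to bring ~2 mV RMS (a dynamic mic at 94 dB SPL) up to
# 0 dBu, which is 0.775 V RMS:
mic_gain_db = 20 * math.log10(0.775 / 0.002)
print(round(mic_gain_db, 1))  # 51.8

# That same gain applied past a 95 dB SPL (1 W @ 1 m) loudspeaker:
speaker_sensitivity = 95.0
print(round(speaker_sensitivity + mic_gain_db))  # 147

# The Danley figures: 150 dB SPL continuous over a 112 dB sensitivity
print(150 - 112)  # 38
```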

Anyway.

In the face of a question like this, my response of late has become that we should try to measure something. There are, of course, gajillions of people willing to offer anecdotes, theories, and mythology, but I don’t find that to be very satisfying. I much prefer to actually see real data from real testing.

As such, I decided to grab a couple of mic-pre examples, and “put them on the bench.”

Setting Up The Experiment

The first thing I do is to set up a DAW session with the interface running at 96 kHz. I also set up an analyzer session with the same sampling rate.

The purpose of this is to – hopefully – be able to clearly “see” beyond the audible spectrum. Although my opinion is that audio humans don’t have to worry about anything beyond the audible range (20 Hz to 20 kHz) in practice, part of this experiment’s purpose is to figure out how close to audible signals any particular bandwidth issue gets. Even if frequencies we can hear remain unaffected, it’s still good to have as complete a picture as possible.

The next thing I do is generate a precisely 15-second-long sample of pink noise. The point of having a sample of precisely known length is to make compensating for time delays easier. The choice of 15 seconds is just to have a reasonably long “loop” for the analyzer to chew on.
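If you don't have a generator handy, a pink-noise sample like this is easy to cook up. Here's one common approach – a sketch, not the exact tool I used: shape white noise by 1/sqrt(f) in the frequency domain, so power falls off at 3 dB per octave:

```python
import numpy as np

def pink_noise(duration_s, fs, seed=0):
    """FFT-shaped pink (1/f) noise, normalized to a -1..1 peak."""
    n = int(duration_s * fs)
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(n)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    freqs[0] = freqs[1]            # avoid divide-by-zero at DC
    spectrum /= np.sqrt(freqs)     # 1/sqrt(f) amplitude = 1/f power
    pink = np.fft.irfft(spectrum, n)
    return pink / np.max(np.abs(pink))

noise = pink_noise(15.0, 96000)
print(len(noise))  # 1440000 samples: exactly 15 s at 96 kHz
```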

At this point, it’s time to take a look at how the analyzer handles a transfer-function calculation where I know that both “sides” are the same. The trace I get is a touch jumpy, so I bump up the averaging to “2.” This settles the trace nicely.

steadytrace

At this point, it’s time to connect the noise to a mic pre. I do this from my Fast Track interface’s headphone amp through an active DI, because I want to be absolutely sure that I’m ultimately running through high-gain circuitry. Yes – it’s true that the DI might corrupt the measurement to some degree, but I think I have a partial solution: My reference point for all measurements will be the test noise played through the DI, with the mic pre at the minimum gain setting. Each test will use the original noise, so that any “error factors” associated with the signal path under test don’t stack up.

Preamp 1: Fast Track Ultra 8R

My M-Audio Fast Track Ultra 8R is what I would call a reasonably solid piece of prosumer equipment. My guess is that the preamps in the box are basically decent pieces of engineering.

The first thing to do is to get my low-gain reference. I set the noise output level so that the input through the preamp registers about -20 dBFS RMS, and record the result. I’m now ready to proceed further.

My next order of business is to put my test noise through at a higher gain. I set the gain knob to the middle of its travel, which is about +10 dB of gain from the lowest setting. I roll down the level going to the pre to compensate.

The next test will be with the gain at the “three-o-clock” position. This is about +25 dB of gain from the reference.

The final test is at maximum gain. This causes an issue, because so much gain is applied that the output compensation is extreme. In the end, I opt to find a compromise by engaging the mic preamp’s pad. This allows me to keep the rest of the gain structure in a basically “sane” area.

At this point, I check the alignment on the recorded signals. What’s rather odd is that the signal recorded through the pad seems to have arrived a few samples earlier than the signals recorded straight through. (This is curious, because I would assume that a pad would INCREASE group delay rather than reduce it.)

timingwithpad

No matter what’s going on, though, the fix is as simple as nudging the max-gain measurement over by 10 samples (about 0.1 ms at 96 kHz).
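If you're curious how an offset like that gets found in the first place, peak cross-correlation is the usual trick. This little sketch (not my actual tooling) recovers a known 10-sample shift:

```python
import numpy as np

def find_offset(reference, delayed):
    """Sample offset of `delayed` relative to `reference`,
    located by the peak of their cross-correlation."""
    corr = np.correlate(delayed, reference, mode="full")
    return int(np.argmax(corr)) - (len(reference) - 1)

rng = np.random.default_rng(2)
x = rng.standard_normal(4096)
# Simulate the signal arriving 10 samples late:
shifted = np.concatenate([np.zeros(10), x])[:4096]
offset = find_offset(x, shifted)
print(offset)  # 10
print(offset / 96000 * 1000)  # ~0.104 ms at 96 kHz
```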

Preamp 2: SL2442-FX

The first round of testing involved a preamp that I expect is pretty good. A more interesting case comes about when we test a device with a not-so-stellar reputation: A mic pre from an inexpensive Behringer console. My old Behringer SL2442-FX cost only a bit more than the Fast Track did, and the Behringer has a LOT more analog circuitry in it (as far as I can tell). My guess is that if I want to test a not-too-great mic pre, the Behringer is a good candidate.

(To be fair, in the situations where I’ve used the Behringer, I haven’t been unhappy with the preamps at all.)

I use the same DI to get signal to the Behringer. On the output side, I tap the console’s insert point so as to avoid the rest of the internal signal path. I want to test the preamp, not the whole console. The insert connection is fed to the line input of the Fast Track, which appears to bypass the high-gain circuitry in the Fast Track mic pre.

In basically the same way as I did the Fast Track, I get a reference by putting the test noise through the preamp at its lowest setting, aiming for an RMS level of -20 dBFS. My next test is with the gain at “half travel,” which on the Behringer is a difference of about 18 dB. The “three-o-clock” position on the Behringer preamp corresponds to a gain of about +30 dB from the lowest point. The final test is, as you might expect, the Behringer at maximum gain.

A quick check of the files reveals that everything appears to be perfectly time-aligned across all tests.

The Traces

Getting audio into the analyzer is as simple as running the Fast Track’s headphone out back to the first two inputs. Before I really get going, though, I need to verify that I’m measuring what I think I’m measuring. To do that, I mute the test noise, and push up the levels on the Fast Track Reference and Fast Track +10 dB tracks. I pan them out so that the reference is hard left, and the +10 dB measurement is hard right. I then put a very obvious EQ on the +10 measurement:

testeq

If the test rig is set up correctly, I should see a transfer function with a similarly obvious curve. It appears that my setup is correct:

eqverification

Now it’s time to actually look at things. The Fast Track +10 test shows a curve that’s basically flat, albeit with some jumpiness below 100 Hz. (The jumpiness makes me expect that what we’re seeing is “experimental error” of some kind.)

fasttrack+10transfer

The +25 dB test looks very much the same.

fasttrack+25transfer

The maximum gain test is also about as flat as flat can be.

fasttrackmaxtransfer

I was, quite frankly, surprised by this. I thought I would see something happening, even if it was above 20 kHz. I decide to insert an EQ to see if the test system is just blind to what’s going on above 20 kHz, despite my best efforts. The answer, to my relief, is that if the test were actually missing something outside the audible range, we would see it:

fasttrackfakedrolloff

So – in the case of the Fast Track, we can conclude that any gain vs. bandwidth issues are taking place far beyond the audible range. They’re certainly going on above the measurable range.

What about the Behringer?

The +18 dB transfer function looks like this, compared to the minimum gain reference:

beh+18transfer

What about the +30 dB test?

Maybe I missed something similar on the Fast Track, but the Behringer does seem to be noisier up beyond 30 kHz. The level isn’t actually dropping off, though. It’s possible that the phase gets “weird” up there when the Behringer is run hard – even so, you can’t hear 30 kHz, so this shouldn’t be a problem in real life.

beh+30transfer

Now, for the Behringer max gain trace.

This is interesting indeed. The Behringer’s trace is now visibly curved, with some apparent dropoff below 50 Hz. On the high side, the Behringer is dropping down after 20 kHz, with obvious noise and what I think is some pretty gnarly distortion at around 37 kHz. The trace also shows a bit of noise overall, indicating that the Behringer pre isn’t too quiet when “cranked.”

behmaxtransfer

At the same time, though, it has to be acknowledged that these deficiencies are easy to see when graphed, but probably hard to actually hear. The distortion is occurring far above what humans can perceive, and picking out a loss of 0.3 dB from 10 kHz to 20 kHz isn’t something you’re likely to do casually. A small dip under 50 Hz is fixable with EQ (if you can even hear that), and let’s be honest – how often do you actually have to run a preamp at full throttle? I haven’t had to in ages.

Conclusion

This is not a be-all, end-all test. It was definitely informal, and two different preamps are not exactly a large sample size. I’m sure that my methodology could be tweaked to be more pure. At the very least, getting precisely comparable gain values between preamps would be a better bit of science.

At the same time, though, I think these results suggest that losing sleep over gain vs. bandwidth isn’t worthwhile. A good, yet not-at-all-boutique preamp run at full throttle was essentially laser flat “from DC to dog whistles.” The el-cheapo preamp looked a little scary when running at maximum gain, but that’s the key – it LOOKED scary. That the graphed issues would actually cause a problem with a show seems unlikely to me, and again, there’s the whole question of whether or not you actually have to run the preamp wide open on a regular basis.

If I had my guess, I’d say that gain vs. bandwidth is worth being aware of at an academic level, but not something to obsess about in the field.


If It Doesn’t Work, I Don’t Want To Do It

Not doing things that are pointless seems like an obvious idea, but…



This is going to sound off-topic, but be assured that you haven’t wandered onto the wrong site.

I promise.

Just hear me out. It’s going to take a bit, but I think you’ll get it by the end.

**********

I used to have a day-job at an SEO (Search Engine Optimization) company. If you don’t know what SEO is, the name might lead you to believe that it’s all about making search engines work better. It isn’t. SEO should really be called “Optimizing Websites FOR Search Engines,” but I guess OWFSE wasn’t as catchy as SEO. It’s the business of figuring out what helps websites turn up earlier in search results, and then doing those things.

It’s probably one of the most bull[censored] businesses on the entire planet, as far as I can tell.

Anyway.

Things started out well, but after just a few months I realized that our product was crap. (Not to put too fine a point on it.) It wasn’t that anyone in the company wanted to produce crap and sell it. Pretty much everybody that I worked with was a “stand up” sort of person. You know – decent folks who wanted to do right by other folks.

The product was crap because the company’s business model was constrained such that we couldn’t do things for our customers that would actually matter. Our customers needed websites and marketing campaigns that set them apart from the crowd and made spending money with them as easy as possible. Those things are spendy, and require lots of time to implement well. The business model we were constrained to was “cheap and quick” – which we could have gotten away with if it were still the time before the dotcom bubble popped. Unfortunately, the bubble had exploded into a slimy mess about 12 years earlier.

So, our product was crap. I spent most of my time at the company participating in the making of crap. When I truly realized just how much crap was involved, things got relatively awful and I planned my escape. (It was even worse because a number of us had ideas for fixes, ideas that were supported by our own management. However, our parent company had no real interest in letting us “pivot,” and that was that.)

But I learned a lot, and there were bright spots. One of the brightest spots was working with a product manager who was impervious to industry stupidity, had an analytical and reasonable mind, and who once uttered a sentence which has become a catchphrase for me:

“If it doesn’t work, I don’t want to do it.”

Is that not one of the most refreshing things you’ve ever heard? Seriously, it’s beautiful. Even with all the crap that was produced at that company, that phrase saved me from wading through some of the worst of it.

…and for any industry that suffers from an abundance of dung excreted from male cows, horses, or other work animals, it’s probably the thing that most needs to be said.

…and when it comes to dung, muck, crap, turds, manure, or just plain ca-ca, the music business is at least chest-deep. Heck, we might even be submerged, with the marketing and promo end of the industry about ten feet down. We need a flotation device, and being able to say “If it doesn’t work, I don’t want to do it,” is at least as good as a pair of water-wings.

The thing is, we’re reluctant to say (and embrace) something so honest, so brutally gentle and edifice-detonatingly kind.

We’ve Got To Do Stuff! Even If It’s Stupid!

I think this problem is probably at its worst in the US, although my guess is that it’s somehow rooted in the European cultures that form most of America’s behavioral bedrock. There’s this unspoken notion (that nobody would openly admit to embracing, even though we constantly embrace it by reflex) that the raw time and effort expended on something is what matters.

I’ll say that again.

We unconsciously believe that the raw time and effort expended on an endeavor is what matters.

We say that we love results, and we kinda do, but what we WORSHIP is effort – or the illusion thereof. The doing of stuff. The act of “being at work.”

In comparison, it barely matters if the end results are good for us, or anyone else. We tolerate the wasting of life, and the erosion of souls, and all manner of Sisyphean rock-pushing and sand-shoveling, because WE PUNCHED THE CLOCK TODAY, DANGIT!

If you need proof of this, look at what has become a defining factor in the ideological rock-throwing that is currently occurring in our culture. Notice a pattern? It’s all about work, and who’s doing enough of it. It’s figuring out how some people are better than other people, because of how much effort they supposedly expend. The guy who sits at the office for 12 hours a day is superior to you, you who only spend 8 hours a day in that cube. If you want to be the most important person in this culture, you need to be an active-duty Marine with two full-time jobs, who is going to college and raising three children by themselves. Your entire existence should be a grind of “doing stuff.” If you’re unhappy with your existence, or it doesn’t measure up to someone else’s, you obviously didn’t do enough stuff. Your expenditure of effort must be lacking.

I mean, do you remember school? People would do poorly on a test, and lament that they had spent [x] hours studying. Hours of their lives had been wasted on studying in a way that had just been empirically proven to be ineffective in some major aspect…yet, they would very likely do exactly the same thing again in a week or so. The issue goes deeper than this, but at just one level: Instead of spending [x] hours on an ineffective grind, why not spend, say, [.25x] hours on what actually works, and just be done?

Because, for all our love of results, we are CULTURALLY DESPERATE to justify ourselves in terms of effort.

I could go on and on and on, but I think you get it at this point.

What in blue blazes does this (and its antithesis) have to do with the music business?

Plenty.

Not Doing Worthless Crap Is The Most Practical Idea Ever

For the sake of an example, let’s take one tiny little aspect of promo: Flyering.

Markets differ, but I’m convinced that flyers (in the way bands are used to them) are generally a waste of time and trees. Even so, bands continue to arm themselves with stacks of cheap posters and tape/staples/whatever, and spend WAY too much time putting up a bunch of promo that is going to be ignored.

The cure is to say, “If it doesn’t work, I don’t want to do it,” and to be granular about the whole thing.

What I mean by “granular” is that you figure out what bit of flyering does work in some way, and do that while gleefully forgetting about the rest. Getting flyers to the actual venue usually has some value. Even if none of the actual show-goers give two hoots about your night, getting that promo to the room sends a critical message to the venue operators – the message that you care about your show. In that way, those three or four posters that would go to the theater/ bar/ hall/ etc. do, in fact, work. As such, they’re worth doing for “political” reasons. The 100 or so other flyers that would go up in various places and may as well be invisible? They obviously don’t work, so why trouble yourself? Hang the four posters that actually matter, and then go rehearse (or just relax).

Also, you can take the time and money that would have been spent on 100+ cheap flyers, and pour some of it into improving the handful of posters that actually matter. Or into buying some spare guitar picks, if that’s more important.

I’ll also point out that if traditional flyering does work in your locale, you should definitely do it – because it’s working.

In a larger sense, all promo obeys the rule of not doing it if it doesn’t work. Once a band or venue figures out what marketing the general public responds to (if any), it doesn’t make sense to spend money on doing more. If a few Facebook and Twitter posts have all the effect, and a bunch of spendy ads in traditional media don’t seem to do anything, why spend the money? Do the free stuff, and don’t feel like you have to justify wearing yourself (or your bank account) down to a nub. You may have to be prepared to defend yourself in some rational way, but that’s better than being broke, tired, and frustrated for no necessary reason.

It works for gear, too. People love to buy big, expensive amplification rigs, but they haven’t been truly necessary for years. If you’re not playing to large, packed theaters and arenas with vocals-only PA systems – which is unlikely – then a huge and heavy amp isn’t getting you anything. It’s a bunch of potential that never gets used. Paying for it and lugging it around isn’t working, so you shouldn’t want to do it. Spend the money on a compact rig that sounds fantastic in context, and is cased up so it lasts forever. (And if you would need a huge rig to keep up with some other player who’s insanely loud, then at least consider doing the sensible, cheap, and effective thing…which is to fire the idiot who can’t play with the rest of the team.)

To reiterate what I mentioned about flyering, there’s always a caveat somewhere. Some things work for some people and not for others. The point is to figure out what works for YOU, and then do as much of that as is effective. Doing stuff that works for someone else (but not you) so you can get not-actually-existent “effort expenditure points” is just a waste of life.

There are examples to be had in every area of show production. To try and identify them all isn’t necessary. The point is that this is a generally applicable philosophy.

If it works, you should want to do it.

If you don’t yet know if it works, you should want to give it a try.

But…

If it doesn’t work, I don’t want to do it, and neither do you (even if you don’t realize it yet).


Echoes Of Feedback

By accident, I seem to have discovered an effective, alternate method for “ringing out” PA systems and monitor rigs.



Sometimes, the best way to find something is to be looking for something else entirely.

A couple of weeks ago, I got it into my head to do a bit of testing. I wanted to see how much delay time I could introduce into a monitor feed before I noticed that something was amiss. To that end, I took a mic and monitor that were already set up, routed the mic through the speaker, and inserted a delay (with no internal feedback) on the signal path. I walked between FOH (Front Of House) and the stage, each time adding another millisecond of delay and then talking into the mic.

For several go-arounds, everything was pretty nondescript. I finally got to a delay time that was just noticeable, and then I thought, “What the heck. I should put in something crazy to see how it sounds.” I set the delay time to something like a full second, and then barked a few words into the mic.

That’s when it happened.

First, silence. Then, loud and clear, the delayed version of what I had said.

…and then, the delayed version of the delayed version of what I had just said, but rather more quietly.

“Whoops,” I thought, “I must have accidentally set the delay’s feedback to something audible.” I began walking back to FOH, only to suddenly realize that I hadn’t messed up the delay’s settings at all. I had simply failed to take into account the entire (and I do mean the ENTIRE) signal path I was working on.

Hold that thought.

Back In The Day

There was a time when delay effects weren’t the full-featured devices we’re used to. Whether the unit was using a bit of tape or some digital implementation, you didn’t always get a processor with a knob labeled “feedback,” or “regen,” or “echoes,” or whatever. There was a chance that your delay processor did one thing: It made audio late. Anything else was up to you.

Because of this, certain consoles of the day had a feature on their aux returns that allowed for the signal passing through the return to be “multed” (split), and then sent back through the aux send to the processor it came from. (On SSL consoles, this feature was called “spin.”) You used this to get the multiple echoes we usually associate with delay as an effect for vocals or guitar.

At some point, processor manufacturers decided that including this feature inside the actual box they were selling was a good idea, and we got the “feedback” knob. There’s nothing exotic about the control. It just routes some of the output back to the input. So, if you have a delay set for some number of milliseconds, and send a copy of the output back to the input end (at a reduced level), then you get a repeat every time your chosen number of milliseconds ticks by. Each repeat drops in level by the gain reduction applied at the feedback control…and eventually, the echo signal can’t be readily heard anymore.

But anyway, the key point here is that whether or not it’s handled “internally,” repeating echoes from a delay line are usually caused by some amount of the processor’s output returning to the front end to be processed again. (I say “usually” because it’s entirely possible to conceive of a digital unit that operates by taking an input sample, delaying the sample, playing the sample back at some volume, and then repeats the process for the sample a certain number of times before stopping the process. In this case, the device doesn’t need to listen to its own output to get an echo.)

I digress. Sorry.

If the output were to be routed back to the input at “unity gain,” (with no reduction or increase in level relative to the original output signal) what would happen? That’s right – you’d get an unlimited number of repeats. If the output is routed back to the front end at greater than unity gain, what would happen? Each repeat would grow in level until the processor’s output was completely saturated in a hellacious storm of distorted echo.
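You can model that whole feedback-knob business with a few lines of arithmetic. Each trip around the loop multiplies the signal by the feedback gain, so:

```python
def echo_levels(feedback_gain, n_repeats):
    """Relative level of each successive echo through a delay line
    whose output is fed back to its input at `feedback_gain`."""
    level = 1.0
    levels = []
    for _ in range(n_repeats):
        level *= feedback_gain
        levels.append(level)
    return levels

# Below unity: each repeat is quieter, and the echoes die away.
print([round(l, 3) for l in echo_levels(0.5, 4)])  # [0.5, 0.25, 0.125, 0.062]
# At unity: repeats go on forever at the same level.
print(echo_levels(1.0, 3))  # [1.0, 1.0, 1.0]
# Above unity: runaway -- each repeat grows until the output saturates.
print([round(l, 2) for l in echo_levels(1.2, 4)])  # [1.2, 1.44, 1.73, 2.07]
```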

Does that remind you of anything?

Acoustical Circuits

This is where my previous sentence comes into play: “I had simply failed to take into account the entire (and I do mean the ENTIRE) signal path I was working on.” I had temporarily forgotten that the delay line I was using for my tests had not magically started to exist in a vacuum, somehow divorced from the acoustical circuit it was attached to. Quite the opposite was true. The feedback setting on the processor might have been set at “negative infinity,” but that did NOT mean that processor output couldn’t return to the input.

It’s just that the output wasn’t returning to the input by a path that was internal to the delay processor.

I’ve talked about acoustical, resonant circuits before. We get feedback in live-audio rigs because, rather like a delay FX unit, our output from the loudspeakers is acoustically routed back to our input microphones. As the level of this re-entrant signal rises towards being equal with the original input, the hottest parts of the signal begin to “smear” and “ring.” If the level of the re-entrant signal reaches “unity,” then the ringing becomes continuous until we do something to reduce the gain. If the returning signal goes beyond unity gain, we get runaway feedback.

This is not fundamentally different from our delay FX unit. The signal output from the PA or monitor speakers takes some non-zero amount of time to get back into the microphone, just like the feedback to the delay takes a non-zero amount of time to return. We’re just not used to thinking of the microphone loop in that way. We don’t consciously set a delay time on the audio re-entering the mic, and we don’t intentionally set an amount of signal that we want to re-enter the capsule – we would, of course, prefer that ZERO signal re-entered the capsule.

And the “delay time” through the mic-loudspeaker loop is just naturally imposed on us. We don’t dial up “x number of milliseconds” on a display, or anything. However long it takes audio to find its way back through the inputs is however long it takes.
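If you're curious about the scale of that naturally imposed delay, it's just path length divided by the speed of sound:

```python
SPEED_OF_SOUND = 343.0  # m/s in dry air at roughly 20 C

def loop_delay_ms(path_length_m):
    """Time for loudspeaker output to travel back to the mic."""
    return path_length_m / SPEED_OF_SOUND * 1000.0

# A wedge two meters from the vocal mic imposes roughly a 6 ms loop delay:
print(round(loop_delay_ms(2.0), 1))  # 5.8
```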

Even so, feedback through our mics is basically the same creature as our “hellacious storm” of echoes through a delay processor. The mic just squeals, howls, and bellows because of differences in overall gain at different frequencies. Those frequencies continue to echo – usually, so quickly that we don’t discern individual repeats – while the other frequencies die down. That’s why the fighting of feedback so often involves equalization: If we can selectively reduce the gain of the frequencies that are ringing, we can get their “re-entry level” down to the point where they don’t noticeably ring anymore. The echoes decay so far and so fast that we don’t notice them, and we say that the system has stabilized.
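In EQ terms, that selective gain reduction is exactly what a narrow notch (or a deep, narrow parametric cut) does. As a sketch – the 2.5 kHz ring frequency and the Q here are made-up numbers, not a recipe – scipy can show how surgical such a filter is:

```python
import numpy as np
from scipy import signal

fs = 48000.0
ring_freq = 2500.0  # hypothetical ringing frequency, Hz
q = 30.0            # narrow, so neighboring frequencies are spared

# Design a biquad notch that selectively cuts the ringing frequency.
b, a = signal.iirnotch(ring_freq, q, fs=fs)

# Inspect the magnitude response of the filter.
w, h = signal.freqz(b, a, worN=8192, fs=fs)
mag_db = 20 * np.log10(np.maximum(np.abs(h), 1e-12))

# Deep cut at the ring, almost nothing an octave away:
idx_ring = np.argmin(np.abs(w - ring_freq))
idx_octave = np.argmin(np.abs(w - 2 * ring_freq))
print(mag_db[idx_ring] < -20)      # True
print(abs(mag_db[idx_octave]) < 1)  # True
```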

All of this is yet another specific case where the patterns of audio behavior mirror and repeat themselves in places you might not expect.

As it turns out, you can put this to very powerful use.

The Application

As I discussed in “Transdimensional Noodle Baking,” we can do some very interesting things with audio when it comes to manipulating it in time. Making light “late” is a pretty unwieldy thing for people to do, but making audio late is almost trivial in comparison.

And making audio events late, or spreading them out in time, allows you to examine them more carefully.

Now, you might not associate careful examination with fighting feedback issues, but being able to slow things down is a big help when you’re trying to squeeze the maximum gain-before-feedback out of something like a monitor rig. It’s an especially big help when you’re like me – that is to say, NOT an audio ninja.

What I mean by not being an audio ninja is that I’m really quite poor at identifying frequencies. Those guys who can hear a frequency start to smear a bit, and instantly know which fader to grab on their graphic EQ? That’s not me. As such, I hate graphic EQs and avoid putting them into systems whenever possible. I suppose that I could dive into some ear-training exercises, but I just can’t seem to be bothered. I have other things to do. As such, I have to replace ability with effort and technology.

Now, couple another issue with that. The other issue is that the traditional method of “ringing out” a PA or monitor rig really isn’t that great.

Don’t get me wrong! Your average ringout technique is certainly useful. It’s a LOT better than nothing. Even so, the method is flawed.

The problem with a traditional ringout procedure is that it doesn’t always simulate all the variables that contribute to feedback. You can ring out a mic on deck, walk up, check it, and feel pretty good…right up until the performer asks for “more me,” and you get a high-pitched squeal as you roll the gain up beyond where you had it. The reason you didn’t find that high-pitched squeal during the ringout was because you didn’t have a person with their face parked in front of the mic. Humans are good absorbers, but we’re also partially reflective. Stick a person in front of the mic, and a certain, somewhat greater portion of the monitor’s output gets deflected back into the capsule.

You can definitely test for this problem if you have an assistant or a remote for the console, but what if you have neither? What if you’ve got some other weird, phantom ring that’s definitely there, and definitely annoying, but hard to pin down? It might be too quiet to easily catch on a regular RTA (Real Time Analyzer), and even if you can carry an analyzer with you (any smartphone can run a basic RTA for free), you still might not be able to accurately whistle or sing the offending frequency while standing where you can read the display.

But what if you could spread out the ringing into a series of discrete echoes? What if you could visually record and inspect those echoes? You’d have a very powerful tuning tool at your disposal.

The Implementation

I admit, I’m pretty lucky. Everything I need to implement this super-nifty feedback finding tool lives inside my mixing console. For other folks, there’s going to be more “doing” involved. Nevertheless, you really only need to add two key things to your audio setup to have access to all this:

1) A digital delay that can pass all audio frequencies equally, is capable of long (1 second or more) delays, and can be run with no internal feedback.

2) A spectrograph that will show you a range of 10 seconds or more, and will also show you the frequency under a cursor that you can move around to different points of interest.

A spectrograph is a type of audio analysis system that is specifically meant to show frequency magnitude over a certain amount of time. This is similar to “waterfall” plots that show spectral decay, but a spectrograph is probably much easier to read for this application.
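To make that concrete, here is a minimal sketch of what a spectrograph computes: frequency magnitude over time. The tooling (NumPy/SciPy) and the signal are my own illustration, not anything from a particular analyzer – the signal is broadband noise for half a second, plus a hypothetical 1 kHz “ring” that keeps decaying long after the noise stops.

```python
# Sketch of what a spectrograph shows: frequency magnitude over time.
# The signal is made up: noise for 0.5 s, plus a lingering 1 kHz ring.
import numpy as np
from scipy.signal import spectrogram

fs = 44100
t = np.arange(0, 2.0, 1 / fs)              # two seconds of audio

rng = np.random.default_rng(0)
noise = np.where(t < 0.5, rng.standard_normal(t.size) * 0.1, 0.0)
ring = 0.5 * np.sin(2 * np.pi * 1000 * t) * np.exp(-t)
signal = noise + ring

# Each column of `mag` is one FFT frame; `times` marks when it happened.
freqs, times, mag = spectrogram(signal, fs=fs, nperseg=2048)

# Late in the clip, the only thing still visible is the ringy frequency.
late = mag[:, times > 1.5]
ringy = freqs[np.argmax(late.mean(axis=1))]
print(ringy)    # within one FFT bin of 1000 Hz
```

The key property for this application is exactly what the last two lines exploit: frequencies that persist in time stay visible while everything else falls away.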

The delay is inserted in the audio path of the microphone, in such a way that the only signal audible in the path is the output of the delay. The delay time should be set to somewhere around 1.5 to 2 seconds, long enough to speak a complete phrase into the mic. The output of the signal path is otherwise routed to the PA or monitors as normal, and the spectrograph is hooked up so that it can directly (that is, via an electrical connection) “listen” to the signal path you’re testing. The spectrograph should be set up so that ambient noise is too low to be visible on the analysis – otherwise, the output will be harder to interpret.
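In code terms, the delay we need is almost trivially simple. A pure digital delay with no internal feedback is just the same samples pushed later in time – a tiny NumPy sketch (my illustration, not any particular plugin):

```python
# Sketch: a pure digital delay with no internal feedback. The output is
# the input, shifted later in time; the signal itself is untouched.
import numpy as np

fs = 44100
delay_seconds = 1.5                          # long enough to say a phrase
delay_samples = int(fs * delay_seconds)

def pure_delay(x, delay_samples):
    """Return x delayed by delay_samples, with silence in front."""
    return np.concatenate([np.zeros(delay_samples), x])

phrase = np.sin(2 * np.pi * 440 * np.arange(0, 1.0, 1 / fs))  # stand-in for "check, test"
delayed = pure_delay(phrase, delay_samples)

# Nothing comes out for 1.5 seconds; then the phrase plays back as-is.
print(np.max(np.abs(delayed[:delay_samples])))    # 0.0
```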

To start, you apply a “best guess” amount of gain to the mic pre and monitor sends. You’ll need to wait several seconds to see if the system starts to ring out of control, because the delay is making everything “late.” If the system does start to ring, the problem frequencies should be very obvious on the spectrograph. Adjust the appropriate EQs accordingly, or pull the gain back a bit.

With the spectrograph still running, walk up to the mic. Stick your face right up on the mic, and clearly but quickly say, “Check, test, one, two.” (“Check, test, one, two” is a phrase that covers most of the audible frequency spectrum, and has consonant sounds that rely on high-mid and high frequency reproduction to sound good.)

DON’T FREAKIN’ MOVE.

See, what you’re effectively doing is finding the “hot spots” in the sound that’s re-entrant to the microphone, and if you move away from the mic you change where those hot spots are. So…

Stay put and listen. The first thing you’ll hear is the actual, unadulterated signal that went through the microphone and got delivered through the loudspeaker. The repeats you will hear subsequently are what is making it back into the microphone and getting re-amplified. If you hear the repeats getting more and more “odd” and “peaky” sounding, that’s actually good – it means that you’re finding problem areas.
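The “peaky repeats” effect can be sketched numerically. Each trip around the mic → monitor → mic loop re-applies the loop’s frequency response, so any frequency the loop favors grows relative to the rest with each echo. The per-trip gains below are made-up numbers purely for illustration:

```python
# Sketch: why the repeats get "odd" and "peaky." Every pass around the
# loop multiplies each frequency by its per-trip gain, so a "hot"
# frequency pulls ahead of the others exponentially, echo by echo.
import numpy as np

freqs = np.array([500.0, 1000.0, 2000.0, 4000.0])    # tones in the echo, Hz
loop_gain = np.array([0.5, 0.7, 0.5, 0.5])           # hypothetical per-trip gain

spectrum = np.ones_like(freqs)     # first pass: all tones equally loud
for echo in range(6):              # six trips around the loop
    spectrum = spectrum * loop_gain

# After six repeats, the "hot" 1 kHz component dominates the echo.
hottest = freqs[np.argmax(spectrum)]
print(hottest)                     # 1000.0
dominance = spectrum.max() / np.delete(spectrum, np.argmax(spectrum)).max()
print(round(dominance, 1))         # roughly 7.5x louder than the rest
```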

After the echoes have decayed mostly into silence, or are just repeating and repeating with no end in sight, walk back to your spectrograph and freeze the display. If everything is set up correctly, you should be able to visually identify sounds that are repeating. The really nifty thing is that the problem areas will repeat more times than the non-problem areas. While other frequencies drop off into black (or whatever color is considered “below the scale” by your spectrograph) the ringy frequencies will still be visible.

You can now use the appropriate EQs to pull your problem frequencies down.

Keep iterating the procedure until you feel like you have a decent amount of monitor level. Try to run the tests with gains and mix levels set as close as possible to what they’ll be for the show. Lots of open mics going to lots of different places will ring differently than a few mics only going to a single destination each.

Also, remember to disengage the delay, walk up on deck, and do a “sanity” check to make sure that everything you did was actually helpful.



If you’re having trouble visualizing this, here are some screenshots depicting one of my own trips through this process:

This spectrograph reading clearly shows some big problems in the low-mid area.

Some corrective EQ goes in, and I retest.

That’s better, but we’re not quite there.

More EQ.

That seems to have done the trick.



I can certainly recognize that this might be more involved than what some folks are prepared to do. I also have to acknowledge that this doesn’t work very well in a noisy environment.

Even so, turning feedback problems into a series of discrete, easily examined echoes has been quite a revelation for me. You might want to give it a try yourself.


Digital Audio – Bold Claims, Experimental Testing

Digital audio does benefit from higher bit-depth and sample rate – but only in terms of a better noise floor and higher frequency-capture bandwidth.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.

When I get into a dispute about how digital audio works, I’ll often be the guy making the bizarre and counter-intuitive statements:

“Bit depth affects the noise floor, not the ability to reproduce a wave-shape accurately.”

“Sample rates beyond 44.1 kHz don’t make material below 21 kHz more accurate.”

The thing is, I drop these bombshells without any experimental proof. It’s no wonder that I encounter a fair bit of pushback when I spout off about digital audio. The purpose of this article is to change that, because it simply isn’t fair for me to say something that runs counter to many people’s understanding…and then just walk away. Audio work, as artistic and subjective as it can be, is still governed by science – and good science demands experiments with reproducible results.

I Want You To Be Able To Participate

When I devised the tests that you’ll find in this article, one thing that I had in mind was that it should be easy for people to “try them at home.” For this reason, every experiment is conducted inside Reaper (a digital audio workstation), using audio processing that is available with the basic Reaper download.

Reaper isn’t free software, but you can download a completely un-crippled evaluation copy at reaper.fm. The 30-day trial period should be plenty of time for you to run these experiments yourself, and maybe even extend them.

To ensure that you can run everything, you will need to have audio hardware capable of a 96 kHz sampling rate. If you don’t, and you open one of the projects that specifies 96 kHz sampling, I’m not sure what will happen.

With the exception of Reaper itself, this ZIP file should contain everything you need to perform the experiments in this article.

IMPORTANT: You should NOT open any of these project files until you have turned the send level to your monitors or headphones down as far as possible. Otherwise, if I have forgotten to set the initial level of the master fader to “-inf,” you may get a VERY LOUD and unpleasant surprise. If you wreck your hearing or your gear in the process of experimenting, I am NOT responsible.

Weak Points In These Tests

The veracity of these experiments is by no means unassailable. It’s very important that I say that. For instance, the measurement “devices” that will be used are not independent of Reaper. They run inside the software itself, and so are subject to both their own flaws and the flaws of the host software.

Because I did not want to burden you with having to provide external hardware and software for testing, the only appeal to external measurement that is available is listening with the human ear. Human hearing, as an objective measurement, is highly fallible. Our ability to hear is not necessarily consistent across individuals, and what we hear can be altered by all manner of environmental factors that are directly or indirectly related to the digital signals being presented.

I personally hold these experiments as strong proof of my assertions, but I have no illusions of them being incontestable.

Getting Started – Why We Won’t Be Using Many “Null” Tests

A very common procedure for testing digital audio assumptions is the “null” test. This experiment involves taking two signals, inverting the polarity of one signal, and then summing the signals. If the signals are a perfect match, then the resulting summation will be digital silence. If they are not a perfect match, then some sort of differential signal will remain.

Null tests are great for proving some things, and not so great for proving others. The problem with appealing to a null test is that ANYTHING which changes anything about the signal will cause a differential “remainder” to appear. Because of this, you can’t use null testing to completely prove that, say, a 24-bit file and a 16-bit file contain the same desired signal, independent of noise. You CAN prove that the difference between the files is located at some level below 0 dBFS (decibels referenced to full scale), and you can make inferences as to the audibility of that difference. Still, the reality that the total signal (including unwanted noise) contained within a 16-bit file is different from the total signal contained within a 24-bit file is very real, and non-trivial.
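As a concrete illustration – a NumPy sketch of my own, not anything from the Reaper projects – here is a null test in a few lines, including the bit-depth case described above:

```python
# Sketch of a null test: flip one copy's polarity and sum. Identical
# signals cancel to digital silence; any difference leaves a residue.
import numpy as np

fs = 44100
t = np.arange(0, 0.1, 1 / fs)
a = np.sin(2 * np.pi * 10_000 * t)          # a 10 kHz tone

# Two identical copies null perfectly.
residual_same = a + (-a)

# A 16-bit-quantized copy of the tone does NOT null against the
# original: the remainder is the quantization difference, a long way
# below full scale but not digital silence.
b = np.round(a * 32767) / 32767
residual_quant = a + (-b)

print(np.max(np.abs(residual_same)))                   # exactly 0.0
print(20 * np.log10(np.max(np.abs(residual_quant))))   # about -96 dBFS
```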

There are other issues as well, which the first few experiments will reveal.

Open the project file that starts with “01,” verify that the master fader and your listening system are all the way down, and then begin playback. The analysis window should – within its own experimental error – show you a perfect, undistorted tone occurring at 10 kHz. As the file loops, there will be brief periods where noise becomes visible in the trace. Other than that, though, you should see something like this:

test01

When Reaper is playing a file that is at the same sample rate as the project, nothing odd happens.

Now, open the file that starts with “02.” When you begin playback, something strange occurs. The file being played is the same one that was just being used, but this time the project should invoke a 96 kHz sample rate. As a result, Reaper will attempt to resample the file on the fly. This resampling results in some artifacts that are difficult (if not entirely impossible) to hear, but easy to see in the analyzer.

test02

Reaper is incapable of realtime (or even non-realtime) resampling without artifacts, which means that we can’t use a null test to incontestably prove that a 10 kHz tone, sampled at 44.1 kHz, is exactly the same as a 10 kHz tone sampled at 96 kHz.

What is at least somewhat encouraging, though, is that the artifacts produced by Reaper’s resampling are consistent in time, and from channel to channel. Opening and playing the project starting with “03” confirms this, in a case where a null test is actually quite helpful. The same resampled file, played in two channels (with one channel polarity-inverted) creates a perfect null between itself and its counterpart.

test03

Test 04 demonstrates the problem that I talked about above. With one file played at the project’s “native” sample rate, and the other file being resampled, the inverted-polarity signal doesn’t null perfectly with the other channel. The differential signal IS a long way down, at about -90 dBFS. That’s probably impossible to hear under most normal circumstances, but it’s not digital silence.

test04

Does A Higher Sampling Rate Render A Particular Tone More Accurately?

With the above experiments out of the way, we can now turn our attention to one of the major questions regarding digital audio: Does a higher rate of sampling, and thus, a more finely spaced “time grid,” result in a more accurate rendition of the source material?

Hypothesis 1: A higher rate of sampling does result in a more accurate rendition of a particular tone, as long as the tone in question is a frequency unaffected by input or output filtering. This hypothesis assumes that digital audio is an EXPLICIT representation of the signal – that is, that each sample point is reproduced “as is,” and so more samples per unit time create a more faithful reproduction of the material.

Hypothesis 2: A higher rate of sampling does not result in a more accurate rendition of a particular tone, as long as the tone in question is a frequency unaffected by input or output filtering. This hypothesis assumes that digital audio is an IMPLICIT representation of the signal, where the sample data is used to mathematically reconstruct a perfect copy of the stored event.

The experiment begins with the “05” project. The project generates a 10 kHz tone, with a 44.1 kHz sampling rate. If you listen to the output (and aren’t clipping anything) you should hear what the analyzer displays: A perfect, 10 kHz sine wave with no audible distortion, harmonics, undertones, or anything else.

test05

Project “06” generates the same tone, but in the context of a 96 kHz sampling rate. The analyzer shifts the trace to the left, because 96 kHz sampling can accommodate a wider frequency range. However, the signal content stays the same: We have a perfect, 10 kHz tone with no audible artifacts, and nothing else visible on the analyzer (within experimental error).

test06

Project “07” also generates a 10 kHz tone, but it does so within a 22.05 kHz sampling rate. There is still no audible signal degradation, and the tone displays as “perfect” in the analyzer. The trace is shifted to the right, because 10 kHz is very near the limit of what 22.05 kHz sampling can handle.

test07

Conclusion: Hypothesis 2 is correct. At 22,050 samples per second, any given cycle of a 10 kHz wave has barely more than two samples available to represent the signal. At 44.1 kHz sampling, any given cycle still only has about four samples assigned. Even at 96 kHz, a 10 kHz wave has fewer than 10 samples assigned to it. If digital audio were an explicit representation of the wave, then such small numbers of samples being used to represent a signal should result in artifacts that are obvious either to the ear or to an analyzer. No such artifacts are observable via the above experiments, at any of the sampling rates used. The inference from this observation is that digital audio is an implicit representation of the signals being stored, and that sample rate does not affect the ability to accurately store information – as long as that information can be captured and stored at the sample rate in the first place.
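The “two samples per cycle” worry can also be checked outside of Reaper with a simple FFT. This NumPy sketch (my construction, under the assumption that an exact-bin tone is a fair test) generates a 10 kHz tone at a 22.05 kHz sample rate and inspects its spectrum:

```python
# Sketch: a 10 kHz tone sampled at 22.05 kHz has barely more than two
# samples per cycle, yet its spectrum contains exactly one line at
# 10 kHz - no harmonics or other artifacts from the "missing" samples.
import numpy as np

fs = 22050
n = fs                                 # one second of audio
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 10_000 * t)

spectrum = np.abs(np.fft.rfft(x)) / (n / 2)   # normalized magnitude
freqs = np.fft.rfftfreq(n, 1 / fs)

peak = freqs[np.argmax(spectrum)]
everything_else = spectrum[freqs != peak].max()
print(peak)                 # 10000.0
print(everything_else)      # numerical noise, nowhere near audible
```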

Does A Higher Sample Rate Render Complex Material More Accurately?

Some people take major issue with the above experiment, because musical signals are not “naked” sine waves. Thus, we need an experiment which addresses the question of whether or not complex signals are represented more accurately by higher sample rates.

Hypothesis 1: A higher sampling rate does create a more faithful representation of complex waves, because complex waves are more difficult to represent than sine waves.

Hypothesis 2: A higher sampling rate does not create a more faithful representation of complex waves, because any complex wave is simply a number of sine waves modulating each other to varying degrees.

This test opens with the “08” project, which generates a complex sound at a 96 kHz sample rate. To make any artifacts easy to hear, the sound still uses pure tones, but the tones are spread out across the audible spectrum. Accordingly, the analyzer shows us 11 tones that read as “pure,” within experimental error. (Lower frequencies are less accurately depicted by the analyzer than high frequencies.)

test08

If we now load project “09,” we get a tone which is audibly and visibly the same, even though the project is now restricted to 44.1 kHz sampling. Although the analyzer’s trace has shifted to the right, we can still easily see 11, “pure” tones, free of artifacts beyond experimental error.

test09

Conclusion: Hypothesis 2 is correct. A complex signal was observed as being faithfully reproduced, even with half the sampling data being available. An inference that can be made from this observation is that, as long as the highest frequency in a signal can be faithfully represented by a sampling rate, any additional material of lower frequency can be represented with the same degree of faithfulness.
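The complex-material version of that check is just as easy to run in code. This sketch builds a signal from several tones (the frequencies are my arbitrary picks, all below the 22.05 kHz Nyquist limit), samples it at 44.1 kHz, and confirms the spectrum contains exactly those tones and nothing else of any significance:

```python
# Sketch: a "complex" signal made of several sines, sampled at 44.1 kHz,
# analyzes back to exactly its component tones.
import numpy as np

fs = 44100
n = fs                                 # one second of audio
t = np.arange(n) / fs
tone_freqs = [100, 300, 1000, 3000, 9000, 15000]    # Hz
x = sum(np.sin(2 * np.pi * f * t) for f in tone_freqs)

spectrum = np.abs(np.fft.rfft(x)) / (n / 2)
freqs = np.fft.rfftfreq(n, 1 / fs)

found = freqs[spectrum > 0.5]      # bins carrying a real component
print(found)                       # [100. 300. 1000. 3000. 9000. 15000.]
```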

Do Higher Bit Depths Better Represent A Given Signal, Independent Of Noise?

This question is at the heart of current debates about bit depth in consumer formats. The issue is whether or not larger bit-depths (and the consequentially larger file sizes) result in greater signal fidelity. This question is made more difficult because lower bit depths inevitably result in more noise. The presence of increasing noise makes a partial answer possible without experimentation: Yes, greater fidelity is afforded by higher bit-depth, because the noise level related to quantization error is inversely proportional to the number of bits available for quantization. The real question that remains is whether or not the signal, independent of the quantization noise, is more faithfully represented by having more bits available to assign sample values to.

Hypothesis 1: If noise is ignored, greater bit-depth results in greater accuracy. Again, the assumption is that digital is an explicit representation of the source material, and so more possible values per unit of voltage or pressure are advantageous.

Hypothesis 2: If noise is ignored, greater bit-depth does not result in greater accuracy. As before, the assumption is that we are using data in an implicit way, so as to reconstruct a signal at the output (and not directly represent it).

For this test, a 100 Hz tone was chosen. The reasoning behind this was because, at a constant 44.1 kHz sample rate, a single cycle of a 100 Hz tone has 441 sample values assigned to it. This relatively high number of sample positions should ensure that sample rate is not a factor in the signal being well represented, and so the accuracy of each sample value should be much closer to being an isolated variable in the experiment.

Project “10” generates a 100 Hz tone, with 24 bits of resolution. Dither is applied. Within experimental error, the tone appears “pure” on the analyzer. Any noise is below the measurement floor of -144 dBFS. (This measurement floor is convenient, because any real chance of hearing the noise would require listening at a level where the tone was producing 144 dB SPL, which is above the threshold of pain for humans.)

test10

Project “11” generates the same tone, but at 16-bits. Noise is visible on the analyzer, but is inaudible when listening. No obvious harmonics or undertones are visible on the analyzer or audible to an observer.

test11

Project “12” restricts the signal to an 8-bit sample word. Noise is clearly visible on the analyzer, and easily audible to an observer. There are still no obvious harmonics or undertones.

test12

Project “13” offers only 4 bits of resolution. The noise is very prominent. The analyzer displays “spikes” which seem to suggest some kind of distortion, and an observer may hear something that sounds like harmonic distortion.

test13

Conclusion: Hypothesis 2 is partially correct and partially incorrect (but only in a functional sense). For bit depths likely to be encountered by most listeners, a greater number of possible sample values does not produce demonstrably less distortion when noise is ignored. A pure tone remains observationally pure, and perfectly represented. However, it is important to note that, at some point, the harmonic distortion caused by quantization error appears to be able to “defeat” the applied dither. Even so, the original tone does not become “stair-stepped” or “rough.” It does, however, have unwanted tones superimposed upon it.
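For anyone who wants to poke at the bit-depth question numerically, here is a sketch of the same idea in NumPy. The TPDF dither implementation and the measurement method are my assumptions, not a recreation of Reaper’s exact processing – but the behavior matches the experiments: the tone stays a tone, and only the noise floor moves.

```python
# Sketch of the bit-depth test: quantize a 100 Hz tone at several word
# lengths with +/-1 LSB triangular (TPDF) dither, then measure the
# error. Fewer bits means a higher noise floor, not a "rougher" tone.
import numpy as np

fs = 44100
t = np.arange(fs) / fs                 # one second
tone = 0.5 * np.sin(2 * np.pi * 100 * t)

def quantize(x, bits, rng):
    """Round x to the given word length, with +/-1 LSB TPDF dither."""
    scale = 2 ** (bits - 1) - 1
    dither = rng.random(x.size) - rng.random(x.size)   # triangular PDF
    return np.round(x * scale + dither) / scale

rng = np.random.default_rng(0)
for bits in (24, 16, 8):
    q = quantize(tone, bits, rng)
    noise_db = 10 * np.log10(np.mean((q - tone) ** 2))
    print(bits, round(noise_db, 1))    # noise floor rises as bits shrink
```

Each halving of the word length raises the noise floor by roughly 6 dB per bit removed, which is exactly the “bit depth affects the noise floor” claim from the top of the article.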


No Sale

A Small Venue Survivalist Saturday Suggestion

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.

Shopping for a personal vocal mic?

Forget about how it sounds in a pair of headphones.

Find some monitor wedges, and crank up the mic until it sounds like what you’ll need for your band. If the mic sounds bad, or you’re struggling with feedback, then it’s “no sale.”