
The POTH Commentaries – VCAs/DCAs

VCA/ DCA control is very handy, especially for “non-homogeneous” routing situations.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.

Want to use this image for something else? Great! Click it for the link to a high-res or resolution-independent version.

Author’s Note: This article is the first in a short series that addresses concepts and happenings related to Pigs Over The Horizon, a Pink Floyd tribute starring Advent Horizon and Friends.


When you start working with more full-featured consoles, you’ll likely run across something you might not have seen before. It’s a control feature called the VCA, or sometimes DCA on digital desks. What is this strange creature? What is it good for?

First off: A VCA is a Voltage Controlled Amplifier. The “D” comes in as a way to say that the same concept is being applied in the digital realm. (In my opinion, a digital system has it much easier, because you don’t have to work with analog circuit logic and the complexities of components or circuit layouts that come with that whole business.) The whole notion rests squarely on how it’s possible to build gain stages that vary the gain applied to an audio signal in proportion to a separately applied control signal. If you have a number of control signal generators available, and can choose which control signal to apply to other gain stages, then you end up with a number of VCA/ DCA assignments. Connect a fader to the control signal generator such that the control signal is modified by that fader, and you have a VCA that’s intuitive to manage.

The VCA/ DCA concept, then, is that of a control group. When you assign faders to a control group, you are directing the console to maintain the relative balance that you set amongst those faders, while also giving you an overall level control for all of those channels at once.
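To make that arithmetic concrete, here’s a minimal Python sketch (the channel names and dB values are invented for illustration): each assigned channel’s fader and the VCA/ DCA master are just gain stages multiplied together, which is exactly why the relative balance survives any move of the master.

```python
import math

def db_to_linear(db):
    """Convert a fader position in dB to a linear gain multiplier."""
    return 10 ** (db / 20)

def effective_gain(channel_db, vca_db):
    """A VCA/DCA is just one more gain stage, multiplied with the
    channel's own fader. Routing is untouched."""
    return db_to_linear(channel_db) * db_to_linear(vca_db)

# Hypothetical channel fader positions (dB) for a group of channels.
channel_faders_db = {"gtr_l": -6.0, "gtr_r": -6.0, "keys": -3.0}

# Pull the whole group down 6 dB: every member gets the same
# multiplier, so the relative balance between them is preserved.
for name, fader_db in channel_faders_db.items():
    net_db = 20 * math.log10(effective_gain(fader_db, vca_db=-6.0))
    print(name, round(net_db, 1))
```

Pulling the master down another 6 dB shifts every channel by the same amount; the differences between the channels never change.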

“Like routing all those channels through a bus?” you ask.

Yes and no. The magic of the VCA/ DCA is that you get bus-like level management, but your routing is unaffected. In other words, VCA/ DCA groups are control groups independent of audio signal considerations. This was a big deal for me with Pigs Over The Horizon, because of how we did the playback FX.

The playback FX were in surround. Two channels were routed up front (in mono, actually), with two more channels that were sent directly to surround left and surround right, respectively. Once the surround channels were “lined up” with respect to each other and the feeds to the front, I didn’t want to change that relationship – but I DID want to be able to ride the overall FX cue volume if I had to.

I couldn’t achieve what I wanted by busing the four FX channels together; they would all have ended up going to a single destination, with no way to separate them back out to get surround again. By assigning them to a DCA group, though, it was a cinch. The routing didn’t change at all, but my ability to grab one control and regulate the overall volume of the unchanged balance was established.

Of course, busing is still a very important tool. You need it whenever you DO want to get a bunch of sources to flow to a single destination. This might not just be for simple combining. You might want to process a whole bunch of channels with exactly the same EQ and compression, for example, and then send them off to the main output. If that’s not what you’re after, though, a VCA/ DCA group is a great choice. You don’t chew up a bus just for the simple task of grouped volume control, and if you change your mind on the routing later it’s not a big deal. Your grouped controls stay grouped, no matter where you send them, again, because of that “independence” factor. The VCA/ DCA has nothing to do with where signals are coming from or where they’re going – it only changes the gain applied.

I personally am not as heavy a user of VCA/ DCA groups as some other audio humans, but I see them as a handy tool that I may end up leveraging more in the future. I’m glad I know what they are, because they’re a great problem solver. If your console has them, I definitely recommend becoming familiar with their usage. The day may come when you need ’em!


The Pro-Audio Guide For People Who Know Nothing About Pro-Audio, Part 7

Amplifiers and loudspeakers bring us to the end of my series for Schwilly Family Musicians.


“Now that we’ve turned audio into electricity and back again, we’ve reached the end of this series.”


This article is available, for free, right here.


130 dB Disbelief

It’s hard for me to believe that 130 dB is possible from some loudspeaker designs.


When a manufacturer claims that a loudspeaker system (say, a two-way arrangement in a single, vented enclosure) can create a 130 dB SPL peak at 1 meter with a 1000 watt peak power input, I’m a skeptic. Or rather, I should say that I’m a skeptic about how useful that 130 dB actually is.

What I’m getting at is this: A 1000 watt input is 30 dB above the 1 watt input level. Getting a direct-radiating cone driver to give you 100+ dB SPL of sensitivity in a consistent way is challenging (though I will not say it’s impossible). There are, of course, plenty of drivers available that will get you over that mark of 100 dB @ 1 watt/ 1 meter, BUT, only with the caveat that the 100+ dB sensitivity zone is confined to a “smallish” peak around 2 kHz. The nice, smooth part of the response that doesn’t need to be tamed is probably between 95 – 97 dB. If you’re lucky, that zone might be just south of 100 dB.
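The decibel arithmetic behind that skepticism fits in a few lines of Python. This is a back-of-envelope model that ignores power compression (which only makes real-world numbers worse), and the sensitivity figures are illustrative:

```python
import math

def watts_to_db_gain(power_watts, ref_watts=1.0):
    """dB increase over the reference power (10 * log10 for power ratios)."""
    return 10 * math.log10(power_watts / ref_watts)

def peak_spl(sensitivity_db, power_watts):
    """Idealized peak SPL at 1 meter: sensitivity (dB @ 1 W / 1 m)
    plus the dB gain from the applied power. No power compression."""
    return sensitivity_db + watts_to_db_gain(power_watts)

print(watts_to_db_gain(1000))  # 1000 W is 30 dB above 1 W
print(peak_spl(97, 1000))      # a realistic 97 dB sensitivity -> 127 dB
print(peak_spl(100, 1000))     # 130 dB requires a true 100 dB sensitivity
```

In other words, a claimed 130 dB peak from 1000 watts implies a consistent 100 dB sensitivity across the usable bandpass, which is exactly the number I find hard to believe.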

When it comes to useful output, what really matters is what a driver can do with minimal variation across the bandpass it’s meant to reproduce. Peaks in different frequency ranges aren’t helpful for real work – although they do let you claim a higher peak-output number.

My disbelief, then, is rooted in the idea that any “affordable by mortals” loudspeaker model is probably not using an ultra-high performance, super-custom-built cone driver for the low-frequency bandpass. Sure, it might not be a driver that you can get off the shelf, but it’s tough for me to have faith that the very upper edge of loudspeaker performance is being tickled by whatever got bolted into the enclosure.

Now…I could be very wrong about this. In fact, I would prefer to be wrong, because I will always desire an affordable speaker that takes up no space, has no weight, and is infinitely loud from 20 Hz to 20,000 Hz. Anything that gets closer to that impossible goal is a box I can welcome. At the same time, I prefer (and encourage) pessimism when reading manufacturer ratings. Sure, they say the box can make 130 dB peaks, but under what circumstances? Only at 2 kHz? Only when combined with room reflections?

If the numbers you claim are difficult to achieve, I’m going to need more than your word to accept them.


ITRD

It’s the room, dude.


It’s the room, dude.

You’ve mixed this band before. They were great. Now you’re somewhere else, and it’s just awful. You can’t make anything out; the intelligibility is somewhere south of “I can understand every fourth word, sort of.”

It’s the room, dude.

You’ve used every EQ you have in a manner that could best be described as “neurosurgery with an artillery weapon.” The input channel EQs are carved up. The output channel curves look like the Himalayas. You’ve also inserted graphics on the outputs. The settings are not something you will share on Instagram. The show will NOT behave.

It’s the room, dude.

You’ve moved the speakers. You’ve tilted and twisted them, trying to miss the walls and ceiling just a little more. You could get a job as a civil engineer who designs bridges, because of your working knowledge of bizarre, load-bearing constructions. The system still sounds like the entirety of World War II being fought in an airplane hangar.

It’s the room, dude.

You’ve bought every toy and tweaker that the good folks at the gear retailer could sell you. You’ve got automatic feedback filters, frequency-dependent compression, wild-donkeyed gating, and a rack full of boutique, 500-series thingamabobs. It still sounds like you can’t mix your way out of a paper bag that’s been sitting outside in the rain for a month.

It’s the room, dude.

The lead singer gets your attention as soundcheck draws to a close. “Could you please pull down the reverb?” they ask. Nothing is going to any reverb processor that you have available.

It’s the room, dude.

The musicians are pretty happy. You have the monitors wound up to a level that frightens small children. You have the FOH mid-highs high-passed at 1 kHz. (I have done this in real life.) The sound in the seats is still a sort of indistinct, muddy garble.

It’s the room, dude.

Once you have tens or hundreds of arrivals of a single sonic event, you will never get the transients unsmeared. Once the low-mid builds into a seconds-long reverberant mash, you will never dig your way out. Once monitor-world hits that nice, huge, flat backstop behind the players, you will never get monitor-world out of FOH. Once the vocalist’s smashing crescendo slaps that back wall and starts racing home to their face, you will never stop them from getting walloped right in the chops with the world as it was 200ms ago.

It’s the room, dude. It’s the room.


Graphic Content

Transfer functions of various reasonable and unreasonable graphic EQ settings.


An aphorism that I firmly believe goes like this: “If you can hear it, you can measure it.” Of course, there’s another twist to that – the one that reminds you that it’s possible to measure things you can’t hear.

The graphic equalizer, though still recognizable, is becoming less common as an outboard device. With digital consoles invading en masse, making landings up and down the treasure-laden coasts of live audio, racks and racks of separate EQ devices are being virtualized inside computer-driven mix platforms. At the same time, hardware graphics are still a real thing that exists…and I would wager that most of us haven’t seen a transfer function of common uses (and abuses) of these units, which happen whether you’ve got a physical object or a digital representation of one.

So – let me dig up a spare Behringer Ultragraph Pro, and let’s graph a graphic. (An important note: Any measurement that you do is a measurement of EXACTLY that setup. Some parts of this exercise will be generally applicable, but please be aware that what we’re measuring is a specific Behringer EQ and not all graphic EQs in the world.)

The first thing to look at is the “flat” state. When you set the processing to “out,” is it really out?

In this case, very much so. The trace is laser flat, with +/- 0.2 dB of change across the entire audible spectrum. It’s indistinguishable from a “straight wire” measurement of my audio interface.

Now, we’ll allow audio to flow through the unit’s filtering, but with the high and low-pass filters swept to their maximums, and all the graph filters set to 0 dB.

The low and high-pass filters are still definitely having an effect in the audible range, though a minimal one. Half a decibel down at 45 Hz isn’t nothing, but it’s also pretty hard to hear.

What happens when the filters are swept to 75 Hz and 10 kHz?

The 3 dB points are about where the labeling on the knobs says they should be (with a little bit of overshoot), and the filters roll off pretty gently (about 6 dB per octave).
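For reference, that roughly 6 dB-per-octave behavior matches a textbook first-order filter, whose magnitude response is easy to compute. A quick Python check, assuming an ideal first-order high-pass (the measured unit will deviate somewhat):

```python
import math

def first_order_hpf_db(f, fc):
    """Magnitude response (dB) of an ideal first-order high-pass
    filter with cutoff frequency fc. Rolls off at 6 dB per octave."""
    ratio = f / fc
    return 20 * math.log10(ratio / math.sqrt(1 + ratio ** 2))

# At the cutoff frequency, the response is down about 3 dB...
print(round(first_order_hpf_db(75, 75), 2))    # -3.01
# ...and an octave below the cutoff, it is down roughly another 6 dB.
print(round(first_order_hpf_db(37.5, 75), 2))  # -6.99
```

The same math, mirrored, applies to the low-pass side: about 3 dB down at the indicated frequency, gently falling away from there.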

Let’s sweep the filters out again, and make a small cut at 500 Hz.

Interestingly, the filter doesn’t seem to be located exactly where the faceplate says it should be – it’s about 40% of a third-octave space away from the indicated frequency center, if the trace is accurate in itself.

What if we drop the 500 Hz filter all the way down, and superimpose the new trace on the old one?

The filter might look a bit wider than what you expected, with easily measurable effects happening at a full octave below the selected frequency. Even so, that’s pretty selective compared to lots of wide-ranging, “ultra musical” EQ implementations you might run into.

What happens when we yank down two filters that are right next to each other?

There’s an interesting ripple between the cuts, amounting to a little bit less than 1 dB.

How about one of the classic graphic EQ abuses? Here’s a smiley-face curve:

Want to destroy all semblance of headroom in an audio system? It’s easy! Just kill the level of the frequency range that’s easiest to hear and most efficient to reproduce, then complain that the system has no power. No problem! :Rolls Eyes:

Here’s another EQ abuse, alternately called “Death To 100” or “I Was Too Cheap To Buy A Crossover:”

It could be worse, true, but…really? It’s not a true substitute for having the correct tool in the first place.


The Pro-Audio Guide For People Who Know Nothing About Pro-Audio, Part 3

Onward to the microphone preamp…or trim.


“Signals at mic-level may require large, positive gain changes to correctly drive downstream electronics, and so a jack that can be connected to a microphone preamp is needed in that case.”

Read the whole thing, free, at Schwilly Family Musicians.


But WHY Can’t You Fix Acoustics With EQ?

EQ is meant for fixing a different set of problems.


A distinct inevitability is that someone will be in a “tough room.” They will look at the vast array of equalization functionalities offered by modern, digital, sound-reinforcement tools, and they will say, “Why can’t I fix the space?”

Here’s the deal. A difficult room – that is, one with environmental attributes that make our job harder – is a time problem in the acoustical domain. EQ, on the other hand, is a tool for changing frequency magnitude in the electronic domain.

When it comes right down to it, bad acoustics is just shorthand for “A sound arrived at a listener multiple times, and it was unpleasant.” A noise was made, and part of its energy traveled directly to somebody’s ears. Some other part of its energy splattered off a wall, ceiling, or floor…or a combination of all of those, at least once, and then also arrived at somebody’s ears. Maybe a lot of the high-frequency information was absorbed, causing the combined result to be a muddy garble. Of course, all the transients getting smeared around didn’t help much, either. It gets even more fun when a sound is created, and bounces around, and some of it goes into a monitor system, and gets made again, and bounces around, and some of it goes to the FOH PA, and gets made AGAIN, and bounces around, and all of that gets back into the microphone, and so that sound gets generated again, except at a different level and frequency response, and…

How’s an EQ going to fix that? I mean, really fix it?

You might be able to dig out a hole in the system’s response that compensates for annoying frequency buildup. If the room causes a big, wide bump at 250 Hz, dialing that out of the PA in correct proportion will certainly help a bit. It’s a very reasonable thing to do, and we engage in such an exercise on a regular basis.

But all the EQ did was change the magnitude response of the PA. Sure, equalization uses time to precipitate frequency-dependent gain changes, but it doesn’t do a thing in relation to environmental time issues. The noise from the PA is still bouncing madly off of a myriad of surfaces. It’s still arriving at the listener multiple times. The transients are still smeared. The information in the electronic domain got turned into acoustical information (by necessity), and at that point, the EQ stopped having any direct influence at all.
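One way to see why gain can’t fix a time problem: even a single delayed reflection creates comb filtering, and while an EQ can raise or lower the overall curve, the peaks and nulls it would need to undo shift with frequency in a way no practical bank of filters can track. A toy Python calculation, assuming one reflection with a made-up delay and strength:

```python
import cmath
import math

def comb_magnitude_db(f, delay_s, reflection_gain=0.7):
    """Magnitude (dB) of a direct sound summed with one delayed,
    attenuated reflection of itself, at frequency f."""
    h = 1 + reflection_gain * cmath.exp(-2j * math.pi * f * delay_s)
    return 20 * math.log10(abs(h))

# A single 5 ms reflection puts a null every 200 Hz (1 / 0.005 s).
print(round(comb_magnitude_db(100, 0.005), 1))  # deep null at 100 Hz
print(round(comb_magnitude_db(200, 0.005), 1))  # peak at 200 Hz
```

A real room hands you hundreds of these reflections at once, each with its own delay, and the resulting ripple also changes with every seat in the house. That is not something a magnitude-only tool can undo.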

You can’t use EQ to fix the problem. You can alleviate frequency-related effects of the acoustical nightmare you have on your hands, but an actual solution involves changing the behavior of the room. Your EQ is not inserted across the environment, nor can it be, so recognize what it can and can’t do.


What’s Next?

I don’t know, but we’re probably not going to blow the lid off of audio in general.


I get extremely suspicious when somebody claims to have solved a fundamental problem in audio. Mostly, this is because all the basic gremlins have been thoroughly killed and dried. It’s also because sonic physics is a system of laws that tolerate zero BS. (When somebody claims that they have a breakthrough technology that sounds great by way of spraying sound like a leaky hose, I know they are full of something brown and stinky.)

Modern audio is what I would definitely call a mature technology. In mature technologies, the bedrock science of the technology’s behavior is very well understood. The apparent breakthroughs, then, come when another technology allows a top-shelf behavior to be made available to the masses, or when it creates an opportunity to make a theoretical idea a reality.

A great example is the two-way, fullrange loudspeaker. They’re better than they have ever been. Anyone who remembers wrestling Peavey SP2 TI boxes is almost tearfully grateful to have small, light, loud enclosures available for a rock-bottom price. Obviously, there have been advances. We’ve figured out how to make loudspeaker drivers more efficient and more reliable. Commercially viable neodymium magnets give us the same field strength for less mass. The constant-directivity horn (and its refined derivatives) has delivered improved pattern control.

These are important developments!

Yet, the unit, as an overall object, would be entirely recognizable to someone magically transported to us from three decades in the past. The rules are the same. You’ve got a cone driver in a box, and a compression driver mated to a horn. The cone driver has certain characteristics which the main box has to be built around. It’s not as though we’ve evolved to exotic, crystalline sound-emitters that work by magic.

The palpable improvements aren’t really to do with audio, in a direct sense. They have to do with miniaturization, computerization, and commoditization. An active loudspeaker in the 21st century is likely to sound better than a 1980s or 1990s unit, not because it’s a completely different technology, but because the manufacturer can design, test, tune, and package the product as a bundle of known (and very carefully controlled) quantities. When a manufacturer ships a passive loudspeaker, there’s a lot that they just can’t know – and can’t even do. Stuff everything into the enclosure, and the situation changes dramatically. You know exactly what the amplifier and the driver are going to do to each other. You know exactly how much excursion that LF driver will endure, and you can limit the amplifier at exactly the point that gives maximum performance without damage. You can use steeper crossover slopes to (maybe) cross that HF driver a little lower, improving linearity in the intelligibility zone. You can precisely line up the drivers in time. You can EQ the whole business within an inch of its life.

Again, that’s not because the basic idea got better. It’s because we can put high-speed computation and high-powered amplification in a small space, for (relatively) cheap. Everything I’ve described above has been possible to do for a long time. It’s just that it wasn’t feasible to package it for the masses. You either had to do it externally and expensively, shipping a large, complicated product package to an educated end user…or just let the customer do whatever, and hope for the best.

I can’t say that I have an angle on what the next big jump will be for audio. I’m even skeptical on whether there will be another major leap. I’m excited for more features to become more affordable, though, so I’ll keep looking for those gear catalogs to arrive in the mail.


The Pro-Audio Guide For People Who Know Nothing About Pro-Audio, Part 1

A series I’m starting on Schwilly Family Musicians.


From the article:

“The fundamental key to all audio production is that we MUST have sound information in the form of electricity. Certain instruments, like synthesizers and sample players, don’t produce any actual sound at all; they go straight to producing electricity.

For actual sound, though, we have to perform a conversion, or “transduction.” Transduction, especially input transduction, is THE most important part of audio production. If the conversion from sound to electricity is poor, nothing happening down the line will be able to fully compensate.”


Read the whole thing here, for free!


EQ Propagation

The question of where to EQ is, of course, tied inextricably to what to EQ.


On occasion, I get the opportunity to guest-lecture to live-sound students. When things go the way I want them to, the students get a chance to experience the dialing up of monitor world (or part of it). One of the inevitable and important questions that arises is, “Why did you reach for the channel EQ when you were solving that one problem, but then use the EQ across the bus for this other problem?”

I’ve been able to give good answers to those questions, but I’ve also wanted to offer better explanations. I think I’ve finally hit upon an elegant way to describe my decision-making process regarding which EQ I use to solve different problems. It turns out that everything comes down to the primary “propagation direction” that I want for a given EQ change:

Effectively speaking, equalization on an input propagates downstream to all outputs. Equalization on an output effectively propagates upstream to all inputs.


What I’ve just said is, admittedly, rather abstract. That being so, let’s take a look at it concretely.

Let’s say we’re in the process of dialing up monitor world. It’s one of those all-too-rare occasions where we get the chance to measure the output of our wedges and apply an appropriate tuning. That equalization is applied across the appropriate bus. What we’re trying to do is equalize the box itself, so we can get acoustical output that follows a “reference curve.” (I advocate for a flat reference curve, myself.)

It might seem counter-intuitive, but if we’re going to tune the wedge electronically, what we actually have to do is transform all of the INPUTS to the box. Changing the loudspeaker itself to get our preferred reference curve would be ideal, but also very difficult. So, we use an EQ across a system output to change all the signals traveling to the wedge, counteracting the filtering that the drivers and enclosure impose on whatever makes it to them. If the monitor is making everything too crisp (for example), the “output” EQ lets us effectively dial high-frequency information out of every input traveling to the wedge.

Now, we put the signal from a microphone into one of our wedges. It starts off sounding generally good, although the channel in question is a vocal and we can tell there’s too much energy in the deep, low-frequency area. To fix the problem, we apply equalization to the microphone’s channel – the input. We want the exact change we’ve made to apply to every monitor that the channel might be sent to, and EQ across an input effectively transforms all the outputs that signal might arrive at.

There’s certainly nothing to stop us from going to each output EQ and pulling down the LF, but:

1) If we have a lot of mixes to work with, that’s pretty tedious, even with copy and paste, and…

2) We’ve now pushed away from our desired reference curve for the wedges, potentially robbing desired low-end information from inputs that would benefit from it. A ton of bottom isn’t necessary for vocals on deck, but what if somebody wants bass guitar? Or kick?

It makes much more sense to make the change at the channel if we can.

This also applies to the mud and midrange feedback weirdness that tends to pile up as one channel gets routed to multiple monitors. The problems aren’t necessarily the result of individual wedges being tuned badly. Rather, they are the result of multiple tunings interacting in a way that’s “wrong” for one particular mic at one particular location. What we need, then, is to EQ our input. The change then propagates to all the outputs, creating an overall solution with relative ease (and, again, we haven’t carved up each individual monitor’s curve into something that sounds weird in the process).
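The propagation idea can be reduced to a toy model. The names and dB values below are invented, and no real console exposes an API like this, but the arithmetic is the same: a per-band EQ gain on an input adds to every path leaving that input, while a gain on an output adds to every path arriving there.

```python
def path_gain_db(input_eq_db, send_db, output_eq_db):
    """Net gain (at one frequency band) along one input -> output path.
    In dB, the stages along the path simply add."""
    return input_eq_db + send_db + output_eq_db

inputs = {"vocal": 0.0, "bass": 0.0}      # channel EQ at some band, dB
outputs = {"wedge1": 0.0, "wedge2": 0.0}  # bus EQ at the same band, dB

# Cut 6 dB of low end on the vocal CHANNEL: the change follows the
# vocal into every wedge, but the bass channel is untouched.
inputs["vocal"] = -6.0
print(path_gain_db(inputs["vocal"], 0.0, outputs["wedge1"]))  # -6.0
print(path_gain_db(inputs["vocal"], 0.0, outputs["wedge2"]))  # -6.0
print(path_gain_db(inputs["bass"], 0.0, outputs["wedge1"]))   # 0.0

# Cut 3 dB on the wedge1 BUS instead: every input arriving at wedge1
# is affected, while wedge2 keeps its tuning.
inputs["vocal"] = 0.0
outputs["wedge1"] = -3.0
print(path_gain_db(inputs["bass"], 0.0, outputs["wedge1"]))   # -3.0
print(path_gain_db(inputs["bass"], 0.0, outputs["wedge2"]))   # 0.0
```

Input changes flow downstream to every connected output; output changes flow upstream to every connected input. The model is trivial, but it is exactly the decision rule in action.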

The same idea applies to FOH. If the whole mix seems “out of whack,” then a change to the main EQ effectively tweaks all the inputs to fix the offending frequency range.

So, when it’s time to grab an EQ, think about which way you want your changes to flow. Changes to inputs flow to all the connected outputs. Changes to outputs flow to all connected inputs.