Tag Archives: Routing

A Plan For Delays

I think this should probably work. Maybe.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.

Want to use this image for something else? Great! Click it for the link to a high-res or resolution-independent version.

Last year, I did a show at Gallivan Plaza that really ought to have had delays, but didn’t. As a result, the folks sitting on the upper tiers of lawn didn’t get quite as much volume as they would have liked. This year, I intend to try to fix that problem. Of course, deploying delays is NOT as simple as saying “we’ll just deploy delays.” There’s a bit of doing involved, and I figured I would set out my mental process here, before actually having a go.

Then, after all is said and done, we can review. Exciting, no?

So, here’s the idea:

A) Set primary FOH as a “double-hung” system. Cluster the subs down center, prep to put vocals through the inner pair of full-range boxes, and prep to send everything else to the outer pair. Drive the main PA with L/R output.

B) Have the FOH tent sit on the concrete pad about 60 feet from the stage.

C) Place the delays at roughly an 80-foot distance. The PA's SPL in full space at that point is expected to be down about 28 dB from the close-range (3 feet/ 1 meter) SPL.
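For the curious, that 28 dB figure falls out of simple inverse-square (free-field) math. A quick sketch, assuming plain 20·log10 distance attenuation and a 3-foot reference (the function name is mine):

```python
import math

def spl_drop_db(distance_ft, ref_distance_ft=3.0):
    """Free-field (inverse-square) SPL drop relative to a close-range reference."""
    return 20 * math.log10(distance_ft / ref_distance_ft)

print(round(spl_drop_db(80.0), 1))  # ~28.5 dB down at the 80-foot line
```

Real-world propagation over an audience won't be perfectly free-field, so treat this as a starting estimate rather than gospel.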

D) Place a mic directly in front of one side of the main PA, and another mic in the center of the audience space, at the 80-foot line. (The propagation time to the delays will be slightly different depending on where people sit, so a center position should be a decent compromise.) Using both mics, record an impulse being reproduced only by the main PA. Analyze the recording to find the delay between the mics.
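One way to pull the delay time out of such a two-mic recording is cross-correlation. A minimal sketch with NumPy, using synthetic impulses as stand-ins for the real recordings (the function and variable names are mine, not from any particular analysis package):

```python
import numpy as np

def find_delay_samples(near_mic, far_mic):
    """Lag (in samples) of far_mic relative to near_mic, via cross-correlation."""
    corr = np.correlate(far_mic, near_mic, mode="full")
    return int(np.argmax(corr)) - (len(near_mic) - 1)

# Toy stand-in recordings: the impulse reaches the far mic 48 samples later.
fs = 48000
near = np.zeros(1024); near[100] = 1.0
far = np.zeros(1024); far[148] = 1.0
lag = find_delay_samples(near, far)
print(lag, lag / fs * 1000)  # 48 samples, 1.0 ms
```

With real recordings you'd correlate the whole impulse captures and convert the sample lag to milliseconds for the console's delay parameter.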

E) Send L/R to Matrix 1, assign Matrix 1 to an output, then apply the measured delay to that output. Connect the output to the delays. Also, consider blending the subwoofer feed into Matrix 1 if necessary.

F) Set an initial drive level to the delays so that their SPL is +6 dB compared to the output of the main PA at that distance. For listeners in front of the delay speakers, the added volume should help mask phase errors, because the contribution from the main PA becomes much less significant…but the added volume may also be a problem for people sitting between the delays and the main PA. “Seasoning to taste” will be necessary. (For people sitting between the main PA and the delays, the time correction actually makes the delays seem MORE out of alignment, not less, so the delays being more audible there is a problem.)

So, there you go! I’ll let everybody know how this works. Or how it doesn’t.


A Weird LFE Routing Solution

Getting creative to obtain more bottom end.

This is another one of those case studies where you get to see how strange my mind is. As such, be aware that it may not be applicable to you at all. I had a bit of a conundrum, and I solved it in a creative way. Some folks might call it “too creative.”

Maybe those people are boring.

Or they’re reasonable and I’m a little nuts.

Anyway.

I’ve previously mentioned that I handle the audio at my church. We’ve recently added some light percussion to complement our bass-guitar situation, and there was a point where our previous worship leader/ music director wanted more thump. That is, low frequency material that was audible AND a bit “tactile.” In any case, the amount of bass we had happening wasn’t really satisfying.

Part of our problem was how I use system limiting. I’ve long nursed a habit of using a very aggressive limiter across the main mix bus as a “stop the volume here” utility. I decide how loud I want to get (which is really not very loud on Sundays), set the dynamics across the output such that we can’t get any louder, and then smack that processor with a good deal of signal. I’ve gotten to a point where I can get it right most of the time, and “put the band in a box” in terms of volume. Drive the vocals hard and they stay on top, while not jumping out and tearing anyone’s face off when the singers push harder.

At the relatively quiet volume levels that we run things, though, this presents a problem for LF content. To get that extended low-frequency effect that can be oh-so-satisfying, you need to be able to run the bass frequencies rather hotter than everything else. The limiter, though, puts a stop to that. If you’re already hitting the threshold with midrange and high-frequency information, you don’t have anywhere to go.

So, what can you do?

For a while, we took the route of patching into the house system’s subwoofer drive “line.” I would run (effectively) unlimited aux-fed subs to that line, while keeping the mains in check as normal, and we got what we wanted.

But it was a bit of a pain, as patching to the house system required unpatching some of their frontend, pulling an amp partially out of a cabinet, doing our thing, and then reversing the process at the end. I’m not opposed to work, but I like “easy” when I can get it. I eventually came to the conclusion that I didn’t really need the house subs.

This was because:

1) We were far, far below the maximum output capacity of our main speakers.

2) Our main speakers were entirely capable of producing content between 50 – 100 Hz at the level I needed for people to feel the low end a little bit. (Not a lot, just a touch.)

If we hadn’t had significant headroom, we would have been sunk. Low Frequency Effects (LFE) require significant power, as I said before. If my artificial headroom reduction was close to the actual maximum output of the system, finding a way around it for bass frequencies wouldn’t have done much. Also, I had to be realistic about what we could get. A full-range, pro-audio box with a 15″ or 12″ LF driver can do the “thump” range at low to moderate volumes without too much trouble. Asking for a bunch of building-rattling boom, which is what you get below about 50 Hz, is not really in line with what such an enclosure can deliver.

With those concerns handled, I simply had to solve a routing problem. For all intents and purposes, I had to create a multiband limiter that was bypassed in the low-frequency band. If you look at the diagram above, that’s what I did.

I now have one bus which is filtered to pass content at 100 Hz and above. It gets the same, super-aggressive limiter as it’s always had.

I also have a separate bus for LFE. That bus is filtered to restrict its information to the range between 50 Hz and 100 Hz, with no limiter included in the path.

Those two buses are then combined into the console’s main output bus.
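Conceptually, the routing above behaves like the sketch below: one path is high-passed at 100 Hz and limited, the other is band-passed 50 – 100 Hz with no limiter, and the two are summed. This is only an illustration using SciPy filters and a crude hard clip standing in for the real bus limiter; the filter orders and the clip ceiling are arbitrary choices of mine:

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 48000
hp_sos = butter(4, 100, btype="highpass", fs=fs, output="sos")
bp_sos = butter(4, [50, 100], btype="bandpass", fs=fs, output="sos")

def hard_limit(x, ceiling=0.25):
    """Crude brickwall stand-in for the aggressive main-bus limiter."""
    return np.clip(x, -ceiling, ceiling)

def main_output(x):
    smashed = hard_limit(sosfilt(hp_sos, x))  # 100 Hz and up, hit hard
    lfe = sosfilt(bp_sos, x)                  # 50-100 Hz, unlimited
    return smashed + lfe                      # recombined into the main bus
```

A real limiter has attack and release behavior that a bare clip doesn't, but the routing topology (limit one band, bypass the other, sum) is the point here.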

With this configuration, I can “get on the gas” with low end, while retaining my smashing and smooshing of midrange content. I can have a little bit of fun with percussion and bass, while retaining a small, self-contained system that’s easy to patch. I would certainly not recommend this as a general-purpose solution, but hey – it fits my needs for now.


Maybe The Only Way Out Is “Thru”

Out may be “thru,” but “thru” usually isn’t out.

The labeling of jacks and connections is an inexact science.

Really.

For instance, there are audio devices with “in” and “out” jacks where you can connect a source to either point and be just fine. It might be confusing, though, to have two areas labeled “input” (or even “parallel input”), so one jack gets picked to be “in,” with the other as its opposite.

At some point, you just get used to this kind of thing. You trundle along happily, connecting things together without a care in the world.

…and then, somebody asks you a question, and you have to think about what you’re doing. Just why is that jack labeled as it is? You’re taking signal from that connector and sending it somewhere else, so that’s “out,” right? Why is it labeled “through” or “thru,” then?

The best way I can put it to you is this: Usually, when a manufacturer takes the trouble to label something as “thru,” what appears on that connector is the input signal, having gone through the minimum necessary electronics to make the connection practical and easy to use. A label that reads “out” may be a signal that passed through a lot of electronics, or it may be a “thru” that’s simply been called something that’s easier to understand.

“Thru,” From Simple To Complicated

[Diagram: a direct-wired thru]

That up there is a simplified depiction of the simplest possible “thru.” It’s two connection points with nothing but some sort of conductive link between them; the device’s internal electronics tap off that same connection. In this kind of thru, you might see male and female jacks on the different points (if the connections are XLR), but the reality is that both connectors can work for incoming or outgoing signals. Put electricity on either jack, and the simple conductors between those jacks ensure that the signal is present on the other connection point.

This kind of thru is very common on passive loudspeakers and a good many DI boxes. You might see a connector that says “in,” and one that says “out,” but they’re really a parallel setup that feeds both an internal pathway and the “jumper” to the other connector. Because the electrical arrangement is truly parallel, the upstream device driving the signal lines sees the impedance of each connected unit simultaneously. This leads to a total impedance DROP as more units are connected; more electrical pathways are available, which means lower opposition to current overall.
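The impedance math here is just parallel resistors. A tiny sketch (the 10 kΩ figure is only an example input impedance, not a spec for any particular box):

```python
def parallel_impedance(impedances):
    """Total load seen by the source when inputs hang in parallel on a thru chain."""
    return 1.0 / sum(1.0 / z for z in impedances)

# Three hypothetical 10 kOhm inputs daisy-chained via passive thrus:
print(round(parallel_impedance([10_000, 10_000, 10_000])))  # 3333 ohms
```

Each additional unit on the chain drops the total further, which is exactly why very long passive chains eventually load the source down.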

[Diagram: a buffered thru]

So, what’s this, then?

This is a buffered thru. In this case, the two jacks are NOT interchangeable. One connector is meant to receive a signal that gets passed on to internal electronics. That connector is linked to a jack with outgoing signal, but in between them is a gain stage (such as an op-amp). The gain stage probably is not meant to perform meaningful voltage amplification on the input. If two volts RMS show up at the input, two volts RMS should be present at the output. The idea is to use that gain stage as an impedance buffer. The op-amp presents a very high input impedance to the upstream signal source, which makes the line easy to drive. That is, the buffer amp makes the input impedance of the next device “invisible” to the upstream signal provider. A very long chain of devices is made possible by this setup, because significant signal loss due to dropping impedance is prevented.

(Then again, the noise floor does go up as each gain stage feeds another. There’s no free lunch.)

In this case, you no longer have a parallel connection between devices. You instead have a serial connection from buffer amp to buffer amp.

[Diagram: a thru with intervening logic]

The most sophisticated kind of thru (that I know of) is a connection that has intervening logic. There can be several gradations of complexity on that front, and a “thru” with logic isn’t something that you tend to see in audio-signal applications. It’s more for connection networks that involve data, like MIDI, DMX, and computing. The logic may be very simple, like the basic inversion of the output of an opto-isolator. It can also be more complex, like receiving an input signal and then making a whole new copy of that signal to transmit down the chain.

A connection this complex might not really seem like a “thru,” but the point remains that what’s available at the send connection is meant to be, as much as possible, the original signal that was present at the receive connection…or a new signal that behaves identically to the original.

Moving Out

So, if all of the above is “thru,” what is “out?”

In my experience, the point of an “out” is to deliver a signal that’s intended to be noticeably transformed in some way by internal processing. For instance, with a mixing console, an input signal has probably gone through (at the very least) an EQ section and a summing amplifier. It’s entirely possible to route the signal in such a way that an input is basically transferred straight through, but that’s not really what the signal path is for.

With connection jacks, the label doesn’t always tell you exactly what’s going on. There might be a whole lot happening, or there might be almost nothing at all between the input and output side. You have to look at your owner’s manual – or pop open an access cover – to find out.


A Vocal Group Can Be Very Helpful

Microsurgery is great, but sometimes you need a sledgehammer.

Folks tend to get set in their ways, and I’m no exception. For ages, I have resisted doing a lot of “grouping” or “busing” in a live context, leaving such things for the times when I’ve been putting together a studio mix. I think this stems from wanting maximum flexibility, disliking the idea of hacking at an EQ that affects lots of inputs, and just generally being in a small-venue context.

Stems. Ha! Funny, because that’s a term that’s used for submixes that feed a larger mix. Submixes that are derived from grouping/ busing tracks together. SEE WHAT I DID THERE?

I’m in an odd mood today.

Anyway…

See, in a small-venue context, you don’t often get to mix in the same way as you would for a recording. It’s often not much help to, say, bus the guitars and bass together into a “tonal backline” group. It’s not usually useful because getting a proper mix solution so commonly comes down to pushing individual channels – or just bits of those channels – into cohesion with the acoustic contribution that’s already in the room with you. That is, I rarely need to create a bed for the vocals to sit in that I can carefully and subtly re-blend on a moment’s notice. No…what I usually need to do is work on the filling in of individual pieces of a mix in an individual way. One guitar might have its fader down just far enough that the contribution from the PA is inaudible (but not so far down that I can’t quickly push a solo over the top), while the other guitar is very much a part of the FOH mix at all times.

The bass might be another issue entirely.

Anyway, I don’t need to bus things together for that. There’s no point. What I need to do for each channel is so individualized that a subgroup is redundant. Just push ’em all through the main mix, one at a time, and there you go. I don’t have to babysit the overall guitar/ bass backline level – I probably have plenty already, and my main problem is getting the vocals over the whole thing anyway.

The same overall reasoning works if you’ve only got one vocal mic. There’s no reason to chew up a submix bus with one vocal channel – I mean, there’s nothing there to “group.” It’s one channel. However, there are some very good reasons to bus multiple vocal inputs into one signal line, especially if you’re working in a small venue. It’s a little embarrassing that it’s taken me so long to embrace this thinking, but hey…here we are NOW, so let’s go!

The Efficient Killing Of Feedback Monsters

I’m convinced that a big part of the small venue life is the running of vocal mics at relatively high “loop gain.” That is, by virtue of being physically close to the FOH PA (not to mention being in an enclosed and often reflective space) your vocal mics “hear” a lot more of themselves than they might otherwise. As such, you very quickly can find yourself in a situation where the vocal sound is getting “ringy,” “weird,” “squirrely,” or even into full-on sustained feedback.

A great way to fight back is a vocal group with a flexible EQ across the group signal.

As I said, I’ve resisted this for years. Part of the resistance came from not having a console that could readily insert an EQ across a group. (I can’t figure out why the manufacturer didn’t allow for it. It seems like an incredibly bizarre limitation to put on a digital mixer.) Another bit of my resistance came from not wanting to do the whole “hack up the house graph” routine. I’ve prided myself on having a workflow where the channel with the problem gets a surgical fix, and everything else is left untouched. I think it’s actually a pretty good mentality overall, but there’s a point where a guy finally recognizes that he’s sacrificing results on the altar of ideology.

Anwyay, the point is that a vocals-only subgroup with an EQ is a pretty good (if not really good) compromise. When you’ve got a bunch of open vocal mics on deck, the ringing in the resonant acoustical circuit that I like to call “real music in a real room” is often a composite problem. If all the mics are relatively close in overall gain, then hunting around for the one vocal channel that’s the biggest problem is just busywork. All of them together are the problem, so you may as well work on a fix that’s all of them together. Ultra-granular control over individual sources is a great thing, and I applaud it, but pulling 4 kHz (or whatever) down a couple of dB on five individual channels is a waste of time.

You might as well just put all those potential problem-children into one signal pipe, pull your offending frequency out of the whole shebang, and be done with the problem in a snap. (Yup, I’m preaching to myself with this one.)

The Efficient Addition Of FX Seasoning

Now, you don’t always want every single vocal channel to have the same amount of reverb, or delay, or whatever else you might end up using. I definitely get that.

But sometimes you do.

So, instead of setting multiple aux sends to the same level, why not just bus all the vocals together, set a pleasing wet/ dry mix level on the FX processor, and be done? Yes, there are a number of situations where you should NOT do this: If you need FX in FOH and monitor world, then you definitely need a separate, 100% “wet” FX channel. (Even better is having separate FX for monitor world, but that’s a whole other topic.) Also, if you can’t easily bypass the FX chain between songs, you’ll want to go the traditional route of “aux to FX to mutable return channel.”

Even so, if the fast and easy way will work appropriately, you might as well go the fast and easy way.

Compress To Impress

Yet another reason to bus a bunch of vocals together is to deal with the whole issue of “when one guy sings, it’s in the right place, but when they all do a chorus it’s overwhelming.” You can handle the issue manually, of course, but you can also use compression on the vocal group to free your attention for other things. Just set the compressor to hold the big, loud choruses down to a comfortable level, and you’ll be most of the way (if not all the way) there.

In my own case, I have a super-variable brickwall limiter on my full-range output, a limiter that I use as an overall “keep the PA at a sane level” control. A strategy that’s worked very well for me over the last while is to set that limiter’s threshold as low as I can possibly get away with…and then HAMMER the limiter with my vocal channels. The overall level of the PA stays in the smallest box possible, while vocal intelligibility remains pretty decent.

Even if you don’t have the processing flexibility that my mix rig does, you can still achieve essentially the same thing by using compression on your vocal group. Just be aware that setting the threshold too low can cause you to push into feedback territory as you “fight” the compressor. You have to find the happy medium between letting too little and too much level through.

Busing your vocals into a subgroup can be a very handy thing for live-audio humans to do. It’s surprising that it’s taken me so long to truly embrace it as a technique, but hey – we’re all learning as we go, right?


The Order Matters

Getting your signal chain sorted out is key – especially when monitor world and FOH come together.

Sometimes, you have to do things that “break the rules.”

Audio-humans internalize a lot of pointers as they learn their craft, and those tactics are often in place for very good reasons. When a given way of making things happen has survived for decades, it’s usually because it’s either a really good idea, or we just haven’t found a way around it yet. The problem that arises, though, is that a lot of techs don’t know the “deep roots” of why certain signal flows are as they are.

For instance, just about everybody knows that a gate should – 99% of the time – be placed pre-compression. Not everybody can verbalize the “why” of that rule, though. The “deep root” of the rule is that dynamic range expansion (gating) works more effectively as the dynamic range of an input signal increases. The less of a level difference that you have between the material you want gated out and the material you want to keep, the less able you are to cause the gate to discriminate between the two. Compressing a signal at some point that’s pre-gate is just working against yourself, because compression is dynamic range reduction.
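You can see that “deep root” numerically. In the toy example below, a 4:1 compressor with a low threshold shrinks a 40 dB gap between the noise floor and the wanted signal down to 10 dB, leaving the gate far less room to discriminate (all the numbers and the static-curve model are invented for illustration):

```python
import math

def db(x):
    return 20 * math.log10(x)

def compress(x, threshold_db=-40.0, ratio=4.0):
    """Static-curve compressor: levels above threshold are reduced by the ratio."""
    level = db(x)
    if level <= threshold_db:
        return x
    out_db = threshold_db + (level - threshold_db) / ratio
    return 10 ** (out_db / 20)

noise, signal = 0.01, 1.0                      # -40 dBFS floor, 0 dBFS signal
gap_before = db(signal) - db(noise)            # ~40 dB: easy for a gate
gap_after = db(compress(signal)) - db(compress(noise))
print(gap_before, gap_after)                   # roughly 40 vs 10
```

Gate first and you get the full 40 dB to set your threshold in; compress first and you’re trying to find a threshold inside a 10 dB window.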

But I digress.

The point of this article isn’t to get into every kind of signal flow arrangement. The idea here is to relate an anecdote that shows why I had to “break some rules” recently. It was all in the name of getting FOH (Front Of House) and monitor world to play nicely.

FX Out Front and On Deck

As I was soundchecking a band, one of the players expressed a request to have reverb on his instrument. He also specifically requested that the reverb be routed to the monitors.

Here’s where the trouble can start.

See, “everybody knows” that reverbs are fed from post-fader sends. Most of the time, this is the right thing to do. You use the send to create a reverb proportionality, and if you end up pushing the channel level around, the proportionality stays the same. If the fader goes up 6 dB, the reverb level goes up 6 dB – the wet/ dry mix remains as it was set. That’s a good thing.

Except when it isn’t.

The problem in the “Curious Case of a Reverb That’s Going to FOH and Monitor World” is that you DON’T want the reverb level to track with level changes out front. If it does, then the wet/ dry blend on deck can go all over the place during the show. This is especially true in small venues, where an instrument may be completely “out” until a solo, at which point you drive the level up into audible territory. That could mean an effective dynamic range of 80 dB or more. Possibly a lot more.

Obviously, appearing and disappearing reverb isn’t what the gents on stage are after. As a result, the “post-fader sends to FX” rule has to go out the window, because it’s no longer appropriate. Instead, the reverb has to be run from a pre-fader send. As long as you don’t fiddle with your preamp gain, the reverb level will be unaffected by what you’re doing out front.
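The difference is easy to see as plain arithmetic in dB. A toy sketch (the function names are mine, and the levels and gains are made up):

```python
# Send levels in dB; "input_level" is the channel's post-preamp level.
def post_fader_send(input_level, fader_gain_db, send_gain_db):
    return input_level + fader_gain_db + send_gain_db   # rides along with the fader

def pre_fader_send(input_level, fader_gain_db, send_gain_db):
    return input_level + send_gain_db                   # ignores the fader entirely

# Push the fader up 12 dB for a solo:
print(post_fader_send(-20, 0, -6), post_fader_send(-20, 12, -6))  # -26 then -14
print(pre_fader_send(-20, 0, -6), pre_fader_send(-20, 12, -6))    # -26 then -26
```

The post-fader send jumps 12 dB with the solo push; the pre-fader send to the reverb stays put, which is exactly what the folks on deck want.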

Or will it?

The other thing you have to be aware of is where that pre-fader send lives in relation to your channel EQ. If you have something bizarre going on with the channel EQ for FOH (and you very well might), and that pre-fader send takes a split AFTER the EQ, your reverb may sound awfully strange.

What To Do, What To Do?

The first thing that you have to do is prioritize. In most cases, making a consistent blend “easy” for monitor world should come before making FOH easy. (There’s probably a whole article to be written about this, but the short version is that you can often hear, and act on, issues in FOH faster than issues on deck.)

The next thing to do is to figure out what you need for that prioritization to be fulfilled. In this case, I needed reverb that was driven from a pre-fader, pre-EQ signal. I also needed the “wet” audio from the reverb to be independently routable to FOH and the monitor wedges. Making this happen for me is no problem, because I run a console with insanely flexible routing. I can actually use “subchannels” within channels to pass audio “around” processors, and any channel can send to or receive from any other channel. I also have the built-in option to run sends pre or post any channel processing.

But, what if you don’t have all that?

Heck, what if you don’t have completely separate sets of channels for FOH and monitor land?

You can still make this happen. Take a look:

The “half-jacked” insert lets you mult (split) the original signal over to the reverb. At the same time, the signal continues to flow through the FOH channel and its monitor sends. You can then take the reverb processor’s output, put that in a different channel, and use the pre-fader sends to get reverb to monitor world. The reverb channel’s fader output can then be blended into FOH as necessary.

With this kind of setup, you can go hog-wild with your FOH levels, and monitor world won’t be directly affected. There are other ways of accomplishing this, of course, but this setup is one of the simpler ones.

Yes, this is a bit more complicated than what you might think of “off the cuff,” but it lets you have what you need out front without compromising what the folks on deck can have for themselves. I think it’s worth doing if you have the channels, and it’s not that hard to adapt to your own needs…

…you just have to remember that “the order matters.”


Rusted Moose Live Broadcast

Check out live music from Utah at AMR.fm.

When you leave a large land-based mammal out in the rain, you might just end up with a Rusted Moose. The stream is scheduled to begin at 7:00 PM, Utah local time (MST). The stream will be accessible through AMR.fm.

…and yes, we are definitely aware of the issues that cropped up with last week’s show. A live broadcast of a show that’s also live (to an actual audience in the room) is a thing with many moving parts, and we failed to nail down one of those moving parts. Specifically, we never positively determined what the broadcast feed was “listening” to – and wouldn’t you know, the feed was listening to the laptop’s built-in microphone.

Yowza.

I should write an article about all this sometime. 🙂


Offline Measurement

Accessible recording gear means you don’t have to measure “live” if you don’t want to.

I’m not an audio ninja. If you make a subtle change to a system EQ while the system is having pink noise run through it, I may not be able to tell that you’ve made a change at all, and I certainly may not be able to tell you how wide or deep a filter you used. At the same time, I very much recognize the value of pink noise as an input to analysis systems.

“Wait! WAIT!,” I can hear you shouting, “What the heck are you talking about?”

I’m talking about measuring things. Objectivity. Using tools to figure out – to whatever extent is possible – exactly what is going on with an audio system. Audio humans use function and noise generators for measurement because of their predictability. For instance, unlike a recording of a song, I know that pink noise has equal power per octave and all audible frequencies present at any given moment. (White noise has equal power PER FREQUENCY, which means that each octave has twice as much power as the previous octave.)
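You can verify the white-noise claim numerically: since white noise has flat power per frequency, each octave band spans twice the bandwidth of the one below it, and therefore holds roughly twice the power. A quick NumPy check (seed and durations are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 48000
x = rng.standard_normal(fs * 10)          # 10 seconds of white noise

spectrum = np.abs(np.fft.rfft(x)) ** 2
freqs = np.fft.rfftfreq(len(x), 1 / fs)

def octave_power(lo_hz):
    """Total power in the octave band [lo_hz, 2 * lo_hz)."""
    band = (freqs >= lo_hz) & (freqs < 2 * lo_hz)
    return spectrum[band].sum()

# Each octave of white noise holds roughly twice the power of the octave below:
for lo in (125, 250, 500, 1000):
    print(lo, round(octave_power(2 * lo) / octave_power(lo), 2))  # all near 2.0
```

Pink noise, by contrast, would make every one of those ratios come out near 1.0, which is exactly the “equal power per octave” property that makes it so handy for analysis.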

If that paragraph sounded a little foreign to you, then don’t panic. Audio analysis is a GINORMOUS topic, with lots of pitfalls and blind corners. At the same time, I have a special place in my heart for objective measurement of audio devices. I get the “warm-n-fuzzies” for measurement traces because they are, in my mind, a tool for directly opposing a lot of the false mythology and bogus claims encountered in the business of sound.

Anyway.

Measurement is a great tool for dialing in live-sound rigs of all sorts. Because of its objectivity (assuming you actually use your measurement system correctly), it helps to calibrate your ears. You can look at a trace, listen to what something generating that trace sounds like, and have a reference point to work from. If you have a tendency to carve giant holes in a PA system’s frequency response when tuning by ear, measurement can help tame your overzealousness. If you’re not quite sure where that annoying, harsh, grating, high-mid peak is, measurement can help you find it and fix it.

…and one of the coolest things that I’ve discovered in recent years is that you don’t necessarily have to measure a system “live.” Offline measurement and tuning is much more possible than it ever has been before – mostly because digital tech has made recording so accessible.

How It Used To Be And Often Still Is

Back in the day, it was relatively expensive (as well as rather space-intensive and weight-intensive) to bring recording capabilities along with a PA system. Compact recording devices had limited capabilities, especially in terms of editing. Splicing tape while wrangling a PA wasn’t something that was going to happen.

As a result, if you wanted to tune a PA with the help of some kind of analyzer, you had to actually run a signal through the PA, into a measurement mic, and into the analysis device.

The sound you were measuring had to be audible. Very audible, actually, because test signals have to drown out the ambient noise in the room to be really usable. If the measurement mic picks up sounds other than the test signal, your measurement’s accuracy is corrupted.

So, if you were using noise, the upshot was that you and everybody else in the room had to listen to a rather unpleasant blast of sound for as long as it took to get a reference tuning in place. It’s not much fun (unless you’re the person doing the work), and you can’t do it everywhere. Even when using a system that can take inputs other than noise, you still had to measure and make your adjustments “live,” with an audible signal in the room.

Taking A Different Route

The beautiful thing about today’s technology is that we have alternatives. In some cases, you might prefer to do a “fully live” tuning of a PA system or monitor rig – but if you’d prefer a different approach, it’s entirely possible.

It’s all because of how easy recording is, really.

The thing is, an audio-analysis system doesn’t really care where its input comes from. An analyzer isn’t bothered about whether its information is coming from a live measurement mic, or from a recording of what came out of that measurement mic. All the analyzer knows is that some signal is being presented to it.

If you’re working with a single-input analyzer, offline measurement and tuning is basically about getting the “housekeeping” right:

  1. Run your measurement signal to the analyzer, without any intervening EQ or other processing. If that signal is supposed to give you a “flat” measurement trace, then make sure it does. You need a reference point that you can trust.
  2. Now, disconnect the signal from the analyzer and route that same measurement signal through the audio device(s) that you want to test. This includes the measurement mic if you’re working on something that produces acoustical output – like monitor wedges or an FOH (Front Of House) PA. The actual thing that delivers the signal to be captured and analyzed is the “device-under-test.” For the rest of this article, I’m effectively assuming that the device-under-test is a measurement mic.
  3. Connect the output of the device-under-test to something that can record the signal.
  4. Record at least several seconds of your test signal passing through what you want to analyze. I recommend getting at least 30 seconds of recorded audio. Remember that the measurement-signal to ambient-noise ratio needs to be pretty high – ideally, you shouldn’t be able to hear ambient noise when your test signal is running.
  5. If at all possible, find a way to loop the playback of your measurement recording. This will let you work without having to restart the playback all the time.
  6. Run the measurement recording through the signal chain that you will use to process the audio in a live setting.
  7. Send the output of that signal chain to the analyzer, but do NOT actually send the output to the PA or monitor rig.

Because the recorded measurement isn’t being sent to the “acoustical endpoints” (the loudspeakers) of your FOH PA or monitor rig, you don’t have to listen to loud noise while you adjust. As you make changes to, say, your system EQ, you’ll see the analyzer react. Get a curve that you’re comfortable with, and then you can reconnect your amps and speakers for a reality check. (Getting a reality check of what you just did in silence is VERY important – doubly so if you made drastic changes somewhere.)
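To make the single-input case concrete, here’s a minimal sketch of what an analyzer does with a recorded measurement: average the magnitude spectra of many overlapping windows. Everything here is invented for the example – the “recording” is synthetic noise standing in for your captured test signal – and numpy is assumed to be available.

```python
# Sketch of offline, single-input spectrum averaging. The "recording" below
# is a synthetic stand-in for a measurement you captured through the
# device-under-test; numpy is the only dependency.
import numpy as np

def averaged_spectrum(measurement, sample_rate, fft_size=4096):
    """Average the magnitude spectra of overlapping windows, which smooths
    out the frame-to-frame randomness of a noise-based test signal."""
    hop = fft_size // 2
    window = np.hanning(fft_size)
    frames = [
        measurement[start:start + fft_size] * window
        for start in range(0, len(measurement) - fft_size, hop)
    ]
    mags = np.abs(np.fft.rfft(frames, axis=1))
    avg = mags.mean(axis=0)
    freqs = np.fft.rfftfreq(fft_size, d=1.0 / sample_rate)
    return freqs, 20 * np.log10(avg + 1e-12)  # dB, floored to avoid log(0)

# Stand-in "recording": 30 seconds of white noise at 48 kHz.
rng = np.random.default_rng(0)
recording = rng.standard_normal(48000 * 30)
freqs, db = averaged_spectrum(recording, 48000)
```

With 30 seconds of material, the averaging produces a trace that barely wiggles – which is exactly the stability you want when you’re judging the effect of an EQ change.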

Dual-FFT

So, all of that up there is fine and good, but…what if you’re not working with a simple, single-input analyzer? What if you’re using a dual-FFT system like SMAART, EASERA, or Visual Analyzer?

Well, you can still do offline measurement, but things get a touch more complicated.

A dual-FFT (or “transfer function”) analysis system works by comparing a reference signal to a measurement signal. For offline measurement to work with comparative analysis, you have to be able to play back a copy of the EXACT signal that you’ll be using for measurement. You also have to be able to play that signal in sync with your measurement recording, but on a separate channel.

For me, the easiest way to accomplish this is to have a pre-recorded (as opposed to “live generated”) test signal. I set things up so that I can record the device-under-test while playing back the test signal through that device. For example, I could have the pre-recorded test signal on channel one, connect my measurement device so that it’s set to record on channel two, hit “record,” and be off to the races.
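The comparison at the heart of a dual-FFT analyzer can be sketched in a few lines: per frequency bin, divide what was measured by what was sent, with averaging across windows to reject uncorrelated noise. This is a toy illustration, not any particular product’s internals – the “device” here is simulated as a simple level drop, and all names and values are invented.

```python
# Toy dual-FFT ("transfer function") measurement with numpy: per bin, the
# analyzer estimates measurement / reference. Averaged cross-spectra are
# used so that uncorrelated noise averages toward zero.
import numpy as np

def transfer_function(reference, measurement, fft_size=2048):
    """Estimate H(f) via averaged cross-spectrum over reference power."""
    hop = fft_size // 2
    window = np.hanning(fft_size)
    num = np.zeros(fft_size // 2 + 1, dtype=complex)
    den = np.zeros(fft_size // 2 + 1)
    for start in range(0, len(reference) - fft_size, hop):
        r = np.fft.rfft(reference[start:start + fft_size] * window)
        m = np.fft.rfft(measurement[start:start + fft_size] * window)
        num += np.conj(r) * m      # accumulated cross-spectrum
        den += np.abs(r) ** 2      # accumulated reference power
    return num / (den + 1e-20)

rng = np.random.default_rng(1)
ref = rng.standard_normal(48000 * 10)   # 10 seconds of test noise
meas = 0.5 * ref                        # "device" is just a 6 dB pad here
H = transfer_function(ref, meas)
gain_db = 20 * np.log10(np.abs(H))      # flat trace at about -6 dB
```

In a real offline measurement, `ref` would be your pre-recorded test signal on one channel and `meas` the synced recording of the device-under-test on the other.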

There is an additional wrinkle, though – time-alignment. Dual-FFT analyzers give skewed results if the measurement signal is early or late when compared to the reference signal, because, as far as the analyzer is concerned, the measurement signal is diverging from the reference. Of course, any measured signal is going to diverge from the reference, but you don’t want unnecessary divergence to corrupt the analysis. The problem, though, is that your test signal takes time to travel from the loudspeaker to the measurement microphone. The measurement recording, when compared to the reference recording, is inherently “late” because of this propagation delay.

Systems like SMAART and EASERA have a way of doing automatic delay compensation in a quick and painless way, but Visual Analyzer doesn’t. If your software doesn’t have an internal method for delay compensation, you’ll need to do it manually. This means:

  1. Preparing a test signal that includes an audible click, pop, or other transient that tells you where the signal starts.
  2. After recording the measurement signal, using that click or pop to line up the measurement recording with the test signal in time. The more accurate the “sync,” the more stable your measurement trace will be.

If you’d rather not make your own test signal, you’re welcome to download and use this one. The “click” at the beginning is several cycles of a 2 kHz tone.
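If you’d rather not eyeball the click in an editor, one approach is to cross-correlate the two recordings and read off the lag with the strongest match. This isn’t necessarily how SMAART or EASERA do it internally – it’s just a workable manual method, and the signals below are synthetic stand-ins for your reference and measurement recordings.

```python
# Find how many samples late the measurement recording is, relative to the
# reference, via FFT-based cross-correlation. Signals here are synthetic.
import numpy as np

def find_delay_samples(reference, measurement):
    """Return the lag (in samples) of the best match between the two."""
    n = len(reference) + len(measurement) - 1
    size = 1 << (n - 1).bit_length()          # next power of two for the FFT
    R = np.fft.rfft(reference, size)
    M = np.fft.rfft(measurement, size)
    corr = np.fft.irfft(M * np.conj(R), size)
    lag = int(np.argmax(corr))
    return lag if lag < size // 2 else lag - size  # unwrap negative lags

sr = 48000
click = np.sin(2 * np.pi * 2000 * np.arange(int(sr * 0.005)) / sr)  # 5 ms of 2 kHz
ref = np.zeros(sr)
ref[1000:1000 + len(click)] = click
meas = np.roll(ref, 480)                  # pretend a 10 ms propagation delay
print(find_delay_samples(ref, meas))      # → 480
```

Divide the result by the sample rate to get the delay in seconds – 480 samples at 48 kHz is the 10 ms we faked above.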

The bottom line is that you can certainly do “live” measurements if you want to, but you also have the option of capturing your measurement for “silent” tweaking. It’s ultimately about doing what’s best for your particular application…and remembering to do that “reality check” listen of your modifications, of course.


Mixing A Live Album: Drums

In a rock mix, you may find yourself “really turnin’ the knobs” when it comes to the drums.



Mixing A Live Album: Guitar

Sometimes, making something sound big means reducing dynamic range and narrowing the overall frequency response.



Split Monitor For The Little Guy

You don’t have to be in the big-leagues of production to get big-league functionality.


So, I’ve already talked a bit about why “split monitoring” is a nifty idea. Independent signal paths for FOH and monitor world let you give the folks onstage what they want, while also giving FOH what you want – and without having to directly force either area’s decisions on the other.

…but, how to set this up?

Traditional split-monitor setups are usually accomplished with a (relatively) expensive onstage split. Individual mic lines are connected to the stagebox, which then “mults” the signal into at least two cable trunks. This can be as simple as bog-standard parallel wiring – like you can find in any “Y” cable – or it can be a more complex affair with isolation transformers.

While you can definitely use a splitter snake or stagebox to accomplish the separation of FOH from monitor world, the expense, weight, and hassle may not really be worth it. Traditional splitters are usually built with the assumption that there will be separate operators for FOH and monitor world, and that these operators will also be physically separated. As a result, the cable trunks tend to be different lengths. Also, those same cables are made of a lot of expensive copper and jacketing material, and the stagebox internals can be even more spendy.

Now, if you actually need the functionality of a full-blown splitter snake, you should definitely invest in one. However, if you just want to get in on the advantages of a split monitor configuration, what you really need to do is shift your spending to console functionality and connectivity.

General Principles

Whether you implement a split monitor solution via analog or digital means, there are some universally applicable particulars to keep in mind:

  • You need to have enough channels to handle all of your inputs twice, OR you need enough channels to handle the signals that are “critical for monitoring” twice. For instance, if you never put drums in the monitors, then being able to “double up” the drum channels isn’t necessary. On the other hand, only doubling certain channels can be more confusing, especially for mixes with lots of inputs.
  • You actually DON’T need to worry about having enough pre-fader aux sends. In a split monitor configuration, post-fader monitor sends can actually be very helpful. Because you don’t have to worry about FOH fader moves changing the monitor mixes, you can run all your monitor sends post fader. This lets you use the monitor-channel fader itself as a precise global trim.
  • If the performers need FX in the monitors, you need to have a way to return the FX to both the FOH and monitor signal paths.
  • You need to be willing to take the necessary time to get comfortable with running a split monitor setup. If you’ve never done it before, it can be easy to get lost; try your first run on a very simple gig, or even a rehearsal.

With all of that managed, you can think about specific implementations.
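The routing logic behind those principles can be made concrete with a toy model: one source feeds two fully independent channels, and because the monitor sends are post-fader, the monitor fader behaves as a global trim. All channel names and gain values here are invented for illustration – no real console works in Python, obviously.

```python
# Toy model of split-monitor routing: the same mic signal feeds an FOH
# channel and a monitor channel that never affect each other.

def channel(signal, fader_gain, sends):
    """Post-fader sends: every send level is scaled by the channel fader,
    which is what lets the monitor fader act as a global trim."""
    post = signal * fader_gain
    return {dest: post * level for dest, level in sends.items()}

mic = 1.0  # unit-level stand-in for the vocal mic signal

# FOH channel feeds the mains only.
foh = channel(mic, fader_gain=0.8, sends={"mains": 1.0})

# Monitor channel: same source, its own fader, feeding two wedge mixes.
mon = channel(mic, fader_gain=1.0, sends={"wedge_1": 0.7, "wedge_2": 0.5})

# Pulling the FOH fader down changes the mains but leaves the wedges alone.
foh_quiet = channel(mic, fader_gain=0.4, sends={"mains": 1.0})
```

Nudging the monitor channel’s `fader_gain` would scale both wedge sends at once – the “precise global trim” described above – while the FOH channel stays untouched.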

Analog

To create an affordable split monitor rig with an analog console (or multiple consoles), you will need to have a way to split the output of one mic pre to both the FOH and monitor channels. You can do this by “Y” cabling the output of external pres, but external mic preamps tend to be pretty spendy. A much less expensive choice is to use the internal pres on insert-equipped consoles. Ideally, one pre should be the “driver” for each source, and the other pre should be bypassed. Whether you pick the FOH or monitor channel pre is purely a matter of choice.

Your actual mic lines will need to be connected to the “driver” pre. On most insert-equipped consoles, you can plug a TS cable into the insert jack halfway. This causes the preamp signal to appear on the cable tip, while also allowing the signal to continue flowing down the original channel. The free end of the TS cable should also be connected to the insert on the counterpart channel, but it will need to be fully inside the jack. This connects the split signal to the electronics that are downstream of the preamp.

If you are working on a single console, you will need to be extra careful with your routing. You’ll need to take care not to drive your monitor sends from FOH channels, and on the flipside, you should usually disconnect your monitor channel faders from all outputs. (If all your monitor auxes are set as pre-fader, you can connect your monitor channel faders to a subgroup to get one more mix. This costs you your “global trim” fader functionality, of course. Decisions, decisions…)

Digital

Some digital consoles allow you to create a “virtual” monitor mixer without any extra cables at all. If the digital patchbay functions let you assign one input to multiple channels, then all you have to worry about is the post-split routing. Not all digi consoles will let you do this, however. There are some digital mixers on the market that are meant to bring certain aspects of digital functionality to an essentially analog workflow, and these units will not allow you to do “strange” patching at the digital level.

As with the analog setup, if you’re using a single console you have to be careful to avoid using the monitor auxiliaries on the FOH channels. You also have to disconnect the monitor faders from all post-fade buses and subgroups – usually. Once again, if you don’t mind losing the fader-as-trim ability, setting all your monitor auxes to pre-fader and connecting the fader to a subgroup can give you one more mix.

Split-monitor setups can be powerful tools for audio rigs with a single operator. The configuration releases you from the compromises that can’t be avoided when you drive FOH and monitor land from a single channel. I definitely recommend trying split monitors if you’re excited about sound as its own discipline, and want to take your system’s functionality to the next level. Just take your time, and get used to the added complexity gradually.