Tag Archives: Mixing

Case Study: FX When FOH Is Also Monitor World

Two reverbs can help you square certain circles.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.

Want to use this image for something else? Great! Click it for the link to a high-res or resolution-independent version.

The Video

The Script

Let’s say that a band has a new mixing console – one of those “digital rigs in a box” that have come on the scene. The musicians call you in because they need some help getting their monitors dialed up. At some point, the players ask for effects in the monitors: The vocals are too dry, and some reverb would be nice.

So, you crank up an FX send with a reverb inserted on the appropriate bus – and nothing happens.

You then remember that this is meant to be a basic setup, with one console handling both FOH and monitors. Your inputs from the band use pre-fader sends for monitor world, but post-fader sends for FX. Since you weren’t building a mix for FOH, all your faders were all the way down. You don’t know where they would be for a real FOH mix, anyway. If the faders are down, a post-fader send can’t get any signal to an FX bus.

Now, you typically don’t want the monitors to track every level tweak made for FOH, but you DO want the FX sends to be dependent on fader position – otherwise, the “wet-to-dry” ratio would change with every fader adjustment.

So, what do you do?

You can square the circle if you can change the pre/ post send configuration to the FX buses, AND if you can also have two reverbs.

Reverb One becomes the monitor reverb. The sends to that reverb are configured to be pre-fader, so that you don’t have to guess at a fader level. The sends from the reverb return channel should also be pre-fader, so that the monitor reverb doesn’t end up in the main mix.

Reverb Two is then set up as the FOH reverb. The sends to this reverb from the channels are configured as post-fader. Reverb Two, unlike Reverb One, should have output that's dependent on channel fader position. Reverb Two is, of course, kept out of the monitor mixes.
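The routing above can be written down as a toy model. This is purely illustrative Python — the function and the gain values are invented for the example, not any console's actual API — but it shows why a pre-fader send survives a closed fader, while a post-fader send both tracks the fader and keeps the wet-to-dry ratio pinned:

```python
def send_signal(source_level, fader_gain, send_gain, pre_fader=True):
    """Level arriving at a bus from one channel's send (linear gains)."""
    tapped = source_level if pre_fader else source_level * fader_gain
    return tapped * send_gain

vocal = 1.0   # nominal channel input level
fader = 0.0   # FOH fader fully down while working on monitors

# Reverb One (monitors): the pre-fader send still receives signal.
to_monitor_verb = send_signal(vocal, fader, 0.5, pre_fader=True)
print(to_monitor_verb)  # 0.5 -> monitor reverb works with the fader down

# Reverb Two (FOH): the post-fader send stays silent until the fader rises.
to_foh_verb = send_signal(vocal, fader, 0.5, pre_fader=False)
print(to_foh_verb)      # 0.0 -> FOH reverb correctly tracks the fader

# Post-fader also keeps the wet-to-dry ratio constant as the fader moves:
for f in (0.25, 0.5, 1.0):
    wet = send_signal(vocal, f, 0.5, pre_fader=False)
    dry = vocal * f
    print(wet / dry)    # 0.5 every time
```

The same arithmetic explains the original problem: with the fader at zero, the post-fader product is zero no matter how far the send is cranked.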

With a setup like this, you don’t need to know the FOH mix in advance in order to dial up FX in the monitors. There is the small downside of having to chew up two FX processors, but that’s not a huge problem if it means getting the players what they need for the best performance.


The Difference Between The Record And The Show

Why is it that the live mix and the album mix end up being done differently?


Jason Knoell runs H2Audio in Utah, and he recently sent me a question that essentially boils down to this: If you have a band with some recordings, you can play those recordings over the same PA in the same room as the upcoming show. Why is it that the live mix of that band, in that room, with that PA might not come together in the same way as the recording – the very recording that you just played over the rig? Why would you NOT end up with the same relationship between the drums and guitars, or the guitars and the vocals, or [insert another sonic relationship here]?

This is one of those questions where trying to address every tiny little detail isn’t practical. I will, however, try to get into the major factors I can readily identify. Please note that I’m ignoring room acoustics, as those are a common factor between a recording and a live performance being played into the same space.

Magnitude

It’s very likely that the recording you just pumped out over FOH (Front Of House) had a very large amount of separation between the various sources. Sure, the band might have recorded the songs in such a way as to all be together in one room, but even then, the “bleed” factor is very likely to be much smaller than what you get in a live environment. For instance, a band that’s in a single-room recording environment can be set up with gobos (go-betweens) screening the amps and drums. The players can also be physically arranged so that any particular mic has everything else approaching the element from off-axis.

They also probably recorded using headphones for monitors, and overdubbed the “keeper” vocals. They may also have gone for extreme separation and overdubbed EVERYTHING after putting down some basics.

Contrast this with a typical stage, where we’re blasting away with wedge loudspeakers, we have no gobos to speak of, and all the backline is pointed at the sensitive angles of the vocal mics. Effectively, everything is getting into everything else. Even if we oversimplify and look only at the relative magnitudes between sounds, it’s possible to recognize that there’s a much smaller degree of source-to-source distinctiveness. The band’s signals have been smashed together, and even if we “get on the gas” with the vocals, we might also be effectively pushing up part of the drumkit, or the guitars.

Time

Along with magnitude, we also have a time problem. With as much bleed as is likely in play, the oh-so-critical transients that help create vocal and musical intelligibility are very, very smeared. We might have a piece of backline, or a vocal, “arriving” at the listener several times over in quick succession. The recording, on the other hand, has far more sharply defined “timing information.” This can very likely lead to a requirement that vocals and lead parts be mixed rather hotter live than they would be otherwise. That is, I’m convinced that a “conservation of factors” situation exists: If we lose separation cues that come from timing, the only way to make up the deficit is through volume separation.

A factor that can make the timing problems even worse is those wedge monitors we're using, combined with the PA handling reproduction out front. Not only are all the different sources getting into each other at different times, but sources being run at high gain also arrive at their own mics several times at significant level (until the loop decay becomes large enough to render the arrivals inaudible). This further “blurs” the timing information we’re working with.

Processing Limits

Because live audio happens in a loop that is partially closed, we can be rather more constrained in what we can do to a signal. For instance, it may be that the optimal choice for vocal separation would simply be a +3 dB, one-octave wide filter at 1 kHz. Unfortunately, that may also be the portion of the loop’s bandwidth that is on the verge of spiraling out of control like a jet with a meth-addicted Pomeranian at the controls. So, again, we can’t get exactly the same mix with the same factors. We might have to actually cut 1 kHz and just give the rest of the signal a big push.
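The 1 kHz example can be put into rough numbers. This is a deliberately simplified model — the gain figures are invented, and real loop gain varies with frequency, distance, and polar patterns — but it captures the constraint: the loop only has to reach unity (0 dB of net gain) at one frequency for things to spiral.

```python
def loop_gain_db(channel_gain_db, coupling_db, eq_boost_db=0.0):
    """Net gain around the mic -> console -> speaker -> mic loop, in dB."""
    return channel_gain_db + coupling_db + eq_boost_db

# Suppose the loop sits at -2 dB at 1 kHz before any channel EQ:
before = loop_gain_db(channel_gain_db=30.0, coupling_db=-32.0)
print(before)  # -2.0: stable, but with very little headroom

# The "optimal" +3 dB boost at 1 kHz pushes the loop past unity:
after = loop_gain_db(30.0, -32.0, eq_boost_db=3.0)
print(after)   # 1.0: at or above 0 dB, feedback builds instead of decaying
```

A studio mixer never hits this wall, because playback can't re-enter its own microphone; in a partially closed loop, the "right" EQ move can be exactly the one you can't afford.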

Also, the acoustical contribution of the band limits the effectiveness of our processing. On the recording, a certain amount of compression on the snare might be very effective; all we hear is the playback with that exact dynamics solution applied. With everything live in the room, however, we hear two things: the reproduction with compression, and the original, acoustic sound without any compression at all. In every situation where the in-room sound is a significant factor, what we’re really doing is parallel compression/ EQ/ gating/ etc. Even our mutes are parallel – the band doesn’t simply drop into silence if we close all the channels.
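The "everything is parallel" point reduces to simple addition. The levels below are invented toy numbers, but the structure is right: the room hears the acoustic source plus the processed reproduction, so dynamics applied at the console only ever act on part of the total.

```python
def limiter(level, ceiling):
    """Crude peak limiter: clamp the reproduced level to a ceiling."""
    return min(level, ceiling)

snare_acoustic = 0.6           # what the drum puts into the room by itself
snare_pa = limiter(1.0, 0.4)   # the console channel, heavily limited

heard = snare_acoustic + snare_pa
print(heard)  # 1.0 -> the acoustic path bypasses the limiter entirely

# Even a mute is parallel: closing the channel leaves the drum audible.
print(snare_acoustic + limiter(1.0, 0.0))  # 0.6
```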


Try as we might, live-sound humans can rarely exert the same amount of control over audio reproduction that a studio engineer has. In general, we are far more at the mercy of our environment. It’s very often impractical for us to simply duplicate the album mix and receive the same result (only louder).

But that’s just part of the fun, if you think about it.


What A Mixing Console Isn’t

Magically turning a band into something else isn’t what we’re here to do.


I’m working on a new video, but it’s taking a while due to scheduling issues. (Being busy isn’t a bad thing, but still…) I figured I should put something up here to prove that I haven’t forgotten this site in the meantime.

So, in regard to a picture of a sophisticated mixing console: The device depicted is not a tool for fixing arrangement problems or interpersonal conflicts.

There, that should stir the pot a little. 🙂


Sounding “Good” Everywhere

This is actually about studio issues, but hey…


My latest article for Schwilly Family Musicians has to do with the recorded side of life. Even so, I thought some of you might be interested:

‘Even before the age of smartphones, “translation” was a big issue for folks making records. The question that was constantly asked was, “How do I make this tune sound good everywhere?”

In my mind, that’s the wrong question.

The real question is, “Does this mix continue to make sense, even if the playback system has major limitations?”’


Read the whole piece here.


Monitor-World Is Not A Junior-Level Position

Mixing monitors is a mission-critical task, not an “add-on” to FOH.


Worrying about Front Of House (FOH) doesn’t keep me up at night. Monitor-world, on the other hand…

It’s not just because an issue at FOH is much easier to hear, and thus much easier to correct swiftly and in detail. (Although that’s part of it.) It’s not just because midstream communication regarding monitor needs is difficult – exponentially so as the detail-level of a request rises. (Although that’s part of it, too.)

It’s because getting the monitors right is absolutely crucial to a successful show. If monitor-world isn’t doing its best, the musicians won’t be able to do their best, and if they can’t do their best, the most stupenfuciously awesome-sauce FOH mix will be a mix of musicians WHO ARE STRUGGLING. I don’t want to be forced to choose, but if I am compelled, I will take incredible monitors and mediocre FOH without hesitation.

Every day of the week.

And twice on Sunday.

Yet, for some reason, there has been a tendency to elevate the FOH audio human’s position above that of the monitor engineer. It’s as if there are two species of noise louderizer in the world, Homo Sapiens Mixus Audienceus and Homo Sapiens Musicius Keepem-Happyus, with the latter being an underdeveloped version of the former. Well, that’s a load of droppings from an angry, male cow if ever there was such a thing.

For FOH, you basically mix one show, a show that, as I mentioned, you yourself hear in detail. You generally get to make decisions unilaterally, and your path to those decisions is through your own interpretation of your hearing.

In contrast, monitor-world is the mixing of many shows to multiple audiences of one (sometimes eight or more). Those shows may have wildly different needs, and with wedges, each show bleeds into and heavily influences all the other shows. There may be a subtle detail that’s driving somebody crazy which is difficult for the operator to hear. Every significant choice has to be filtered through the interpretation of another person, and nuanced communication is anywhere from challenging to outright impossible. At any given moment, you have to keep some sort of mental map about what’s going where, and also about what was recently changed (in case a problem suddenly crops up). Modifications have to be made swiftly and smoothly, and if you make a mistake, you have to be able to backtrack surgically. Panic is lethal.

To crib from The Barking Road Dog, mixing rock-and-roll monitors in realtime is not a skill possessed by a large number of people involved in the noise louderization profession.

…and then, there’s the gear side. It’s not uncommon to hear of a smaller audio provider upgrading a “point-and-shoot” FOH rig, with the old boxes being “demoted” to monitor duty. This sometimes happens by default or necessity. It’s certainly the reality in my case. But to do that intentionally doesn’t make sense to me. The boxes where being laser-flat across the audible spectrum helps stave off disaster? The boxes that have to stay “hospital clean” at high volume? The boxes that have to be able to produce large, uncompressed peaks, so that performers can “track” their own output? Those boxes are needed in monitor-land! (Seriously, if I ever get my hands on a bunch of disposable income, I’m going to bring my monitor rig UP to parity with my FOH system.)

So, no. Monitor-world is not for the intern or second-banana. The person running it is not a “junior” or “second” engineer. The gear is not the stuff that couldn’t cut the mustard at FOH.

What happens on deck is the bedrock, THE crucial and critical foundation for the show as a whole. It should be treated as such at all times.


EQ: Separating The Problems

You have to know what you’re solving if you want to solve a problem effectively.


There’s a private area on Facebook for “musicpreneurs” to hang out in. I’ve been trying to get more involved, so I’ve asked people to pose their sonic quandaries to me. One person was asking how to set up their system so as to get a certain, desired tone from their instrument.

I won’t rehash the whole answer here, but I will definitely tell you the key to my answer: Separate your problems. Figure out which “domain” a particular issue resides in, and then work within that area to find a solution.

That’s a statement that you can definitely generalize, but the particular discussion was mostly in the context of equalization. Equalization of live-audio signal chains seems to invite unfocused flailing at least as much as anything else. Somebody gets into a jam (not a jam session, but rather a difficult situation), and they start tweaking every tonal control they can get their hands on. Several minutes later, they’ve solved and unsolved several different problems, and might be happy with some part of their fix. Of course, they may have broken something else in the process.

If you’re like me, you’d prefer not to do that.

Not doing that involves being very clear about where your problem actually is.


Lots of people use the “wrong” EQ to address a perceived shortcoming with their sound. I think I’ve mentioned before that one place to find this kind of approach is with vocal processors. I’ve encountered more than one person who, as far as I could tell, was trying to fix a PA system through the processing of an individual channel. That is, at a regular gig or rehearsal, they were faced with a system that exhibited poor tonality. For instance, for whatever reason, they might have felt that the PA lacked in high-end crispness.

So, they reach down to their processor, and throw a truckload of high-frequency boost onto their voice. Problem solved!

Except they just solved the problem everywhere, even if the problem doesn’t exist everywhere. They plug that vocal processor into a rig which has been nicely tuned, and now their voice is a raspy, irritating, sand-paper-esque noise that’s constantly on the verge of hard feedback.

They used channel-specific processing to manage a system-level problem, and the result was a channel that only works with one system – or a system with one channel that sounds right, while everything else is still a mess. They found a fix, but the fix was in the wrong domain.

The converse case of this is also common. An engineer gets into a bind when listening to a channel or two, and reaches for the EQ across the main speakers. Well, no problem…except that any new solution has now been applied to EVERYTHING running through the mains. That might be helpful, or it might mean that a whole new hole has just been dug. If the PA is well-tuned, then the problem isn’t the PA. Rather, the thing to solve is specific to a channel or group of channels, and should be addressed there if possible.

If you find yourself gunning the bottom end on every channel of your console, you’d be better served by changing the main EQ instead. If everything sounds fine except for one channel, leave the main processing alone and build a fix specific to your problem-child.
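That rule of thumb can be sketched as a tiny decision helper. This is a hypothetical illustration — no real console or tool works this way — but it states the "separate your problems" heuristic plainly: figure out how widespread the issue is before deciding which EQ to touch.

```python
def eq_domain(channels_with_issue, total_channels):
    """Pick where a tonal fix belongs, based on how widespread the issue is."""
    if channels_with_issue >= total_channels:
        return "system"   # everything sounds wrong -> fix the PA tuning
    if channels_with_issue == 1:
        return "channel"  # one problem child -> fix that channel only
    return "group"        # a related subset -> fix a group or bus EQ

print(eq_domain(16, 16))  # system
print(eq_domain(1, 16))   # channel
print(eq_domain(4, 16))   # group
```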

Obviously, there are “heat of the moment” situations where you just have to grab-n-go. At the same time, taking a minute to figure out which bridge actually has the troll living under it is a big help. Find the actual offender, correct that offender, leave everything else alone, and get better results overall.


Panning

Localization is a great idea, but it’s not my top priority at FOH.


As an FOH guy, I haven’t really given two hoots about regular stereo for many years. Since I also sit in the monitor-beach chair, though, I find stereo – or rather, multichannel output – interesting and helpful on occasion.

Why the difference?

Your Friend, Localization

Let’s start by saying that “localization” is a good thing. A listener being able to recognize a specific point in space where a particular sound comes from is very useful when many sounds are happening together. It increases perceived clarity and/ or intelligibility; instead of hearing one giant sound that has to be picked apart, it’s far more mentally apparent that multiple sounds are combining into a whole.

When localization gets tossed out the window, volume and tone are pretty much all you have available for differentiation of sources. This can lead to a volume war, or just high volume in general, because it’s tougher to get any particular source to really stand out. The fewer differences you have available, the bigger the remaining differences have to be in order to generate contrast.

The thing with localization, though, is that its helpfulness erodes as the consistency of its perception decreases. In other words, it’s best when the entire intended audience is getting the same experience.

Everybody Getting The Show That’s Right For Them

In monitor world, consistency of perception is generally not much of a problem. I’m basically mixing for an audience of one, multiple times over. Even with wedges and fills all banging away and bleeding into one another, we can construct a (relatively) small number of solutions that are “as right as possible” for each band member. Very nifty things are possible with enough boxes and sends. For instance, everybody in the downstage line might get two wedges. Wedge one might be just vocals, with each singer’s mic emphasized in their own mix, and the others faded into the background. Wedge two could be reserved for instruments only. With the vocals having their own position in space, they become easier to differentiate from everything else. These benefits of localization are consistent and maximized, because everybody has a solution that’s built for just them (and then balanced with all the other solutions happening on deck).
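The two-wedge example can be sketched as a send matrix. The names and gain values here are invented for illustration; the point is that each wedge is its own mix, built per listener, which is exactly what "an audience of one, multiple times over" means.

```python
singers = ["ann", "ben", "cal"]  # hypothetical downstage line

def vocal_wedge_mix(owner):
    """Wedge one: vocals only, with the owner's own mic pushed forward."""
    return {s: (1.0 if s == owner else 0.4) for s in singers}

def instrument_wedge_mix():
    """Wedge two: instruments only, no vocals at all."""
    return {"kick": 0.7, "bass": 0.6, "keys": 0.5}

# Three "shows," each built for an audience of one:
for owner in singers:
    mix = vocal_wedge_mix(owner)
    assert mix[owner] == max(mix.values())  # each singer leads their own wedge

print(vocal_wedge_mix("ann"))  # {'ann': 1.0, 'ben': 0.4, 'cal': 0.4}
```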

So, that’s monitor world. Do you see the potential problem with FOH?

In monitor world, assuming I have the resources, I get to hit each listener with at least one box each.

At FOH, I have to hit MANY listeners in many positions with only a few localized boxes in total. (A PA can be built of arrayed speakers, of course, but you generally don’t separately perceive each element in an array.)

This creates a consistency problem. The folks sitting right down the center of the venue are usually in a great position to hear all the localized boxes. Start getting significantly off to one side or another, though, and that begins to fall apart. More and more, one “side” of the PA tends to get emphasized as the audible, direct source, with the other side dropping off. If different channels are significantly panned around, then, the panning can be a large contributor to different people getting a very different, and possibly incorrect “solution.”

It’s not that the people in the center ever get exactly the same show as the people off to the sides anyway; it’s that trying to mix in stereo can make that difference even bigger.

As much as is practicable, I want to be mixing the same show for everybody in the seats. That means that each speaker/ array/ side is producing the same show. (Now, if I get to have a dedicated center box or array that hits everybody equally and lets me localize vocals, well, that’s something.)

Another reason that I don’t generally expend energy on stereo mixing for FOH is because the stage tends to work against me. In plenty of cases, a particular source on deck is VERY audible, even with the PA, and basically seems to be localized in the center. This tends to collapse any stereo effect that might be going on, unless the PA gets wound up enough to be far louder than the on-stage source. Quite often, that amount of volume would be overwhelming to the people in the seats.

Caveats

First, I want to make sure that I’m NOT saying that mixing a live show in stereo is “wrong.” I don’t advise it, and I generally think that it’s not the best use of limited resources, but hey – if it’s working for you, and you like it, and it’s not causing you any problems, then that’s your thing.

Also, Dave Rat is a proponent of using relatively subtle differences from one PA “side” to another to help reduce comb-filtering issues in the middle. I think that’s an astute observation and solution on his part. For me, it’s not quite worth worrying about, but maybe it is for you.


The First Rule Of FOH

It definitely isn’t “Get control over everything.”


Well, I’ve done it. I’ve gone and had my first, real disagreement on Twitter. I may be a real boy now!

The (actually very mild) dust-up occurred between myself and another engineer. He was miffed at my “Pre or Post EQ” article, because – for him – my approach was far, far too passive. His response was that the first rule of FOH is to get control over the show.

Well, I’m sorry, but I can’t agree.

First of all, Rule #1 for all audio engineering is, “First, do no harm.” This job is very much like medicine: Shut your trap, listen to the musicians, try to get to the root of the problem, treat people like human beings, and don’t rush to a diagnosis.

Second: Not everybody is like this, but the process of getting control over everything is basically installing a dictatorship. Not everybody is on board, and they may hold their tongues for a while, but a rebellion will brew.

…and, if they aren’t afraid of you, folks may do nasty things to you out of spite. Does that sound like a fun show? That sounds like a TERRIBLE show, one that flat-out sucks for you, the players, and the audience.

I’ve said this before, and I’ll say it again. Being an audio human for live shows has basically nothing to do with molding every second of the proceedings to your will. That kind of thing can (and does) happen, but I don’t see it as the normative case for folks doing shows where muting the PA doesn’t totally mute the band. That’s the vast majority of us, by the way. Rather, this gig is a sort of collaborative Judo, wherein we utilize the momentum of the band to transfer the best possible show to the audience. Forcing your way to maximum control is the opposite of that – I’ve seen it in action. Wresting control of the show away from the musicians has an overwhelming tendency to KILL their momentum.

The musicians’ momentum is what the audience came to see. In the grand scheme of things, nobody truly cares about how “fat and punchy” the drums are. Nobody truly cares about how radio-ready the vocals seem to sound. If the show momentum is off, that will be the thing that the patrons notice. They’ll be impressed by the mixing for a few moments, but they didn’t buy those tickets for that purpose.

Now, if you can get complete control and also maintain musician momentum, I’m all for it. I’m not saying you shouldn’t have full control if that’s the natural state of the show. If it’s not the natural state, though, you’re wasting a ton of energy (literally and figuratively) by swimming against the current.

Folks, it’s not “our” show. It’s the band’s show, and we are helping with it. We do get partial credit, and we may get an outsize portion of the blame, but – deep breaths, people! I’ve mixed plenty of shows that, to my mind, sounded rather poor. Some of them, in the opinions of audience members, were my fault when they really weren’t. Some of them, also in the opinions of audience members, sounded absolutely stellar (while I was grinding my teeth into fine powder over how terrible everything was). It’s okay! There are people who think I’m an idiot, but there are enough people who think the opposite that I’m not worried.

If something’s really amiss, comment on it, but don’t force your way into the captain’s chair. Interestingly, you’re far more likely to be promoted to that seat if you demonstrate an ability to collaborate with what’s already going on.


Pre Or Post EQ?

Stop agonizing and just go with post to start.


Oh, the hand-wringing.

Should the audio-human take the pre-EQ split from the amplifier, or the post-EQ split? Isn’t there more control if we choose pre-EQ? If we choose incorrectly, will we ruin the show? HELP!

Actually, I shouldn’t be so dismissive. Shows are important to people – very important, actually – and so taking some time to chew on the many and various decisions involved is a sign of respect and maturity. If you’re actually stopping to think about this, “good on ya.”

What I will not stop rolling my eyes at, though, are live-sound techs who get their underwear mis-configured over not getting a pre-EQ feed from the bass/ keys/ guitar/ whatever. Folks, let’s take a breath. Getting a post-EQ signal is generally unlikely to sink any metaphorical ship, sailboat, or inflatable canoe that we happen to be paddling. In fact, I would say that we should tend to PREFER a post-EQ direct line. Really.


First of all, if this terminology sounds mysterious, it really isn’t. You almost certainly know that “pre” means “before” and “post” means “after.” If you’re deducing, then, that setting a line-out to “pre-EQ” gets you a signal from before the EQ happens, then you’re right. You’re also right in thinking that post-EQ splits happen after all the EQ tweaking has been applied to the signal.
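The tap-point idea can be shown as a toy signal chain. This is illustrative Python, not any amplifier's real circuitry — the function names and the 0.8 gain are invented — but it makes the terminology concrete: the tap point decides whether the player's tone shaping reaches the console at all.

```python
def amp_direct_out(signal, player_eq, tap="post"):
    """Return the amp's direct-out signal for a given tap point."""
    pre_eq = signal          # the raw instrument signal
    post_eq = player_eq(pre_eq)  # after the player's tone controls
    return pre_eq if tap == "pre" else post_eq

bass_tone = lambda x: x * 0.8  # stand-in for the player's tonal choices

print(amp_direct_out(1.0, bass_tone, tap="pre"))   # 1.0: EQ never reaches FOH
print(amp_direct_out(1.0, bass_tone, tap="post"))  # 0.8: the player's tone arrives
```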

And I think we should generally be comfortable with, and even gravitate toward getting our feed to the console from a point which has the EQ applied.

1) It’s consistent with lots of other things we do. Have you ever mic’ed a guitar amp? A drum? A vocalist? Of course you have. In all of those cases (and many others), you are effectively getting a post-EQ signal. Whether the tone controls are electronic, related to tuning, or just part of how someone sings, you are still subject to how those tonal choices are playing out. So, why are you willing to cut people the slack to make choices that affect your signal when it’s a mic that’s involved, but not a direct line?

2) There’s no reason to be afraid of letting people dial up an overall sound that they want. In fact, if it makes it easier on you, the audio-human, why would that be a bad thing? I’ve been in situations where a player was trying desperately to get their monitor mix to sound right, but was having to fight with an unfamiliar set of tone controls (a parametric EQ) through an engineer. It very well might have gone much faster to just have given the musician a good amount of level through their send, and then let them turn their own rig’s knobs until they felt happy. You can do that with a post-EQ line.

3) Along the same track, what if the player changes their EQ from song to song? What if there are FX going in and out that appear at the post-EQ split, but not from the pre-EQ option? Why throw all that work out the window, just to have “more control” at the console? That sounds like a huge waste of time and effort to me.

4) In any venue of even somewhat reasonable size, having pre-EQ control over the sound from an amplifier doesn’t mean as much as you think it might. If the player does call up a completely horrific, pants-wettingly terrible tone, the chances are that the amplifier is going to be making a LOT of that odious racket anyway. If the music is even somewhat loud, using your sweetly-tweaked, pre-EQ signal to blast over the caterwauling will just be overwhelming to the audience.

Ladies and gents, as I say over and over, we don’t have to fix everything – especially not by default. If we have the option, let’s trust the musicians and go post-EQ as our first attempt. If things turn out badly, toggling the switch takes seconds. (And even taking the other option might not be enough to fix things, so take some deep breaths.) If things go well, we get to ride the momentum of what the players are doing instead of swimming upstream. I say that’s a win.


Virtually Unusable Soundcheck

Virtual soundchecks are a neat idea, but in reality they have lots of limitations.


Before we dive in to anything, let’s go over what I’m not saying:

I’m not saying that virtual soundchecks can never be useful in any situation.

I’m not saying that you shouldn’t try them out.

I’m not saying that you’re dumb for using them if you’re using them.

What I am definitely saying, though, is that the virtual soundcheck is of limited usefulness to folks working in small rooms.

What The Heck Is A Virtual Soundcheck?

A virtual soundcheck starts with a recording. This recording is a multitrack capture of the band playing “live,” using all the same mics and DI boxes as would be set up for the show. The multitrack is then fed, channel-per-channel, into a live-sound console. The idea is that the audio-human can tweak everything to their heart’s delight, without having the band on deck for hours at a time. The promise is that you can dial up those EQs, compressors, and FX blends “just so,” maybe even while sitting at home.
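The channel-per-channel idea described above can be pictured as a patch swap. This is a purely hypothetical sketch (the names and structure are mine, not any real console’s firmware): the channel processing stays put, and only each channel’s input source flips from the stage mic or DI to the matching multitrack return.

```python
# Per-channel input sources: the live stage patch and the matching
# multitrack playback returns (hypothetical three-channel example).
STAGE = {1: "kick mic", 2: "snare mic", 3: "vocal mic"}
PLAYBACK = {1: "track 1", 2: "track 2", 3: "track 3"}

def patch(use_playback):
    """Return the per-channel input patch for a live or virtual soundcheck."""
    source = PLAYBACK if use_playback else STAGE
    return {ch: source[ch] for ch in STAGE}

live = patch(False)     # channels fed from the stage
virtual = patch(True)   # same channels, now fed from the recording
```

All the EQ, compression, and FX on each channel are untouched by the swap; that is exactly what makes the promise attractive.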

This is a great idea. Brilliant, even.

But it’s flawed.

Flaw 1: Home is not where the show is.

It may be possible to make your headphones or studio monitors sound like a live venue. You may even be able to use a convolution reverb to make a playback system in one space sound almost exactly like a PA system in another space. Unless you go to that trouble, though, you’re mixing for a different “target” than what will actually be in play during the show. Roughing things in with a virtual soundcheck is entirely possible, even on a playback rig that isn’t tailored to the real thing, but spending hours on tiny details isn’t worth it. In the end, you’re still going to have to mix the concert in the real space, for that EXACT, real space. You can’t get around that entirely.

As such, a virtual soundcheck might as well be done in the venue it concerns, using the audio rig deployed for the show.

Flaw 2: Live audio is not an open loop.

A virtual soundcheck removes one of the major difficulties involved in live audio: it opens the feedback loop. Because everything is driven from playback that the system output can’t directly affect, the exercise is immune to many of the oddities and pitfalls inherent in mics and speakers that “talk” to each other. A playback-based shakedown might lead an operator to believe that they can crank up the total gain applied to a channel with impunity, but physics will ALWAYS throw the book at you for trying to bend the rules.
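The physics the paragraph above alludes to can be shown with a toy loop-gain model. This is a hypothetical sketch (my own simplification, not a real acoustics simulation): each trip around the mic-to-speaker-to-room-to-mic path multiplies a transient by the total loop gain. Below unity, the sound dies away; at or above unity, it regenerates into a howl. Playback-only checks never exercise this loop at all.

```python
def regeneration(loop_gain, passes=10, start_level=1.0):
    """Level of an initial transient after repeated trips around the loop."""
    levels = [start_level]
    for _ in range(passes):
        levels.append(levels[-1] * loop_gain)
    return levels

safe = regeneration(0.7)   # decays toward silence with each pass
howl = regeneration(1.1)   # grows without bound -- runaway feedback
```

The scary part is how small the margin is: the difference between a stable mix and a squeal is just the loop gain crossing 1.0, and a virtual soundcheck tells you nothing about where that line sits in the real room.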

The further implication is that “going offline” is about as helpful to the process of mixing wedge monitors as a house stuffed with meth-addled meerkats. In-ears are a different story, but a huge part of getting wedges right is knowing exactly what you can and cannot pull off for that band in that space. Knowing what you can get away with requires having the feedback loop factored in, but a virtual check deletes the loop entirely.

Flaw 3: We’re not going to be listening to only the sound rig.

As I’ve been mentioning here, over and over, anybody who has ever heard a real band in a real room knows that real bands make a LOT of noise. Even acoustic shows can have very large “stage wash” components to their total acoustical output. A virtual soundcheck means that the band isn’t there to make noise, and so your mix gets built without taking that into account. The problem is that, in small venues, taking the band’s acoustical contribution into account is critical.

And yes, you could certainly set up the feeds so that monitor-world also gets fed – but that still doesn’t fully fix the issue. Drummers and players of amplified instruments have a lot to say, even before the roar of monitor loudspeakers gets added. This is even true for “unplugged” shows. If the PA isn’t supposed to be drowningly loud, you might be surprised at just how well an acoustic guitar can carry.


As I said before, the whole idea is not useless. You can certainly get something out of playback. You might be able to chase down some weird rattle or other artifact from an instrument that you couldn’t find when everything was banging away in realtime. Virtual soundchecks also become much more helpful when you’re in a big space, with a big PA that’s going to be – far and away – the loudest thing that the audience is listening to.

For those of us in smaller spaces, though, the value of dialing up a simulation is pretty small. For my part, the whole point of soundcheck is to get THE band and THE backline ready for THE show in THE room with THE monitors and THE Front-Of-House system. In my situation, a virtual soundcheck does none of that.