Tag Archives: How-To

Not Everybody, Not All The Time

Care about everything you can, then be okay with everything else.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.

Want to use this image for something else? Great! Click it for the link to a high-res or resolution-independent version.

A letter to myself and others:

You can’t please everybody all the time.

You can try, of course, and you should. Show production is a service industry; it always has been, and it always will be. Getting the maximum number of people to be delighted with the show IS your job.

But 100% satisfaction for everybody is very difficult to get to. Somebody will always manage to sit in the seat where the PA coverage isn’t quite right. Somebody will inevitably wonder why you didn’t make Band A sound like Band B, even though Band A has made arrangement choices such that they CAN’T sound like Band B. You will never have enough subwoofer for “that one guy.” Someone is going to lecture you on how their preferred snare-drum sound is THE key to a rock mix.

There is nothing so good that someone, somewhere will not hate it. So says Pohl’s law, if the Intertubes are to be believed.

You’re going to have to make choices about what to prioritize. That’s part of sitting in any of the chairs involved in show control. By necessity, you will be making choices (many of them, at high speed) that have real – though usually ephemeral and ultimately benign – effects on the lives of a sizable number of people. You must therefore cultivate an assuredness, an appropriate level of confidence that you are doing the right thing. Beyond having a strong appreciation of personal and collective aesthetics, this confidence will be greatly bolstered by understanding the physics involved in this job. If you know what’s possible and what’s not, you will be less rattled when someone accuses you of not having done the right thing…when their right thing wasn’t a feasible thing anyway.

It’s right to take all concerns seriously, but not all concerns can be treated with the same level of seriousness. Start by making as many musicians as happy as you can. That’s your baseline. If you get the baseline done, and somebody else isn’t happy, consider if that person is writing the checks for the event. If so, working out a compromise will probably be in order. An extreme case might require that you just do as you’re told. After you get that squared away, you can start being concerned about other considerations brought to your attention. If you can take care of them without changing the happiness level of the check-writer or the players, go ahead.

If not, be polite, but don’t worry too much. Even big-dollar gigs can’t deploy enough gear to fix everything.

Do your best, have fun, and try to get as many other people to have at least as much fun as you’re having. Do maintain care for the outliers, but don’t agonize. It won’t get you anything, anyway.


Console Questions

A few simple queries can get you going on just about any console.

Back when I was in school, we were introduced to “The Four Console Questions.” The idea behind the questions was that, if you walked up to a strange mixer, you could get answers to the questions and be able to get work done. Mixing desks come in many varieties, but there aren’t very many truly different ways to build them that make sense. In any case, all the basic concepts have to essentially stay the same. If a console can’t take some number “a” of audio inputs, and route those inputs to some number “o” of outputs, you don’t have a mixing console anyway.
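That bare-bones abstraction – some number of inputs routed to some number of outputs – can be written down as a gain matrix. Here is a minimal sketch (the channel counts and gain values are entirely made up):

```python
# A mixer at its most abstract: route "a" inputs to "o" outputs
# through a gain matrix. Two inputs, two outputs, hypothetical gains.
gains = [[1.0, 0.0],   # input 1 feeds output 1 only
         [0.5, 0.5]]   # input 2 feeds both outputs at half level

def mix(inputs):
    """Sum every input into every output, scaled by the matrix."""
    return [sum(row[o] * x for row, x in zip(gains, inputs))
            for o in range(len(gains[0]))]

assert mix([1.0, 2.0]) == [2.0, 1.0]
```

Everything the rest of these questions cover – patching, processing, bus assignment – is really just deciding what sits between the rows and columns of that matrix.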

With the growing commonality of digital mix systems, I feel that the essential “console questions” need some expansion and tweaking. As such, here’s my take on the material that was presented to me over a decade and a half (GEEZE!) ago.

1. Do I Know What I Want To Do?

You might say that this isn’t a console question at all, but in truth, it’s THE most important one. If you don’t know what you want to do with the console, then knowing a bunch of information about the console’s operation won’t help you one iota. The unfortunate reality is that many people try to engage in this whole exercise backwards: they don’t know what they want to accomplish, but they figure that learning the mixer’s whys and wherefores will help them figure it out.

Certainly, learning about a new feature that you haven’t had access to previously can lead you to new techniques. However, at a bedrock level, you have to have some preconceived notion of what you want to accomplish with the tool. Do you want to get a vocal into the FOH PA? Do you want to get three electric guitars, a kazoo, and a capybara playing Tibetan singing-bowls into 12 different monitor mixes?

You have to know your application.

2. How Do I Correctly Clock The Console?

For an analog console, the answer to this is always: “No clock is required.”

For a digital rig, though, it’s very important. I recently befuddled myself for an agonizing minute over why a digital console wasn’t showing any input. Whoops! It was because I had set it to receive external clock from a master console a few weeks before, and hadn’t returned it to internal clocking now that it was on its own.

You need to know how to indicate to the console which clock source and sample rate are appropriate for the current situation.

3. How Do I Choose What Inputs Are Available To The Channels?

This is particularly important with consoles that support both on-board input and remote stageboxes. You will very likely have to pick and choose which of those options is available to an individual channel or group of channels. What you need to discover is how those selections are accomplished.

4. How Do I Connect A Particular Input To A Particular Channel?

You might think this was covered in the previous question, but it wasn’t. Your global input options aren’t the end of the story. Many consoles will let you do per-channel “soft-patching,” which is the connection of a certain available signal to a certain channel without having to change a physical connection. Whether on a remote stagebox or directly at the desk, input 1 may NOT necessarily be appearing on channel 1. You have to find out how those connections are chosen.

5. How Do I Insert Channel Processing?

In some situations, this means a physical insert connection that may be automatically enabled…or not. In other cases, this means the enabling and disabling of per-channel dynamics and/ or EQ, and maybe even other DSP processing available onboard in some way. You will need to know how that takes place, and with all the possible variations that might have to do with your particular application, it is CRITICAL that you know what you want to do.

6. How Do I Route A Channel To An Auxiliary, Mix Bus, Or The Main Bus?

Sometimes, this is dead-simple and “locked in.” You might have four auxiliaries and four submix buses implemented in hardware, such that they can only be auxiliary or mix buses, with the same knobs always pushing the same aux and a routing matrix with pan-based bus selection. On the other hand, you might have a pool of buses that can behave in various ways depending on global configuration, per-channel configuration, or both.

So, you’ll need to figure out what you’ve got, and how to connect a given channel to a given bus so that you get the results you want.

7. How Do I Insert Bus Processing?

This might be just like question 5, or wildly different. You will need to sort out which reality is currently in play.

8. How Do I Connect A Given Signal To A Physical Output?

Just because you have a signal running to a bus, there’s no guarantee that the bus is actually going to transfer signal to any other piece of equipment. Especially in the digital world, there may be another layer of patching to assign signals to either digital or analog outputs. Bus 1 might be on output 7, because six matrices might be connected to the first six outputs. Maybe output 16 is a pre-fader direct out from channel 4.

You’ll have to figure out where all that gets specified.
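One way to picture that last layer is as a simple lookup from physical connector to signal source. A sketch with entirely hypothetical patch names:

```python
# Output patching as a lookup: physical output -> signal source.
# All names here are made up; consoles vary wildly on this point.
output_patch = {
    "out1": "matrix1", "out2": "matrix2", "out3": "matrix3",
    "out4": "matrix4", "out5": "matrix5", "out6": "matrix6",
    "out7": "bus1",                    # bus 1 lands on output 7...
    "out16": "ch4 direct (pre-fader)"  # ...and a direct out elsewhere
}

def source_for(output):
    """What actually leaves this physical connector?"""
    return output_patch.get(output, "unpatched")

assert source_for("out7") == "bus1"
assert source_for("out9") == "unpatched"
```

The point of the exercise: never assume the bus numbers and the connector numbers line up until you have seen the table.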


Obviously, there’s more to being a whiz at any particular console than eight basic questions. However, if you can get a given signal into the desk, through some processing, combined with other signals you want to combine, and then off to the next destination, you can at least make some real noise in the room.


EQ Propagation

The question of where to EQ is, of course, tied inextricably to what to EQ.

On occasion, I get the opportunity to guest-lecture to live-sound students. When things go the way I want them to, the students get a chance to experience the dialing up of monitor world (or part of it). One of the inevitable and important questions that arises is, “Why did you reach for the channel EQ when you were solving that one problem, but then use the EQ across the bus for this other problem?”

I’ve been able to give good answers to those questions, but I’ve also wanted to offer better explanations. I think I’ve finally hit upon an elegant way to describe my decision-making process regarding which EQ I use to solve different problems. It turns out that everything comes down to the primary “propagation direction” that I want for a given EQ change:

Effectively speaking, equalization on an input propagates downstream to all outputs, and equalization on an output propagates upstream to all inputs.


What I’ve just said is, admittedly, rather abstract. That being so, let’s take a look at it concretely.

Let’s say we’re in the process of dialing up monitor world. It’s one of those all-too-rare occasions where we get the chance to measure the output of our wedges and apply an appropriate tuning. That equalization is applied across the appropriate bus. What we’re trying to do is equalize the box itself, so we can get acoustical output that follows a “reference curve.” (I advocate for a flat reference curve, myself.)

It might seem counter-intuitive, but if we’re going to tune the wedge electronically, what we actually have to do is transform all of the INPUTS to the box. Changing the loudspeaker itself to get our preferred reference curve would be ideal, but also very difficult. So, we use an EQ across a system output to change all the signals traveling to the wedge, counteracting the filtering that the drivers and enclosure impose on whatever makes it to them. If the monitor is making everything too crisp (for example), the “output” EQ lets us effectively dial high-frequency information out of every input traveling to the wedge.
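The tuning process itself reduces to arithmetic: the output EQ is the inverse of the difference between the measured response and the reference curve. A toy sketch, with made-up measurements at a few bands:

```python
# Hypothetical measured wedge response (dB at a few bands) against a
# flat reference curve: the output EQ is simply the inverse of the error.
measured = {"200 Hz": 2.0, "1 kHz": 0.0, "4 kHz": 4.0}   # too crisp up top
reference = 0.0   # flat target, in dB

correction = {band: reference - level for band, level in measured.items()}
assert correction["4 kHz"] == -4.0   # HF dialed out of everything feeding the box

def at_listener(band, channel_db=0.0):
    """What any input to the wedge effectively hears, in dB."""
    return channel_db + correction[band] + measured[band]

# Every input now lands on the reference curve, no matter the channel:
assert at_listener("4 kHz") == 0.0
assert at_listener("200 Hz", channel_db=-3.0) == -3.0
```

A real tuning uses a measurement rig and many more bands, but the logic is the same: the correction rides on the output, so every input inherits it.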

Now, we put the signal from a microphone into one of our wedges. It starts off sounding generally good, although the channel in question is a vocal and we can tell there’s too much energy in the deep, low-frequency area. To fix the problem, we apply equalization to the microphone’s channel – the input. We want the exact change we’ve made to apply to every monitor that the channel might be sent to, and EQ across an input effectively transforms all the outputs that signal might arrive at.

There’s certainly nothing to stop us from going to each output EQ and pulling down the LF, but:

1) If we have a lot of mixes to work with, that’s pretty tedious, even with copy and paste, and…

2) We’ve now pushed away from our desired reference curve for the wedges, potentially robbing desired low-end information from inputs that would benefit from it. A ton of bottom isn’t necessary for vocals on deck, but what if somebody wants bass guitar? Or kick?

It makes much more sense to make the change at the channel if we can.

This also applies to the mud and midrange feedback weirdness that tends to pile up as one channel gets routed to multiple monitors. The problems aren’t necessarily the result of individual wedges being tuned badly. Rather, they are the result of multiple tunings interacting in a way that’s “wrong” for one particular mic at one particular location. What we need, then, is to EQ our input. The change then propagates to all the outputs, creating an overall solution with relative ease (and, again, we haven’t carved up each individual monitor’s curve into something that sounds weird in the process).

The same idea applies to FOH. If the whole mix seems “out of whack,” then a change to the main EQ effectively tweaks all the inputs to fix the offending frequency range.

So, when it’s time to grab an EQ, think about which way you want your changes to flow. Changes to inputs flow to all the connected outputs. Changes to outputs flow to all connected inputs.
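If you prefer to see the principle as code, here is a minimal model of it, with hypothetical channels, wedges, and gain figures:

```python
# Minimal mixer model: EQ as a per-band gain in dB at one frequency
# of interest. Channel and bus names are hypothetical.
channel_eq = {"vocal": -6.0, "bass": 0.0}      # input (channel) EQ, dB
output_eq  = {"wedge1": 0.0, "wedge2": 0.0}    # output (bus) EQ, dB

def level_at(output, channel):
    """Effective EQ a signal sees from channel to output, in dB."""
    return channel_eq[channel] + output_eq[output]

# A channel EQ cut propagates downstream to every output...
assert level_at("wedge1", "vocal") == -6.0
assert level_at("wedge2", "vocal") == -6.0

# ...while an output EQ cut propagates upstream to every input on that bus.
output_eq["wedge1"] = -3.0
assert level_at("wedge1", "vocal") == -9.0
assert level_at("wedge1", "bass") == -3.0
# Other outputs are untouched:
assert level_at("wedge2", "bass") == 0.0
```
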


Livestreaming Is The New Taping – Here Are Some Helpful Hints For The Audio

An article for Schwilly Family Musicians.

“The thing with taping or livestreaming is that the physics and logistics have not really changed. Sure, the delivery endpoints are different, especially with livestreaming being a whole bunch of intangible data being fired over the Internet, but how you get usable material is still the same. As such, here are some hints from the production-staff side for maximum effectiveness, at least as far as the sound is concerned…”


The rest is here. You can read it for free!


Communicating With The Sound Engineer (Another Letter)

An article for “The Schwillies” about how to pass messages to audio humans.

From the article:

“When it comes to a complex topic, especially in a pressure situation, the ability of spoken language to convey nuance and relay information unambiguously is a huge bit of leverage.”


You can read the whole thing here.


Entering Flatland

I encourage live-audio humans to spend lots of time listening to studio monitors.

Do you work in live-audio? Are you new to the field? An old hand? Somewhere in between?

I want to encourage you to do something.

I want you to get yourself a pair of basically decent studio monitors. They shouldn’t be huge, or expensive. They just have to be basically flat in terms of their magnitude response. Do NOT add a subwoofer. You don’t need LF drivers bigger than 8″ – anything advertised to play down to about 40 Hz or 50 Hz is probably fine.

I want you to run them as “flat” as possible. I want you to do as much listening with them as possible. Play your favorite music through them. Watch YouTube videos with them passing the audio. When you play computer games, let the monitors make all the noises.

I want you to get used to how they sound.

Oh, and try to tune your car stereo to sound like your studio monitors. If you can only do so coarsely, still do so.

Why?

Because I think it’s very helpful to “calibrate” yourself to un-hyped audio.

A real problem in live music is the tendency to try to make everything “super enhanced.” It’s the idea that loud, deep bass and razor-sharp HF information are the keys to good sound. There’s a problem, though. The extreme ends of the audible spectrum actually aren’t that helpful in concert audio. They are nice to have available, of course. The very best systems can reproduce all (or almost all) of the audible range at high volume, with very low distortion. The issue is over-emphasis. The sacrifice of the absolutely critical midrange – where almost all the musical information actually lives – on the altar of being impressive for 10 seconds.

I’m convinced that part of what drives a tendency to dial up “hyped” audio in a live situation is audio humans listening to similar tonalities when they’re off-duty. They build a recreational system that produces booming bass and slashing treble, yank the midrange down, and get used to that as being “right.” Then, when they’re louderizing noises for a real band in a real room, they try to get the same effect at large scale. This eats power at an incredible rate (especially the low-end), and greatly reduces the ability of the different musical parts to take their appointed place in the mix. If everything gets homogenized into a collection of crispy thuds, the chance of distinctly hearing everything drops like a bag of rocks tied to an even bigger rock that’s been thrown off a cliff made of other rocks.

But it does sound cool!

At first.

A few minutes in, especially at high volume, and the coolness gives way to fatigue.

In my mind, it’s a far better approach to try to get the midrange, or about 100 Hz to 5 kHz, really worked out as well as possible first. Then, you can start thinking about where you are with the four octaves on the top and bottom, and what’s appropriate to do there.

In my opinion, “natural” is actually much more impressive than “impressive,” especially when you don’t have massive reserves of output available. Getting a handle on what’s truly natural is much easier when that kind of sonic experience is what you’ve trained yourself to think of as normal and correct.

So get yourself some studio monitors, and make them your new reference point for what everything is supposed to sound like. I can’t guarantee that it will make you better at mixing bands, but I think there’s a real chance of it.


A Weird LFE Routing Solution

Getting creative to obtain more bottom end.

This is another one of those case studies where you get to see how strange my mind is. As such, be aware that it may not be applicable to you at all. I had a bit of a conundrum, and I solved it in a creative way. Some folks might call it “too creative.”

Maybe those people are boring.

Or they’re reasonable and I’m a little nuts.

Anyway.

I’ve previously mentioned that I handle the audio at my church. We’ve recently added some light percussion to complement our bass-guitar situation, and there was a point where our previous worship leader/ music director wanted more thump. That is, low frequency material that was audible AND a bit “tactile.” In any case, the amount of bass we had happening wasn’t really satisfying.

Part of our problem was how I use system limiting. I’ve long nursed a habit of using a very aggressive limiter across the main mix bus as a “stop the volume here” utility. I decide how loud I want to get (which is really not very loud on Sundays), set the dynamics across the output such that we can’t get any louder, and then smack that processor with a good deal of signal. I’ve gotten to a point where I can get it right most of the time, and “put the band in a box” in terms of volume. Drive the vocals hard and they stay on top, while not jumping out and tearing anyone’s face off when the singers push harder.

At the relatively quiet volume levels that we run things, though, this presents a problem for LF content. To get that extended low-frequency effect that can be oh-so-satisfying, you need to be able to run the bass frequencies rather hotter than everything else. The limiter, though, puts a stop to that. If you’re already hitting the threshold with midrange and high-frequency information, you don’t have anywhere to go.

So, what can you do?

For a while, we took the route of patching into the house system’s subwoofer drive “line.” I would run (effectively) unlimited aux-fed subs to that line, while keeping the mains in check as normal, and we got what we wanted.

But it was a bit of a pain, as patching to the house system required unpatching some of their frontend, pulling an amp partially out of a cabinet, doing our thing, and then reversing the process at the end. I’m not opposed to work, but I like “easy” when I can get it. I eventually came to the conclusion that I didn’t really need the house subs.

This was because:

1) We were far, far below the maximum output capacity of our main speakers.

2) Our main speakers were entirely capable of producing content between 50 Hz and 100 Hz at the level I needed for people to feel the low end a little bit. (Not a lot, just a touch.)

If we hadn’t had significant headroom, we would have been sunk. Low Frequency Effects (LFE) require significant power, as I said before. If my artificial headroom reduction was close to the actual maximum output of the system, finding a way around it for bass frequencies wouldn’t have done much. Also, I had to be realistic about what we could get. A full-range, pro-audio box with a 15″ or 12″ LF driver can do the “thump” range at low to moderate volumes without too much trouble. Asking for a bunch of building-rattling boom, which is what you get below about 50 Hz, is not really in line with what such an enclosure can deliver.

With those concerns handled, I simply had to solve a routing problem. For all intents and purposes, I had to create a multiband limiter that was bypassed in the low-frequency band. If you look at the diagram above, that’s what I did.

I now have one bus which is filtered to pass content at 100 Hz and above. It gets the same, super-aggressive limiter as it’s always had.

I also have a separate bus for LFE. That bus is filtered to restrict its information to the range between 50 Hz and 100 Hz, with no limiter included in the path.

Those two buses are then combined into the console’s main output bus.

With this configuration, I can “get on the gas” with low end, while retaining my smashing and smooshing of midrange content. I can have a little bit of fun with percussion and bass, while retaining a small, self-contained system that’s easy to patch. I would certainly not recommend this as a general-purpose solution, but hey – it fits my needs for now.
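The whole arrangement boils down to: limit the 100-Hz-and-up bus, leave the LFE bus alone, sum. A sketch in Python, assuming the band-splitting has already been done by the bus filters (sample values are made up):

```python
def hard_limit(samples, ceiling):
    """Brickwall limiter: clamp anything beyond +/- ceiling."""
    return [max(-ceiling, min(ceiling, s)) for s in samples]

def main_bus(mid_high, lfe, ceiling=0.5):
    """Combine a limited 100 Hz-and-up bus with an unlimited LFE bus.

    Both buses are assumed to be pre-filtered by the console's bus EQ
    (high-pass at 100 Hz, and a 50-100 Hz band-pass, respectively).
    """
    return [m + b for m, b in zip(hard_limit(mid_high, ceiling), lfe)]

# The mid/high content gets smashed down to the ceiling...
assert main_bus([1.0, -1.0, 0.25], [0.0, 0.0, 0.0]) == [0.5, -0.5, 0.25]

# ...while the LFE bus can still "get on the gas" right past it.
assert main_bus([1.0, -1.0, 0.25], [0.25, 0.25, 0.0]) == [0.75, -0.25, 0.25]
```

In effect, that is a two-band multiband limiter with the low band bypassed, which is exactly what the routing diagram describes.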


The Unterminated Line

If nothing’s connected and there’s still a lot of noise, you might want to call the repair shop.

“I thought we fixed the noise on the drum-brain inputs?” I mused aloud, as one of the channels in question hummed like hymenoptera in flight. I had come in to help with another rehearsal for the band called SALT, and I was perplexed. We had previously chased down a bit of noise that was due to a ground loop; getting everything connected to a common earthing conductor seemed to have helped.

Yet here we were, channel two stubbornly buzzing away.

Another change to the power distribution scheme didn’t help.

Then, I disconnected the cables from the drum-brain. Suddenly – the noise continued, unchanged. Curious. I pulled the connections at the mixer side. Abruptly, nothing happened. Or rather, the noise continued to happen. Oh, dear.


When chasing unwanted noise, disconnecting things is one of your most powerful tools. As you move along a signal chain, you can break the connection at successive places. When you open the circuit and the noise stops, you know that the supplier of your spurious signal is upstream of the break.
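In fact, if the chain is long, you don’t even have to break it at every point: successive disconnection is really a binary search. A sketch with a hypothetical chain of stages:

```python
def find_noise_source(chain, noisy_after_break):
    """Locate the first noisy stage by breaking the chain at successive points.

    chain: list of stage names, source to destination (hypothetical).
    noisy_after_break(i): True if the noise persists when the chain is
    opened just BEFORE stage i (i.e. the source is at stage i or later).
    """
    lo, hi = 0, len(chain) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if noisy_after_break(mid + 1):
            lo = mid + 1   # noise survives the break: source is downstream
        else:
            hi = mid       # noise stops: source is upstream of the break
    return chain[lo]

# Hypothetical chain where the console input itself is at fault:
chain = ["drum brain", "cable", "stagebox", "console input", "channel DSP"]
faulty = 3  # index of "console input"
assert find_noise_source(chain, lambda i: faulty >= i) == "console input"
```

In practice you rarely need the full rigor, but the logic is the same: each break tells you which side of it the culprit lives on.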

Disconnecting the cable to the mixer input should have resulted in relative silence. An unterminated line, that is, an input that is NOT connected to upstream electronics, should be very quiet in this day and age. If something unexplained is driving a console input hard enough to show up on an input meter, yanking out the patch should yield a big drop in the visible and audible level. When that didn’t happen, logic dictated an uncomfortable reality:

1) The problem was still audible, and sounded the same.

2) The input meter was unchanged, continuing to show electrical activity.

3) Muting the input stopped the noise.

4) The problem was, therefore, post the signal cable and pre the channel mute.

In a digital console, this strongly indicates that something to do with the analog input has suffered some sort of failure. Maybe the jack’s internals weren’t quite up to spec. Maybe a solder joint was just good enough to make it through Quality Control, but then let go after some time passed.

In any case, we didn’t have a problem we could fix directly. Luckily, we had some spare channels at the other end of the input count, so we moved the drum-brain connections there. The result was a pair of inputs that were free of the annoying hum, which was nice.

But if you looked at the meter for channel two, there it still was: A surprisingly large amount of input on an unterminated line.


Case Study: FX When FOH Is Also Monitor World

Two reverbs can help you square certain circles.

The Video

The Script

Let’s say that a band has a new mixing console – one of those “digital rigs in a box” that have come on the scene. The musicians call you in because they need some help getting their monitors dialed up. At some point, the players ask for effects in the monitors: The vocals are too dry, and some reverb would be nice.

So, you crank up an FX send with a reverb inserted on the appropriate bus – and nothing happens.

You then remember that this is meant to be a basic setup, with one console handling both FOH and monitors. Your inputs from the band use pre-fader sends for monitor world, but post-fader sends for FX. Since you weren’t building a mix for FOH, all your faders were all the way down. You don’t know where they would be for a real FOH mix, anyway. If the faders are down, a post-fader send can’t get any signal to an FX bus.

Now, you typically don’t want the monitors to track every level tweak made for FOH, but you DO want the FX sends to be dependent on fader position – otherwise, the “wet-to-dry” ratio would change with every fader adjustment.
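That fader dependence is just multiplication, which makes the problem easy to see in miniature (all gains linear and hypothetical):

```python
def send_level(signal, fader, send, post_fader):
    """Signal arriving at a bus from one channel (all gains linear)."""
    return signal * (fader if post_fader else 1.0) * send

signal, send = 1.0, 0.5

# Post-fader: the FX send tracks the fader, so wet-to-dry stays constant...
for fader in (1.0, 0.5, 0.25):
    dry = signal * fader
    wet = send_level(signal, fader, send, post_fader=True)
    assert wet == dry * send

# ...which also means a fader at zero starves the FX bus completely:
assert send_level(signal, 0.0, send, post_fader=True) == 0.0

# Pre-fader: the send ignores the fader (why the monitor sends still
# work with the FOH faders all the way down).
assert send_level(signal, 0.0, send, post_fader=False) == 0.5
```
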

So, what do you do?

You can square the circle if you can change the pre/ post send configuration to the FX buses, AND if you can also have two reverbs.

Reverb One becomes the monitor reverb. The sends to that reverb are configured to be pre-fader, so that you don’t have to guess at a fader level. The sends from the reverb return channel should also be pre-fader, so that the monitor reverb doesn’t end up in the main mix.

Reverb Two is then set up to be the FOH reverb. The sends to this reverb from the channels are configured as post-fader. Reverb Two, unlike Reverb One, should have output that’s dependent on the channel fader position. Reverb Two is, of course, kept out of the monitor mixes.

With a setup like this, you don’t need to know the FOH mix in advance in order to dial up FX in the monitors. There is the small downside of having to chew up two FX processors, but that’s not a huge problem if it means getting the players what they need for the best performance.


Case Study: Creating A Virtual Guitar Rig In An Emergency

Distortion + filtering = something that can pass as a guitar amplifier in an emergency.

The Video

The Script

Imagine the scene: You’re setting up a band that has exactly one player with an electric guitar. They get to the gig, and suddenly discover a problem: The power supply for their setup has been left at home. Nobody has a spare, because it’s a specialized power supply – and nobody else plays an electric guitar anyway. The musician in question has no way to get a guitar sound without their rig.

At all.

As in, what they have that you can work with is a guitar and a cable. That’s it.

So, what do you do?

Well, in the worst-case scenario, you just find a direct box, run the guitar completely dry, and limp through it all as best you can.

But that’s not your only option. If you’re willing to get a little creative, you can do better than just having everybody grit their teeth and suffer. To get creative, you need to be able to take their guitar rig apart and put it back together again.

Metaphorically, I mean. You can put the screwdriver away.

What I’m getting at is this question: If you break the guitar rig into signal-processing blocks, what does each block do?

When it comes right down to it, a super-simple guitar amp amounts to three things: Some amount of distortion (including no distortion at all), tone controls, and an output filter stack.

The first two parts might make sense, but what’s that third bit?

The output filtering is either an actual loudspeaker, or something that simulates a loudspeaker for a direct feed. If you remove a speaker’s conversion of electricity to sound pressure waves, what’s left over is essentially a non-adjustable equalizer. Take a look at this frequency-response plot for a 12″ guitar speaker by Eminence: It’s basically a 100 Hz to 5 kHz bandpass filter with some extra bumps and dips.
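You can approximate that behavior with garden-variety filters. A crude sketch – a first-order high-pass at 100 Hz cascaded with a first-order low-pass at 5 kHz – evaluated analytically in Python (a real cabinet’s curve is steeper and bumpier):

```python
# Crude speaker-sim: first-order 100 Hz high-pass cascaded with a
# first-order 5 kHz low-pass. Corner frequencies are rough guesses.

def bandpass_mag(f, f_hp=100.0, f_lp=5000.0):
    """Magnitude response of the cascaded filters at frequency f (Hz)."""
    hp = (1j * f / f_hp) / (1 + 1j * f / f_hp)  # high-pass section
    lp = 1 / (1 + 1j * f / f_lp)                # low-pass section
    return abs(hp * lp)

# Midrange passes nearly untouched...
assert bandpass_mag(1000) > 0.9
# ...while the extremes that make raw distortion sound fizzy are cut hard.
assert bandpass_mag(50) < 0.5
assert bandpass_mag(20000) < 0.3
```
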

It’s a fair point to note that different guitar amps and amp sims may have these blocks happening in different orders. Some might forget about the tone-control block entirely. Some might have additional processing available.

Now then.

The first thing to do is to find an active DI, if you can. Active DI boxes have very high input impedances, which (in short) means that just about any guitar pickup will drive that input without a problem.

Next, if you’re as lucky as I am, you have at your disposal a digital console with a guitar-amp simulation effect. The simulator puts all the processing I talked about into a handy package that gets inserted into a channel.

What if you’re not so lucky, though?

The first component is distortion. If you can’t get distortion that’s basically agreeable, you should skip it entirely. If you must generate your own clipping, your best bet is to find some analog device that you can drive hard. Overloading a digital device almost always sounds terrible, unless that digital device is meant to simulate some other type of circuit.

For instance, if you can dig up an analog mini-mixer, you can drive the snot out of both the input and output sides to get a good bit of crunch. (You can also use far less gain on either or both ends, if you prefer.)

Of course, the result of that sounds pretty terrible. The distortion products are unfiltered, so there’s a huge amount of information up in the high reaches of the audible spectrum. To fix that, let’s put some guitar-speaker-esque filtering across the whole business. A high and low-pass filter, plus a parametric boost in the high mids, will help us recreate what a 12″ driver might do.

Now that we’ve done that, we can add another parametric filter to act as our tone control.

And there we go! It may not be the greatest guitar sound ever created, but this is an emergency and it’s better than nothing.

There is one more wrinkle, though, and that’s monitoring. Under normal circumstances, our personal monitoring network gets its signals just after each channel’s head amp. Usually that’s great, because nothing I do with a channel that’s post the mic pre ends up directly affecting the monitors. In this case, however, it was important for me to switch the “monitor pick point” on the guitar channel to a spot that was post all my channel processing – but still pre-fader.

In your case, this may not be a problem at all.

But what if it is, and you don’t have very much flexibility in picking where your monitor sends come from?

If you’re in a real bind, you could switch the monitor send on the guitar channel to be post-fader. Set the fader at a point you can live with, and then assign the channel output to an otherwise unused subgroup. Put the subgroup through the main mix, and use the subgroup fader as your main-mix level control for the guitar. You’ll still be able to tweak the level of the guitar in the mix, but the monitor mixes won’t be directly affected if you do.
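In miniature, the workaround looks like this (all values are hypothetical linear gains):

```python
# Post-fader monitor send with a subgroup as the FOH level control.
channel_fader = 0.8    # set once to a livable level, then left alone
subgroup_fader = 1.0   # becomes the guitar's main-mix level control

def monitor_send(signal, send=0.6):
    """Post-fader monitor send: channel fader applies, subgroup doesn't."""
    return signal * channel_fader * send

def foh_level(signal):
    """Main-mix path: channel -> subgroup -> mains."""
    return signal * channel_fader * subgroup_fader

before = monitor_send(1.0)
subgroup_fader = 0.5                 # turn the guitar down in the mains...
assert foh_level(1.0) == 0.4
assert monitor_send(1.0) == before   # ...and the monitors don't move
```

The trade is that the channel fader is now effectively frozen, but in an emergency, that is usually an easy price to pay.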