Tag Archives: Science

The Pro-Audio Guide For People Who Know Nothing About Pro-Audio, Part 1

A series I’m starting on Schwilly Family Musicians.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.

Want to use this image for something else? Great! Click it for the link to a high-res or resolution-independent version.

From the article:

“The fundamental key to all audio production is that we MUST have sound information in the form of electricity. Certain instruments, like synthesizers and sample players, don’t produce any actual sound at all; they go straight to producing electricity.

For actual sound, though, we have to perform a conversion, or “transduction.” Transduction, especially input transduction, is THE most important part of audio production. If the conversion from sound to electricity is poor, nothing happening down the line will be able to fully compensate.”


Read the whole thing here, for free!


EQ Propagation

The question of where to EQ is, of course, tied inextricably to what to EQ.


On occasion, I get the opportunity to guest-lecture to live-sound students. When things go the way I want them to, the students get a chance to experience the dialing up of monitor world (or part of it). One of the inevitable and important questions that arises is, “Why did you reach for the channel EQ when you were solving that one problem, but then use the EQ across the bus for this other problem?”

I’ve been able to give good answers to those questions, but I’ve also wanted to offer better explanations. I think I’ve finally hit upon an elegant way to describe my decision-making process regarding which EQ I use to solve different problems. It turns out that everything comes down to the primary “propagation direction” that I want for a given EQ change:

Effectively speaking, equalization on an input propagates downstream to all outputs. Equalization on an output effectively propagates upstream to all inputs.


What I’ve just said is, admittedly, rather abstract. That being so, let’s take a look at it concretely.

Let’s say we’re in the process of dialing up monitor world. It’s one of those all-too-rare occasions where we get the chance to measure the output of our wedges and apply an appropriate tuning. That equalization is applied across the appropriate bus. What we’re trying to do is equalize the box itself, so we can get acoustical output that follows a “reference curve.” (I advocate for a flat reference curve, myself.)

It might seem counter-intuitive, but if we’re going to tune the wedge electronically, what we actually have to do is transform all of the INPUTS to the box. Changing the loudspeaker itself to get our preferred reference curve would be ideal, but also very difficult. So, we use an EQ across a system output to change all the signals traveling to the wedge, counteracting the filtering that the drivers and enclosure impose on whatever makes it to them. If the monitor is making everything too crisp (for example), the “output” EQ lets us effectively dial high-frequency information out of every input traveling to the wedge.

Now, we put the signal from a microphone into one of our wedges. It starts off sounding generally good, although the channel in question is a vocal and we can tell there’s too much energy in the deep, low-frequency area. To fix the problem, we apply equalization to the microphone’s channel – the input. We want the exact change we’ve made to apply to every monitor that the channel might be sent to, and EQ across an input effectively transforms all the outputs that signal might arrive at.

There’s certainly nothing to stop us from going to each output EQ and pulling down the LF, but:

1) If we have a lot of mixes to work with, that’s pretty tedious, even with copy and paste, and…

2) We’ve now pushed away from our desired reference curve for the wedges, potentially robbing desired low-end information from inputs that would benefit from it. A ton of bottom isn’t necessary for vocals on deck, but what if somebody wants bass guitar? Or kick?

It makes much more sense to make the change at the channel if we can.

This also applies to the mud and midrange feedback weirdness that tends to pile up as one channel gets routed to multiple monitors. The problems aren’t necessarily the result of individual wedges being tuned badly. Rather, they are the result of multiple tunings interacting in a way that’s “wrong” for one particular mic at one particular location. What we need, then, is to EQ our input. The change then propagates to all the outputs, creating an overall solution with relative ease (and, again, we haven’t carved up each individual monitor’s curve into something that sounds weird in the process).

The same idea applies to FOH. If the whole mix seems “out of whack,” then a change to the main EQ effectively tweaks all the inputs to fix the offending frequency range.

So, when it’s time to grab an EQ, think about which way you want your changes to flow. Changes to inputs flow to all the connected outputs. Changes to outputs flow to all connected inputs.
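The propagation idea above can be sketched numerically. This is a toy model with hypothetical channel and bus names, not any real console's API; it just shows that, at a single frequency, the net EQ an input experiences at an output is the sum of its channel EQ and that output's bus EQ:

```python
# Toy model of EQ propagation on a mixer (hypothetical names, not a real console API).
# At one frequency, the net EQ applied to input "ch" on its way to output "bus"
# is the sum of the input (channel) EQ and the output (bus) EQ, in dB.

channel_eq = {"vocal": 0.0, "kick": 0.0}       # dB of cut/boost at, say, 120 Hz
bus_eq     = {"wedge_1": 0.0, "wedge_2": 0.0}  # dB of cut/boost at the same frequency

def net_gain(ch, bus):
    """Net EQ (dB) an input experiences at a given output."""
    return channel_eq[ch] + bus_eq[bus]

# Cut 6 dB on the vocal CHANNEL: the change follows the vocal to every wedge...
channel_eq["vocal"] = -6.0
assert net_gain("vocal", "wedge_1") == -6.0
assert net_gain("vocal", "wedge_2") == -6.0
# ...but the kick is untouched everywhere.
assert net_gain("kick", "wedge_1") == 0.0

# Cut 3 dB on wedge_1's BUS EQ: every input arriving at wedge_1 is affected...
bus_eq["wedge_1"] = -3.0
assert net_gain("kick", "wedge_1") == -3.0
assert net_gain("vocal", "wedge_1") == -9.0
# ...while wedge_2 keeps its tuning.
assert net_gain("kick", "wedge_2") == 0.0
```

The channel cut lands at every wedge, while the bus cut lands on every channel feeding that wedge — exactly the two propagation directions described.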


Livestreaming Is The New Taping – Here Are Some Helpful Hints For The Audio

An article for Schwilly Family Musicians.


“The thing with taping or livestreaming is that the physics and logistics have not really changed. Sure, the delivery endpoints are different, especially with livestreaming being a whole bunch of intangible data being fired over the Internet, but how you get usable material is still the same. As such, here are some hints from the production-staff side for maximum effectiveness, at least as far as the sound is concerned…”


The rest is here. You can read it for free!


Hitting The Far Seats

A few solutions to the “even coverage” problem, as it relates to distance.


This article, like the one before it, isn’t really “small venue” in nature. However, I think it’s good to spend time on audio concepts which small-venue folk might still run across. I’m certainly not “big-time,” but I still do the occasional show that involves more people and space. I (like you) really don’t need to get engaged with a detailed discussion regarding an enormous system that I probably won’t ever get my hands on, but the fundamentals of covering the people sitting in the back are still valuable tools.

This article is also very much a follow up to the piece linked above. Via that lens, you can view it as a discussion of what the viable options are for solving the difficulties I ran into.

So…

The way that you get “throw” to the farthest audience members depends upon the overall PA deployment strategy you’re using. Deployment strategies are dependent upon the gear in question being appropriate for that strategy, of course; you can’t choose to deploy a bunch of point-source boxes as a line-array and have it work out very well. (Some have tried. Some have thought it was okay. I don’t feel comfortable recommending it.)

Option 1: Single Arrival, “Point Source” Flavor

You can build a tall stack or hang an array with built-in, non-changeable angles, but both cases use the same idea: Any given audience member should really only hear one box (per side) at a time. Getting the kind of directivity necessary for that to be strictly true is quite a challenge at lower frequencies, so the ideal tends to not be reached. Nevertheless, this method remains viable.

I’ve termed this deployment flavor “single arrival” because all sound essentially originates at the same distance from any given audience member. In other words, all the PA loudspeakers for each “side” are clustered as closely as is practical. The boxes meant to be heard up close are run at a significantly lower level than the boxes meant to cover the far-field. A person standing 50 feet from the stage might be hearing a loudspeaker making 120 dB SPL at 3 feet, whereas the patrons sitting 150 feet away would be hearing a different box – possibly stacked atop the first speaker – making 130 dB SPL at 3 feet. As such, the close-range listener is getting about 96 dB SPL, and the far-field audience member also hears a show at roughly 96 dB SPL.
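The arithmetic here is just the inverse-square law: SPL falls about 6 dB per doubling of distance. A minimal sketch (free-field assumption, ignoring air absorption and reflections) reproduces the figures above:

```python
import math

def spl_at_distance(spl_ref_db, ref_dist_ft, listener_dist_ft):
    """Free-field inverse-square law: level drops 20*log10(d2/d1) dB."""
    return spl_ref_db - 20 * math.log10(listener_dist_ft / ref_dist_ft)

# Near box: 120 dB SPL at 3 ft, heard from 50 ft.
near = spl_at_distance(120, 3, 50)    # ≈ 95.6 dB SPL
# Far box: 130 dB SPL at 3 ft, heard from 150 ft.
far = spl_at_distance(130, 3, 150)    # ≈ 96.0 dB SPL
print(round(near, 1), round(far, 1))
```

Both listeners end up within a fraction of a dB of each other, which is the whole point of the zoned, single-arrival approach.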

This solution is relatively simple in some respects, though it requires the capability of “zone” tuning, as well as loudspeakers capable of high-output and high directivity. (You don’t want the up-close audience to get cooked by the loudspeaker that’s making a ton of noise for the long-distance people.)

Option 2: Single Arrival, Line-Array Flavor

As in the point source flavor, you have one array deployed “per side,” with each individual box as close to the other boxes as is achievable. The difference is that an honest-to-goodness line-array is meant to work by the audible combination of multiple loudspeakers. At very close distances, it may be possible to only truly hear a small part of the line, and this does help in keeping the nearby listeners from having their faces ripped off. However, the overall idea is to create a radiation pattern that resembles a section of a cylinder. (Perfect achievement of such a pattern isn’t really feasible.) This is in contrast to point-source systems, where the pattern tends towards a section of a sphere.

As is the case in many areas of life, everything comes down to surface area. A sphere’s surface area is 4*pi*radius^2, whereas the lateral surface area of a cylinder is 2*pi*radius*height. The perceived intensity of sound is the audible radiation spread across the surface area of the radiation geometry. More surface area means less intensity.

To keep the calculations manageable, I’ll have to simplify from sections of shapes to entire shapes. Even so, some comparisons can be made: At a distance of 150 feet, the sound power radiating in a spherical pattern is spread over a surface area of 282,743 square feet. For a 10-foot high cylinder, the surface area is 9424 square feet.

For the sphere, 4 watts of sound power (NOT electrical power!) means that a listener at the 150 foot radius gets a show that’s about 71 dB. For the cylinder, the listener at 150 feet should be getting about 86 dB. At the close-range distance of 50 feet, the cylindrical radiation pattern results in a sound level of 91 dB, whereas a spherical pattern gets 81 dB.
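For the curious, here is a sketch of that comparison. It follows the article's simplified bookkeeping (intensity in watts per square foot against a 1e-12 reference; strictly, the standard reference is 1e-12 W/m², which shifts every figure by the same constant, so the comparisons are unchanged):

```python
import math

P = 4.0  # acoustic watts (sound power, not amplifier power)

def sphere_area(r_ft):
    """Surface area of a full sphere of radius r."""
    return 4 * math.pi * r_ft ** 2

def cylinder_area(r_ft, h_ft):
    """Lateral surface area of a cylinder (the 'curved' part only)."""
    return 2 * math.pi * r_ft * h_ft

def level_db(power_w, area_sq_ft):
    # Simplified: intensity in W/ft^2 against a 1e-12 reference, matching the
    # article's figures. Relative comparisons are unaffected by the unit choice.
    return 10 * math.log10((power_w / area_sq_ft) / 1e-12)

print(round(sphere_area(150)))                      # ≈ 282743 sq ft
print(round(cylinder_area(150, 10)))                # ≈ 9425 sq ft
print(round(level_db(P, sphere_area(150)), 1))      # ≈ 71.5 dB at 150 ft
print(round(level_db(P, cylinder_area(150, 10)), 1))  # ≈ 86.3 dB at 150 ft
print(round(level_db(P, sphere_area(50)), 1))       # ≈ 81.0 dB at 50 ft
print(round(level_db(P, cylinder_area(50, 10)), 1))   # ≈ 91.0 dB at 50 ft
```

The near/far spread is 10 dB for the spherical pattern versus only about 5 dB for the cylindrical one — the consistency advantage discussed next.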

Putting aside for the moment that I’m assuming ideal and mathematically easy conditions, the line-array has a clear advantage in terms of consistency (level difference in the near and far fields) without a lot of work at tuning individual boxes. At the same time, it might not be quite as easily customizable as some point-source configurations, and a real line-source capable of rock-n-roll volume involves a good number of relatively expensive elements. Plus, a real line has to be flown, and with generous trim height as well.

Option 3: Multiple Arrival, Any Flavor

This is otherwise known as “delays.” At some convenient point away from the main PA system, a supplementary PA is set. The signal to that supplementary PA is made to be late, such that the far system aligns pleasingly with the sound from the main system. The hope is that most people will overwhelmingly hear one system over the other.

The point with this solution is to run everything more quietly and more evenly by making sure that no audience member is truly in the deep distance. If each PA only has to cover a distance of 75 feet, then an SPL of 90 dB at that distance requires 118 dB at 3 feet.

The upside to this approach is that the systems don’t have to individually be as powerful, nor do they strictly need to have high-directivity (although it’s quite helpful in keeping the two PA systems separate for the listeners behind the delays). The downside is that it requires more space and more rigging – whether actual rigging or just loudspeakers raised on poles, stacks, or platforms. Additionally, you have to deal with more signal and/or power runs, possibly in difficult or high-traffic areas. It also requires careful tuning of the delay time to work properly, and even then, being behind or to the side of the delays causes the solution to be invalid. In such a condition where both systems are quite audible, the coherence of the reproduced audio suffers tremendously.
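The two calculations involved can be sketched briefly. This is a simplified, straight-line model (no temperature or humidity correction, and the distances are the hypothetical ones from the example above):

```python
import math

SPEED_OF_SOUND_FT_S = 1125.0  # ~343 m/s at roughly 20 °C

def delay_ms(main_to_delay_ft):
    """Time for the mains' sound to reach the delay speakers; the delay feed
    is held back by this amount so the two arrivals line up."""
    return main_to_delay_ft / SPEED_OF_SOUND_FT_S * 1000.0

def required_near_spl(target_db, target_dist_ft, ref_dist_ft=3.0):
    """Inverse-square law: SPL needed at the reference distance to hit the
    target SPL at the target distance."""
    return target_db + 20 * math.log10(target_dist_ft / ref_dist_ft)

print(round(delay_ms(75)))              # ≈ 67 ms for a tower 75 ft downrange
print(round(required_near_spl(90, 75)))  # ≈ 118 dB SPL at 3 ft for 90 dB at 75 ft
```

In practice the delay time gets fine-tuned by ear or by measurement, but the geometry gets you into the neighborhood.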


If I end up trying the Gallivan show again, I think I’ll go with delays. I don’t have the logistical resources to handle big, high-output point-source boxes or a real array. I can, on the other hand, find a way to get boxes up on sticks with delay applied. I can’t say that I’m happy about the potential coherence issues, but everything in audio is a compromise in some way.


What Went Wrong At The Big Gig

Sometimes a show will really kick your butt.


Do this type of work long enough, and there will come a certain day. On that day, you will think, “If just about half of this audience goes home being totally pissed at me, I’ll call that a win.”

For me, that day came last weekend.

I was handling a show out at the Gallivan Center, a large, outdoor event space in the heart of Salt Lake. The day started well (I didn’t have to fight for parking, and I had both a volunteer crew and my ultra-smart assistant to help me out), and actually ended on a pretty okay note (dancing and cheering), but I would like to have skipped over the middle part.

It all basically boils down to disappointing a large portion of an audience.

I’ve come to terms with the reality that I’m always going to disappoint someone. There will always be “THAT guy” in the crowd who wants the show to have one kind of sound, a sound that you’ve never prioritized (or a sound that you simply don’t want). That person is just going to have to deal – and interestingly, they are often NOT the person writing the checks, so there’s a certain safety in being unruffled by their kerfuffle. However, when a good number of people are in agreement that things just aren’t right, well, that can turn a gig into “40 miles of bad road.”

Disappointment is a case of mismatched expectations. The thing with a show is that a mismatch can happen very early…and then proceed to snowball.

For instance, someone might say to me: “You didn’t seriously expect to do The Gallivan with your mini-concert rig, did you?”

No, I did not expect that, and therein lies a major contributing factor. “Doing The Gallivan” means covering a spread-out crowd of 1500+ people with rock-n-roll volume. I am under no illusions as to my capability in that space (which is no capability at all). What I thought I was going to do was to hit a couple hundred merry-makers with acoustic folk, Bluegrass, and “Newgrass” tunes. I thought they’d be packed pretty closely together near the stage, with maybe the far end of the crowd being up on the second tier of lawn.

I suppose you can guess that’s not what happened.

For most of the night, the area in front of the stage was barely populated at all. I remembered that particular piece of the venue as being turf (back in the day), but now it’s a dancefloor. That meant that the patrons who wanted to sit – and that was the vast majority – basically started where I was at FOH. Effectively, this created a condition like what you would see at a larger festival, where the barricade might be 40 – 50 feet from the stage.

Now add to this that we had a pretty ample crowd, and that they ended about 150 feet away from the deck.

Also add in that a lot of what we were doing was “traditional,” or in other words, acoustic instruments that were mic’ed. Folk and Bluegrass really are not that loud in the final analysis, which means that making them unnaturally loud in order to get “throw” from a single source is a difficult proposition.

Fifty feet out, there were points where I was lucky to make about 85 dB SPL C-weighted. After that, gain-before-feedback started to become a real conundrum. Now, imagine that you’re three times that distance, at where the lawn ends. That meant that all you got was about 75 dB C, which isn’t much to compete against traffic noise and conversations.

Things got louder later. The closing acts were acoustic-electric “Newgrass,” which meant I could make as much noise as the rig would give me. That would have gotten us music lovers to about 94 – 97 dB C at FOH (by my guess). The folks in the back, then, were just starting to hear home-stereo level noise.

In any case, I was complained at quite a bit (by my standards). I think I spent at least 50% of the show wanting to crawl into a hole and hide. That we had some feedback issues didn’t help…when you’re riding the ragged edge trying to make more volume, you sometimes fall off the surfboard. We also had some connectivity problems with the middle act that put us behind, and further aggravated my sense of not delivering a standout performance.

Like I said, there was some good news by the time we shut the power off. Even before then, too. The people who were getting the volume they wanted appeared to be enjoying themselves. Most of the bands seemed happy with how the sound worked out on the stage itself, and the audience as a whole was joyous enough at the end that I no longer felt the oppressive weight of imagining the crowd as a disgruntled gestalt entity. Still, I wasn’t going to win any awards for how everything turned out. I was smarting pretty badly during the strike and van pack.

But, you know, some of the most effective learning in life happens when you fall over and tear up your knees. I can certainly tell you what I think could be done to make the next go-around a bit more comfortable.

That will have to wait for the next installment, though.


The Great, Quantitative, Live-Mic Shootout

A tool to help figure out what (inexpensive) mic to buy.


See that link up there in the header?

It takes you to The Great, Quantitative, Live-Mic Shootout, just like this link does. (Courtesy of the Department of Redundancy Department.)

And that’s a big deal, because I’ve been thinking and dreaming about doing that very research project for the past four years. Yup! The Small Venue Survivalist is four years old now. Thanks to my Patreon supporters, past and present, for helping to make this idea a reality.

I invite you to go over and take a look.


The Unterminated Line

If nothing’s connected and there’s still a lot of noise, you might want to call the repair shop.


“I thought we fixed the noise on the drum-brain inputs?” I mused aloud, as one of the channels in question hummed like hymenoptera in flight. I had come in to help with another rehearsal for the band called SALT, and I was perplexed. We had previously chased down a bit of noise that was due to a ground loop; getting everything connected to a common earthing conductor seemed to have helped.

Yet here we were, channel two stubbornly buzzing away.

Another change to the power distribution scheme didn’t help.

Then, I disconnected the cables from the drum-brain. Suddenly – the noise continued, unchanged. Curious. I pulled the connections at the mixer side. Abruptly, nothing happened. Or rather, the noise continued to happen. Oh, dear.


When chasing unwanted noise, disconnecting things is one of your most powerful tools. As you move along a signal chain, you can break the connection at successive places. When you open the circuit and the noise stops, you know that the supplier of your spurious signal is upstream of the break.

Disconnecting the cable to the mixer input should have resulted in relative silence. An unterminated line, that is, an input that is NOT connected to upstream electronics, should be very quiet in this day and age. If something unexplained is driving a console input hard enough to show up on an input meter, yanking out the patch should yield a big drop in the visible and audible level. When that didn’t happen, logic dictated an uncomfortable reality:

1) The problem was still audible, and sounded the same.

2) The input meter was unchanged, continuing to show electrical activity.

3) Muting the input stopped the noise.

4) The problem was, therefore, post the signal cable and pre the channel mute.

In a digital console, this strongly indicates that something to do with the analog input has suffered some sort of failure. Maybe the jack’s internals weren’t quite up to spec. Maybe a solder joint was just good enough to make it through Quality Control, but then let go after some time passed.

In any case, we didn’t have a problem we could fix directly. Luckily, we had some spare channels at the other end of the input count, so we moved the drum-brain connections there. The result was a pair of inputs that were free of the annoying hum, which was nice.

But if you looked at the meter for channel two, there it still was: A surprisingly large amount of input on an unterminated line.


The Grand Experiment

A plan for an objective comparison of the SM58 to various other “live sound” microphones.


Purpose And Explanation

Ever since The Small Venue Survivalist became a reality, I have wanted to do a big experiment. I’ve been itching to round up a bunch of microphones that can be purchased either below or slightly above the price point of the SM58, and then to objectively compare them to an SM58. (The Shure SM58 continues to be an industry-standard microphone that is recognized and accepted everywhere as a sound-reinforcement tool.)

The key word above is “objectively.” Finding subjective microphone comparisons isn’t too hard. Sweetwater just put together (in 2017) a massive studio-mic shootout, and it was subjective. That is, the measurement data consists of audio files that you must listen to. This isn’t a bad thing, and it makes sense for studio mics – what matters most is how the mic sounds to you. Listening tests are everywhere, and they have their place.

In live audio, though, the mic’s sound is only one factor amongst many important variables. Further, these variables can be quantified. Resistance to mechanically-induced noise can be expressed as a decibel number. So can resistance to wind noise. So can feedback rejection. Knowing how different transducers stack up to one another is critical for making good purchasing decisions, and yet this kind of quantitative information just doesn’t seem to be available.

So, it seems that some attempt at compiling such measurements might be helpful.

Planned Experimental Procedure

Measure Proximity Effect

1) Generate a 100Hz tone through a loudspeaker at a repeatable SPL.

2) Place the microphone such that it is pointed directly at the center of the driver producing the tone. The front of the grill should be 6 inches from the loudspeaker baffle.

3) Establish an input level from the microphone, and note the value.

4) Without changing the orientation of the microphone relative to the driver, move the microphone to a point where the front of the grill is 1 inch from the loudspeaker baffle.

5) Note the difference in the input level, relative to the level obtained in step 3.

Assumptions: Microphones with greater resistance to proximity effect will exhibit a smaller level differential. Greater proximity effect resistance is considered desirable.

Establish “Equivalent Gain” For Further Testing

1) Place a monitor loudspeaker on the floor, and position the microphone on a tripod stand. The stand leg nearest the monitor should be at a repeatable distance, at least 1 foot from the monitor enclosure.

2) Set the height of the microphone stand to a repeatable position that would be appropriate for an average-height performer.

3) Changing the height of the microphone as little as possible, point the microphone directly at the center of the monitor.

4) Generate pink-noise through the monitor at a repeatable SPL.

5) Using a meter capable of RMS averaging, establish a -40 dBFS RMS input level.
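Measurement software normally handles the RMS math, but for reference, here is a sketch of what a "-40 dBFS RMS" reading means in terms of raw samples (the helper name and the synthetic test signal are mine, purely for illustration):

```python
import math

def rms_dbfs(samples):
    """RMS level of float samples (full scale = ±1.0), expressed in dBFS."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

# A full-scale sine has an RMS of 1/sqrt(2), i.e. about -3.01 dBFS.
sine = [math.sin(2 * math.pi * n / 64) for n in range(64)]
print(round(rms_dbfs(sine), 2))   # ≈ -3.01

# Attenuate it by 37 dB and you land right at the -40 dBFS RMS target.
quiet = [s * 10 ** (-37 / 20) for s in sine]
print(round(rms_dbfs(quiet), 2))  # ≈ -40.01
```

The point of pinning every mic to the same RMS figure is that the later noise and feedback tests are then compared at matched ("equivalent") gain.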

Measure Mechanical Noise Susceptibility

1) Set the microphone such that it is parallel to the floor.

2) Directly above the point where the microphone grill meets the body, hold a solid, semi-rigid object (like an eraser, or small rubber ball) at a repeatable distance at least 1 inch over the mic.

3) Allow the object to fall and strike the microphone.

4) Note the peak input level created by the strike.

Assumptions: Microphones with greater resistance to mechanically induced noise will exhibit a lower input level. Greater resistance to mechanically induced noise is considered desirable.

Measure Wind Noise Susceptibility

1) Position the microphone on the stand such that it is parallel to the floor.

2) Place a small fan (or other source of airflow which has repeatable windspeed and air displacement volume) 6 inches from the mic’s grill.

3) Activate the fan for 10 seconds. Note the peak input level created.

Assumptions: Microphones with greater resistance to wind noise will exhibit a lower input level. Greater resistance to wind noise is considered desirable.

Measure Feedback Resistance

1) Set the microphone in a working position. For cardioid mics, the rear of the microphone should be pointed directly at the monitor. For supercardioid and hypercardioid mics, the microphone should be parallel with the floor.

2a) SM58 ONLY: Set a send level to the monitor that is just below noticeable ringing/feedback.

2b) Use the send level determined in 2a to create loop-gain for the microphone.

3) Set a delay of 1000ms to the monitor.

4) Begin a recording of the mic’s output.

5) Generate a 500ms burst of pink-noise through the monitor. Allow the delayed feedback loop to sound several times.

6) Stop the recording, and make note of the peak level of the first repeat of the loop.

Assumptions: Microphones with greater feedback resistance will exhibit a lower input level on the first repeat. Greater feedback resistance is considered desirable.
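The 1000 ms delay is what makes this measurable: it separates each trip around the mic-to-monitor loop into a distinct "repeat," so the level drop from one repeat to the next estimates the loop gain. A toy sketch with made-up levels (not measured data):

```python
def loop_gain_db(burst_peak_db, first_repeat_peak_db):
    """Each trip through mic -> console -> monitor applies the loop gain once,
    so the drop from the original burst to its first repeat estimates it."""
    return first_repeat_peak_db - burst_peak_db

def repeat_peak_db(burst_peak_db, gain_db, n):
    """Predicted peak of the nth repeat, assuming the loop gain stays constant."""
    return burst_peak_db + n * gain_db

# Hypothetical example: a burst peaking at -10 dBFS whose first repeat lands at
# -16 dBFS implies about -6 dB of loop gain; repeats die away ~6 dB per pass.
g = loop_gain_db(-10.0, -16.0)
print(g)                            # -6.0 dB per trip
print(repeat_peak_db(-10.0, g, 3))  # -28.0 dBFS on the third repeat
```

A mic with better feedback rejection shows a quieter first repeat at the same send level, i.e. more negative loop gain and a faster decay.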

Measure Cupping Resistance

1) Mute the send from the microphone to the monitor.

2) Obtain a frequency magnitude measurement of the microphone in the working position, using the monitor as the test audio source.

3) Place a hand around as much of the mic’s windscreen as is possible.

4) Re-run the frequency magnitude measurement.

5) On the “cupped” measurement, note the difference between the highest response peak, and that frequency’s level on the normal measurement.

Assumptions: Microphones with greater cupping resistance will exhibit a smaller level differential between the highest peak of the cupped response and that frequency’s magnitude on the normal trace. Greater cupping resistance is considered desirable.


THD Troubleshooting

I might have discovered something, or I might not.


Over the last little while, I’ve done some shows where I could swear that something strange was going on. Under certain conditions, like with a loud, rich vocal that had nothing else around it, I was sure that I could hear something in FOH distort.

So, I tried soloing up the vocal channel in my phones. Clean as a whistle.

I soloed up the main mix. That seemed okay.

Well – crap. That meant that the problem was somewhere after the console. Maybe it was the stagebox output, but that seemed unlikely. No…the most likely problem was with a loudspeaker’s drive electronics or transducers. The boxes weren’t being driven into their limiters, though. Maybe a voice coil was just a tiny bit out of true, and rubbing?

Yeesh.

Of course, the very best testing is done “In Situ.” You get exactly the same signal to go through exactly the same gear in exactly the same place. If you’re going to reproduce a problem, that’s your top-shelf bet. Unfortunately, that’s hard to do right in the middle of a show. It’s also hard to do after a show, when Priority One is “get out in a hurry so they can lock the facility behind you.”

Failing that – or, perhaps, in parallel with it – I’m becoming a stronger and stronger believer in objective testing: Experiments where we use sensory equipment other than our ears and brains. Don’t get me wrong! I think ears and brains are powerful tools. They sometimes miss things, however, and don’t natively handle observations in an analytical way. Translating something you hear onto a graph is difficult. Translating a graph into an imagined sonic event tends to be easier. (Sometimes. Maybe. I think.)

This is why I do things like measure the off-axis response of a cupped microphone.

In this case, though, a simple magnitude measurement wasn’t going to do the job. What I really needed was distortion-per-frequency. Room EQ Wizard will do that, so I fired up my software, plugged in my Turbos (one at a time), and ran some trials. I did a set of measurements at a lower volume, which I discarded in favor of traces captured at a higher SPL. If something was going to go wrong, I wanted to give it a fighting chance of going wrong.
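Under the hood, "distortion-per-frequency" boils down to driving a sine and comparing the harmonic energy against the fundamental. Here is a toy sketch of that math (a naive DFT on a synthetic signal, nothing like REW's actual swept-sine engine):

```python
import cmath, math

def dft_mag(x, k):
    """Magnitude of bin k of an N-point DFT (naive; fine for a sketch)."""
    N = len(x)
    return abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                   for n in range(N))) / N

def thd_percent(x, fundamental_bin, n_harmonics=5):
    """THD = sqrt(sum of squared harmonic magnitudes) / fundamental magnitude."""
    fund = dft_mag(x, fundamental_bin)
    harm = math.sqrt(sum(dft_mag(x, fundamental_bin * h) ** 2
                         for h in range(2, 2 + n_harmonics)))
    return 100 * harm / fund

# Synthetic test: a tone in bin 4 plus a 1% second harmonic in bin 8.
N = 256
x = [math.sin(2 * math.pi * 4 * n / N) + 0.01 * math.sin(2 * math.pi * 8 * n / N)
     for n in range(N)]
print(round(thd_percent(x, 4), 2))  # ≈ 1.0 (percent)
```

A measurement rig repeats this at many frequencies to build the THD-vs-frequency trace described below.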

Here’s what I got out of the software, which plotted the magnitude curve and the THD curve for each loudspeaker unit:

I expected to see at least one box exhibit a bit of misbehavior which would dramatically affect the graph, but that’s not what I got. What I can say is that the first measurement’s overall distortion curve is different: it lacks the THD “dip” at 200 Hz that the other boxes exhibit, shows significantly more distortion in the “ultra-deep” LF range, and has its “hump” shifted downwards. (The three more similar boxes center that bump in distortion at 1.2 kHz. The odd one out seems to put the center at about 800 Hz.)

So, maybe the box that’s a little different is my culprit. That’s my strong suspicion, anyway.

Or maybe it’s just fine.

Hmmmmm…


Measuring A Cupped Mic

What you might think would happen isn’t what happens.


The most popular article on this site to date is the one where I talk about why cupping a vocal mic is generally a “bad things category” sort of experience. In that piece, I explain some general issues with wrapping one’s hand around a microphone grill, but there’s something I didn’t do:

I didn’t measure anything.

That reality finally dawned on me, so I decided to do a quick-n-dirty experiment on how a microphone’s transfer function changes when cupping comes into play. Different mics will do different things, so any measurement is only valid for one mic in one situation. However, even if the results can’t truly be generalized, they are illuminating.

In the following picture, the red trace is a mic pointing away from a speaker, as you would want to happen in monitor-world. The black trace is the mic in the same position, except with my hand covering a large portion of the windscreen mesh.

You would think that covering a large part of the mic’s business-end would kill off a lot of midrange and high-frequency information, but the measurement says otherwise. The high-mid and HF information is actually rather hotter, with large peaks at 1800 Hz, 3900 Hz, and 9000 Hz. The low frequency response below 200 Hz is also given a small kick in the pants. Overall, the microphone transfer function is “wild,” with more pronounced differences between peaks and dips.

The upshot? The transducer’s feedback characteristics get harder to manage, and the sonic characteristics of the unit begin to favor the most annoying parts of the audible spectrum.

Like I said, this experiment is only valid for one mic (a Sennheiser e822s that I had handy). At the same time, my experience is that other mics have “cupping behavior” which is not entirely dissimilar.