Tag Archives: Measurement

The Grand Experiment

A plan for an objective comparison of the SM58 to various other “live sound” microphones.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.

Purpose And Explanation

Ever since The Small Venue Survivalist became a reality, I have wanted to do a big experiment. I’ve been itching to round up a bunch of microphones that can be purchased for either below, or slightly above the price point of the SM58, and then to objectively compare them to an SM58. (The Shure SM58 continues to be an industry standard microphone that is recognized and accepted everywhere as a sound-reinforcement tool.)

The key word above is “objectively.” Finding subjective microphone comparisons isn’t too hard. Sweetwater put together a massive studio-mic shootout in 2017, and it was subjective. That is, the “measurement data” consists of audio files that you must listen to. This isn’t a bad thing, and it makes sense for studio mics – what matters most is how the mic sounds to you. Listening tests are everywhere, and they have their place.

In live audio, though, the mic’s sound is only one factor amongst many important variables. Further, these variables can be quantified. Resistance to mechanically-induced noise can be expressed as a decibel number. So can resistance to wind noise. So can feedback rejection. Knowing how different transducers stack up to one another is critical for making good purchasing decisions, and yet this kind of quantitative information just doesn’t seem to be available.

So, it seems that some attempt at compiling such measurements might be helpful.

Planned Experimental Procedure

Measure Proximity Effect

1) Generate a 100Hz tone through a loudspeaker at a repeatable SPL.

2) Place the microphone such that it is pointed directly at the center of the driver producing the tone. The front of the grill should be 6 inches from the loudspeaker baffle.

3) Establish an input level from the microphone, and note the value.

4) Without changing the orientation of the microphone relative to the driver, move the microphone to a point where the front of the grill is 1 inch from the loudspeaker baffle.

5) Note the difference in the input level, relative to the level obtained in step 3.

Assumptions: Microphones with greater resistance to proximity effect will exhibit a smaller level differential. Greater proximity effect resistance is considered desirable.
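
For step 5, the level differential can also be pulled from recorded captures instead of read off a meter. Here’s a minimal Python sketch of that arithmetic, assuming you record the tone at each position; the file names are placeholders, not part of the procedure.

```python
import numpy as np
from scipy.io import wavfile

def rms_dbfs(path):
    """RMS level of a captured WAV file, in dB relative to full scale."""
    rate, data = wavfile.read(path)
    if data.dtype.kind == "i":                # scale integer PCM to +/- 1.0
        data = data / np.iinfo(data.dtype).max
    return 20 * np.log10(np.sqrt(np.mean(data.astype(np.float64) ** 2)))

level_6in = rms_dbfs("tone_100hz_6in.wav")    # capture from step 3
level_1in = rms_dbfs("tone_100hz_1in.wav")    # capture from step 4
print(f"Proximity differential: {level_1in - level_6in:+.1f} dB")
```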

Establish “Equivalent Gain” For Further Testing

1) Place a monitor loudspeaker on the floor, and position the microphone on a tripod stand. The stand leg nearest the monitor should be 3 feet from the monitor enclosure.

2) Set the height of the microphone stand to a repeatable position that would be appropriate for an average-height performer.

3) Changing the height of the microphone as little as possible, point the microphone directly at the center of the monitor.

4) Generate pink-noise through the monitor at a repeatable SPL.

5) Using a meter capable of RMS averaging, establish a -20 dBFS RMS input level.

Measure Mechanical Noise Susceptibility

1) Set the microphone such that it is parallel to the floor.

2) Directly above the point where the microphone grill meets the body, hold a solid, semi-rigid object (like an eraser, or small rubber ball) 6 inches over the mic.

3) Allow the object to fall and strike the microphone.

4) Note the peak input level created by the strike.

Assumptions: Microphones with greater resistance to mechanically induced noise will exhibit a lower input level. Greater resistance to mechanically induced noise is considered desirable.

Measure Wind Noise Susceptibility

1) Position the microphone on the stand such that it is parallel to the floor.

2) Place a small fan (or other source of airflow which has repeatable windspeed and air displacement volume) 6 inches from the mic’s grill.

3) Activate the fan for 10 seconds. Note the peak input level created.

Assumptions: Microphones with greater resistance to wind noise will exhibit a lower input level. Greater resistance to wind noise is considered desirable.

Measure Feedback Resistance

1) Set the microphone in a working position. For cardioid mics, the rear of the microphone should be pointed directly at the monitor. For supercardioid and hypercardioid mics, the microphone should be parallel with the floor.

2a) SM58 ONLY: Set a send level to the monitor that is just below noticeable ringing/feedback.

2b) Use the send level determined in 2a to create loop-gain for the microphone.

3) Set a delay of 1000ms to the monitor.

4) Begin a recording of the mic’s output.

5) Generate a 500ms burst of pink-noise through the monitor. Allow the delayed feedback loop to sound four times.

6) Stop the recording, and make note of the peak level of the fourth repeat of the loop.

Assumptions: Microphones with greater feedback resistance will exhibit a lower input level on the fourth repeat. Greater feedback resistance is considered desirable.
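
Because the 1000 ms delay spaces the repeats predictably, picking off the peak of each repeat from the step 4 recording is easy to automate. A hedged sketch, assuming a capture that starts right at the noise burst (the file name is hypothetical):

```python
import numpy as np
from scipy.io import wavfile

# With a 1000 ms loop delay, repeat n occupies roughly the window
# [n, n + 1) seconds after the burst at t = 0.
rate, data = wavfile.read("feedback_loop_capture.wav")
if data.dtype.kind == "i":
    data = data / np.iinfo(data.dtype).max

for n in range(5):                 # the burst itself, then four repeats
    window = data[n * rate:(n + 1) * rate]
    peak_db = 20 * np.log10(np.abs(window).max() + 1e-12)
    label = "burst" if n == 0 else f"repeat {n}"
    print(f"{label}: {peak_db:+.1f} dBFS peak")
```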

Measure Cupping Resistance

1) Mute the send from the microphone to the monitor.

2) Obtain a frequency magnitude measurement of the microphone in the working position, using the monitor as the test audio source.

3) Place a hand around as much of the mic’s windscreen as is possible.

4) Re-run the frequency magnitude measurement.

5) On the “cupped” measurement, note the difference between the highest response peak, and that frequency’s level on the normal measurement.

Assumptions: Microphones with greater cupping resistance will exhibit a smaller level differential between the highest peak of the cupped response and that frequency’s magnitude on the normal trace. Greater cupping resistance is considered desirable.


THD Troubleshooting

I might have discovered something, or I might not.

Over the last little while, I’ve done some shows where I could swear that something strange was going on. Under certain conditions, like with a loud, rich vocal that had nothing else around it, I was sure that I could hear something in FOH distort.

So, I tried soloing up the vocal channel in my phones. Clean as a whistle.

I soloed up the main mix. That seemed okay.

Well – crap. That meant that the problem was somewhere after the console. Maybe it was the stagebox output, but that seemed unlikely. No…the most likely problem was with a loudspeaker’s drive electronics or transducers. The boxes weren’t being driven into their limiters, though. Maybe a voice coil was just a tiny bit out of true, and rubbing?

Yeesh.

Of course, the very best testing is done “In Situ.” You get exactly the same signal to go through exactly the same gear in exactly the same place. If you’re going to reproduce a problem, that’s your top-shelf bet. Unfortunately, that’s hard to do right in the middle of a show. It’s also hard to do after a show, when Priority One is “get out in a hurry so they can lock the facility behind you.”

Failing that – or, perhaps, in parallel with it – I’m becoming a stronger and stronger believer in objective testing: Experiments where we use sensory equipment other than our ears and brains. Don’t get me wrong! I think ears and brains are powerful tools. They sometimes miss things, however, and don’t natively handle observations in an analytical way. Translating something you hear onto a graph is difficult. Translating a graph into an imagined sonic event tends to be easier. (Sometimes. Maybe. I think.)

This is why I do things like measure the off-axis response of a cupped microphone.

In this case, though, a simple magnitude measurement wasn’t going to do the job. What I really needed was distortion-per-frequency. Room EQ Wizard will do that, so I fired up my software, plugged in my Turbos (one at a time), and ran some trials. I did a set of measurements at a lower volume, which I discarded in favor of traces captured at a higher SPL. If something was going to go wrong, I wanted to give it a fighting chance of going wrong.

Here’s what I got out of the software, which plotted the magnitude curve and the THD curve for each loudspeaker unit:

I expected to see at least one box exhibit a bit of misbehavior which would dramatically affect the graph, but that’s not what I got. What I can say is that the first measurement’s overall distortion curve is different: it lacks the THD “dip” at 200 Hz that the other boxes exhibit, shows significantly more distortion in the “ultra-deep” LF range, and has its “hump” shifted downwards. (The three more similar boxes center that bump in distortion at 1.2 kHz. The odd one out seems to put the center at about 800 Hz.)

So, maybe the box that’s a little different is my culprit. That’s my strong suspicion, anyway.

Or maybe it’s just fine.

Hmmmmm…


Measuring A Cupped Mic

What you might think would happen isn’t what happens.

The most popular article on this site to date is the one where I talk about why cupping a vocal mic is generally a “bad things category” sort of experience. In that piece, I explain some general issues with wrapping one’s hand around a microphone grill, but there’s something I didn’t do:

I didn’t measure anything.

That reality finally dawned on me, so I decided to do a quick-n-dirty experiment on how a microphone’s transfer function changes when cupping comes into play. Different mics will do different things, so any measurement is only valid for one mic in one situation. However, even if the results can’t truly be generalized, they are illuminating.

In the following picture, the red trace is a mic pointing away from a speaker, as you would want to happen in monitor-world. The black trace is the mic in the same position, except with my hand covering a large portion of the windscreen mesh.

You would think that covering a large part of the mic’s business-end would kill off a lot of midrange and high-frequency information, but the measurement says otherwise. The high-mid and HF information is actually rather hotter, with large peaks at 1800 Hz, 3900 Hz, and 9000 Hz. The low frequency response below 200 Hz is also given a small kick in the pants. Overall, the microphone transfer function is “wild,” with more pronounced differences between peaks and dips.

The upshot? The transducer’s feedback characteristics get harder to manage, and the sonic characteristics of the unit begin to favor the most annoying parts of the audible spectrum.

Like I said, this experiment is only valid for one mic (a Sennheiser e822s that I had handy). At the same time, my experience is that other mics have “cupping behavior” which is not entirely dissimilar.


Single-Ended Measurement

I really prefer it over minutes on-end of loud pink-noise.

Today, I helped teach a live-sound class at Broadview Entertainment Arts University. We put a stage together, ran power, and set both Front Of House (FOH) and monitor-world loudspeakers. To cap off the day, I decided to show the students a bit about measuring and tuning the boxes that we had just finished squaring away.

The software we used was Room EQ Wizard.

The more I use “REW,” the more I like the way it works. Its particular mode of operation isn’t exactly industry-standard, but I do have a tendency to ignore the trends when they aren’t really helpful or interesting to me. Rather than continually blasting pink-noise (statistically uncorrelated audio signals with equal energy per octave from 20 Hz to 20 kHz) into a room for several minutes while you tweak your EQ, Room EQ Wizard plays a predetermined sine-sweep. It then shows you a graph, you make your tweaks based on the graph, you re-measure, and iterate as many times as needed.
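
If you’re curious what that predetermined sweep looks like as a signal, here’s a minimal sketch of the standard exponential-sweep construction. I’m not claiming these are REW’s exact parameters – the constants are purely illustrative.

```python
import numpy as np

def log_sweep(f1=20.0, f2=20000.0, duration=10.0, rate=48000):
    """Exponential (log-frequency) sine sweep from f1 to f2 Hz."""
    t = np.arange(int(duration * rate)) / rate
    k = np.log(f2 / f1)
    phase = 2 * np.pi * f1 * duration / k * (np.exp(t * k / duration) - 1.0)
    return np.sin(phase)

stimulus = log_sweep()   # ten seconds of rising tone, 20 Hz to 20 kHz
```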

I prefer this workflow for more than one reason.

Single Ended Measurements Are Harder To Screw Up

The industry-standard method for measuring and tuning loudspeakers is that of the dual-FFT. If you’ve used or heard of SysTune or SMAART, among others, those are dual-FFT systems. You run an essentially arbitrary signal through your rig, with that signal not necessarily being “known” ahead of time. That signal has to be captured at two points:

1) Before it enters the signal chain you actually want to test.

2) After it exits the signal chain in question.

And, of course, you have to compensate for any propagation delay between those two points. Otherwise, your measurement will get contaminated with statistical “noise,” and become harder to read in a useful way – especially if phase matters to you. Averaging does help with this, to be fair, and I do average my “REW” curves to make them easier to parse. Anybody who has taken and examined a measurement trace in a real room knows that unsmoothed results look pretty terrifying.
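
To make the two-capture-points-plus-delay-compensation idea concrete, here’s a heavily simplified sketch of what a dual-FFT analyzer has to get right before it can draw a single trace. This is a one-shot toy, not how SMAART or SysTune actually work internally (real analyzers average many frames, among other refinements).

```python
import numpy as np
from scipy.signal import fftconvolve

def dual_fft_sketch(ref, meas, rate):
    """Estimate a transfer function magnitude from a reference capture and a
    measurement capture. Assumes meas is at least len(ref) + delay samples."""
    corr = fftconvolve(meas, ref[::-1], mode="full")   # cross-correlation
    delay = int(corr.argmax()) - (len(ref) - 1)        # propagation delay, samples
    aligned = meas[delay:delay + len(ref)]             # compensate the delay
    H = np.fft.rfft(aligned) / np.fft.rfft(ref)        # transfer function estimate
    freqs = np.fft.rfftfreq(len(ref), 1.0 / rate)
    return freqs, 20 * np.log10(np.abs(H) + 1e-12)
```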

In any event, dual-FFT measurements tend to be more difficult to set up and run effectively. On top of how easy it is to screw up ANY measurement, whether by measuring the wrong thing, forgetting an upstream EQ, or putting the mic in a terrible spot, you have the added hassles of getting your two measurement points routed and delay-compensated.

Over the years, dual-FFT packages have gotten much better at guiding users through the process, internally looping back the reference signal, and automatically picking compensation delay times. Even so, automating a complicated process doesn’t make the process less complicated. It just shields you from the complexity for as long as the automation can help you. (I’m not bagging on SMAART and SysTune here. They’re good bits of software that plenty of folks use successfully. I’m just pointing some things out.)

Single Ended, “Sweep” Measurements Can Be Quieter (And Less Annoying)

Another issue with measurements involving broadband signals is that they have greater susceptibility to in-room noise. As a whole, the noise may be quite loud. However, any given frequency can’t be running very “hot,” as the entire signal has to make it cleanly through the signal path. As such, noise in the room easily contaminates the test at the frequencies contained within that noise, unless you run the test signal loudly enough. With a single-ended, sine-sweep measurement, the instant that the measurement tone is at a certain frequency, the entire system output is dedicated to that frequency alone. As such, if you have in-room noise of 50 dB SPL at 1 kHz, running your measurement signal at 70 dB SPL should completely blow past the noise – while remaining comfortable to hear. With broadband noise, the measurement signal in the same situation might have to be 90 dB SPL.

Please note that single-ended measurements of broadband signals DO exist, and they have noise problems similar to those of broadband-noise, dual-FFT solutions.

The other nice thing about “sweep” measurements is that everybody gets a break from the noise. For 10 seconds or so, a rising tone sounds through the system, and then it stops. This is a stark contrast to minutes of “KSSSSSHHHHHH” that otherwise have to be endured.

Quality, Single Ended Measurement Software Can Be Cheaper

A person could conceivably design and build single-ended measurement software, and then sell it for a large amount of money. A person could also create dual-FFT software and give it away for free (Visual Analyzer is a good example).

However, on average, it seems that when it comes time to bring “easy to use” and “affordable” together, single-ended is where you’ll have to look. I really like Visual Analyzer, but you really, really have to know what you’re doing to use it effectively. SMAART and SysTune are user-friendly while also being incredibly powerful, but cost $700 – $1000 to acquire.

Room EQ Wizard is friendly (at least to me), and free. It’s hard to beat free when it’s also good.


I want to be careful to say (again) that I’m not trying to get people away from the highly-developed and widely accepted toolsets available in dual-FFT measurement packages. What I’m trying to say is that “dual-FFT with broadband noise in pseudo-realtime” isn’t the only way to measure and tune a sound system. There are other options that are easier to get into, and you can always step up later.


How Much Light For Your Dollar?

Measurements and observations regarding a handful of relatively inexpensive LED PARs.

I’m in the process of getting ready for a pretty special show. The album “Clarity” by Sons Of Nothing is turning 10, and a number of us are trying to put together one smasher of a party.

Of course, that means video.

And our master of all things videographic is concerned about having enough light. We can’t have anybody in the band who’s permanently stuck in “shadow.” You only get one chance to shoot a 10th anniversary concert, and we want to get it right.

As such, I’m looking at how to beef up my available lighting instruments. It’s been a long while since I’ve truly gone shopping for that old mainstay of small-venue lighting, the LED wash PAR, but I do take a look around every so often. There’s a lot to see, and most of it isn’t very well documented. Lighting manufacturers love to tell you how many diodes are in a luminaire, and they also like to tell you how much power the thing consumes, but there appears to be something of an allergy to coughing up output numbers.

Lux, that is. Lumens per square meter. The actual effectiveness of a light at…you know…LIGHTING things.

So, I thought to myself, “Self, wouldn’t it be interesting to buy some inexpensive lights and make an attempt at some objective measurement?”

I agreed with myself. I especially agreed because Android 4.4 devices can run a cool little Google App called “Science Journal.” The software translates the output from a phone’s ambient light sensor into units of lux. For free (plus the cost of the phone, of course). Neat!

I got onto Amazon, found myself a lighting brand (GBGS) that had numerous fixtures available for fulfillment by Amazon, and spent a few dollars. The reason for choosing fulfillment from Amazon basically comes down to this: I wanted to avoid dealing with an unknown in terms of shipping time. Small vendors can sometimes take a while to pack and ship an order. Amazon, on the other hand, is fast.

The Experiment

Step 1: Find a hallway that can be made as dark as possible – ideally, dark enough that a light meter registers 0 lux.

Step 2: At one end, put the light meter on a stand. (A mic stand with a friction clip is actually pretty good at holding a smartphone, by the way.)

Step 3: At the other end, situate a lighting stand with the “fixture under test” clamped firmly to that stand.

Step 4: Measure the distance from the lighting stand to the light meter position. (In my case, the distance was 19 feet.)

Step 5: Darken the hallway.

Step 6: Set the fixture under test to maximum output using a DMX controller.

Step 7: Allow the fixture to operate at full power for roughly 10 minutes, in case light output is reduced as the fixture’s heat increases.

Step 8: Ensure the fixture under test is aimed directly at the light meter.

Step 9: Note the value indicated by the meter.

Important Notes

A relatively long distance between the light and the meter is recommended. This is so that any positioning variance introduced by placing and replacing either the lights or the meter has a reduced effect. At close range, a small variance in distance can skew a measurement noticeably. At longer distances, that same variance value has almost no effect. A four-inch length difference at 19 feet is about a 2% error, whereas that same length differential at 3 feet is an 11% error.

It’s important to note that the hallway used for the measurement had white walls. This may have pushed the readings higher, as – similarly to audio – energy that would otherwise be lost to absorption is re-emitted and potentially measurable.

It was somewhat difficult to get a “steady” measurement using the phone as a meter. As such, I have estimated lux readings that are slightly lower than the peak numbers I observed.

These fixtures may or may not be suitable for your application. These tests cannot meaningfully speak to durability, reliability, acceptability in a given setting, and so on.

The calculation for 1 meter lux was as follows:

19′ = 5.7912 m

5.7912 = 2^2.53 (2.53 doublings of distance from 1m)

Assumed the inverse square law for intensity; for each halving of distance, intensity quadruples.

Multiply 19′ lux by 4^2.53 (33.53)

Calculated 1 meter lux values are just that – calculated. An LED PAR is not a point source of light, and does not behave like one up close; a certain distance is required before all the emitters combine and appear to be a single source of light.
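
As a sanity check, the doublings-of-distance arithmetic collapses into a single multiplication by distance squared; 4^2.53 and 5.7912^2 are the same ~33.5 factor. A minimal sketch (the 100 lux input is just an example reading):

```python
def lux_at_1m(measured_lux, distance_m):
    # Inverse square law: intensity scales with 1 / d^2, so the 1 m
    # equivalent is the reading multiplied by d^2 (~33.5 for 5.7912 m).
    return measured_lux * distance_m ** 2

distance_m = 19 * 0.3048                 # 19 ft = 5.7912 m
print(lux_at_1m(100, distance_m))        # 100 lux at 19 ft -> ~3354 lux at 1 m
```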

The Data

The data display requires Javascript to work. I’m sorry about that – I personally dislike it when sites can’t display content without Javascript. However, for the moment I’m backed into a corner by the way that WordPress works with PHP, so Javascript it is.


Why Chaining Distortion Doesn’t Sound So Great

More dirt is not necessarily cool dirt.

One day, just before Fats closed, I was talking with Christian from Blue Zen. We were discussing the pursuit of tone, and a discovery that Christian had made (with the help of Gary at Guitar Czar). Christian had been trying to get more drive from his amp, which already had a fair bit of crunch happening. So, he had put a distortion pedal between the guitar and the amplifier input.

He hadn’t liked the results. He found the sound to be too scratchy and thin.

Upon consultation with Gary, the distortion pedal had been removed, and a much cleaner boost substituted. Christian was definitely happier.

But why hadn’t the original solution worked?

The Frequency Domain

Distortion can be something of a complex creature, but it does have a “simple” form. The simple form is harmonic distortion. Harmonic distortion occurs when the transfer function of an audio chain becomes nonlinear, and a tone is passed with additional products that follow a mathematical pattern: For a given frequency in a signal, the generated products are integer multiples of that frequency.

Integers are “whole” numbers, so, for a 200 Hz tone undergoing harmonic distortion, additional tones are generated at 200 Hz X 2, 3, 4, 5, 6, etc. Different circuits generate the additional tones at different intensities, and which pattern you prefer is a matter of taste.
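
You can watch this happen with a few lines of Python. The tanh curve below is a generic, odd-symmetric stand-in for a saturating circuit – not a model of any particular pedal or plugin – and, like the saturation processor measured below, it favors the odd-numbered harmonics.

```python
import numpy as np

rate = 48000
t = np.arange(rate) / rate               # one second of samples
tone = np.sin(2 * np.pi * 200 * t)
dirty = np.tanh(3.0 * tone)              # crude stand-in for a saturating circuit

spectrum = np.abs(np.fft.rfft(dirty)) * 2 / len(dirty)
for f in (200, 400, 600, 800, 1000):     # 1 s of audio -> bin index == Hz
    level_db = 20 * np.log10(spectrum[f] + 1e-12)
    print(f"{f} Hz: {level_db:+.1f} dB") # odd multiples dominate for tanh
```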

For example, here’s an RTA trace of a 200 Hz tone being run through a saturation plugin.

[Image: RTA trace of a 200 Hz tone run through a saturation plugin]

(The odd-numbered harmonics are definitely favored by this particular saturation processor’s virtual circuit.)

The thing is that harmonics are always higher in frequency than the fundamental. The “hotter” the harmonic content, the more the signal’s overall frequency response “tilts” toward the high end. As distortion piles up, the high-frequency harmonic content can start to overwhelm the lower-frequency information, resulting in a sound that is no longer “warm,” “thick,” “fat,” “chunky,” “creamy,” or whatever adjective you like to use.

Take a look at this transfer function trace comparing a signal run through one distortion stage and two distortion stages. The top end is very pronounced, with plenty of energy that’s not much more than “fizz” or “hiss”:

[Image: transfer function comparison of one distortion stage versus two]

If you chain distortion into distortion, you’re quite likely to just pile up more and more harmonic content, thus emphasizing the high end more than you’d prefer. There’s more to it than that, though. Look at this RTA trace of a tone being run through chained saturation plugins:

[Image: RTA trace of a 200 Hz tone run through two chained saturation plugins]

To make things easier to see, you can also take a look at this overlay of the two traces:

[Image: overlay of the single-distortion and double-distortion RTA traces]

There’s noticeably more energy in the high-end, and the distortion products are also present at many more frequencies. The original harmonic distortion tones are being distorted themselves, and there may also be some intermodulation distortion occurring. Intermodulation distortion is also a nonlinearity in a system’s transfer function, but the additional tones aren’t multiples of the original tones. Rather, they are sums and differences. For instance, content at 200 Hz and 300 Hz passing through a nonlinear system can generate products at 100 Hz (the difference) and 500 Hz (the sum), which are not harmonically related to either original tone.

IM distortion is generally thought to sound pretty ugly when compared to harmonic distortion.

So, yes, chaining distortion does give you more drive, but it can also give you way more “dirt” than you actually want. If you like the sound of your amp’s crunch, and want more of it, you’re better off finding a way to run your clean signal at a higher (but still clean) level. As the amp saturates, the distortion products will go up – but at least it will be only one set of distortion products.

Dynamic Range

The other problem with heaping distortion on top of distortion is that of emphasizing all kinds of noises that you’d prefer not to. Distortion is, for all intents and purposes, a “dirty” limiter. Limiting, being an extreme form of compression, reduces dynamic range (the difference between high and low amplitude signals). This can be very handy up to a point. Being able to crank up quieter sounds means that tricks like high-speed runs and pinch-harmonics are easier to pull off effectively.

There’s a point, though, where sounds that you’d prefer to de-emphasize are smashed right up into the things you do want to hear. To use a metaphor, the problem with holding the ceiling steady and raising the floor is that you eventually get that nasty old carpet in your face. The noise of your pickups and instrument processors? Loud. Your picking? Loud. Your finger movement on the strings? Loud. Any other sloppiness? Loud.

Running distortion into distortion is a very effective way to make what you’d prefer to be quiet into a screaming vortex of noise.

Is Chaining Distortion Wrong?

I want to close with this point.

Chaining distortion is not “wrong.” You shouldn’t be scared to try it as a science experiment, or to get a wild effect.

The point of all this is merely to say that serial distortion is not the best practice for a certain, common application – the application of merely running a given circuit at a higher level. For that particular result, which is quite commonly desired, you will be far better served by feeding the circuit with more “clean” gain. In all likelihood, your control over your sound will be more fine-grained, and also more predictable overall.


A Statistics-Based Case Against “Going Viral” As A Career Strategy

Going viral is neat, but you can’t count on it unless you can manage to do it all the time.

“Going viral is not a business plan.” -Jackal Group CEO Gail Berman

There are plenty of musicians (and other entrepreneurs, not just in the music biz) out there who believe that all they need is “one big hit.” If they get that one big hit, then they will have sustained success at a level that’s similar to that of the breakthrough.

But…

Have you ever heard of a one-hit wonder? I thought so. There are plenty to choose from: Bands and brands that did one thing that lit up the world for a while, and then faded back into obscurity.

Don’t get me wrong. When something you’ve created really catches on, it’s a great feeling. It DOES create momentum. It IS helpful for your work. It IS NOT enough, though, to guarantee long-term viability. It’s a nice bit of luck, but it’s not a business plan in any sense I would agree with.

Why? Because of an assumption which I think is correct.

Consistency

In my mind, two hallmarks of a viable, long-term, entrepreneurial strategy are:

A) You avoid being at the mercy of the public’s rapidly-shifting attention.

B) Your product, and its positive effect on your business, are consistent and predictable.

Part A disqualifies “going viral” as a core strategy because going viral rests tremendously upon the whims of the public. It’s so far out of your control (as an individual creator), and so unpredictable that it can’t be relied on. It’s as if you were to try farming land where the weather was almost completely random – one day of rain, then a tornado, then a month of scorching heat, then an hour of hail, then a week of arctic freeze, then two days of sun, then…

You might manage to grow something if you got lucky, but you’d be much more likely to starve to death.

Part B connects to Part A. If you can produce a product every day, but you can’t meaningfully predict what kind of revenue it will generate, you don’t have a basis for a business. If your product is completely at the mercy of the public’s attention-span, and will only help you if the public goes completely mad over it, you are standing on very shaky ground. Sure, you may get a surge in popularity, but when will that surge come? Will it be long-term? A transient hit will not keep you afloat. It can give you a nice infusion of cash. It can give you something to build on. It can be capitalized on, but it can’t be counted on.

A viable business rests on things that can be counted on, and this is where the statistics come in. If I reduce my opinion to a single statement, I come up with this:

Long-term business viability is found within one standard deviation, if it’s found at all.

Now, what in blazes does that mean?

One Sigma

When we talk about a “normal distribution,” we say that a vast majority of what we can expect to find – almost all of it, in fact – will be between plus/minus two standard deviations. A standard deviation is represented as “sigma,” and is a measure of variation. If you release ten songs, and all of them get between 90 and 110 listens every day, then there’s not much variation in their popularity. The standard deviation is small. If you release ten songs, and one of them gets 10,000 listens per day, another gets 100, another gets 20, and so on all over the map, then the standard deviation is large. There are wild variations in popularity from song to song.
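
To make the ten-songs example concrete, here’s a quick sketch using Python’s statistics module. The listen counts are invented purely to mirror the two scenarios just described.

```python
import statistics

steady = [95, 102, 99, 108, 91, 104, 97, 110, 93, 101]   # 90-110 listens/day
wild = [10000, 100, 20, 5, 340, 60, 12, 900, 45, 7]      # all over the map

for name, counts in (("steady", steady), ("wild", wild)):
    print(f"{name}: mean {statistics.mean(counts):.0f}, "
          f"sigma {statistics.stdev(counts):.0f}")
# steady: mean 100, sigma 6    -- popularity you can plan around
# wild:   mean 1149, sigma 3122 -- one outlier dominates everything
```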

When I say that “Long-term business viability is found within one standard deviation, if it’s found at all,” what I’m saying is that strategy has to be built on things you can reasonably expect. It’s true that you might have an exceptionally bad day here and there, and you might also have an exceptionally good day, but you can’t build your business on either of those two things. You have to look at what is probably going to happen the majority of the time.

Do I have some examples? You bet!

I once ran a heavily subsidized (we wouldn’t have made it otherwise) venue that admitted all-ages. When it was all over and the dust settled, I did some number crunching. Our average revenue per show was $77. The standard deviation in show revenue was $64. That’s an enormous spread in value. Just one standard deviation in either direction covered a range of revenue from $13 to $141. With a variation that enormous, the only long term strategy would have been to stay subsidized. Not much money was made, and “duds” were plenty common.

We can also look at the daily traffic for this site. In fact, it’s a great example because I recently had an article go viral. My post about why audio humans get so bent out of shape when a mic is cupped took off like a rocket. During the course of the article’s major “viralness” (that might not be a real word, but whatever), this site got 110,000 views. If you look at the same length of time just before the article was published, the site got 373 views.

That’s a heck of an outlier. Even if we keep that outlier in the data and let it push things off to the high side, the average view-count per day is 162, with a standard deviation of over 2000. In that case, the very peak of the article’s viral activity is +22 standard deviations (holy smoke!) from the mean.

I can’t build a business on that. I can’t predict based on that. I can’t assume that anything else will ever do that well. I would never have dreamed that particular article would catch fire as it did. There are plenty of posts on this site that I consider more interesting, yet didn’t have that kind of popularity. The public made their decision, and I didn’t expect it.

It was really cool to go viral, and it did help me out. However, I have not been “crowned king of show production sites,” or anything like that. My day to day traffic is higher than it was before, but my life and the site’s life haven’t fundamentally changed. The day to day is back to normal, and normal is what I can think about in the long-term. This doesn’t mean I can’t dream big, or take an occasional risk – it just means that my expectations have to be in the right place: within about one standard deviation. (Actually, less than that.)


Where’s Your Data?

I don’t think audio-humans are skeptical enough.

If I’m going to editorialize on this, I first need to be clear about one thing: I’m not against certain things being taken on faith. There are plenty of assumptions in my life that can’t be empirically tested. I don’t have a problem with that in any way. I subscribe quite strongly to that old saw:

You ARE entitled to your opinion. You ARE NOT entitled to your own set of “facts.”

But, of course, that means that I subscribe to both sides of it. As I’ve gotten farther and farther along in the show-production craft, especially the audio part, I’ve gotten more and more dismayed with how opinion is used in place of fact. I’ve found myself getting more and more “riled” with discussions where all kinds of assertions are used as conversational currency, unbacked by any visible, objective defense. People claim something, and I want to shout, “Where’s your data, dude? Back that up. Defend your answer!”

I would say that part of the problem lies in how we describe the job. We have (or at least had) the tendency to say, “It’s a mix of art and science.” Unfortunately, my impression is that this has come to be a sort of handwaving of the science part. “Oh…the nuts and bolts of how things work aren’t all that important. If you’re pleased with the results, then you’re okay.” While this is a fair statement on the grounds of having reached a workable endpoint through unorthodox or uneducated means, I worry about the disservice it does to the craft when it’s overapplied.

To be brutally frank, I wish the “mix of art and science” thing would go away. I would replace it with, “What we’re doing is science in the service of art.”

Everything that an audio human does or encounters is precipitated by physics – and not “exotic” physics, either. We’re talking about Newtonian interactions and well-understood electronics here, not quantum entanglement, subatomic particles, and speeds approaching that of light. The processes that cause sound stuff to happen are entirely understandable, wieldable, and measurable by ordinary humans – and this means that audio is not any sort of arcane magic. A show’s audio coming off well or poorly always has a logical explanation, even if that explanation is obscure at the time.

I Should Be Able To Measure It

Here’s where the rubber truly meets the road on all this.

There seems to be a very small number of audio humans who are willing to do any actual science. That is to say, investigating something in such a way as to get objective, quantitative data. This causes huge problems with troubleshooting, consulting, and system building. All manner of rabbit trails may be followed while trying to fix something, and all manner of moneys are spent in the process, but the problem stays un-fixed. Our enormous pool of myth, legend, and hearsay seems to be great for swatting at symptoms, but it’s not so hot for tracking down the root cause of what’s ailing us.

Part of our problem – I include myself because I AM susceptible – is that listening is easy and measuring is hard. Or, rather, scientific measuring is hard.

Listening tests of all kinds are ubiquitous in this business. They’re easy to do, because they aren’t demanding in terms of setup or parameter control. You try to get your levels matched, set up some fast signal switching, maybe (if you’re very lucky) make it all double-blind so that nobody knows what switch setting corresponds to a particular signal, and go for it.

Direct observation via the senses has been used in science for a long time. It’s not that it’s completely invalid. It’s just that it has problems. The biggest problem is that our senses are interpreted through our brains, an organ which develops strong biases and filters information so that we don’t die. The next problem is that the experimental parameter control actually tends to be quite shoddy. In the worst cases, you get people claiming that, say, console A has a better sound than console B. But…they heard console A in one place, with one band, and console B in a totally different place with a totally different band. There’s no meaningful comparison, because the devices under test AND the test signals were different.

As a result, listening tests produce all kinds of impressions that aren’t actually helpful. Heck, we don’t even know what “sounds better” means. For this person over here, it means lots of high-frequency information. For some other person, it means a slight bass boost. This guy wants a touch of distortion that emphasizes the even-numbered harmonics. That gal wants a device that resembles a “straight wire” as much as possible. Nobody can even agree on what they like! You can’t actually get a rigorous comparison out of that sort of thing.

The flipside is, if we can actually hear it, we should be able to measure it. If a given input signal actually sounds different when listened to through different signal paths, then those signal paths MUST have different transfer functions. A measurement transducer that meets or exceeds the bandwidth and transient response of a human ear should be able to detect that output signal reliably. (A measurement mic that, at the very least, significantly exceeds the bandwidth of human hearing is only about $700.)

As I said, measuring – real measuring – is hard. If the analysis rig is set up incorrectly, we get unusable results, and it’s frighteningly easy to screw up an experimental procedure. Also, we have to be very, very defined about what we’re trying to measure. We have to start with an input signal that is EXACTLY the same for all measurements. None of this “we’ll set up the drums in this room, play them, then tear them down and set them up in this other room,” can be tolerated as valid. Then, we have to make every other parameter agree for each device being tested. No fair running one preamp closer to clipping than the other! (For example.)

Question Everything

So…what to do now?

If I had to propose an initial solution to the problems I see (which may not be seen by others, because this is my own opinion – oh, the IRONY), I would NOT say that the solution is for everyone to graph everything. I don’t see that as being necessary. What I DO see as being necessary is for more production craftspersons to embrace their inner skeptic. The less coherent explanation that’s attached to an assertion, the more we should doubt that assertion. We can even develop a “hierarchy of dubiousness.”

If something can be backed up with an actual experiment that produces quantitative data, that something is probably true until disproved by someone else running the same experiment. Failure to disclose the experimental procedure makes the measurement suspect, however. How exactly did they arrive at the conclusion that the loudspeaker will tolerate 1 kW of continuous input? No details? Hmmm…

If a statement is made and backed up with an accepted scientific model, the statement is probably true…but should be examined to make sure the model was applied correctly. There are lots of people who know audio words, but not what those words really mean. Also, the model might change, though that’s unlikely in basic physics.

Experience and anecdotes (“I heard this thing, and I liked it better”) are individually valid, but only in the very limited context of the person relating them. A large set of similar experiences across a diverse range of people expands the validity of the declaration, however.

You get the idea.

The point is that a growing lack of desire to just accept any old statement about audio will, hopefully, start to weed out some of the mythological monsters that periodically stomp through the production-tech village. If the myths can’t propagate, they stand a chance of dying off. Maybe. A guy can hope.

So, question your peers. Question yourself. Especially if there’s a problem, and the proposed fix involves a significant amount of money, question the fix.

A group of us were once troubleshooting an issue. A producer wasn’t liking the sound quality he was getting from his mic. The discussion quickly turned to preamps, and whether he should save up to buy a whole new audio interface for his computer. It finally dawned on me that we hadn’t bothered to ask anything about how he was using the mic, and when I did ask, he stated that he was standing several feet from the unit. If that’s not a recipe for sound that can be described as “thin,” I don’t know what is. His problem had everything to do with the acoustic physics of using a microphone, and nothing substantial AT ALL to do with the preamp he was using.

A little bit of critical thinking can save you a good pile of cash, it would seem.

(By the way, I am biased like MAD against the crowd that craves expensive mic pres, so be aware of that when I’m making assertions. Just to be fair. Question everything. Question EVERYTHING. Ask where the data is. Verify.)


The Glorious Spectrograph

They’re better than other real time analysis systems.

I don’t really know how common it is, but there are at least a few of us who like to do a particular thing with our console solo bus:

We connect some sort of analyzer across the output.

This is really handy, because you can look at different audio paths very easily – no patching required. You do what you have to do to enable “solo” on the appropriate channel(s), and BOOM! What you’ve selected, and ONLY what you’ve selected, is getting chewed on by the analyzer.

The measurement solution that seems to be picked the most often is the conventional RTA. You’ve almost certainly encountered one at some point. Software media players all seem to feature at least one “visualization” that plots signal magnitude versus frequency. Pro-audio versions of the RTA have more frequency bands (often 31, to match up with 1/3 octave graphic EQs), and more objectively useful metering. They’re great for finding frequency areas that are really going out of whack while you’re watching the display, but I have to admit that regular spectrum displays have often failed to be truly useful to me.

It’s mostly because of their two-dimensional nature.

I Need More “Ds,” Please

A bog-standard spectrum analyzer is a device for measuring and displaying two dimensions. One dimension is amplitude, and the other is frequency. These dimensions are plotted in terms of each other at quasi-instantaneous points in time. I say “quasi” because, of course, the display does not react instantaneously. The metering may be capable of reacting very quickly, and it may also have an averaging function to smooth out wild jumpiness. Even so, the device is only meant to show you what’s happening at a particular moment. A moment might last a mere 50ms (enough time to “see” a full cycle of a 20 Hz wave), or the moment might be a full-second average. In either case, once the moment has passed, it’s lost. You can’t view it anymore, and the analyzer’s reality no longer includes it meaningfully.

Ironically, this isn’t helpful behavior, because fleeting, unrepeatable moments are exactly what live production is made of. A live show is a series of moments that can’t be stopped and replayed. If you get into a trouble spot at a particular time, and then that problem stops manifesting, you can’t cause that exact event to happen again. Yes, you CAN replicate the overall circumstances in an attempt to make the problem manifest itself again, but you can’t return to the previous event. The “arrow of time,” and all that.

This is where the spectrograph reveals its gloriousness: It’s a three-dimensional device.

You might not believe me, especially if you’re looking at the spectrograph image up there. It doesn’t look 3D. It seems like a flat plot of colors.

A plot of colors.

Colors!

When we think of 3D, we’re used to all of the dimensions being represented spatially. We look for height, width, and depth – or as much depth as we can approximate on displays that don’t actually show it. A spectrograph uses height and width for two dimensions, and displays the third with a color ramp.

The magic of the spectrograph is that it uses the color ramp for the magnitude parameter. This means that height and width can be assigned, in whatever way is most useful, to frequency and TIME.

Time is the key.

Good Timing

With a spectrograph, an event that has been measured is stored and displayed alongside the events that follow it. You can see the sonic imprint of those past events at whatever time you want, as long as the unit hasn’t overwritten that measurement. This is incredibly useful in live-audio, especially as it relates to feedback.

The classic “feedback monster” shows up when a certain frequency’s loop gain (the total gain applied to the signal as it enters a transducer, traverses a signal path, exits another transducer, and re-enters the original transducer) becomes too large. With each pass through the loop, that frequency’s magnitude doesn’t drop as much as is desired, doesn’t drop at all, or even increases. The problem isn’t the frequency in and of itself, and the problem isn’t the frequency’s magnitude in and of itself. The problem is the change in magnitude over time being inappropriate.

There’s that “time” thing again.

On a basic analyzer, a feedback problem only has a chance of being visible if it results in a large enough magnitude that it’s distinguishable from everything else being measured at that moment. At that moment, you can look at the analyzer, make a mental note about which frequency was getting out of hand, and then try to fix it. If the problem disappears because you yanked the fader back, or a guitar player put their hand on their strings, or a mic got temporarily moved to a better spot, all you have to go on is your memory of where the “spike” was. Again, the basic RTA doesn’t show you measurements in terms of time, except within the limitations of its own attack and release rates.

But a spectrograph DOES show you time. Since a feedback problem is a limited range of frequencies that are failing to decay swiftly enough, a spectrograph will show that lack of decay as a distinctive “smear” across the unit’s time axis. If the magnitude of the problem area is large enough, the visual representation is very obvious. Further, the persistence of that representation on the display means that you have some time to freeze the analyzer…at which point you can zero in on exactly where your problem is, so as to kill it with surgical precision. No remembering required.
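
A quick simulation shows the idea. This is a hedged sketch, not a real capture: one tone decays promptly, the other (our simulated feedback) barely decays at all, and the spectrogram rows quantify the difference you’d see as a smear along the time axis.

```python
import numpy as np
from scipy.signal import spectrogram

rate = 48000
t = np.arange(3 * rate) / rate
normal = np.sin(2 * np.pi * 500 * t) * np.exp(-3.0 * t)     # decays promptly
ringing = np.sin(2 * np.pi * 2000 * t) * np.exp(-0.2 * t)   # barely decays

f, times, Sxx = spectrogram(normal + ringing, fs=rate, nperseg=4096)
db = 10 * np.log10(Sxx + 1e-12)
for target in (500, 2000):               # how far each tone falls over 3 s
    row = np.argmin(np.abs(f - target))
    print(f"{target} Hz: {db[row, 0] - db[row, -1]:.0f} dB of decay")
```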

So, if you’ve got a spectrograph inserted on your solo bus, you can solo up a problem channel, very briefly drive it into feedback, drop the gain, freeze the analyzer, and start fixing things without having to let the system ring for an annoyingly long time. This is a big deal when trying to solve a problem during a show that’s actually running, and it’s also extremely useful when ringing out a monitor rig by yourself. If all this doesn’t make the spectrograph far more glorious than a basic, 2D analyzer, I just don’t know what to do for you.


How Much To Worry About Phone Audio

Maybe a little, but not much.

A perennial moan-n-groan amongst pro-audio types is: “Ya cain’t trust dem portable music players, Sonny!” At the core of this angst is the idea that the inexpensive output circuitry of iPods, phones, tablets, and [insert whatever else here] simply cannot handle audio very well. It MUST be doing something nasty to the passing signal. It’s an affront to plug that device into something serious, like a mixing console. An affront, I say!

But here’s the thing.

Up until recently, I have never seen any kind of measurement made to prove or disprove the complaint. No numbers. Nothing quantitative at all. Just an overall sense of hoity-toity superiority, and the odd anecdote in the vein of “We plugged the music player into the console, and I swear, Led Zeppelin didn’t sound as zeppelin-y through the thing.” I say that I haven’t seen anything quantitative until recently because of a page from this ProSoundWeb thread. Scroll down a bit, and you’ll find a link to a series of honest-to-goodness measurements on the iPhone 5.

The short story is that the iPhone 5 has output that is basically laser-flat from DC to dog-whistles. It does roll off a tiny bit above 5 kHz, but the curve is on the order of half a decibel per octave. If you plug the thing into a higher-impedance-than-headphones input (as you would if you were running the phone to a mixing console), the phone is definitely NOT the limiting factor in the audio.

Seeing that measurement inspired me to pull out my Samsung Galaxy SIII and run my own series of tests.

Dual FFT

The first thing I did was to get a sample of pink noise, put the WAV file in my phone’s storage, and play that audio back to my computer. In the process of getting the recorded signal from the phone aligned with the original noise, I heard something very curious:

When the start points were aligned and summed, there was the strange effect of a downward-sweeping comb filter. Not steady comb-filtering…SWEEPING. (Like a flanger effect.) Zooming into the ends of the audio regions, I could clearly see that the recording from the phone was ending earlier than the reference file. The Galaxy was very definitely not playing the test signal at the same speed that the computer played the original noise. On a hunch, I set the playback varispeed on the phone recording to 0.91875 of normal. The comb-filter sweep effect essentially disappeared.

See, my hunch was that the phone’s sampling rate was 48 kHz instead of 44.1 kHz. My hunch was also that the phone was not sample-rate converting the file, but just playing it at the higher rate. That’s why I chose the 0.91875 varispeed factor. Divide 44100 by 48000, and that’s the number that comes out – which would be the ratio of playback speeds if no rate-conversion was going on.
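
The arithmetic, for the curious: playing a 44.1 kHz file at 48 kHz runs it fast, and pitches everything up by almost a semitone and a half.

```python
import math

ratio = 44100 / 48000                     # = 0.91875, the varispeed factor
cents = 1200 * math.log2(48000 / 44100)   # pitch error from playing 44.1k at 48k
print(f"{ratio:.5f}, about {cents:.0f} cents sharp")   # ~147 cents, ~1.5 semitones
```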

So, in the case of WAV files, the phone may very well not be playing them back at the speed it ought to be. That IS something to think about, although it’s hardly a fatal problem if the discrepancy is from 44.1 k sampling to 48 k. Also, that’s not a problem with audio circuit design. It’s a software implementation issue.

In the end, I ran the dual-FFT analysis with the phone audio playing at 1X speed, because the “corruption” introduced by the time-stretching algorithm was enough to make the measurement more uncertain (rather than less). Uncertainty in measurements like these manifests the same way as noise does. It causes the trace to become “wild” or “fuzzy,” because the noise creates a sort of statistical perturbation that follows the shape of the curve. The more severe that perturbation is, the tougher it is to read the measurement in a meaningful way.

Here’s what I got in terms of the phone’s frequency response:

[Image: dual-FFT frequency response trace of the phone’s output]

You can see what I mean in terms of the noise. Especially at higher frequencies, the measurement shows a bit of uncertainty. I used a very high averaging number in order to keep things under control.

In any case, the trace is statistically flat from 20 Hz to 20 kHz. The phone’s output circuitry sounds just fine, thanks.

FFT Noise

With the problems introduced by playback timing, I wanted to also try tests with a signal that “self references.” FFT noise fits this description. Run through a properly configured analyzer, FFT noise (which sounds like a series of clicks) does not require a “known” signal for comparison. Its own properties are such that, when measured correctly, the unaltered signal should be completely flat.

As an aside, you may remember me talking about FFT noise in a bit more detail here.

In the article I just linked, I didn’t get into one of the main weaknesses of FFT noise, and that is its susceptibility to external noise. FFT noise is really great when you want to test, say, plugins used in a DAW, because digital silence is actual silence – 0 signal of any kind. There’s no electronic background noise to consider. The problem, though, is that FFT noise is a series of spaced “clicks.” In the empty spaces, anything other than digital silence is incredibly apparent, and easily corrupts the measurement trace.
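
That “self-referencing” property isn’t magic; it falls out of the math of a periodic impulse train. Here’s a sketch of why the unaltered stimulus should measure flat – the sizes are illustrative, not anything Reaper or Visual Analyzer specifically uses.

```python
import numpy as np

# A periodic click (impulse) train has identical magnitude at every active
# analysis bin, so any deviation from flat is the system or contaminating noise.
N, period = 65536, 1024
clicks = np.zeros(N)
clicks[::period] = 1.0                  # 64 evenly spaced clicks

mags = np.abs(np.fft.rfft(clicks))
active = mags[mags > 1e-6]              # bins at multiples of the click rate
print(active.min(), active.max())       # both ~64.0: dead flat
```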

Even so, I wanted to give the alternate stimulus a try.

This is what I got in Reaper’s Gfxanalyzer, which is meant to “mate” well with FFT noise:

[Image: FFT-noise trace in Reaper’s Gfxanalyzer]

Again, the trace is statistically flat, although low-frequency noise is very apparent.

For an alternate trace, I tried to configure Visual Analyzer to play nicely with the noise.

[Image: FFT-noise trace in Visual Analyzer]

Once more, the trace is noisy but otherwise flat.

Conclusions

It’s very important that I recognize a huge limitation in all of this: The sample size is very low. One iPhone 5 and one Samsung Galaxy SIII, with one measurement each, do not properly constitute a representative sample of every phone and media player that might ever get plugged into a mixing console.

At the same time, actually measuring these devices suggests that the categorical write-off of portable players as being unable to pass good audio is just worry to no purpose. There are probably some horrible media players out there, with really bad output circuitry. However, half-decent analog output stages have reached a point where implementing them is, I would venture to say, “trivial.” I would further guess that most product-design engineers are using output circuits that are functionally identical to each other. When it comes to plugging a phone, tablet, or MP3 player into a console, I simply can’t find a reason to be up in arms about the quality of the signal being handed to the output jack. I might worry a bit about the physical connection provided by that jack, but the signal on the contacts is a different matter.

I’m in agreement with the sentiments expressed by others. If the audio from a media device doesn’t sound so good, the problem is far more likely to be the source material than the output circuitry.

The phone is probably fine. What’s the file like? What’s the software implementation like?