Tag Archives: Measurement

A Statistics-Based Case Against “Going Viral” As A Career Strategy

Going viral is neat, but you can’t count on it unless you can manage to do it all the time.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.

[Image: normaldistribution]

“Going viral is not a business plan.” -Jackal Group CEO Gail Berman

There are plenty of musicians (and other entrepreneurs, not just in the music biz) out there who believe that all they need is “one big hit.” If they get that one big hit, then they will have sustained success at a level that’s similar to that of the breakthrough.

But…

Have you ever heard of a one-hit wonder? I thought so. There are plenty to choose from: Bands and brands that did one thing that lit up the world for a while, and then faded back into obscurity.

Don’t get me wrong. When something you’ve created really catches on, it’s a great feeling. It DOES create momentum. It IS helpful for your work. It IS NOT enough, though, to guarantee long-term viability. It’s a nice bit of luck, but it’s not a business plan in any sense I would agree with.

Why? Because of an assumption which I think is correct.

Consistency

In my mind, two hallmarks of a viable, long-term, entrepreneurial strategy are:

A) You avoid being at the mercy of the public’s rapidly-shifting attention.

B) Your product, and its positive effect on your business, are consistent and predictable.

Part A disqualifies “going viral” as a core strategy because going viral rests almost entirely upon the whims of the public. It’s so far out of your control (as an individual creator), and so unpredictable, that it can’t be relied on. It’s as if you were to try farming land where the weather was almost completely random – one day of rain, then a tornado, then a month of scorching heat, then an hour of hail, then a week of arctic freeze, then two days of sun, then…

You might manage to grow something if you got lucky, but you’d be much more likely to starve to death.

Part B connects to Part A. If you can produce a product every day, but you can’t meaningfully predict what kind of revenue it will generate, you don’t have a basis for a business. If your product is completely at the mercy of the public’s attention-span, and will only help you if the public goes completely mad over it, you are standing on very shaky ground. Sure, you may get a surge in popularity, but when will that surge come? Will it be long-term? A transient hit will not keep you afloat. It can give you a nice infusion of cash. It can give you something to build on. It can be capitalized on, but it can’t be counted on.

A viable business rests on things that can be counted on, and this is where the statistics come in. If I reduce my opinion to a single statement, I come up with this:

Long-term business viability is found within one standard deviation, if it’s found at all.

Now, what in blazes does that mean?

One Sigma

When we talk about a “normal distribution,” we say that the vast majority of what we can expect to find – almost all of it, in fact – will fall within plus or minus two standard deviations of the mean. A standard deviation is represented as “sigma,” and is a measure of variation. If you release ten songs, and all of them get between 90 and 110 listens every day, then there’s not much variation in their popularity. The standard deviation is small. If you release ten songs, and one of them gets 10,000 listens per day, another gets 100, another gets 20, and so on all over the map, then the standard deviation is large. There are wild variations in popularity from song to song.
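
To make that concrete, here’s a minimal sketch using NumPy. The play counts are made up to match the two hypothetical scenarios above:

```python
# Standard deviation of daily play counts for two hypothetical catalogs.
import numpy as np

steady = np.array([105, 92, 98, 110, 101, 95, 90, 107, 99, 103])
wild = np.array([10000, 100, 20, 850, 5, 3000, 60, 400, 12, 7500])

for name, plays in [("steady", steady), ("wild", wild)]:
    print(f"{name}: mean = {plays.mean():.0f}, "
          f"sigma = {plays.std(ddof=1):.0f}")  # ddof=1: sample deviation
```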

When I say that “Long-term business viability is found within one standard deviation, if it’s found at all,” what I’m saying is that strategy has to be built on things you can reasonably expect. It’s true that you might have an exceptionally bad day here and there, and you might also have an exceptionally good day, but you can’t build your business on either of those two things. You have to look at what is probably going to happen the majority of the time.

Do I have some examples? You bet!

I once ran a heavily subsidized (we wouldn’t have made it otherwise) venue that admitted all ages. When it was all over and the dust settled, I did some number crunching. Our average revenue per show was $77. The standard deviation in show revenue was $64. That’s an enormous spread in value. Just one standard deviation in either direction covered a range of revenue from $13 to $141. With a variation that enormous, the only long-term strategy would have been to stay subsidized. Not much money was made, and “duds” were plenty common.

We can also look at the daily traffic for this site. In fact, it’s a great example because I recently had an article go viral. My post about why audio humans get so bent out of shape when a mic is cupped took off like a rocket. During the course of the article’s major “viralness” (that might not be a real word, but whatever), this site got 110,000 views. If you look at the same length of time just before the article was published, the site got 373 views.

That’s a heck of an outlier. Even if we keep that outlier in the data and let it push things off to the high side, the average view-count per day is 162, with a standard deviation of over 2000. In that case, the very peak of the article’s viral activity is +22 standard deviations (holy smoke!) from the mean.
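
If you want to check a figure like that, “+22 standard deviations” is just a z-score calculation. A quick sketch – the peak-day view count here is a hypothetical number chosen for illustration, since I only quoted the viral-period total above:

```python
# z-score: how many standard deviations an observation sits from the mean.
mean_views = 162    # daily average, outlier included
sigma_views = 2000  # daily standard deviation
peak_day = 44500    # hypothetical single-day peak, for illustration

z = (peak_day - mean_views) / sigma_views
print(f"z = {z:+.1f}")  # roughly +22
```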

I can’t build a business on that. I can’t predict based on that. I can’t assume that anything else will ever do that well. I would never have dreamed that particular article would catch fire as it did. There are plenty of posts on this site that I consider more interesting, yet didn’t have that kind of popularity. The public made their decision, and I didn’t expect it.

It was really cool to go viral, and it did help me out. However, I have not been “crowned king of show production sites,” or anything like that. My day to day traffic is higher than it was before, but my life and the site’s life haven’t fundamentally changed. The day to day is back to normal, and normal is what I can think about in the long-term. This doesn’t mean I can’t dream big, or take an occasional risk – it just means that my expectations have to be in the right place: About one standard deviation. (Actually, less than that.)


Where’s Your Data?

I don’t think audio-humans are skeptical enough.


[Image: trace]

If I’m going to editorialize on this, I first need to be clear about one thing: I’m not against certain things being taken on faith. There are plenty of assumptions in my life that can’t be empirically tested. I don’t have a problem with that in any way. I subscribe quite strongly to that old saw:

You ARE entitled to your opinion. You ARE NOT entitled to your own set of “facts.”

But, of course, that means that I subscribe to both sides of it. As I’ve gotten farther and farther along in the show-production craft, especially the audio part, I’ve gotten more and more dismayed with how opinion is used in place of fact. I’ve found myself getting more and more “riled” with discussions where all kinds of assertions are used as conversational currency, unbacked by any visible, objective defense. People claim something, and I want to shout, “Where’s your data, dude? Back that up. Defend your answer!”

I would say that part of the problem lies in how we describe the job. We have (or at least had) the tendency to say, “It’s a mix of art and science.” Unfortunately, my impression is that this has come to be a sort of handwaving of the science part. “Oh…the nuts and bolts of how things work aren’t all that important. If you’re pleased with the results, then you’re okay.” While this is a fair statement on the grounds of having reached a workable endpoint through unorthodox or uneducated means, I worry about the disservice it does to the craft when it’s overapplied.

To be brutally frank, I wish the “mix of art and science” thing would go away. I would replace it with, “What we’re doing is science in the service of art.”

Everything that an audio human does or encounters is precipitated by physics – and not “exotic” physics, either. We’re talking about Newtonian interactions and well-understood electronics here, not quantum entanglement, subatomic particles, and speeds approaching that of light. The processes that cause sound stuff to happen are entirely understandable, wieldable, and measurable by ordinary humans – and this means that audio is not any sort of arcane magic. A show’s audio coming off well or poorly always has a logical explanation, even if that explanation is obscure at the time.

I Should Be Able To Measure It

Here’s where the rubber truly meets the road on all this.

There seems to be a very small number of audio humans who are willing to do any actual science. That is to say, investigating something in such a way as to get objective, quantitative data. This causes huge problems with troubleshooting, consulting, and system building. All manner of rabbit trails may be followed while trying to fix something, and all manner of moneys are spent in the process, but the problem stays un-fixed. Our enormous pool of myth, legend, and hearsay seems to be great for swatting at symptoms, but it’s not so hot for tracking down the root cause of what’s ailing us.

Part of our problem – I include myself because I AM susceptible – is that listening is easy and measuring is hard. Or, rather, scientific measuring is hard.

Listening tests of all kinds are ubiquitous in this business. They’re easy to do, because they aren’t demanding in terms of setup or parameter control. You try to get your levels matched, set up some fast signal switching, maybe (if you’re very lucky) make it all double-blind so that nobody knows what switch setting corresponds to a particular signal, and go for it.
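
If you’re curious what “double-blind” means in practice, here’s a minimal sketch of the randomization at the heart of an ABX-style trial. The trial count is arbitrary:

```python
# ABX-style trial randomizer: the hidden key is generated by the machine,
# so neither the listener nor the operator knows which signal "X" is.
import random

def run_abx(n_trials=16):
    key = [random.choice("AB") for _ in range(n_trials)]  # hidden assignments
    correct = 0
    for i, truth in enumerate(key, 1):
        guess = input(f"Trial {i}: is X signal A or B? ").strip().upper()
        correct += (guess == truth)
    print(f"{correct}/{n_trials} correct. ~50% is indistinguishable from guessing.")

# run_abx()  # uncomment to run interactively
```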

Direct observation via the senses has been used in science for a long time. It’s not that it’s completely invalid. It’s just that it has problems. The biggest problem is that our senses are interpreted through the brain, an organ which develops strong biases and filters information so that we don’t die. The next problem is that the experimental parameter control actually tends to be quite shoddy. In the worst cases, you get people claiming that, say, console A has a better sound than console B. But…they heard console A in one place, with one band, and console B in a totally different place with a totally different band. There’s no meaningful comparison, because the devices under test AND the test signals were different.

As a result, listening tests produce all kinds of impressions that aren’t actually helpful. Heck, we don’t even know what “sounds better” means. For this person over here, it means lots of high-frequency information. For some other person, it means a slight bass boost. This guy wants a touch of distortion that emphasizes the even-numbered harmonics. That gal wants a device that resembles a “straight wire” as much as possible. Nobody can even agree on what they like! You can’t actually get a rigorous comparison out of that sort of thing.

The flipside is, if we can actually hear it, we should be able to measure it. If a given input signal actually sounds different when listened to through different signal paths, then those signal paths MUST have different transfer functions. A measurement transducer that meets or exceeds the bandwidth and transient response of a human ear should be able to detect that output signal reliably. (A measurement mic that, at the very least, significantly exceeds the bandwidth of human hearing is only about $700.)
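
One practical version of this is a “null test”: capture the same input through both signal paths, time-align and level-match the captures, subtract them, and see what survives. A minimal sketch, assuming you already have the two aligned captures as NumPy arrays:

```python
# Null test: identical signal paths should leave a residual at or below
# the measurement noise floor after subtraction.
import numpy as np

def null_test_db(capture_a, capture_b):
    """capture_a, capture_b: time-aligned, level-matched float arrays."""
    n = min(len(capture_a), len(capture_b))
    residual = capture_a[:n] - capture_b[:n]
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    return 20 * np.log10(rms(residual) / rms(capture_a[:n]))
```

A residual at -100 dB relative to the signal is a very different claim than one at -20 dB – and either way, you now have a number instead of an impression.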

As I said, measuring – real measuring – is hard. If the analysis rig is set up incorrectly, we get unusable results, and it’s frighteningly easy to screw up an experimental procedure. Also, we have to be very, very precise about what we’re trying to measure. We have to start with an input signal that is EXACTLY the same for all measurements. None of this “we’ll set up the drums in this room, play them, then tear them down and set them up in this other room,” can be tolerated as valid. Then, we have to make every other parameter agree for each device being tested. No fair running one preamp closer to clipping than the other! (For example.)

Question Everything

So…what to do now?

If I had to propose an initial solution to the problems I see (which may not be seen by others, because this is my own opinion – oh, the IRONY), I would NOT say that the solution is for everyone to graph everything. I don’t see that as being necessary. What I DO see as being necessary is for more production craftspersons to embrace their inner skeptic. The less coherent explanation that’s attached to an assertion, the more we should doubt that assertion. We can even develop a “hierarchy of dubiousness.”

If something can be backed up with an actual experiment that produces quantitative data, that something is probably true until disproved by someone else running the same experiment. Failure to disclose the experimental procedure makes the measurement suspect, however – how exactly did they arrive at the conclusion that the loudspeaker will tolerate 1 kW of continuous input? No details? Hmmm…

If a statement is made and backed up with an accepted scientific model, the statement is probably true…but should be examined to make sure the model was applied correctly. There are lots of people who know audio words, but not what those words really mean. Also, the model might change, though that’s unlikely in basic physics.

Experience and anecdotes (“I heard this thing, and I liked it better”) are individually valid, but only in the very limited context of the person relating them. A large set of similar experiences across a diverse range of people expands the validity of the declaration, however.

You get the idea.

The point is that a growing lack of desire to just accept any old statement about audio will, hopefully, start to weed out some of the mythological monsters that periodically stomp through the production-tech village. If the myths can’t propagate, they stand a chance of dying off. Maybe. A guy can hope.

So, question your peers. Question yourself. Especially if there’s a problem, and the proposed fix involves a significant amount of money, question the fix.

A group of us were once troubleshooting an issue. A producer wasn’t liking the sound quality he was getting from his mic. The discussion quickly turned to preamps, and whether he should save up to buy a whole new audio interface for his computer. It finally dawned on me that we hadn’t bothered to ask anything about how he was using the mic, and when I did ask, he stated that he was standing several feet from the unit. If that’s not a recipe for sound that can be described as “thin,” I don’t know what is. His problem had everything to do with the acoustic physics of using a microphone, and nothing substantial AT ALL to do with the preamp he was using.

A little bit of critical thinking can save you a good pile of cash, it would seem.

(By the way, I am biased like MAD against the crowd that craves expensive mic pres, so be aware of that when I’m making assertions. Just to be fair. Question everything. Question EVERYTHING. Ask where the data is. Verify.)


The Glorious Spectrograph

They’re better than other real-time analysis systems.


[Image: thegloriousspectrograph]

I don’t really know how common it is, but there are at least a few of us who like to do a particular thing with our console solo bus:

We connect some sort of analyzer across the output.

This is really handy, because you can look at different audio paths very easily – no patching required. You do what you have to do to enable “solo” on the appropriate channel(s), and BOOM! What you’ve selected, and ONLY what you’ve selected, is getting chewed on by the analyzer.

The measurement solution that seems to be picked the most often is the conventional RTA. You’ve almost certainly encountered one at some point. Software media players all seem to feature at least one “visualization” that plots signal magnitude versus frequency. Pro-audio versions of the RTA have more frequency bands (often 31, to match up with 1/3 octave graphic EQs), and more objectively useful metering. They’re great for finding frequency areas that are really going out of whack while you’re watching the display, but I have to admit that regular spectrum displays have often failed to be truly useful to me.

It’s mostly because of their two-dimensional nature.

I Need More “Ds,” Please

A bog-standard spectrum analyzer is a device for measuring and displaying two dimensions. One dimension is amplitude, and the other is frequency. These dimensions are plotted in terms of each other at quasi-instantaneous points in time. I say “quasi” because, of course, the display does not react instantaneously. The metering may be capable of reacting very quickly, and it may also have an averaging function to smooth out wild jumpiness. Even so, the device is only meant to show you what’s happening at a particular moment. A moment might last a mere 50ms (enough time to “see” a full cycle of a 20 Hz wave), or the moment might be a full-second average. In either case, once the moment has passed, it’s lost. You can’t view it anymore, and the analyzer’s reality no longer includes it meaningfully.

This really isn’t a helpful behavior – ironically, because it’s exactly how live production works. A live show is a series of moments that can’t be stopped and replayed. If you get into a trouble spot at a particular time, and then that problem stops manifesting, you can’t cause that exact event to happen again. Yes, you CAN replicate the overall circumstances in an attempt to make the problem manifest itself again, but you can’t return to the previous event. The “arrow of time,” and all that.

This is where the spectrograph reveals its gloriousness: It’s a three-dimensional device.

You might not believe me, especially if you’re looking at the spectrograph image up there. It doesn’t look 3D. It seems like a flat plot of colors.

A plot of colors.

Colors!

When we think of 3D, we’re used to all of the dimensions being represented spatially. We look for height, width, and depth – or as much depth as we can approximate on displays that don’t actually show it. A spectrograph uses height and width for two dimensions, and displays the third with a color ramp.

The magic of the spectrograph is that it uses the color ramp for the magnitude parameter. This means that height and width can be assigned, in whatever way is most useful, to frequency and TIME.

Time is the key.

Good Timing

With a spectrograph, an event that has been measured is stored and displayed alongside the events that follow it. You can see the sonic imprint of those past events at whatever time you want, as long as the unit hasn’t overwritten that measurement. This is incredibly useful in live-audio, especially as it relates to feedback.
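
If you want to poke at this yourself, SciPy will hand you exactly those three dimensions. A minimal sketch, using a 1 kHz tone in noise as a stand-in for program material:

```python
# A basic spectrograph: frequency on one axis, time on the other,
# magnitude mapped to a color ramp.
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

fs = 48000
t = np.arange(2 * fs) / fs
x = 0.05 * np.random.randn(t.size) + np.sin(2 * np.pi * 1000 * t)

f, times, Sxx = signal.spectrogram(x, fs=fs, nperseg=4096)
plt.pcolormesh(times, f, 10 * np.log10(Sxx + 1e-12), shading="auto")
plt.xlabel("Time (s)"); plt.ylabel("Frequency (Hz)")
plt.colorbar(label="Magnitude (dB)")
plt.show()
```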

The classic “feedback monster” shows up when a certain frequency’s loop gain (the total gain applied to the signal as it enters a transducer, traverses a signal path, exits another transducer, and re-enters the original transducer) becomes too large. With each pass through the loop, that frequency’s magnitude doesn’t drop as much as is desired, doesn’t drop at all, or even increases. The problem isn’t the frequency in and of itself, and the problem isn’t the frequency’s magnitude in and of itself. The problem is the change in magnitude over time being inappropriate.

There’s that “time” thing again.

On a basic analyzer, a feedback problem only has a chance of being visible if it results in a large enough magnitude that it’s distinguishable from everything else being measured at that moment. At that moment, you can look at the analyzer, make a mental note about which frequency was getting out of hand, and then try to fix it. If the problem disappears because you yanked the fader back, or a guitar player put their hand on their strings, or a mic got temporarily moved to a better spot, all you have to go on is your memory of where the “spike” was. Again, the basic RTA doesn’t show you measurements in terms of time, except within the limitations of its own attack and release rates.

But a spectrograph DOES show you time. Since a feedback problem is a limited range of frequencies that are failing to decay swiftly enough, a spectrograph will show that lack of decay as a distinctive “smear” across the unit’s time axis. If the magnitude of the problem area is large enough, the visual representation is very obvious. Further, the persistence of that representation on the display means that you have some time to freeze the analyzer…at which point you can zero in on exactly where your problem is, so as to kill it with surgical precision. No remembering required.
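
You could even automate the hunt for that smear. A rough sketch, assuming a spectrogram like the one computed in the earlier example (frequency-by-time magnitudes in Sxx, bin frequencies in f):

```python
# Flag frequency bins that refuse to decay: candidate feedback frequencies.
import numpy as np

def find_smears(Sxx, f, hold_db=6.0, min_cols=20):
    """Return bin frequencies that stay within hold_db of their own peak
    for at least min_cols consecutive time columns."""
    db = 10 * np.log10(Sxx + 1e-12)
    suspects = []
    for i, row in enumerate(db):
        sustained = row > (row.max() - hold_db)
        run = best = 0
        for flag in sustained:
            run = run + 1 if flag else 0
            best = max(best, run)
        if best >= min_cols:
            suspects.append(f[i])
    return suspects
```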

So, if you’ve got a spectrograph inserted on your solo bus, you can solo up a problem channel, very briefly drive it into feedback, drop the gain, freeze the analyzer, and start fixing things without having to let the system ring for an annoyingly long time. This is a big deal when trying to solve a problem during a show that’s actually running, and it’s also extremely useful when ringing out a monitor rig by yourself. If all this doesn’t make the spectrograph far more glorious than a basic, 2D analyzer, I just don’t know what to do for you.


How Much To Worry About Phone Audio

Maybe a little, but not much.


[Image: phonetube]

A perennial moan-n-groan amongst pro-audio types is: “Ya cain’t trust dem portable music players, Sonny!” At the core of this angst is the idea that the inexpensive output circuitry of iPods, phones, tablets, and [insert whatever else here] simply cannot handle audio very well. It MUST be doing something nasty to the passing signal. It’s an affront to plug that device into something serious, like a mixing console. An affront, I say!

But here’s the thing.

Up until recently, I have never seen any kind of measurement made to prove or disprove the complaint. No numbers. Nothing quantitative at all. Just an overall sense of hoity-toity superiority, and the odd anecdote in the vein of “We plugged the music player into the console, and I swear, Led Zeppelin didn’t sound as zeppelin-y through the thing.” I say that I haven’t seen anything quantitative until recently because of a page from this ProSoundWeb thread. Scroll down a bit, and you’ll find a link to a series of honest-to-goodness measurements on the iPhone 5.

The short story is that the iPhone 5 has output that is basically laser-flat from DC to dog-whistles. It does roll off a tiny bit above 5 kHz, but the curve is on the order of half a decibel per octave. If you plug the thing into a higher-impedance-than-headphones input (as you would if you were running the phone to a mixing console), the phone is definitely NOT the limiting factor in the audio.

Seeing that measurement inspired me to pull out my Samsung Galaxy SIII and run my own series of tests.

Dual FFT

The first thing I did was to get a sample of pink noise, put the WAV file in my phone’s storage, and play that audio back to my computer. In the process of getting the recorded signal from the phone aligned with the original noise, I heard something very curious:

When the start points were aligned and summed, there was the strange effect of a downward-sweeping comb filter. Not steady comb-filtering…SWEEPING. (Like a flanger effect.) Zooming into the ends of the audio regions, I could clearly see that the recording from the phone was ending earlier than the reference file. The Galaxy was very definitely not playing the test signal at the same speed that the computer played the original noise. On a hunch, I set the playback varispeed on the phone recording to 0.91875 of normal. The comb-filter sweep effect essentially disappeared.

See, my hunch was that the phone’s sampling rate was 48 kHz instead of 44.1 kHz. My hunch was also that the phone was not sample-rate converting the file, but just playing it at the higher rate. That’s why I chose the 0.91875 varispeed factor. Divide 44100 by 48000, and that’s the number that comes out – which would be the ratio of playback speeds if no rate-conversion was going on.
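
As it happens, that ratio reduces to a tidy integer fraction, which makes the mismatch easy to simulate (or undo) with a polyphase resampler. A minimal sketch:

```python
# 44100/48000 reduces to 147/160 -- the 0.91875 varispeed factor.
from fractions import Fraction
import numpy as np
from scipy.signal import resample_poly

ratio = Fraction(44100, 48000)
print(ratio, float(ratio))    # 147/160  0.91875

x = np.random.randn(44100)    # stand-in for one second of a 44.1 k file
captured = resample_poly(x, ratio.numerator, ratio.denominator)
print(len(captured))          # ~40,522 samples: the capture ends early
```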

So, in the case of WAV files, the phone may very well not be playing them back at the speed it ought to be. That IS something to think about, although it’s hardly a fatal problem if the discrepancy is from 44.1 k sampling to 48 k. Also, that’s not a problem with audio circuit design. It’s a software implementation issue.

In the end, I ran the dual-FFT analysis with the phone audio playing at 1X speed, because the “corruption” introduced by the time-stretching algorithm was enough to make the measurement more uncertain (rather than less). Uncertainty in measurements like these manifests the same way as noise does. It causes the trace to become “wild” or “fuzzy,” because the noise creates a sort of statistical perturbation that follows the shape of the curve. The more severe that perturbation is, the tougher it is to read the measurement in a meaningful way.

Here’s what I got in terms of the phone’s frequency response:

[Image: phonedualfft]

You can see what I mean in terms of the noise. Especially at higher frequencies, the measurement shows a bit of uncertainty. I used a very high averaging number in order to keep things under control.

In any case, the trace is statistically flat from 20 Hz to 20 kHz. The phone’s output circuitry sounds just fine, thanks.

FFT Noise

With the problems introduced by playback timing, I wanted to also try tests with a signal that “self references.” FFT noise fits this description. Run through a properly configured analyzer, FFT noise (which sounds like a series of clicks) does not require a “known” signal for comparison. Its own properties are such that, when measured correctly, the unaltered signal should be completely flat.

As an aside, you may remember me talking about FFT noise in a bit more detail here.

In the article I just linked, I didn’t get into one of the main weaknesses of FFT noise, and that is its susceptibility to external noise. FFT noise is really great when you want to test, say, plugins used in a DAW, because digital silence is actual silence – 0 signal of any kind. There’s no electronic background noise to consider. The problem, though, is that FFT noise is a series of spaced “clicks.” In the empty spaces, anything other than digital silence is incredibly apparent, and easily corrupts the measurement trace.
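
To see why a matched FFT size makes this kind of stimulus “self-referencing,” consider a toy stand-in: one period of an impulse train, analyzed with an FFT of exactly that length, has identical magnitude in every bin. (Reaper’s actual generator is more sophisticated, but the matched-size principle is the same.)

```python
# One period of an impulse train, analyzed with a matched-size FFT:
# every bin reads the same magnitude, so "unaltered" looks perfectly flat.
import numpy as np

N = 4096                     # FFT size matched to the pulse period
pulse = np.zeros(N)
pulse[0] = 1.0

mag = np.abs(np.fft.rfft(pulse))
print(mag.min(), mag.max())  # both 1.0: flat across all bins

# Anything leaking into the "silent" gap between pulses lands on top of
# that flat reference, which is why external noise corrupts the trace.
```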

Even so, I wanted to give the alternate stimulus a try.

This is what I got in Reaper’s Gfxanalyzer, which is meant to “mate” well with FFT noise:

[Image: phonefftnoisegfx]

Again, the trace is statistically flat, although low-frequency noise is very apparent.

For an alternate trace, I tried to configure Visual Analyzer to play nicely with the noise.

[Image: phonefftnoiseva]

Once more, the trace is noisy but otherwise flat.

Conclusions

It’s very important that I recognize a huge limitation in all of this: The sample size is very low. One iPhone 5 and one Samsung Galaxy SIII, with one measurement each, do not properly constitute a representative sample of every phone and media player that might ever get plugged into a mixing console.

At the same time, actually measuring these devices suggests that the categorical write-off of portable players as being unable to pass good audio is just worry to no purpose. There are probably some horrible media players out there, with really bad output circuitry. However, building a half-decent analog output stage has reached the point that I would venture to call “trivial.” I would further guess that most product-design engineers are using output circuits that are functionally identical to each other. When it comes to plugging a phone, tablet, or MP3 player into a console, I simply can’t find a reason to be up in arms about the quality of the signal being handed to the output jack. I might worry a bit about the physical connection provided by that jack, but the signal on the contacts is a different matter.

I’m in agreement with the sentiments expressed by others. If the audio from a media device doesn’t sound so good, the problem is far more likely to be the source material than the output circuitry.

The phone is probably fine. What’s the file like? What’s the software implementation like?


Gain Vs. Bandwidth

Some preamps do pass audio differently when cranked up, but you probably don’t need to worry about it.


[Image: gainvbandwidth]

After my article on making monitor mixes not suck, a fellow audio human asked me to address the issue of how bandwidth changes with gain. Op-amps, which are very common in modern audio gear, have a finite gain-bandwidth product. The bandwidth available decreases as the gain increases.

A real question, then, is how much an audio tech needs to worry about this issue – especially in the context of microphone preamps. Mic pres have to apply a great deal of gain to signals, because microphones don’t exactly spit out a ton of voltage. Your average, dynamic vocal mic probably delivers something like two millivolts RMS with 94 dB SPL occurring at the capsule. Getting that level up to 0 dBu (0.775 Vrms) is a nearly 52 decibel proposition.
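
To connect that gain figure to bandwidth: for a simple op-amp stage, the closed-loop bandwidth is roughly the gain-bandwidth product divided by the linear gain. The math below assumes a hypothetical (but plausible) 10 MHz GBW part and all the gain in one stage – real preamps often split the gain across stages, which relaxes the problem:

```python
# Gain-bandwidth product: closed-loop bandwidth ~= GBW / linear gain.
import math

v_mic = 0.002      # ~2 mV RMS from a dynamic vocal mic at 94 dB SPL
v_target = 0.775   # 0 dBu, in volts RMS
gbw = 10e6         # hypothetical 10 MHz gain-bandwidth product

gain_linear = v_target / v_mic             # ~388x
gain_db = 20 * math.log10(gain_linear)     # ~51.8 dB
bandwidth = gbw / gain_linear              # ~25.8 kHz

print(f"{gain_db:.1f} dB of gain -> ~{bandwidth / 1000:.1f} kHz of bandwidth")
```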

That’s not a trivial amount of gain, at least as far as audio is concerned. For instance, if we could get 52 dB of gain over the 1 watt @ 1 meter sensitivity of a 95 dB SPL loudspeaker, that speaker could produce 147 dB SPL! (That’s REALLY LOUD, if you didn’t know.) While there are loudspeaker systems that can produce that kind of final output, they have to start at a much higher sensitivity. A Danley Labs J3-64 is claimed to be able to produce 150 dB SPL continuous, but its sensitivity is 112 dB. The “gain beyond sensitivity” is a mere 38 dB. (“Mere” when compared to what mic pres can do. Getting 38 dB above sensitivity is definitely “varsity level” performance for a loudspeaker.)

Anyway.

In the face of a question like this, my response of late has become that we should try to measure something. There are, of course, gajillions of people willing to offer anecdotes, theories, and mythology, but I don’t find that to be very satisfying. I much prefer to actually see real data from real testing.

As such, I decided to grab a couple of mic-pre examples, and “put them on the bench.”

Setting Up The Experiment

The first thing I do is to set up a DAW session with the interface running at 96 kHz. I also set up an analyzer session with the same sampling rate.

The purpose of this is to – hopefully – be able to clearly “see” beyond the audible spectrum. Although my opinion is that audio humans don’t have to worry about anything beyond the audible range (20 Hz to 20 kHz) in practice, part of this experiment’s purpose is to figure out how close to audible signals any particular bandwidth issue gets. Even if frequencies we can hear remain unaffected, it’s still good to have as complete a picture as possible.

The next thing I do is generate a precisely 15 second long sample of pink noise. The point of having a sample of precisely known length is to make compensating for time delays easier. The choice of a 15 second length is just to have a reasonably long “loop” for the analyzer to chew on.
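
For reference, one common way to generate such a sample is to shape white noise by 1/sqrt(f) in the frequency domain, which gives the 3 dB-per-octave power slope that defines “pink.” A minimal sketch:

```python
# Generate a 15-second pink noise sample at 96 kHz by spectral shaping.
import numpy as np

fs, seconds = 96000, 15
rng = np.random.default_rng()

white = rng.standard_normal(fs * seconds)
spectrum = np.fft.rfft(white)
freqs = np.fft.rfftfreq(white.size, d=1 / fs)
spectrum[1:] /= np.sqrt(freqs[1:])  # -3 dB/octave power slope
spectrum[0] = 0.0                   # remove DC

pink = np.fft.irfft(spectrum, n=white.size)
pink /= np.max(np.abs(pink))        # normalize to full scale
```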

At this point, it’s time to take a look at how the analyzer handles a transfer-function calculation where I know that both “sides” are the same. The trace I get is a touch jumpy, so I bump up the averaging to “2.” This settles the trace nicely.

[Image: steadytrace]

At this point, it’s time to connect the noise to a mic pre. I do this from my Fast Track interface’s headphone amp through an active DI, because I want to be absolutely sure that I’m ultimately running through high-gain circuitry. Yes – it’s true that the DI might corrupt the measurement to some degree, but I think I have a partial solution: My reference point for all measurements will be the test noise played through the DI, with the mic pre at the minimum gain setting. Each test will use the original noise, so that any “error factors” associated with the signal path under test don’t stack up.

Preamp 1: Fast Track Ultra 8R

My M-Audio Fast Track Ultra 8R is what I would call a reasonably solid piece of pro-sumer equipment. My guess is that the preamps in the box are basically decent pieces of engineering.

The first thing to do is to get my low-gain reference. I set the noise output level so that the input through the preamp registers about -20 dBFS RMS, and record the result. I’m now ready to proceed further.

My next order of business is to put my test noise through at a higher gain. I set the gain knob to the middle of its travel, which is about +10 dB of gain from the lowest setting. I roll down the level going to the pre to compensate.

The next test will be with the gain at the “three-o-clock” position. This is about +25 dB of gain from the reference.

The final test is at maximum gain. This causes an issue, because so much gain is applied that the output compensation is extreme. In the end, I opt to find a compromise by engaging the mic preamp’s pad. This allows me to keep the rest of the gain structure in a basically “sane” area.

At this point, I check the alignment on the recorded signals. What’s rather odd is that the signal recorded through the pad seems to have arrived a few samples earlier than the signals recorded straight through. (This is curious, because I would assume that a pad would INCREASE group delay rather than reduce it.)

[Image: timingwithpad]

No matter what’s going on, though, the fix is as simple as nudging the max-gain measurement over by 10 samples, or about 0.1 ms at 96 kHz.

Preamp 2: SL2442-FX

The first round of testing involved a preamp that I expect is pretty good. A more interesting case comes about when we test a device with a not-so-stellar reputation: A mic pre from an inexpensive Behringer console. My old Behringer SL2442-FX cost only a bit more than the Fast Track did, and the Behringer has a LOT more analog circuitry in it (as far as I can tell). My guess is that if I want to test a not-too-great mic pre, the Behringer is a good candidate.

(To be fair, in the situations where I’ve used the Behringer, I haven’t been unhappy with the preamps at all.)

I use the same DI to get signal to the Behringer. On the output side, I tap the console’s insert point so as to avoid the rest of the internal signal path. I want to test the preamp, not the whole console. The insert connection is fed to the line input of the Fast Track, which appears to bypass the high-gain circuitry in the Fast Track mic pre.

In basically the same way as I did the Fast Track, I get a reference by putting the test noise through the preamp at its lowest setting, aiming for an RMS level of -20 dBFS. My next test is with the gain at “half travel,” which on the Behringer is a difference of about 18 dB. The “three-o-clock” position on the Behringer preamp corresponds to a gain of about +30 dB from the lowest point. The final test is, as you might expect, the Behringer at maximum gain.

A quick check of the files revealed that everything appeared to be perfectly time-aligned across all tests.

The Traces

Getting audio into the analyzer is as simple as running the Fast Track’s headphone out back to the first two inputs. Before I really get going, though, I need to verify that I’m measuring what I think I’m measuring. To do that, I mute the test noise, and push up the levels on the Fast Track Reference and Fast Track +10 dB tracks. I pan them out so that the reference is hard left, and the +10 dB measurement is hard right. I then put a very obvious EQ on the +10 measurement:

[Image: testeq]

If the test rig is set up correctly, I should see a transfer function with a similarly obvious curve. It appears that my setup is correct:

[Image: eqverification]

Now it’s time to actually look at things. The Fast Track +10 test shows a curve that’s basically flat, albeit with some jumpiness below 100 Hz. (The jumpiness makes me expect that what we’re seeing is “experimental error” of some kind.)

[Image: fasttrack+10transfer]

The +25 dB test looks very much the same.

[Image: fasttrack+25transfer]

The maximum gain test is also about as flat as flat can be.

[Image: fasttrackmaxtransfer]

I was, quite frankly, surprised by this. I thought I would see something happening, even if it was above 20 kHz. I decide to insert an EQ to see if the test system is just blind to what’s going on above 20 kHz, despite my best efforts. The answer, to my relief, is that it isn’t blind – if the test were actually missing something outside the audible range, we would see it:

[Image: fasttrackfakedrolloff]

So – in the case of the Fast Track, we can conclude that any gain vs. bandwidth issues are taking place far beyond the audible range. They’re certainly going on above the measurable range.

What about the Behringer?

The +18 dB transfer function looks like this, compared to the minimum gain reference:

[Image: beh+18transfer]

What about the +30 dB test?

Maybe I missed something similar on the Fast Track, but the Behringer does seem to be noisier up beyond 30 kHz. The level isn’t actually dropping off, though. It’s possible that the phase gets “weird” up there when the Behringer is run hard – even so, you can’t hear 30 kHz, so this shouldn’t be a problem in real life.

[Image: beh+30transfer]

Now, for the Behringer max gain trace.

This is interesting indeed. The Behringer’s trace is now visibly curved, with some apparent dropoff below 50 Hz. On the high side, the Behringer is dropping down after 20 kHz, with obvious noise and what I think is some pretty gnarly distortion at around 37 kHz. The trace also shows a bit of noise overall, indicating that the Behringer pre isn’t too quiet when “cranked.”

[Image: behmaxtransfer]

At the same time, though, it has to be acknowledged that these deficiencies are easy to see when graphed, but probably hard to actually hear. The distortion is occurring far above what humans can perceive, and picking out a loss of 0.3 dB from 10 kHz to 20 kHz isn’t something you’re likely to do casually. A small dip under 50 Hz is fixable with EQ (if you can even hear that), and let’s be honest – how often do you actually have to run a preamp at full throttle? I haven’t had to in ages.

Conclusion

This is not a be-all, end-all test. It was definitely informal, and two different preamps are not exactly a large sample size. I’m sure that my methodology could be tweaked to be more pure. At the very least, getting precisely comparable gain values between preamps would be a better bit of science.

At the same time, though, I think these results can suggest that losing sleep regarding gain vs. bandwidth isn’t worthwhile. A good, yet not-boutique-at-all preamp run at full throttle was essentially laser flat “from DC to dog whistles.” The el-cheapo preamp looked a little scary when running at maximum gain, but that’s the key – it LOOKED scary. The graphed issues actually causing a problem with a show seems unlikely to me, and again, there’s the whole issue of whether or not you actually have to run the preamp wide open on a regular basis.

If I had my guess, I’d say that gain vs. bandwidth is worth being aware of at an academic level, but not something to obsess about in the field.


Noise Notions

When measuring things, pink noise isn’t the only option.


[Image: fftanalyzer]

Back in school, we learned how to measure the frequency response of live-audio rigs using a dual FFT system. I didn’t realize at the time how important the idea of measuring would become to me. As a disliker of audio mythology, I find myself having less and less patience for statements like “I swear, it sounds better when we change to this super-spendy cable over here.”

Does it? Are you sure? Did you measure anything? If you can hear it, you should be able to measure it. Make any statement you want, but at least try to back it up with real data.

Anyway.

I’m by no means an expert on all aspects of measuring sound equipment. At the same time, I’ve gotten to the point where I think I can pass on some observations. The point of this article is to have a bit of a chat regarding some signals that can be used to measure audio gear.

Tones and Noise

When trying to measure sound equipment, we almost always need some kind of “stimulus” signal. The stimulus chosen depends on what we’re trying to figure out. If we want to get our bearings regarding a device’s distortion characteristics, a pure tone (or succession of pure tones) is handy. If we want to know about our gear’s total frequency response, we either need a signal that’s “broadband” at any given point in time, or a succession of signals that can be integrated into a broadband measurement over several points in time.

(Pink noise is an example of a signal that’s broadband at any given time point, whereas a tone sweep has to be integrated over time.)

In any case, the critical characteristic that these stimuli share is this:

A generally-useful measurement stimulus is a signal whose key aspects are well defined and known in advance of the test.

With pure tones, for instance, we know the frequency and voltage-level of the tone being generated. With pink noise, we know that all audio frequencies are present and that the overall signal has equal power per octave. (The level of any particular frequency at any particular point in time is not known in advance, unless a specific sample of noise is used repeatedly.)

Pretty In Pink

Pink noise is a very handy stimulus, especially with the prevalence of measurement systems that show us the transfer function of a device undergoing testing.

When folks are talking about “Dual FFT” measurement, this is what they’re referring to. The idea is to compare a known signal arriving at a device’s input with an unknown signal at the device’s output. In a certain sense, the unknown signal is only “quasi” unknown, because the assumption in play is that the observed output IS the input…plus whatever the measured device did to it.

Pink noise is good for this kind of testing because it can easily be continuous – which allows for testing across an indefinite time span – and also because it is known in advance to contain audio across the entire audible spectrum at all times. (As an added bonus, pink noise is much less annoying to listen to than white noise.) It’s true that with a system that computes transfer functions, you can certainly use something like music playback for a stimulus. Heck, you can even use a live show as the test signal. The system’s measurements are concerned with how the output relates to the input, not the input by itself. The bugaboo, though, is that a stimulus with content covering only part of the audible spectrum can’t give you information about what the system is doing beyond that input signal’s bandwidth. Because pink noise covers the entire audible spectrum (and more) all the time, using it as a stimulus means that you can reliably examine the performance of the system-under-test across the entire measurable range.
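
For the curious, here’s roughly what that comparison looks like as math. A minimal sketch of the classic “H1” transfer-function estimate from simultaneous input and output captures, using SciPy:

```python
# Dual-FFT transfer function: H(f) = cross-spectrum(in, out) / power-spectrum(in).
import numpy as np
from scipy import signal

def transfer_function(x_in, y_out, fs, nperseg=8192):
    f, Pxy = signal.csd(x_in, y_out, fs=fs, nperseg=nperseg)
    _, Pxx = signal.welch(x_in, fs=fs, nperseg=nperseg)
    H = Pxy / Pxx
    return f, 20 * np.log10(np.abs(H)), np.angle(H, deg=True)
```

Because the calculation relates output to input, the stimulus can be pink noise, music, or the show itself – within whatever bandwidth that stimulus actually covers.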

Now, this is not to say that pink noise is entirely predictable. Because it is a random or pseudo-random signal, a specific frequency’s level at a specific point in time is unknown until the noise is generated. For example, here’s a spectrogram of pink noise that has been aggressively bandpassed at 1 kHz:

[Image: filteredpinknoise]

The tone at 1 kHz never completely disappears, but it’s clearly not at the same level all the time.

A major consequence of this variability is that getting a really usable measurement functionally REQUIRES two measurement points. Since the signal is not entirely known in advance, the reference signal MUST be captured during the measurement process. Although some test rigs can smooth un-referenced pink noise, and display the spectrum as being nominally “flat” (as opposed to sloping downwards from low to high frequencies), the resulting measurements just aren’t as good as they could be. It’s just harder than necessary to do something meaningful with something like this:

[Image: pinkonlyoneside]

Further, any delay caused by the system being measured must be compensated for. If the delay is un-compensated, the measurement validity drops. Even if the frequency response of the measured system is laser-flat, and even if the system has perfect phase response across all relevant frequencies, un-compensated delay will cause this to NOT be reflected in the data. If the dual FFT rig compares an output signal at time “t+delay” to an input signal at “t,” the noise variability means that you’re not actually examining comparable events. (The input signal has moved on from where the output signal is.)

Here’s a simulation of what would happen if you measured a system with 50ms of delay between the input and output…and neglected to compensate for that delay. This kind of delay can easily happen if you’re examining a system via a measurement mic at the FOH mix position, for example.

[Image: uncompensateddelay]
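
Finding the compensation value is itself a measurement problem, and a tractable one. A minimal sketch that estimates the delay by cross-correlating the two capture points:

```python
# Estimate input-to-output delay in samples via cross-correlation,
# so the dual-FFT rig can compare like with like.
import numpy as np
from scipy import signal

def estimate_delay(x_in, y_out):
    corr = signal.correlate(y_out, x_in, mode="full")
    return int(np.argmax(np.abs(corr))) - (len(x_in) - 1)  # positive: output lags

# A 50 ms delay at a 48 kHz sampling rate should come back as ~2400 samples.
```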

On the flipside, get everything in order and pink noise reliably gets usable measurements across a variety of test rigs, like these views of a notch filter at 1 kHz.

[Image: 1kVA]

[Image: 1kSysTune]

Hey, I Know That Guy…Er, Noise

I’ve known about pink noise for a long while now. What I didn’t know about until recently was a broadband stimulus that Reaper calls “FFT Noise.”

FFT noise is very interesting to me because it is unlike pink noise in key ways. It is discontiguous, in that it consists of a repeating “pulse” or “click.” It is also entirely predictable. As far as I can tell, each pulse contains most of the audible spectrum (31 Hz and above) at an unchanging level. For example, here’s a spectrogram of FFT noise with a narrow bandpass filter applied at 1 kHz:

[Image: filteredfftnoise]

What’s even more interesting is what happens when the test stimulus is configured to “play nicely” with an FFT-based analyzer. You got a preview of that above. When the analyzer’s FFT size and windowing are set correctly, the result is a trace that handily beats out some dual FFT measurements (in terms of stability and readability):

[Image: eqtransfer]

Side note: Yup – the calculated transfer function in ReaEQ seems to be accurate.

The point here is that, if the test stimulus is precisely known in advance, then you can theoretically get a transfer function without having to record the input-side in real time. If everything is set up correctly, the “known” signal is effectively predetermined in near totality. The need to sample it to “know it” is removed. Unlike pink noise, this stimulus is exactly the same every time. What’s also very intriguing is that this removes the delay of the device-under-test as a major factor. The arrival time of the test signal is almost a non-issue. Although it does appear advantageous to have an analyzer which uses all the same internal timing references as the noise generator (the trace will be rock steady under those conditions), a compatible analysis tool receiving the signal after an unknown delay still delivers a highly readable result:

[Image: laptoptrace]

Yes, the cumulative effect of the output from my main computer’s audio interface and the input of my laptop interface is noise, along with some tones that I suppose are some kind of EM interference. You can also see the effect of what I assume is the anti-aliasing filter way over on the right side. (This trace is what gave me the sneaky suspicion that an insufficient amount of test signal exists somewhere below 100 Hz – either that, or the system noise in the bottom octaves is very high.)

On the surface, this seems rather brilliant, even in the face of its limitations. Instead of having to rig up two measurement points and do delay compensation, you can just do a “one step” measurement. However, getting the stimulus and “any old” analysis tool to be happy with each other is not necessarily automatic. “FFT Noise” in Reaper seems to be very much suited to Reaper’s analysis tools, but it takes a little doing to get, say, Visual Analyzer set up well. When a good configuration IS arrived at, however, Visual Analyzer delivers a very steady trace that basically confirms what I saw in Reaper.

[Image: fftnoiseva]

It’s also possible to get a basically usable trace in SysTune, although the demo’s inability to set a long enough average size makes the trace jumpy. Also, Reaper’s FFT noise plugin doesn’t allow for an FFT size that matches the SysTune demo’s FFT size, so some aggressive smoothing is required.

(As a side note, I did find a way to hack Reaper’s FFT noise plugin to get an FFT size of 65536. This fixed the need for smoothing in the frequency domain, but I wasn’t really sure that the net effect of “one big trace bounce every second” was any better than having lots of smaller bounces.)

There’s another issue to discuss as well. With this kind of “single ended” test, noise that would be ignored by a dual FFT rig is a real issue. In a way that’s similar to a differential amplifier in a mic pre, anything that’s common to both measurement points is “rejected” by a system that calculates transfer functions from those points. If the same stimulus+noise signal is present on both channels, then the transfer function is flat. A single-ended measurement can’t deliver the same result, except by completely drowning the noise in the stimulus signal. Whether this is always practical or not is another matter – it wasn’t practical for me while I was getting these screenshots.

The rabbit-hole goes awfully deep, doesn’t it?


Engin-eyes

Trust your ears – but verify.


[Image: analysis]

It might be that I just don’t want to remember who it was, but a famous engineer once became rather peeved. His occasion to be irritated arose when a forum participant had the temerity to load one of the famous engineer’s tracks into a DAW and look at the waveform. The forum participant (not me) was actually rather complimentary, saying that the track LOOKED very compressed, but didn’t SOUND crushed at all.

This ignited a mini-rant from the famous guy, where he pointedly claimed that the sound was all that mattered, and he wasn’t interested in criticism from “engin-eyes.” (You know, because audio humans are supposed to be “engine-ears.”)

To be fair, the famous engineer hadn’t flown into anything that would pass as a “vicious, violent rage,” but the relative ferocity of his response was a bit stunning to me. I was also rather put off by his apparent philosophy that the craft of audio has no need of being informed by senses other than hearing.

Now, let’s be fair. The famous engineer in question is known for a reason. He’s had a much more monetarily successful career than I have. He’s done excellent work, and is probably still continuing to do excellent work at the very moment of this writing. He’s entitled to his opinions and philosophies.

But I am also entitled to mine, and in regards to this topic, here’s what I think:

The idea that an audio professional must rely solely upon their sense of hearing when performing their craft is, quite simply, a bogus “purity standard.” It gets in the way of people’s best work being done, and is therefore an inappropriate restriction in an environment that DEMANDS that the best work be done.

Ears Are Truthful. Brains Are Liars.

Your hearing mechanism, insofar as it works properly, is entirely trustworthy. A sound pressure wave enters your ear, bounces your tympanic membrane around, and ultimately causes some cilia deep in your ear to fire electrical signals down your auditory nerve. To the extent that I understand it all, this process is functionally deterministic – for any given input, you will get the same output until the system changes. Ears are dispassionate detectors of aural events.

The problem with ears is that they are hooked up to a computer (your brain) which can perform very sophisticated pattern matching and pattern synthesis.

That’s actually incredibly neat. It’s why you can hear a conversation in a noisy room. Your brain receives all the sound, performs realtime, high-fidelity pattern matching, tries to figure out what events correlate only to your conversation, and then passes only those events to the language center. Everything else is labeled “noise,” and left unprocessed. On the synthesis side, this remarkable ability is one reason why you can enjoy a song, even against noise or compression artifacts. You can remember enough of the hi-fi version to mentally reconstruct what’s missing, based on the pattern suggested by the input received. Your emotional connection to the tune is triggered, and it matters very little that the particular playback doesn’t sound all that great.

As I said, all that is incredibly neat.

But it’s not necessarily deterministic, because it doesn’t have to be. Your brain’s pattern matching and synthesis operations don’t have to be perfect, or 100% objective, or 100% consistent. They just have to be good enough to get by. In the end, what this means is that your brain’s interpretation of the signals sent by your ears can easily be false. Whether that falsehood is great or minor is a whole other issue, very personalized, and beyond the scope of this article.

Hearing What You See

It’s very interesting to consider what occurs when your hearing correlates with your other senses. Vision, for instance.

As an example, I’ll recall an “archetype” story from Pro Sound Web’s LAB: A system tech for a large-scale show works to fulfill the requests of the band’s live-audio engineer. The band engineer has asked that the digital console be externally “clocked” to a high-quality time reference. (In a digital system, the time reference or “wordclock” is what determines exactly when a sample is supposed to occur. A more consistent timing reference should result in more accurate audio.) The system tech dutifully connects a cable from the wordclock generator to the console. The band engineer gets some audio flowing through the system, and remarks at how much better the rig sounds now that the change had been made.

The system tech, being diplomatic, keeps quiet about the fact that the console has not yet been switched over from its internal reference. The external clock was merely attached. The console wasn’t listening to it yet. The band engineer expected to hear something different, and so his brain synthesized it for him.

(Again, this is an “archetype” story. It’s not a description of a singular event, but an overview of the functional nature of multiple events that have occurred.)

When your other senses correlate with your hearing, they influence it. When the correlation involves something subjective, such as “this cable will make everything sound better,” your brain will attempt to fulfill your expectations – especially when no “disproving” input is presented.

But what if the correlating input is objective? What then?

Calibration

What I mean by “an objective, correlated input” is an unambiguously labeled measurement of an event, presented in the abstract. A waveform in a DAW (like I mentioned in the intro) fits this description. The timescale, “zero point,” and maximum levels are clearly identifiable. The waveform is a depiction of audio events over time, in a visual medium. It’s abstract.

In the same way, audio analyzers of various types can act as objective, correlated inputs. To the extent that their accuracy allows, they show the relative intensities of audio frequencies on an unambiguous scale. They’re also abstract. An analyzer depicts sonic information in a visual way.

When used alongside your ears, these objective measurements cause a very powerful effect: They calibrate your hearing. They allow you to attach objective, numerical information to your brain’s perception of the output from your ears.

And this makes it harder for your brain to lie to you. Not impossible, but harder.

Using measurement to confirm or deny what you think you hear is critical to doing your best work. Yes, audio-humans are involved in art, and yes, art has subjective results. However, all art is created in a universe governed by the laws of physics. The physical processes involved are objective, even if our usage of the processes is influenced by taste and preference. Measurement tools help us to better understand how our subjective decisions intersect with the objective universe, and to me, that’s really important.

If you’re wondering if this is a bit of a personal “apologetic,” you’re correct. If there’s anything I’m not, it’s a “sound ninja.” There are audio-humans who can hear a tiny bit of ringing in a system, and can instantly pinpoint that ring with 1/3rd octave accuracy – just by ear. I am not that guy. I’m very slowly getting better, but my brain lies to me like the guy who “hired” me right out of school to be an engineer for his record label. (It’s a doozy of a story…when I’m all fired up and can remember the best details, anyway.) This being the case, I will gladly correlate ANY sense with my hearing if it helps me create a better show. I will use objective analysis of audio signals whenever I think it’s appropriate, if it helps me deliver good work.

Of course the sound is the ultimate arbiter. If the objective measurement looks weird, but that’s the sound that’s right for the occasion, then the sound wins.

But aside from that, the goal is the best possible show. Denying oneself useful tools for creating that show, based on the bogus purity standards of a few people in the industry who AREN’T EVEN THERE…well, that’s ludicrous. It’s not their show. It’s YOURS. Do what works for YOU.

Call me an “engineyes” if you like, but if looking at a meter or analyzer helps me get a better show (and maybe learn something as well), then I will do it without offering any apology.


Offline Measurement

Accessible recording gear means you don’t have to measure “live” if you don’t want to.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.

I’m not an audio ninja. If you make a subtle change to a system EQ while pink noise is running through the rig, I may or may not be able to tell that you’ve made a change – and even if I can, I probably can’t tell you how wide or deep a filter you used. At the same time, I fully recognize the value of pink noise as an input to analysis systems.

“Wait! WAIT!” I can hear you shouting. “What the heck are you talking about?”

I’m talking about measuring things. Objectivity. Using tools to figure out – to whatever extent is possible – exactly what is going on with an audio system. Audio humans use function and noise generators for measurement because of their predictability. For instance, unlike a recording of a song, I know that pink noise has equal power per octave and all audible frequencies present at any given moment. (White noise has equal power PER FREQUENCY, which means that each octave has twice as much power as the previous octave.)

If that paragraph sounded a little foreign to you, then don’t panic. Audio analysis is a GINORMOUS topic, with lots of pitfalls and blind corners. At the same time, I have a special place in my heart for objective measurement of audio devices. I get the “warm-n-fuzzies” for measurement traces because they are, in my mind, a tool for directly opposing a lot of the false mythology and bogus claims encountered in the business of sound.
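
If you’d like to see that pink/white distinction in numbers rather than words, here’s a minimal sketch in Python. NumPy, the sample rate, and the octave bands are all my own arbitrary choices for illustration – no analyzer I mention works this way internally:

    # A toy demonstration: white noise roughly doubles in power with each
    # octave you go up; pink noise holds roughly steady per octave.
    import numpy as np

    fs = 48000                # sample rate in Hz (assumed)
    n = 2 ** 18               # number of samples

    rng = np.random.default_rng(0)
    white = rng.standard_normal(n)

    # Fake up pink noise by shaping the white spectrum: amplitude ~ 1/sqrt(f)
    # gives power density ~ 1/f, which is the definition of "pink."
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    shaping = np.ones_like(freqs)
    shaping[1:] = 1.0 / np.sqrt(freqs[1:])
    pink = np.fft.irfft(spectrum * shaping, n)

    def band_power(x, lo, hi):
        """Total power between lo and hi Hz."""
        X = np.fft.rfft(x)
        f = np.fft.rfftfreq(len(x), d=1.0 / fs)
        return np.sum(np.abs(X[(f >= lo) & (f < hi)]) ** 2)

    for lo in (125, 250, 500, 1000, 2000, 4000):
        print(f"{lo}-{2 * lo} Hz | white: {band_power(white, lo, 2 * lo):.2e}"
              f" | pink: {band_power(pink, lo, 2 * lo):.2e}")

Run it, and the “white” column roughly doubles from row to row, while the “pink” column stays roughly level – the “equal power per octave” behavior that makes pink noise so useful as a test signal.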

Anyway.

Measurement is a great tool for dialing in live-sound rigs of all sorts. Because of its objectivity (assuming you actually use your measurement system correctly), it helps to calibrate your ears. You can look at a trace, listen to what something generating that trace sounds like, and have a reference point to work from. If you have a tendency to carve giant holes in a PA system’s frequency response when tuning by ear, measurement can help tame your overzealousness. If you’re not quite sure where that annoying, harsh, grating, high-mid peak is, measurement can help you find it and fix it.

…and one of the coolest things that I’ve discovered in recent years is that you don’t necessarily have to measure a system “live.” Offline measurement and tuning is far more practical than it has ever been – mostly because digital tech has made recording so accessible.

How It Used To Be And Often Still Is

Back in the day, it was relatively expensive (as well as rather space-intensive and weight-intensive) to bring recording capabilities along with a PA system. Compact recording devices had limited capabilities, especially in terms of editing. Splicing tape while wrangling a PA wasn’t something that was going to happen.

As a result, if you wanted to tune a PA with the help of some kind of analyzer, you had to actually run a signal through the PA, into a measurement mic, and into the analysis device.

The sound you were measuring had to be audible. Very audible, actually, because a test signal has to drown out the ambient noise in the room to be really usable. Any sound other than the test signal that reaches the measurement mic corrupts the accuracy of your measurement.

So, if you were using noise, the upshot was that you and everybody else in the room had to listen to a rather unpleasant blast of sound for as long as it took to get a reference tuning in place. That’s not much fun (unless you’re the person doing the work), and you can’t do it everywhere. Even with a system that could take inputs other than noise, you still had to measure and make your adjustments “live,” with an audible signal in the room.

Taking A Different Route

The beautiful thing about today’s technology is that we have alternatives. In some cases, you might prefer to do a “fully live” tuning of a PA system or monitor rig – but if you’d prefer a different approach, it’s entirely possible.

It’s all because of how easy recording is, really.

The thing is, an audio-analysis system doesn’t really care where its input comes from. An analyzer isn’t bothered about whether its information comes from a live measurement mic, or from a recording of what came out of that measurement mic. All the analyzer knows is that some signal is being presented to it.

If you’re working with a single-input analyzer, offline measurement and tuning is basically about getting the “housekeeping” right:

  1. Run your measurement signal to the analyzer, without any intervening EQ or other processing. If that signal is supposed to give you a “flat” measurement trace, then make sure it does. You need a reference point that you can trust.
  2. Now, disconnect the signal from the analyzer and route that same measurement signal through the audio device(s) that you want to test. This includes the measurement mic if you’re working on something that produces acoustical output – like monitor wedges or an FOH (Front Of House) PA. The actual thing that delivers the signal to be captured and analyzed is the “device-under-test.” For the rest of this article, I’m effectively assuming that the device-under-test is a measurement mic.
  3. Connect the output of the device-under-test to something that can record the signal.
  4. Record your test signal passing through what you want to analyze – I recommend getting at least 30 seconds of audio. Remember that the measurement-signal to ambient-noise ratio needs to be pretty high; ideally, you shouldn’t be able to hear the ambient noise while your test signal is running.
  5. If at all possible, find a way to loop the playback of your measurement recording. This will let you work without having to restart the playback all the time.
  6. Run the measurement recording through the signal chain that you will use to process the audio in a live setting.
  7. Send the output of that signal chain to the analyzer, but do NOT actually send the output to the PA or monitor rig.

Because the recorded measurement isn’t being sent to the “acoustical endpoints” (the loudspeakers) of your FOH PA or monitor rig, you don’t have to listen to loud noise while you adjust. As you make changes to, say, your system EQ, you’ll see the analyzer react. Get a curve that you’re comfortable with, and then you can reconnect your amps and speakers for a reality check. (Getting a reality check of what you just did in silence is VERY important – doubly so if you made drastic changes somewhere.)
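
In fact, if you’re willing to let a bit of software play the role of the analyzer in step 7, the single-input version boils down to a few lines. Here’s a rough sketch in Python with SciPy – the file name is hypothetical, and a real analyzer gives you far nicer banding and display, but the principle is identical:

    # Feed a single-input "analyzer" from a recording instead of a live mic.
    # "measurement_through_rig.wav" is a hypothetical capture from steps 3-4,
    # after being run through the processing chain from step 6.
    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import welch

    fs, recording = wavfile.read("measurement_through_rig.wav")
    if recording.ndim > 1:
        recording = recording[:, 0]          # just grab one channel
    recording = recording.astype(np.float64)

    # Averaged spectrum of the recorded test signal. Looping the playback
    # (step 5) is effectively just averaging over more segments like these.
    freqs, psd = welch(recording, fs=fs, nperseg=8192)
    levels_db = 10 * np.log10(psd + 1e-20)   # relative dB; calibration not needed

    for target in (63, 125, 250, 500, 1000, 2000, 4000, 8000):
        i = int(np.argmin(np.abs(freqs - target)))
        print(f"{target} Hz: {levels_db[i]:.1f} dB (relative)")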

Dual-FFT

So, all of that up there is fine and good, but…what if you’re not working with a simple, single-input analyzer? What if you’re using a dual-FFT system like SMAART, EASERA, or Visual Analyzer?

Well, you can still do offline measurement, but things get a touch more complicated.

A dual-FFT (or “transfer function”) analysis system works by comparing a reference signal to a measurement signal. For offline measurement to work with comparative analysis, you have to be able to play back a copy of the EXACT signal that you’ll be using for measurement. You also have to be able to play that signal in sync with your measurement recording, but on a separate channel.

For me, the easiest way to accomplish this is to have a pre-recorded (as opposed to “live generated”) test signal. I set things up so that I can record the device-under-test while playing back the test signal through that device. For example, I could have the pre-recorded test signal on channel one, connect my measurement device so that it’s set to record on channel two, hit “record,” and be off to the races.

There is an additional wrinkle, though – time-alignment. Dual-FFT analyzers give skewed results if the measurement signal is early or late when compared to the reference signal, because, as far as the analyzer is concerned, the measurement signal is diverging from the reference. Of course, any measured signal is going to diverge from the reference, but you don’t want unnecessary divergence to corrupt the analysis. The problem, though, is that your test signal takes time to travel from the loudspeaker to the measurement microphone. The measurement recording, when compared to the reference recording, is inherently “late” because of this propagation delay.
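
For the curious, the mathematical heart of a dual-FFT measurement is small enough to sketch out. This is Python with SciPy (again, my own choice for illustration – not what any of these packages actually use internally), and it assumes the reference and measurement signals are already time-aligned:

    # A miniature transfer-function ("dual-FFT") estimate: how the measurement
    # differs from the reference, frequency by frequency. Assumes the two
    # signals are already time-aligned.
    import numpy as np
    from scipy.signal import csd, welch

    def transfer_function(reference, measurement, fs, nperseg=8192):
        """H1 estimate of H(f) = measurement / reference from averaged spectra."""
        f, p_xy = csd(reference, measurement, fs=fs, nperseg=nperseg)
        _, p_xx = welch(reference, fs=fs, nperseg=nperseg)
        h = p_xy / p_xx
        return f, 20 * np.log10(np.abs(h) + 1e-20), np.angle(h)

Uncompensated delay between the two inputs shows up here as rapidly wrapping phase and a ragged, comb-filter-looking magnitude trace – which is exactly the “skew” problem.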

Systems like SMAART and EASERA can do automatic delay compensation quickly and painlessly, but Visual Analyzer doesn’t. If your software has no internal method for delay compensation, you’ll need to handle it manually (there’s a sketch of one way to do the “sync” just after these steps). This means:

  1. Preparing a test signal that includes an audible click, pop, or other transient that tells you where the signal starts.
  2. After recording the measurement signal, using that click or pop to time-align the measurement recording with the test signal. The more accurate the “sync,” the more stable your measurement trace will be.
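
Here’s a minimal sketch of that manual sync in Python. Rather than eyeballing the click in an editor, it cross-correlates the opening chunks of the two recordings to find the offset; the one-second window and 48 kHz rate are assumptions of mine:

    # Time-align the measurement recording with the reference by finding
    # the lag at which their opening transients (the click/pop) line up.
    import numpy as np
    from scipy.signal import correlate

    def align(reference, measurement, fs=48000):
        head = min(len(reference), len(measurement), fs)   # ~1 second window
        corr = correlate(measurement[:head], reference[:head], mode="full")
        delay = int(np.argmax(np.abs(corr))) - (head - 1)  # lag in samples
        if delay > 0:
            return measurement[delay:]     # measurement is late: trim the front
        return np.concatenate([np.zeros(-delay), measurement])  # early: pad it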

If you’d rather not make your own test signal, you’re welcome to download and use this one. The “click” at the beginning is several cycles of a 2 kHz tone.

The bottom line is that you can certainly do “live” measurements if you want to, but you also have the option of capturing your measurement for “silent” tweaking. It’s ultimately about doing what’s best for your particular application…and remembering to do that “reality check” listen of your modifications, of course.


The SA-15MT-PW

You won’t be let down if you’re not asking too much.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.

I have a weakness for “upstarts.”

I love to find little companies that are doing their own thing and trying to have some kind of fresh take on this business. When I ran across Seismic Audio, I got a little infatuated with ’em. Here were some audio-gear humans that were doing something other than selling everybody else’s boxes. Here were some folks who had some interesting, hard to find, “nifties.”

Seriously – maybe I just wasn’t looking hard enough, but Seismic had a reel mounted, 8 send/ 4 return snake with 50′ of trunk when nobody else did. And it was the perfect thing for what I needed at that moment. Neat!

Anyway.

Of course, when you’ve been at this kind of thing for a while, you develop a sort of BS detector for sales. You look at a page that’s trying to get you to buy equipment, and you instinctively throw out all the numbers and claims that you think are “best case scenarios.”

What other people call, “horrific, crippling cynicism that threatens to poison all joy and blot out the sun” is what I call “being realistic about what the sales guys are saying.” (Welcome to the music business, boys and girls!) With a company like Seismic, my thought process goes like this: “These guys have some neat offerings, and the prices are quite low. In order to do that, they must be saving money somewhere. I should expect that some part of these products is going to be demonstrably ‘cheap,’ and not be disappointed when I discover that part.”

So – when it was time to buy some new monitor wedges (because half of the current set had committed full or partial suicide), I decided to take a chance on Seismic. My goal was to get some boxes that would be “enough” for a small stage, and not to worry about having varsity-level build quality or performance. The gear wouldn’t be moving around a lot, and the bands I work with are good about not abusing monitor wedges, so I was confident that a “middle of the road” unit from Seismic would work out.

Trying Not To Ask Too Much

As I looked around Seismic’s site, I found myself gravitating towards the SA-15MT-PW monitors. Because of their configuration, which puts the HF (high frequency) driver above the LF cone, they would be naturally “bookmatched” (vertically symmetrical) when operating in pairs.

In the absence of a specific claim one way or another, and at the box’s price point, I was very sure that the 15MTs were neither biamped nor equipped with fancy processing. I would essentially be receiving passive monitors that had an amplifier and input plate added. I was perfectly okay with this.

I hoped that the 15″ driver would provide an overall output advantage over a 12″ unit, especially since I just decided to not care about anything below 75 – 100 Hz. In an environment like the one where I work, using the onstage monitoring rig to create thundering bass and kick is not a priority. The monitors are mostly there for the vocalists, with instrument midrange as an added bonus.

Seismic’s site claims that the box will play down to 35 Hz, but let’s be real. Making 35 Hz at “rock show volume” is hard enough with a properly built “mid-pro” sub, and 15MTs are affordable drivers in a small box. I’m sure the LF cone will vibrate at 35 Hz, but you’re not going to get much usable output down there. See? Being cynic – ah – REALISTIC is good for you.

I’ve also become very good at ignoring “peak” numbers in power ratings. Whenever you see “peak,” you should mentally change the wording to “power that’s not actually available to do anything useful.” What you really want to find is the lowest number with “continuous” or “RMS” next to it, and assume that’s what you’re going to get. SA claims that their powered 15MT units have an amplifier that’s capable of 350 watts continuous. I think this may be an increase from when I was shopping, as I remember something more along the lines of 250 watts.

What I was most prepared to accept was the published sensitivity rating and maximum SPL. With one watt (actually, 2.83 VRMS, but that’s a whole other discussion), the boxes were supposed to be capable of 97 dB SPL, continuous. The maximum SPL was listed as 117 dB SPL.

This seemed pretty reasonable to me, and here’s why: with a 97 dB sensitivity, 250 watts of continuous input gets you to a calculated maximum of about 121 dB SPL. That they published a maximum LOWER than the theoretical ceiling (117 dB) actually made me more comfortable. A figure below the calculated limit suggests that the driver can only really make use of about half the available continuous power – and it gives the impression that the boxes were actually measured as being able to hit the published number.
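
The back-of-the-napkin math, if you want to poke at it yourself (Python, purely for illustration):

    # Calculated max SPL = sensitivity (dB SPL at 1 W / 1 m) + 10*log10(watts).
    import math

    def max_spl(sensitivity_db, watts):
        return sensitivity_db + 10 * math.log10(watts)

    print(max_spl(97, 250))                       # ~121 dB: theoretical ceiling
    print(max_spl(97, 100))                       # ~117 dB: the published maximum
    print(max_spl(97, 100) + 10 * math.log10(2))  # ~120 dB: a coupled pair of boxes

Note that hitting the published 117 dB only requires about 100 watts – roughly half of the available continuous power – and that doubling up on boxes buys you about 3 dB.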

I thought to myself: “If I put two boxes together, I can get 120 dB SPL continuous out of the pair. That’s enough for our stage. If somebody wants more than that, then they’re playing the wrong venue. I’m going to order these.”

I got free shipping and a discount. That felt pretty good.

A Short Honeymoon

When the 15MTs arrived, I was encouraged. They weren’t too heavy, and you could actually lift and carry them in a not-awkward way by using the handles. I wasn’t wild about the way that an XLR connector would stick out of the side-mounted control panel when everything was connected, but I’m pretty much convinced that getting a powered speaker’s input panel to be perfect is an impossibility. I tried the boxes with a bit of music, and the sonics seemed fine. The playback was clean, and the overly present (too much high-mid) timbre of the boxes was easily tamed with the onboard EQ – which is a good sign, because if simplified, broad-brush EQ can fix an issue, then the problem isn’t really bad.

The units were put to work immediately in a festival setting. The venue was hosting an annual showcase of local talent, and this seemed like an ideal way to put the monitors through their paces. They’d need to perform at both quiet and moderately loud volumes, and do a lot of different acts without a hiccup.

The first day went by without a problem.

The second day wasn’t so good.

The high-energy point in the night was being provided by a band with strong roots in both country and rock. Even in an acoustic setting, this part of the show was meant to be pretty “full tilt” in terms of feel and volume. I got the lead singer’s vocal switched up to the necessary level – and that’s when I noticed the problem.

The 15MTs were distorting, and distorting so audibly that the “crunch” could clearly be heard through the cover that FOH (Front Of House) was providing. I was very, very embarrassed, and made it a point to approach the lead singer with an apology. I assured him that I knew the sound was unacceptable, and that I wasn’t just ignoring it.

He was very gracious about the whole thing, but I was not happy.

Oversold On The Power

With what seem to be updated specs on the Seismic Audio site, I can’t be sure that the SA-15MT-PW units currently being sold are the same as the 15MTs that I was shipped. What I can say, though, is that the units I was shipped do not appear to meet the minimum spec that was advertised at the time.

Not the hyped specifications – like I said, I ignore those as a rule – the MINIMUM spec was not achieved.

I’m about 90% sure that I’m right about this. The 10% uncertainty comes from the fact that I don’t exactly have NASA-grade measurement systems available to me, so I can’t discount the possibility that significant experimental error is involved here. At the same time, I don’t think that my tests were so far off as to be invalid.

After empirically verifying that sufficient volume with a vocal mic caused audible distortion (and at a volume that I felt was “pretty low”), I decided to dig deeper.

I chose a unit out of the group, removed the grille, and pulled the LF driver out of the enclosure. I then ran a 1 kHz tone to the unit, and increased the level of that tone until I could clearly hear harmonic distortion. The reason for using 1 kHz is that its distortion harmonics land at 2 kHz and above – right where a single-amped box’s 1″-exit HF driver is probably crossed over. The distortion components are therefore clearly audible through the HF section without being incredibly loud, and the amp going full-bore with a 1 kHz input is unlikely to cook the small driver.

Having found the “distortion happens here” point, I then rolled the test tone down to 60 Hz. My multimeter is designed to test electrical outlets in the US, which means that its voltage reading is only likely to be accurate at the US mains electrical frequency of 60 Hz. I connected my multimeter’s probes to the LF hookup wires, and read 32.9 volts RMS.

Run the numbers: 32.9 volts squared, over 8 ohms (the advertised nominal impedance of the drivers in the unit), works out to about 135 watts…and that is with VERY significant distortion. In general, when an audio manufacturer claims a continuous or RMS power number, the implication is that the unit can supply that power without audible distortion. In this case, that didn’t hold.
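
The arithmetic, for anyone checking my work (with the caveat that 8 ohms is a nominal rating – a real driver’s impedance swings around with frequency):

    # Power delivered to a nominal load from an RMS voltage: P = V^2 / R.
    v_rms = 32.9     # volts, measured across the LF hookup wires
    r_nom = 8.0      # ohms, the advertised nominal impedance
    print(v_rms ** 2 / r_nom)   # ~135 watts, at the point of gross distortion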

To be brutally frank, if this measurement is correct then there is NO PHYSICAL WAY that the units I was shipped have 250 watts of continuous power available, with the peaks exceeding that number. You might be able to get 250 watts continuous out of the amp if you drove it as far as possible into square-wave territory, but you wouldn’t want to listen to anything coming out of the monitors at that point.

Oversold On The SPL

After reconnecting and reseating the LF driver, I ran pink noise into the unit at the same level as the test tones. I then got out my SPL meter, and guesstimated a reading at 1 meter from the drivers.

Measured against the claimed maximum of 117 dB SPL, my reading was pretty consistent with the box only being able to do about half of what was advertised. Even so, that number was generated by a unit driven beyond its “clean” output capability.
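
If “about half” sounds hand-wavy, the decibel math backs it up:

    # Halving the power costs about 3 dB, and my measured ~135 W against a
    # claimed 250 W is close to exactly that deficit.
    import math
    print(10 * math.log10(0.5))        # ~ -3.0 dB
    print(10 * math.log10(135 / 250))  # ~ -2.7 dB
    # Against a claimed 117 dB SPL maximum, "half the advertised output"
    # puts the meter in the neighborhood of 114 dB.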

Now – again – let’s be very clear. I can only speak about the specific units that were shipped to me. It would be wrong to say that these results can be categorically expected across all of Seismic’s product lines. It would also be wrong to say that the 15MT-PWs being shipped today are definitely the same design as what was shipped months ago. Products can change very fast these days, and it may be that Seismic has upgraded these units.

I also want to be clear that, dangit, I’m rooting for Seismic Audio. I love scrappy, underdog-ish businesses that carve out a space for themselves. Heck, maybe a parts manufacturer sent them the wrong amplifier plates. Maybe the passive versions of these boxes (and every other speaker they sell) meet the minimum spec. The only way to know would be to buy and test a sample of everything.

Still, in this particular case, the products I was shipped did not live up to what they were advertised to do. They’re not useless by any means, but they can’t quite cover the bases that I need them to cover. I’m planning on finding them a good home with people who can actually put some miles on ’em.

…and, Seismic folks, if you read this: I don’t want my money back, or I would have asked when I still could. I do want you guys to keep doing your thing, and I also want you to be vigilant that the products you ship actually meet the minimum spec that you list on your site.