Tag Archives: Measurement

Gain Vs. Bandwidth

Some preamps do pass audio differently when cranked up, but you probably don’t need to worry about it.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.

Want to use this image for something else? Great! Click it for the link to a high-res or resolution-independent version.

After my article on making monitor mixes not suck, a fellow audio human asked me to address the issue of how bandwidth changes with gain. Op-amps, which are very common in modern audio gear, have a finite bandwidth. This bandwidth decreases as gain increases.

A real question, then, is how much an audio tech needs to worry about this issue – especially in the context of microphone preamps. Mic pres have to apply a great deal of gain to signals, because microphones don’t exactly spit out a ton of voltage. Your average dynamic vocal mic probably delivers something like two millivolts RMS with 94 dB SPL occurring at the capsule. Getting that level up to 0 dBu (0.775 Vrms) is a nearly 52 decibel proposition.
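As a sanity check on that arithmetic, here’s a quick Python sketch. (The 2 mV figure is this article’s ballpark, not a spec for any particular microphone.)

```python
import math

def gain_db(v_out, v_in):
    """Voltage gain in decibels: 20 * log10 of the voltage ratio."""
    return 20.0 * math.log10(v_out / v_in)

mic_volts = 0.002    # ~2 mV RMS from a dynamic vocal mic at 94 dB SPL
dbu_volts = 0.775    # 0 dBu reference level, in volts RMS

needed = gain_db(dbu_volts, mic_volts)
print(round(needed, 1))   # 51.8 -- the "nearly 52 dB" figure
```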

That’s not a trivial amount of gain, at least as far as audio is concerned. For instance, if we could get 52 dB of gain over the 1 watt @ 1 meter sensitivity of a 95 dB SPL loudspeaker, that speaker could produce 147 dB SPL! (That’s REALLY LOUD, if you didn’t know.) While there are loudspeaker systems that can produce that kind of final output, they have to start at a much higher sensitivity. A Danley Sound Labs J3-64 is claimed to be able to produce 150 dB SPL continuous, but its sensitivity is 112 dB SPL. The “gain beyond sensitivity” is a mere 38 dB. (“Mere” when compared to what mic pres can do. Getting 38 dB above sensitivity is definitely “varsity level” performance for a loudspeaker.)


In the face of a question like this, my response of late has become that we should try to measure something. There are, of course, gajillions of people willing to offer anecdotes, theories, and mythology, but I don’t find that to be very satisfying. I much prefer to actually see real data from real testing.

As such, I decided to grab a couple of mic-pre examples, and “put them on the bench.”

Setting Up The Experiment

The first thing I do is to set up a DAW session with the interface running at 96 kHz. I also set up an analyzer session with the same sampling rate.

The purpose of this is to – hopefully – be able to clearly “see” beyond the audible spectrum. Although my opinion is that audio humans don’t have to worry about anything beyond the audible range (20 Hz to 20 kHz) in practice, part of this experiment’s purpose is to figure out how close to audible signals any particular bandwidth issue gets. Even if frequencies we can hear remain unaffected, it’s still good to have as complete a picture as possible.

The next thing I do is generate a precisely 15 second long sample of pink noise. The point of having a sample of precisely known length is to make compensating for time delays easier. The choice of a 15 second length is just to have a reasonably long “loop” for the analyzer to chew on.
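If you’d rather roll your own test noise than use a generator plugin, a minimal pink-noise sketch in Python/NumPy might look like the following. FFT-domain shaping is just one of several ways to get the job done; the amplitude of each frequency falls as 1/sqrt(f), so the power falls as 1/f.

```python
import numpy as np

def pink_noise(seconds, fs=96000, seed=0):
    """Generate pink (1/f power) noise by shaping white noise in the
    frequency domain: amplitude scaled by 1/sqrt(f)."""
    rng = np.random.default_rng(seed)
    n = int(seconds * fs)
    white = rng.standard_normal(n)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    scale = np.ones_like(freqs)
    scale[1:] = 1.0 / np.sqrt(freqs[1:])   # leave the DC bin alone
    pink = np.fft.irfft(spectrum * scale, n)
    return pink / np.max(np.abs(pink))      # normalize to full scale

noise = pink_noise(15)    # precisely 15 s at 96 kHz
print(len(noise))         # 1440000 samples
```

Fixing the seed means the “random” noise is the same every run, which is exactly the “specific sample of noise used repeatedly” idea.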

At this point, it’s time to take a look at how the analyzer handles a transfer-function calculation where I know that both “sides” are the same. The trace I get is a touch jumpy, so I bump up the averaging to “2.” This settles the trace nicely.


At this point, it’s time to connect the noise to a mic pre. I do this from my Fast Track interface’s headphone amp through an active DI, because I want to be absolutely sure that I’m ultimately running through high-gain circuitry. Yes – it’s true that the DI might corrupt the measurement to some degree, but I think I have a partial solution: My reference point for all measurements will be the test noise played through the DI, with the mic pre at the minimum gain setting. Each test will use the original noise, so that any “error factors” associated with the signal path under test don’t stack up.

Preamp 1: Fast Track Ultra 8R

My M-Audio Fast Track Ultra 8R is what I would call a reasonably solid piece of pro-sumer equipment. My guess is that the preamps in the box are basically decent pieces of engineering.

The first thing to do is to get my low-gain reference. I set the noise output level so that the input through the preamp registers about -20 dBFS RMS, and record the result. I’m now ready to proceed further.

My next order of business is to put my test noise through at a higher gain. I set the gain knob to the middle of its travel, which is about +10 dB of gain from the lowest setting. I roll down the level going to the pre to compensate.

The next test will be with the gain at the “three-o-clock” position. This is about +25 dB of gain from the reference.

The final test is at maximum gain. This causes an issue, because so much gain is applied that the output compensation is extreme. In the end, I opt to find a compromise by engaging the mic preamp’s pad. This allows me to keep the rest of the gain structure in a basically “sane” area.

At this point, I check the alignment on the recorded signals. What’s rather odd is that the signal recorded through the pad seems to have arrived a few samples earlier than the signals recorded straight through. (This is curious, because I would assume that a pad would INCREASE group delay rather than reduce it.)


No matter what’s going on, though, the fix is as simple as nudging the max-gain measurement over by 10 samples, or about 0.1 ms at 96 kHz.
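The nudge itself is easy to automate. A cross-correlation sketch like this (Python/NumPy; `align_offset` is a hypothetical helper name, not part of any DAW) finds the sample offset for you:

```python
import numpy as np

def align_offset(reference, measured):
    """Estimate the delay (in samples) of `measured` relative to
    `reference` via cross-correlation, so recordings can be nudged
    back into alignment before computing a transfer function."""
    corr = np.correlate(measured, reference, mode="full")
    lag = int(np.argmax(corr)) - (len(reference) - 1)
    return lag

# Toy example: the same noise burst, delayed by 10 samples.
rng = np.random.default_rng(1)
ref = rng.standard_normal(4096)
delayed = np.concatenate([np.zeros(10), ref])[:4096]
print(align_offset(ref, delayed))   # 10
```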

Preamp 2: SL2442-FX

The first round of testing involved a preamp that I expect is pretty good. A more interesting case comes about when we test a device with a not-so-stellar reputation: A mic pre from an inexpensive Behringer console. My old Behringer SL2442-FX cost only a bit more than the Fast Track did, and the Behringer has a LOT more analog circuitry in it (as far as I can tell). My guess is that if I want to test a not-too-great mic pre, the Behringer is a good candidate.

(To be fair, in the situations where I’ve used the Behringer, I haven’t been unhappy with the preamps at all.)

I use the same DI to get signal to the Behringer. On the output side, I tap the console’s insert point so as to avoid the rest of the internal signal path. I want to test the preamp, not the whole console. The insert connection is fed to the line input of the Fast Track, which appears to bypass the high-gain circuitry in the Fast Track mic pre.

In basically the same way as I did the Fast Track, I get a reference by putting the test noise through the preamp at its lowest setting, aiming for an RMS level of -20 dBFS. My next test is with the gain at “half travel,” which on the Behringer is a difference of about 18 dB. The “three-o-clock” position on the Behringer preamp corresponds to a gain of about +30 dB from the lowest point. The final test is, as you might expect, the Behringer at maximum gain.

A quick check of the files reveals that everything appears to be perfectly time-aligned across all tests.

The Traces

Getting audio into the analyzer is as simple as running the Fast Track’s headphone out back to the first two inputs. Before I really get going, though, I need to verify that I’m measuring what I think I’m measuring. To do that, I mute the test noise, and push up the levels on the Fast Track Reference and Fast Track +10 dB tracks. I pan them out so that the reference is hard left, and the +10 dB measurement is hard right. I then put a very obvious EQ on the +10 measurement:


If the test rig is set up correctly, I should see a transfer function with a similarly obvious curve. It appears that my setup is correct:


Now it’s time to actually look at things. The Fast Track +10 test shows a curve that’s basically flat, albeit with some jumpiness below 100 Hz. (The jumpiness makes me expect that what we’re seeing is “experimental error” of some kind.)


The +25 dB test looks very much the same.


The maximum gain test is also about as flat as flat can be.


I was, quite frankly, surprised by this. I thought I would see something happening, even if it was above 20 kHz. I decide to insert an EQ to see if the test system is just blind to what’s going on above 20 kHz, despite my best efforts. The answer, to my relief, is that if the test were actually missing something outside the audible range, we would see it:


So – in the case of the Fast Track, we can conclude that any gain vs. bandwidth issues are taking place far beyond the audible range. They’re happening beyond even this rig’s measurable range.

What about the Behringer?

The +18 dB transfer function looks like this, compared to the minimum gain reference:


What about the +30 dB test?

Maybe I missed something similar on the Fast Track, but the Behringer does seem to be noisier up beyond 30 kHz. The level isn’t actually dropping off, though. It’s possible that the phase gets “weird” up there when the Behringer is run hard – even so, you can’t hear 30 kHz, so this shouldn’t be a problem in real life.


Now, for the Behringer max gain trace.

This is interesting indeed. The Behringer’s trace is now visibly curved, with some apparent dropoff below 50 Hz. On the high side, the Behringer is dropping down after 20 kHz, with obvious noise and what I think is some pretty gnarly distortion at around 37 kHz. The trace also shows a bit of noise overall, indicating that the Behringer pre isn’t too quiet when “cranked.”


At the same time, though, it has to be acknowledged that these deficiencies are easy to see when graphed, but probably hard to actually hear. The distortion is occurring far above what humans can perceive, and picking out a loss of 0.3 dB from 10 kHz to 20 kHz isn’t something you’re likely to do casually. A small dip under 50 Hz is fixable with EQ (if you can even hear that), and let’s be honest – how often do you actually have to run a preamp at full throttle? I haven’t had to in ages.


This is not a be-all, end-all test. It was definitely informal, and two different preamps are not exactly a large sample size. I’m sure that my methodology could be tweaked to be more pure. At the very least, getting precisely comparable gain values between preamps would be a better bit of science.

At the same time, though, I think these results can suggest that losing sleep over gain vs. bandwidth isn’t worthwhile. A good, yet not-boutique-at-all preamp run at full throttle was essentially laser flat “from DC to dog whistles.” The el-cheapo preamp looked a little scary when running at maximum gain, but that’s the key – it LOOKED scary. It seems unlikely to me that the graphed issues would actually cause a problem with a show, and again, there’s the whole question of whether you actually have to run the preamp wide open on a regular basis.

If I had to guess, I’d say that gain vs. bandwidth is worth being aware of at an academic level, but not something to obsess about in the field.

Noise Notions

When measuring things, pink noise isn’t the only option.


Back in school, we learned how to measure the frequency response of live-audio rigs using a dual FFT system. I didn’t realize at the time how important the idea of measuring would become to me. As a disliker of audio mythology, I find myself having less and less patience for statements like “I swear, it sounds better when we change to this super-spendy cable over here.”

Does it? Are you sure? Did you measure anything? If you can hear it, you should be able to measure it. Make any statement you want, but at least try to back it up with real data.


I’m by no means an expert on all aspects of measuring sound equipment. At the same time, I’ve gotten to the point where I think I can pass on some observations. The point of this article is to have a bit of a chat regarding some signals that can be used to measure audio gear.

Tones and Noise

When trying to measure sound equipment, we almost always need some kind of “stimulus” signal. The stimulus chosen depends on what we’re trying to figure out. If we want to get our bearings regarding a device’s distortion characteristics, a pure tone (or succession of pure tones) is handy. If we want to know about our gear’s total frequency response, we either need a signal that’s “broadband” at any given point in time, or a succession of signals that can be integrated into a broadband measurement over several points in time.

(Pink noise is an example of a signal that’s broadband at any given time point, whereas a tone sweep has to be integrated over time.)
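For the sweep side of that comparison, here’s a minimal logarithmic sine sweep generator (Python/NumPy). This is the common “log sweep” form, where the instantaneous frequency rises exponentially from f0 to f1; the specific parameters below are just illustrative.

```python
import numpy as np

def log_sweep(f0, f1, seconds, fs=48000):
    """Logarithmic sine sweep from f0 to f1 Hz. Broadband only when
    integrated over the whole sweep, unlike pink noise, which is
    broadband at every instant."""
    t = np.arange(int(seconds * fs)) / fs
    k = seconds / np.log(f1 / f0)
    # Phase is the integral of the instantaneous frequency f0 * e^(t/k).
    phase = 2.0 * np.pi * f0 * k * (np.exp(t / k) - 1.0)
    return np.sin(phase)

sweep = log_sweep(20.0, 20000.0, 5.0)
print(len(sweep))   # 240000 samples (5 s at 48 kHz)
```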

In any case, the critical characteristic that these stimuli share is this:

A generally-useful measurement stimulus is a signal whose key aspects are well defined and known in advance of the test.

With pure tones, for instance, we know the frequency and voltage-level of the tone being generated. With pink noise, we know that all audio frequencies are present and that the overall signal has equal power per octave. (The level of any particular frequency at any particular point in time is not known in advance, unless a specific sample of noise is used repeatedly.)

Pretty In Pink

Pink noise is a very handy stimulus, especially with the prevalence of measurement systems that show us the transfer function of a device undergoing testing.

When folks are talking about “Dual FFT” measurement, this is what they’re referring to. The idea is to compare a known signal arriving at a device’s input with an unknown signal at the device’s output. In a certain sense, the unknown signal is only “quasi” unknown, because the assumption in play is that the observed output IS the input…plus whatever the measured device did to it.

Pink noise is good for this kind of testing because it can easily be continuous – which allows for testing across an indefinite time span – and also because it is known in advance to contain audio across the entire audible spectrum at all times. (As an added bonus, pink noise is much less annoying to listen to than white noise.) It’s true that with a system that computes transfer functions, you can certainly use something like music playback for a stimulus. Heck, you can even use a live show as the test signal. The system’s measurements are concerned with how the output relates to the input, not the input by itself. The bugaboo, though, is that a stimulus with content covering only part of the audible spectrum can’t give you information about what the system is doing beyond that input signal’s bandwidth. Because pink noise covers the entire audible spectrum (and more) all the time, using it as a stimulus means that you can reliably examine the performance of the system-under-test across the entire measurable range.
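The math behind a dual-FFT comparison can be sketched in a few lines: estimate the input’s auto-spectrum and the input-to-output cross-spectrum over many blocks, then divide. This toy version (Python/NumPy, without the overlap and coherence weighting a real analyzer would use) “measures” a device that is just a flat 6 dB of gain:

```python
import numpy as np

def transfer_function(x, y, nfft=4096):
    """Dual-FFT style estimate: H(f) = <X* Y> / <X* X>, averaged
    over consecutive windowed blocks."""
    n_blocks = len(x) // nfft
    s_xx = np.zeros(nfft // 2 + 1)
    s_xy = np.zeros(nfft // 2 + 1, dtype=complex)
    win = np.hanning(nfft)
    for i in range(n_blocks):
        seg = slice(i * nfft, (i + 1) * nfft)
        X = np.fft.rfft(x[seg] * win)
        Y = np.fft.rfft(y[seg] * win)
        s_xx += (np.conj(X) * X).real
        s_xy += np.conj(X) * Y
    return s_xy / s_xx

rng = np.random.default_rng(0)
x = rng.standard_normal(96000)     # the "known" input
y = 2.0 * x                        # device under test: 6 dB of flat gain
h = transfer_function(x, y)
print(round(float(20 * np.log10(np.abs(h)).mean()), 2))   # 6.02
```

Because the estimate only relates output to input, the same code works whether the stimulus is pink noise, music, or a live show, as the article notes.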

Now, this is not to say that pink noise is entirely predictable. Because it is a random or pseudo-random signal, a specific frequency’s level at a specific point in time is unknown until the noise is generated. For example, here’s a spectrogram of pink noise that has been aggressively bandpassed at 1 kHz:


The tone at 1 kHz never completely disappears, but it’s clearly not at the same level all the time.

A major consequence of this variability is that getting a really usable measurement functionally REQUIRES two measurement points. Since the signal is not entirely known in advance, the reference signal MUST be captured during the measurement process. Although some test rigs can smooth un-referenced pink noise, and display the spectrum as being nominally “flat” (as opposed to sloping downwards from low to high frequencies), the resulting measurements just aren’t as good as they could be. It’s just harder than necessary to do something meaningful with something like this:


Further, any delay caused by the system being measured must be compensated for. If the delay is un-compensated, the measurement validity drops. Even if the frequency response of the measured system is laser-flat, and even if the system has perfect phase response across all relevant frequencies, un-compensated delay will cause this to NOT be reflected in the data. If the dual FFT rig compares an output signal at time “t+delay” to an input signal at “t,” the noise variability means that you’re not actually examining comparable events. (The input signal has moved on from where the output signal is.)

Here’s a simulation of what would happen if you measured a system with 50ms of delay between the input and output…and neglected to compensate for that delay. This kind of delay can easily happen if you’re examining a system via a measurement mic at the FOH mix position, for example.
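A rough simulation of that failure mode can be built in Python/NumPy: 50 ms of delay on a white-noise stimulus, with an otherwise perfect “system.” The uncompensated estimate reads several dB low even though nothing is actually wrong, and shifting the output back into alignment recovers a flat 0 dB.

```python
import numpy as np

def tf_mag_db(x, y, nfft=4096):
    """Mean magnitude (dB) of a block-averaged H = <X* Y>/<X* X> estimate."""
    n_blocks = min(len(x), len(y)) // nfft
    s_xx = np.zeros(nfft // 2 + 1)
    s_xy = np.zeros(nfft // 2 + 1, dtype=complex)
    for i in range(n_blocks):
        seg = slice(i * nfft, (i + 1) * nfft)
        X = np.fft.rfft(x[seg])
        Y = np.fft.rfft(y[seg])
        s_xx += (np.conj(X) * X).real
        s_xy += np.conj(X) * Y
    return float(20 * np.log10(np.abs(s_xy / s_xx)).mean())

fs = 48000
delay = int(0.050 * fs)                   # 50 ms, as at a FOH mix position
rng = np.random.default_rng(0)
x = rng.standard_normal(fs * 10)
y = np.concatenate([np.zeros(delay), x])  # a "perfect" system, just late

# Uncompensated: reads well below 0 dB even though the system is perfect.
print(round(tf_mag_db(x, y[: len(x)]), 1))
# Compensated (output shifted back by the delay): flat 0.0 dB.
print(round(tf_mag_db(x, y[delay : delay + len(x)]), 1))
```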


On the flipside, get everything in order and pink noise reliably gets usable measurements across a variety of test rigs, like these views of a notch filter at 1 kHz.



Hey, I Know That Guy…Er, Noise

I’ve known about pink noise for a long while now. What I didn’t know about until recently was a broadband stimulus that Reaper calls “FFT Noise.”

FFT noise is very interesting to me because it is unlike pink noise in key ways. It is discontinuous, in that it consists of a repeating “pulse” or “click.” It is also entirely predictable. As far as I can tell, each pulse contains most of the audible spectrum (31 Hz and above) at an unchanging level. For example, here’s a spectrogram of FFT noise with a narrow bandpass filter applied at 1 kHz:
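Reaper doesn’t document the generator’s internals here, but the general idea – a repeating pulse whose spectrum is exactly known in advance – can be sketched like this in Python/NumPy. The Schroeder-phase choice and the 31 Hz cutoff are my assumptions for illustration, not Reaper’s actual recipe.

```python
import numpy as np

def fft_noise_pulse(nfft=4096, fs=96000, f_low=31.0):
    """One period of an 'FFT noise'-style stimulus: every rFFT bin at
    or above f_low gets the same magnitude and a deterministic phase,
    so each repeating pulse has an exactly known spectrum."""
    bins = np.zeros(nfft // 2 + 1, dtype=complex)
    freqs = np.fft.rfftfreq(nfft, d=1.0 / fs)
    active = freqs >= f_low
    k = np.arange(len(bins))
    # Quadratic ("Schroeder") phases spread the energy out in time,
    # avoiding the huge single click that zero phase would produce.
    bins[active] = np.exp(1j * np.pi * k[active] ** 2 / len(bins))
    bins[-1] = 1.0   # the Nyquist bin must be real for a real signal
    pulse = np.fft.irfft(bins, nfft)
    return pulse / np.max(np.abs(pulse))

pulse = fft_noise_pulse()
spectrum = np.abs(np.fft.rfft(pulse))
active = np.fft.rfftfreq(4096, 1.0 / 96000) >= 31.0
# Every active bin comes back at the same magnitude: fully predictable.
print(bool(np.allclose(spectrum[active], spectrum[active][0])))  # True
```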


What’s even more interesting is what happens when the test stimulus is configured to “play nicely” with an FFT-based analyzer. You got a preview of that in the spectrogram above. When the analyzer’s FFT size and windowing are set correctly, the result is a trace that handily beats out some dual FFT measurements in terms of stability and readability:


Side note: Yup – the calculated transfer function in ReaEQ seems to be accurate.

The point here is that, if the test stimulus is precisely known in advance, then you can theoretically get a transfer function without having to record the input-side in real time. If everything is set up correctly, the “known” signal is effectively predetermined in near totality. The need to sample it to “know it” is removed. Unlike pink noise, this stimulus is exactly the same every time. What’s also very intriguing is that this removes the delay of the device-under-test as a major factor. The arrival time of the test signal is almost a non-issue. Although it does appear advantageous to have an analyzer which uses all the same internal timing references as the noise generator (the trace will be rock steady under those conditions), a compatible analysis tool receiving the signal after an unknown delay still delivers a highly readable result:


Yes, the cumulative effect of the output from my main computer’s audio interface and the input of my laptop interface is noise, along with some tones that I suppose are some kind of EM interference. You can also see the effect of what I assume is the anti-aliasing filter way over on the right side. (This trace is what gave me the sneaky suspicion that an insufficient amount of test signal exists somewhere below 100 Hz – either that, or the system noise in the bottom octaves is very high.)

On the surface, this seems rather brilliant, even in the face of its limitations. Instead of having to rig up two measurement points and do delay compensation, you can just do a “one step” measurement. However, getting the stimulus and “any old” analysis tool to be happy with each other is not necessarily automatic. “FFT Noise” in Reaper seems to be very much suited to Reaper’s analysis tools, but it takes a little doing to get, say, Visual Analyzer set up well. When a good configuration IS arrived at, however, Visual Analyzer delivers a very steady trace that basically confirms what I saw in Reaper.


It’s also possible to get a basically usable trace in SysTune, although the demo’s inability to set a long enough average size makes the trace jumpy. Also, Reaper’s FFT noise plugin doesn’t allow for an FFT size that matches the SysTune demo’s FFT size, so some aggressive smoothing is required.

(As a side note, I did find a way to hack Reaper’s FFT noise plugin to get an FFT size of 65536. This fixed the need for smoothing in the frequency domain, but I wasn’t really sure that the net effect of “one big trace bounce every second” was any better than having lots of smaller bounces.)

There’s another issue to discuss as well. With this kind of “single ended” test, noise that would be ignored by a dual FFT rig is a real issue. In a way that’s similar to a differential amplifier in a mic pre, anything that’s common to both measurement points is “rejected” by a system that calculates transfer functions from those points. If the same stimulus+noise signal is present on both channels, then the transfer function is flat. A single-ended measurement can’t deliver the same result, except by completely drowning the noise in the stimulus signal. Whether this is always practical or not is another matter – it wasn’t practical for me while I was getting these screenshots.

The rabbit-hole goes awfully deep, doesn’t it?


Trust your ears – but verify.


It might be that I just don’t want to remember who it was, but a famous engineer once became rather peeved. His occasion to be irritated arose when a forum participant had the temerity to load one of the famous engineer’s tracks into a DAW and look at the waveform. The forum participant (not me) was actually rather complimentary, saying that the track LOOKED very compressed, but didn’t SOUND crushed at all.

This ignited a mini-rant from the famous guy, where he pointedly claimed that the sound was all that mattered, and he wasn’t interested in criticism from “engin-eyes.” (You know, because audio humans are supposed to be “engine-ears.”)

To be fair, the famous engineer hadn’t flown into anything that would pass as a “vicious, violent rage,” but the relative ferocity of his response was a bit stunning to me. I was also rather put off by his apparent philosophy that the craft of audio has no need of being informed by senses other than hearing.

Now, let’s be fair. The famous engineer in question is known for a reason. He’s had a much more monetarily successful career than I have. He’s done excellent work, and is probably still continuing to do excellent work at the very moment of this writing. He’s entitled to his opinions and philosophies.

But I am also entitled to mine, and in regards to this topic, here’s what I think:

The idea that an audio professional must rely solely upon their sense of hearing when performing their craft is, quite simply, a bogus “purity standard.” It gets in the way of people’s best work being done, and is therefore an inappropriate restriction in an environment that DEMANDS that the best work be done.

Ears Are Truthful. Brains Are Liars.

Your hearing mechanism, insofar as it works properly, is entirely trustworthy. A sound pressure wave enters your ear, bounces your tympanic membrane around, and ultimately causes some cilia deep in your ear to fire electrical signals down your auditory nerve. To the extent that I understand it all, this process is functionally deterministic – for any given input, you will get the same output until the system changes. Ears are dispassionate detectors of aural events.

The problem with ears is that they are hooked up to a computer (your brain) which can perform very sophisticated pattern matching and pattern synthesis.

That’s actually incredibly neat. It’s why you can hear a conversation in a noisy room. Your brain receives all the sound, performs realtime, high-fidelity pattern matching, tries to figure out what events correlate only to your conversation, and then passes only those events to the language center. Everything else is labeled “noise,” and left unprocessed. On the synthesis side, this remarkable ability is one reason why you can enjoy a song, even against noise or compression artifacts. You can remember enough of the hi-fi version to mentally reconstruct what’s missing, based on the pattern suggested by the input received. Your emotional connection to the tune is triggered, and it matters very little that the particular playback doesn’t sound all that great.

As I said, all that is incredibly neat.

But it’s not necessarily deterministic, because it doesn’t have to be. Your brain’s pattern matching and synthesis operations don’t have to be perfect, or 100% objective, or 100% consistent. They just have to be good enough to get by. In the end, what this means is that your brain’s interpretation of the signals sent by your ears can easily be false. Whether that falsehood is great or minor is a whole other issue, very personalized, and beyond the scope of this article.

Hearing What You See

It’s very interesting to consider what occurs when your hearing correlates with your other senses. Vision, for instance.

As an example, I’ll recall an “archetype” story from Pro Sound Web’s LAB: A system tech for a large-scale show works to fulfill the requests of the band’s live-audio engineer. The band engineer has asked that the digital console be externally “clocked” to a high-quality time reference. (In a digital system, the time reference or “wordclock” is what determines exactly when a sample is supposed to occur. A more consistent timing reference should result in more accurate audio.) The system tech dutifully connects a cable from the wordclock generator to the console. The band engineer gets some audio flowing through the system, and remarks at how much better the rig sounds now that the change had been made.

The system tech, being diplomatic, keeps quiet about the fact that the console has not yet been switched over from its internal reference. The external clock was merely attached. The console wasn’t listening to it yet. The band engineer expected to hear something different, and so his brain synthesized it for him.

(Again, this is an “archetype” story. It’s not a description of a singular event, but an overview of the functional nature of multiple events that have occurred.)

When your other senses correlate with your hearing, they influence it. When the correlation involves something subjective, such as “this cable will make everything sound better,” your brain will attempt to fulfill your expectations – especially when no “disproving” input is presented.

But what if the correlating input is objective? What then?


What I mean by “an objective, correlated input” is an unambiguously labeled measurement of an event, presented in the abstract. A waveform in a DAW (like I mentioned in the intro) fits this description. The timescale, “zero point,” and maximum levels are clearly identifiable. The waveform is a depiction of audio events over time, in a visual medium. It’s abstract.

In the same way, audio analyzers of various types can act as objective, correlated inputs. To the extent that their accuracy allows, they show the relative intensities of audio frequencies on an unambiguous scale. They’re also abstract. An analyzer depicts sonic information in a visual way.

When used alongside your ears, these objective measurements cause a very powerful effect: They calibrate your hearing. They allow you to attach objective, numerical information to your brain’s perception of the output from your ears.

And this makes it harder for your brain to lie to you. Not impossible, but harder.

Using measurement to confirm or deny what you think you hear is critical to doing your best work. Yes, audio-humans are involved in art, and yes, art has subjective results. However, all art is created in a universe governed by the laws of physics. The physical processes involved are objective, even if our usage of the processes is influenced by taste and preference. Measurement tools help us to better understand how our subjective decisions intersect with the objective universe, and to me, that’s really important.

If you’re wondering if this is a bit of a personal “apologetic,” you’re correct. If there’s anything I’m not, it’s a “sound ninja.” There are audio-humans who can hear a tiny bit of ringing in a system, and can instantly pinpoint that ring with 1/3rd octave accuracy – just by ear. I am not that guy. I’m very slowly getting better, but my brain lies to me like the guy who “hired” me right out of school to be an engineer for his record label. (It’s a doozy of a story…when I’m all fired up and can remember the best details, anyway.) This being the case, I will gladly correlate ANY sense with my hearing if it helps me create a better show. I will use objective analysis of audio signals whenever I think it’s appropriate, if it helps me deliver good work.

Of course the sound is the ultimate arbiter. If the objective measurement looks weird, but that’s the sound that’s right for the occasion, then the sound wins.

But aside from that, the goal is the best possible show. Denying oneself useful tools for creating that show, based on the bogus purity standards of a few people in the industry who AREN’T EVEN THERE…well, that’s ludicrous. It’s not their show. It’s YOURS. Do what works for YOU.

Call me an “engin-eyes” if you like, but if looking at a meter or analyzer helps me get a better show (and maybe learn something as well), then I will do it without offering any apology.

Offline Measurement

Accessible recording gear means you don’t have to measure “live” if you don’t want to.


I’m not an audio ninja. If you make a subtle change to a system EQ while the system is having pink noise run through it, I may or may not be able to tell that you’ve made a change, let alone tell you how wide or deep a filter you used. At the same time, I fully recognize the value of pink noise as an input to analysis systems.

“Wait! WAIT!” I can hear you shouting. “What the heck are you talking about?”

I’m talking about measuring things. Objectivity. Using tools to figure out – to whatever extent is possible – exactly what is going on with an audio system. Audio humans use function and noise generators for measurement because of their predictability. For instance, unlike a recording of a song, I know that pink noise has equal power per octave and all audible frequencies present at any given moment. (White noise has equal power PER FREQUENCY, which means that each octave has twice as much power as the previous octave.)
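The “equal power per octave” claim is easy to verify numerically. This sketch (Python/NumPy) sums FFT power in octave bands for white noise, and for a pink-shaped version of that same noise:

```python
import numpy as np

def octave_powers(x, fs):
    """Total power in each octave band from ~31 Hz up toward Nyquist."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    powers = []
    lo = 31.25
    while lo * 2 <= fs / 2:
        band = (freqs >= lo) & (freqs < lo * 2)
        powers.append(spectrum[band].sum())
        lo *= 2
    return np.array(powers)

fs, n = 48000, 1 << 20
rng = np.random.default_rng(0)
white = rng.standard_normal(n)

# Pink: shape white's spectrum so power falls 3 dB per octave.
W = np.fft.rfft(white)
f = np.fft.rfftfreq(n, 1.0 / fs)
W[1:] /= np.sqrt(f[1:])
pink = np.fft.irfft(W, n)

w = octave_powers(white, fs)
p = octave_powers(pink, fs)
print(np.round(10 * np.log10(w[1:] / w[:-1]), 1))  # ~ +3 dB per octave step
print(np.round(10 * np.log10(p[1:] / p[:-1]), 1))  # ~ 0 dB: equal power per octave
```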

If that paragraph sounded a little foreign to you, then don’t panic. Audio analysis is a GINORMOUS topic, with lots of pitfalls and blind corners. At the same time, I have a special place in my heart for objective measurement of audio devices. I get the “warm-n-fuzzies” for measurement traces because they are, in my mind, a tool for directly opposing a lot of the false mythology and bogus claims encountered in the business of sound.


Measurement is a great tool for dialing in live-sound rigs of all sorts. Because of its objectivity (assuming you actually use your measurement system correctly), it helps to calibrate your ears. You can look at a trace, listen to what something generating that trace sounds like, and have a reference point to work from. If you have a tendency to carve giant holes in a PA system’s frequency response when tuning by ear, measurement can help tame your overzealousness. If you’re not quite sure where that annoying, harsh, grating, high-mid peak is, measurement can help you find it and fix it.

…and one of the coolest things that I’ve discovered in recent years is that you don’t necessarily have to measure a system “live.” Offline measurement and tuning is much more possible than it ever has been before – mostly because digital tech has made recording so accessible.

How It Used To Be And Often Still Is

Back in the day, it was relatively expensive (as well as rather space-intensive and weight-intensive) to bring recording capabilities along with a PA system. Compact recording devices had limited capabilities, especially in terms of editing. Splicing tape while wrangling a PA wasn’t something that was going to happen.

As a result, if you wanted to tune a PA with the help of some kind of analyzer, you had to actually run a signal through the PA, into a measurement mic, and into the analysis device.

The sound you were measuring had to be audible. Very audible, actually, because a test signal has to drown out the ambient noise in the room to be really usable. If the measurement mic picks up sounds other than the test signal, the accuracy of your measurement is corrupted.
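You can put a number on that corruption. Uncorrelated noise adds to the test signal on a power basis, so a signal that is only X dB above the room reads high by 10·log10(1 + 10^(−X/10)) dB. A small Python sketch of that relationship (the function name is mine):

```python
import math

def level_error_db(snr_db):
    """How far (in dB) a measured level reads high when uncorrelated
    ambient noise sits snr_db below the test signal (power addition)."""
    return 10 * math.log10(1 + 10 ** (-snr_db / 10))

for snr in (3, 6, 10, 20):
    print(f"signal {snr:>2} dB above the room -> reads "
          f"{level_error_db(snr):.2f} dB high")
```

With only 10 dB of headroom over the room, every point on your trace can read almost half a decibel high – and unevenly so, since room noise isn’t flat.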

So, if you were using noise, the upshot was that you and everybody else in the room had to listen to a rather unpleasant blast of sound for as long as it took to get a reference tuning in place. That’s not much fun (unless you’re the person doing the work), and you can’t do it everywhere. Even with a system that could take inputs other than noise, you still had to measure and make your adjustments “live,” with an audible signal in the room.

Taking A Different Route

The beautiful thing about today’s technology is that we have alternatives. In some cases, you might prefer to do a “fully live” tuning of a PA system or monitor rig – but if you’d prefer a different approach, it’s entirely possible.

It’s all because of how easy recording is, really.

The thing is, an audio-analysis system doesn’t really care where its input comes from. The analyzer isn’t bothered about whether its information is coming from a live measurement mic, or from a recording of what came out of that measurement mic. All it knows is that some signal is being presented to it.

If you’re working with a single-input analyzer, offline measurement and tuning is basically about getting the “housekeeping” right:

  1. Run your measurement signal to the analyzer, without any intervening EQ or other processing. If that signal is supposed to give you a “flat” measurement trace, then make sure it does. You need a reference point that you can trust.
  2. Now, disconnect the signal from the analyzer and route that same measurement signal through the audio device(s) that you want to test. This includes the measurement mic if you’re working on something that produces acoustical output – like monitor wedges or an FOH (Front Of House) PA. The actual thing that delivers the signal to be captured and analyzed is the “device-under-test.” For the rest of this article, I’m effectively assuming that the device-under-test is a measurement mic.
  3. Connect the output of the device-under-test to something that can record the signal.
  4. Record at least several seconds of your test signal passing through what you want to analyze. I recommend getting at least 30 seconds of recorded audio. Remember that the measurement-signal to ambient-noise ratio needs to be pretty high – ideally, you shouldn’t be able to hear ambient noise when your test signal is running.
  5. If at all possible, find a way to loop the playback of your measurement recording. This will let you work without having to restart the playback all the time.
  6. Run the measurement recording through the signal chain that you will use to process the audio in a live setting.
  7. Send the output of that signal chain to the analyzer, but do NOT actually send the output to the PA or monitor rig.

Because the recorded measurement isn’t being sent to the “acoustical endpoints” (the loudspeakers) of your FOH PA or monitor rig, you don’t have to listen to loud noise while you adjust. As you make changes to, say, your system EQ, you’ll see the analyzer react. Get a curve that you’re comfortable with, and then you can reconnect your amps and speakers for a reality check. (Getting a reality check of what you just did in silence is VERY important – doubly so if you made drastic changes somewhere.)


So, all of that up there is fine and good, but…what if you’re not working with a simple, single-input analyzer? What if you’re using a dual-FFT system like SMAART, EASERA, or Visual Analyzer?

Well, you can still do offline measurement, but things get a touch more complicated.

A dual-FFT (or “transfer function”) analysis system works by comparing a reference signal to a measurement signal. For offline measurement to work with comparative analysis, you have to be able to play back a copy of the EXACT signal that you’ll be using for measurement. You also have to be able to play that signal in sync with your measurement recording, but on a separate channel.

For me, the easiest way to accomplish this is to have a pre-recorded (as opposed to “live generated”) test signal. I set things up so that I can record the device-under-test while playing back the test signal through that device. For example, I could have the pre-recorded test signal on channel one, connect my measurement device so that it’s set to record on channel two, hit “record,” and be off to the races.

There is an additional wrinkle, though – time-alignment. Dual-FFT analyzers give skewed results if the measurement signal is early or late when compared to the reference signal, because, as far as the analyzer is concerned, the measurement signal is diverging from the reference. Of course, any measured signal is going to diverge from the reference, but you don’t want unnecessary divergence to corrupt the analysis. The problem, though, is that your test signal takes time to travel from the loudspeaker to the measurement microphone. The measurement recording, when compared to the reference recording, is inherently “late” because of this propagation delay.

Systems like SMAART and EASERA have a way of doing automatic delay compensation in a quick and painless way, but Visual Analyzer doesn’t. If your software doesn’t have an internal method for delay compensation, you’ll need to do it manually. This means:

  1. Preparing a test signal that includes an audible click, pop, or other transient that tells you where the signal starts.
  2. After recording the measurement signal, you’ll need to use that click or pop to line up the measurement recording with the test-signal, in terms of time. The more accurate the “sync,” the more stable your measurement trace will be.
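As a side note, if you can load both recordings as raw audio data, cross-correlation will find that offset for you. Here’s a minimal numpy sketch – the signals and the 480-sample delay are stand-ins for the demo, not anything from a real measurement:

```python
import numpy as np

def find_delay_samples(reference, measurement):
    """Estimate how many samples the measurement lags the reference.
    A positive result means the measurement recording is late."""
    corr = np.correlate(measurement, reference, mode="full")
    return int(np.argmax(corr)) - (len(reference) - 1)

# Demo: a noise burst and a copy delayed by 480 samples
# (10 ms of propagation delay at a 48 kHz sample rate)
rng = np.random.default_rng(0)
reference = rng.standard_normal(10_000)
measurement = np.concatenate([np.zeros(480), reference])[:10_000]

offset = find_delay_samples(reference, measurement)
print(offset)  # -> 480; trim this many samples to line the files up
```

The peak of the cross-correlation lands at the lag where the two recordings overlap best, which is exactly the “sync” point you’d otherwise hunt for by eyeballing the click.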

If you’d rather not make your own test signal, you’re welcome to download and use this one. The “click” at the beginning is several cycles of a 2 kHz tone.

The bottom line is that you can certainly do “live” measurements if you want to, but you also have the option of capturing your measurement for “silent” tweaking. It’s ultimately about doing what’s best for your particular application…and remembering to do that “reality check” listen of your modifications, of course.

The SA-15MT-PW

You won’t be let down if you’re not asking too much.


I have a weakness for “upstarts.”

I love to find little companies that are doing their own thing and trying to have some kind of fresh take on this business. When I ran across Seismic Audio, I got a little infatuated with ’em. Here were some audio-gear humans that were doing something other than selling everybody else’s boxes. Here were some folks who had some interesting, hard to find, “nifties.”

Seriously – maybe I just wasn’t looking hard enough, but Seismic had a reel-mounted, 8-send/4-return snake with 50′ of trunk when nobody else did. And it was the perfect thing for what I needed at that moment. Neat!


Of course, when you’ve been at this kind of thing for a while, you develop a sort of BS detector for sales. You look at a page that’s trying to get you to buy equipment, and you instinctively throw out all the numbers and claims that you think are “best case scenarios.”

What other people call “horrific, crippling cynicism that threatens to poison all joy and blot out the sun” is what I call “being realistic about what the sales guys are saying.” (Welcome to the music business, boys and girls!) With a company like Seismic, my thought process goes like this: “These guys have some neat offerings, and the prices are quite low. In order to do that, they must be saving money somewhere. I should expect that some part of these products is going to be demonstrably ‘cheap,’ and not be disappointed when I discover that part.”

So – when it was time to buy some new monitor wedges (because half of the current set had committed full or partial suicide), I decided to take a chance on Seismic. My goal was to get some boxes that would be “enough” for a small stage, and not to worry about having varsity-level build quality or performance. The gear wouldn’t be moving around a lot, and the bands I work with are good about not abusing monitor wedges, so I was confident that a “middle of the road” unit from Seismic would work out.

Trying Not To Ask Too Much

As I looked around Seismic’s site, I found myself gravitating towards the SA-15MT-PW monitors. Because of their configuration, which puts the HF (high frequency) driver above the LF cone, they would be naturally “bookmatched” (vertically symmetrical) when operating in pairs.

In the absence of a specific claim one way or another, and at the box’s price point, I was very sure that the 15MTs were neither biamped nor equipped with fancy processing. I would essentially be receiving passive monitors that had an amplifier and input plate added. I was perfectly okay with this.

I hoped that the 15″ driver would provide an overall output advantage over a 12″ unit, especially since I just decided to not care about anything below 75 – 100 Hz. In an environment like the one where I work, using the onstage monitoring rig to create thundering bass and kick is not a priority. The monitors are mostly there for the vocalists, with instrument midrange as an added bonus.

Seismic’s site claims that the box will play down to 35 Hz, but let’s be real. Making 35 Hz at “rock show volume” is hard enough with a properly built “mid-pro” sub, and 15MTs are affordable drivers in a small box. I’m sure the LF cone will vibrate at 35 Hz, but you’re not going to get much usable output down there. See? Being cynic – ah – REALISTIC is good for you.

I’ve also become very good at ignoring “peak” numbers in power ratings. Whenever you see “peak,” you should mentally change the wording to “power that’s not actually available to do anything useful.” What you really want to find is the lowest number with “continuous” or “RMS” next to it, and assume that’s what you’re going to get. SA claims that their powered 15MT units have an amplifier that’s capable of 350 watts continuous. I think this may be an increase from when I was shopping, as I remember something more along the lines of 250 watts.

What I was most prepared to accept was the published sensitivity rating and maximum SPL. With one watt (actually, 2.83 VRMS, but that’s a whole other discussion), the boxes were supposed to be capable of 97 dB SPL, continuous. The maximum SPL was listed as 117 dB SPL.

This seemed pretty reasonable to me, and here’s why: With a 97 dB sensitivity, 250 watts of continuous input works out to a calculated maximum of about 121 dB SPL. That they published a maximum LOWER than that (117 dB) actually reassured me. It suggested that the driver could only really make use of about half the available continuous power – and a number below the theoretical ceiling gives the impression that the boxes were actually measured as being able to hit it.
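The arithmetic behind that paragraph is just the sensitivity plus ten times the common log of the power. A quick sketch (the function name is mine):

```python
import math

def calculated_max_spl(sensitivity_db, watts):
    """Theoretical continuous SPL at 1 m: the 1 W / 1 m sensitivity
    plus 10 * log10(continuous power in watts)."""
    return sensitivity_db + 10 * math.log10(watts)

# 97 dB sensitivity driven with the full 250 W continuous:
print(round(calculated_max_spl(97, 250)))  # -> 121

# The published 117 dB max implies roughly 100 W of usable power:
print(round(calculated_max_spl(97, 100)))  # -> 117
```

(And coupling a second, identical box adds another 3 dB, which is where the “120 dB SPL out of the pair” figure comes from.)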

I thought to myself: “If I put two boxes together, I can get 120 dB SPL continuous out of the pair. That’s enough for our stage. If somebody wants more than that, then they’re playing the wrong venue. I’m going to order these.”

I got free shipping and a discount. That felt pretty good.

A Short Honeymoon

When the 15MTs arrived, I was encouraged. They weren’t too heavy, and you could actually lift and carry them in a not-awkward way by using the handles. I wasn’t wild about the way that an XLR connector would stick out of the side-mounted control panel when everything was connected, but I’m pretty much convinced that getting a powered speaker’s input panel to be perfect is an impossibility. I tried the boxes with a bit of music, and the sonics seemed fine. The playback was clean, and the overly present (too much high-mid) timbre of the boxes was easily tamed with the onboard EQ. That’s a good sign – if simple, broad-brush EQ can fix an issue, then the problem isn’t really bad.

The units were put to work immediately in a festival setting. The venue was hosting an annual showcase of local talent, and this seemed like an ideal way to put the monitors through their paces. They’d need to perform at both quiet and moderately loud volumes, and handle a lot of different acts without a hiccup.

The first day went by without a problem.

The second day wasn’t so good.

The high-energy point in the night was being provided by a band with strong roots in both country and rock. Even in an acoustic setting, this part of the show was meant to be pretty “full tilt” in terms of feel and volume. I got the lead singer’s vocal switched up to the necessary level – and that’s when I noticed the problem.

The 15MTs were distorting, and distorting so audibly that the “crunch” could clearly be heard through the cover that FOH (Front Of House) was providing. I was very, very embarrassed, and made it a point to approach the lead singer with an apology. I assured him that I knew the sound was unacceptable, and that I wasn’t just ignoring it.

He was very gracious about the whole thing, but I was not happy.

Oversold On The Power

With what seem to be updated specs on the Seismic Audio site, I can’t be sure that the SA-15MT-PW units currently being sold are the same as the 15MTs that I was shipped. What I can say, though, is that the units I was shipped do not appear to meet the minimum spec that was advertised at the time.

Not the hyped specifications – like I said, I ignore those as a rule – the MINIMUM spec was not achieved.

I’m about 90% sure that I’m right about this. The 10% uncertainty comes from the fact that I don’t exactly have NASA-grade measurement systems available to me. I can’t discount the issue that significant experimental error is probably involved here. At the same time, I don’t think that my tests were so far off as to be invalid.

After empirically verifying that sufficient volume with a vocal mic caused audible distortion (and at a volume that I felt was “pretty low”), I decided to dig deeper.

I chose a unit out of the group, removed the grill, and pulled the LF driver out of the enclosure. I then ran a 1 kHz tone to the unit, and increased the level of that tone until I could clearly hear harmonic distortion. The reason for doing this is that, for a single-amped box equipped with a 1″-exit HF driver, 2+ kHz distortion components will be audible in the HF section without being incredibly loud. The small HF driver is probably crossed over at 2 kHz or above, so the amp going full-bore with a 1 kHz input is unlikely to cook the driver.

Having found the “distortion happens here” point, I then rolled the test tone down to 60 Hz. My multimeter is designed to test electrical outlets in the US, and that means that its voltage reading is only likely to be accurate at the US mains electrical frequency of 60 Hz. I connected my multimeter’s probes to the LF hookup wires, and got this:

As the image states, 32.9 volts squared over 8 ohms (the advertised nominal impedance of the drivers in the unit) works out to be 135 watts…and that is with VERY significant distortion. In general, when an audio manufacturer claims a continuous or RMS power number, what that means is the unit can supply that power without audible distortion. In this case, however, that didn’t hold.
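For the record, that figure is just the familiar P = V²/R, assuming the advertised 8-ohm nominal load:

```python
voltage_rms = 32.9       # volts RMS, measured across the LF driver leads
nominal_impedance = 8.0  # advertised nominal impedance, in ohms

# P = V^2 / R for a resistive 8-ohm load
watts = voltage_rms ** 2 / nominal_impedance
print(f"{watts:.1f} W")  # -> 135.3 W
```

(A real driver’s impedance wanders around its nominal value with frequency, so treat this as a ballpark – but the ballpark is nowhere near 250 watts.)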

To be brutally frank, if this measurement is correct then there is NO PHYSICAL WAY that the units I was shipped have 250 watts of continuous power available, with the peaks exceeding that number. You might be able to get 250 watts continuous out of the amp if you drove it as far as possible into square-wave territory, but you wouldn’t want to listen to anything coming out of the monitors at that point.

Oversold On The SPL

After reconnecting and reseating the LF driver, I ran pink noise into the unit at the same level as the test tones. I then got out my SPL meter, and guesstimated what 1 meter from the drivers would be. Here’s what I got:

If the claimed maximum SPL was 117 dB, then this is actually a pretty consistent reading with the box only being able to do about half of what was advertised. Even so, this number was generated by a unit that was beyond its “clean” output capability.

Now – again – let’s be very clear. I can only speak about the specific units that were shipped to me. It would be wrong to say that these results can be categorically expected across all of Seismic’s product lines. It would also be wrong to say that the 15MT-PWs being shipped today are definitely the same design as what was shipped months ago. Products can change very fast these days, and it may be that Seismic has upgraded these units.

I also want to be clear that, dangit, I’m rooting for Seismic Audio. I love scrappy, underdog-ish businesses that carve out a space for themselves. Heck, maybe a parts manufacturer sent them the wrong amplifier plates. Maybe the passive versions of these boxes (and every other speaker they sell) meet the minimum spec. The only way to know would be to buy and test a sample of everything.

Still, in this particular case, the products I was shipped did not live up to what they were advertised to do. They’re not useless by any means, but they can’t quite cover the bases that I need them to cover. I’m planning on finding them a good home with people who can actually put some miles on ’em.

…and, Seismic folks, if you read this: I don’t want my money back, or I would have asked when I still could. I do want you guys to keep doing your thing, and I also want you to be vigilant that the products you ship actually meet the minimum spec that you list on your site.