Maybe a little, but not much.
The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.
A perennial moan-n-groan amongst pro-audio types is: “Ya cain’t trust dem portable music players, Sonny!” At the core of this angst is the idea that the inexpensive output circuitry of iPods, phones, tablets, and [insert whatever else here] simply cannot handle audio very well. It MUST be doing something nasty to the passing signal. It’s an affront to plug such a device into something serious, like a mixing console. An affront, I say!
But here’s the thing.
Up until recently, I had never seen any kind of measurement made to prove or disprove the complaint. No numbers. Nothing quantitative at all. Just an overall sense of hoity-toity superiority, and the odd anecdote in the vein of “We plugged the music player into the console, and I swear, Led Zeppelin didn’t sound as zeppelin-y through the thing.” I say “until recently” because of a page from this ProSoundWeb thread. Scroll down a bit, and you’ll find a link to a series of honest-to-goodness measurements on the iPhone 5.
The short story is that the iPhone 5 has output that is basically laser-flat from DC to dog-whistles. It does roll off a tiny bit above 5 kHz, but the curve is on the order of half a decibel per octave. If you plug the thing into a higher-impedance-than-headphones input (as you would if you were running the phone to a mixing console), the phone is definitely NOT the limiting factor in the audio.
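To put that roll-off in perspective, here's some back-of-envelope arithmetic, assuming the half-decibel-per-octave figure holds all the way to the top of the audible band:

```python
import math

# Octaves between the start of the roll-off (~5 kHz) and 20 kHz:
octaves = math.log2(20000 / 5000)  # 2.0 octaves

# At roughly 0.5 dB per octave, the total loss at 20 kHz is:
loss_db = 0.5 * octaves
print(loss_db)  # 1.0 dB down -- a very gentle shelf by any standard
```

About 1 dB down at 20 kHz is well below what most listeners can pick out in program material.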
Seeing that measurement inspired me to pull out my Samsung Galaxy SIII and run my own series of tests.
The first thing I did was to get a sample of pink noise, put the WAV file in my phone’s storage, and play that audio back to my computer. In the process of getting the recorded signal from the phone aligned with the original noise, I heard something very curious:
When the start points were aligned and summed, there was the strange effect of a downward-sweeping comb filter. Not steady comb-filtering…SWEEPING. (Like a flanger effect.) Zooming into the ends of the audio regions, I could clearly see that the recording from the phone was ending earlier than the reference file. The Galaxy was very definitely not playing the test signal at the same speed that the computer played the original noise. On a hunch, I set the playback varispeed on the phone recording to 0.91875 of normal. The comb-filter sweep effect essentially disappeared.
See, my hunch was that the phone’s sampling rate was 48 kHz instead of 44.1 kHz. My hunch was also that the phone was not sample-rate converting the file, but just playing it at the higher rate. That’s why I chose the 0.91875 varispeed factor. Divide 44100 by 48000, and that’s the number that comes out – which would be the ratio of playback speeds if no rate-conversion was going on.
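The arithmetic behind that varispeed factor is simple enough to sketch. (The 48 kHz playback rate is my hunch, not something I pulled out of the phone's documentation.)

```python
# If a 44.1 kHz WAV is pushed out at 48 kHz with no sample-rate
# conversion, every sample occupies less time than intended, so the
# audio plays fast and finishes early.
source_rate = 44100    # rate the WAV file was authored at
playback_rate = 48000  # rate the phone's output seems to run at (my hunch)

# Varispeed factor needed to line the recording back up with the original:
varispeed = source_rate / playback_rate
print(varispeed)  # 0.91875

short_length = 60 * varispeed  # a 60-second file plays in about 55.1 seconds
```

That ~8% speed error is also a pitch shift of about a semitone and a half, which is why the misalignment was audible as a sweeping comb filter rather than a static one.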
So, in the case of WAV files, the phone may very well not be playing them back at the speed it ought to be. That IS something to think about, although it’s hardly a fatal problem if the discrepancy is from 44.1 k sampling to 48 k. Also, that’s not a problem with audio circuit design. It’s a software implementation issue.
In the end, I ran the dual-FFT analysis with the phone audio playing at 1X speed, because the “corruption” introduced by the time-stretching algorithm was enough to make the measurement more uncertain (rather than less). Uncertainty in measurements like these manifests the same way as noise does. It causes the trace to become “wild” or “fuzzy,” because the noise creates a sort of statistical perturbation that follows the shape of the curve. The more severe that perturbation is, the tougher it is to read the measurement in a meaningful way.
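The averaging behavior described above can be sketched in a few lines. This is a toy model, not the analyzer I actually used: a "reference" signal, a "measured" copy with uncorrelated noise added, and an averaged dual-FFT transfer-function estimate. More averages calm the trace down, because the noise terms tend to cancel in the averaged cross-spectrum.

```python
import numpy as np

rng = np.random.default_rng(0)
n_fft, n_avg = 1024, 64
n_bins = n_fft // 2 + 1

# Accumulate averaged cross- and auto-spectra across many FFT frames.
cross = np.zeros(n_bins, dtype=complex)
auto = np.zeros(n_bins)
for _ in range(n_avg):
    ref = rng.standard_normal(n_fft)                # "known" stimulus
    meas = ref + 0.3 * rng.standard_normal(n_fft)   # device output + noise
    R = np.fft.rfft(ref)
    M = np.fft.rfft(meas)
    cross += M * np.conj(R)   # cross-spectrum: noise averages toward zero
    auto += np.abs(R) ** 2    # reference auto-spectrum

# Averaged transfer-function estimate; for a clean wire this hugs 0 dB.
H = cross / auto
mag_db = 20 * np.log10(np.abs(H))
# With n_avg = 1 the same trace is "wild" and "fuzzy"; at 64 averages the
# statistical perturbation shrinks by roughly sqrt(64) = 8x.
```

The same mechanism explains why I cranked the averaging number way up for the phone measurement below.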
Here’s what I got in terms of the phone’s frequency response:
You can see what I mean in terms of the noise. Especially at higher frequencies, the measurement shows a bit of uncertainty. I used a very high averaging number in order to keep things under control.
In any case, the trace is statistically flat from 20 Hz to 20 kHz. The phone’s output circuitry sounds just fine, thanks.
With the problems introduced by playback timing, I wanted to also try tests with a signal that “self references.” FFT noise fits this description. Run through a properly configured analyzer, FFT noise (which sounds like a series of clicks) does not require a “known” signal for comparison. Its own properties are such that, when measured correctly, the unaltered signal should be completely flat.
As an aside, you may remember me talking about FFT noise in a bit more detail here.
In the article I just linked, I didn’t get into one of the main weaknesses of FFT noise, and that is its susceptibility to external noise. FFT noise is really great when you want to test, say, plugins used in a DAW, because digital silence is actual silence – 0 signal of any kind. There’s no electronic background noise to consider. The problem, though, is that FFT noise is a series of spaced “clicks.” In the empty spaces, anything other than digital silence is incredibly apparent, and easily corrupts the measurement trace.
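To show why a click-type stimulus is self-referencing, here's a bare-bones stand-in: a single unit impulse per analysis window. (Real FFT noise as generated by analyzers is fancier than this, but the principle is the same.)

```python
import numpy as np

n_fft = 1024

# One unit impulse per FFT window: heard in sequence, this is the
# "series of clicks." The magnitude spectrum of a single window is
# perfectly flat, so the signal needs no reference copy for comparison.
click = np.zeros(n_fft)
click[0] = 1.0
spectrum = np.abs(np.fft.rfft(click))
# Every bin is identical; any deviation in a measurement is therefore
# the device under test -- or, as noted above, whatever noise sneaks
# into the "silent" gap between clicks.
```

All the signal energy is crammed into one sample, which is exactly why anything other than true silence in the gaps shows up so readily in the trace.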
Even so, I wanted to give the alternate stimulus a try.
This is what I got in Reaper’s Gfxanalyzer, which is meant to “mate” well with FFT noise:
Again, the trace is statistically flat, although low-frequency noise is very apparent.
For an alternate trace, I tried to configure Visual Analyzer to play nicely with the noise.
Once more, the trace is noisy but otherwise flat.
It’s very important that I recognize a huge limitation in all of this: The sample size is very low. One iPhone 5 and one Samsung Galaxy SIII, with one measurement each, do not properly constitute a representative sample of every phone and media player that might ever get plugged into a mixing console.
At the same time, actually measuring these devices suggests that the categorical write-off of portable players as being unable to pass good audio is just worry to no purpose. There are probably some horrible media players out there, with really bad output circuitry. However, half-decent analog output stages have reached an implementation point that I would venture to call “trivial.” I would further guess that most product-design engineers are using output circuits that are functionally identical to each other. When it comes to plugging a phone, tablet, or MP3 player into a console, I simply can’t find a reason to be up in arms about the quality of the signal being handed to the output jack. I might worry a bit about the physical connection provided by that jack, but the signal on the contacts is a different matter.
I’m in agreement with the sentiments expressed by others. If the audio from a media device doesn’t sound so good, the problem is far more likely to be the source material than the output circuitry.
The phone is probably fine. What’s the file like? What’s the software implementation like?