Well, after finishing the last couple of posts, I sent the Squeezebox Receiver back to fordgtlover. Thanks for the opportunity to test out another member of the SB family! I very much appreciate the generosity!
Before the Receiver left on the long trip back to Australia, I decided to take a family portrait:
I remember patiently awaiting the arrival of the Slim Devices Squeezebox 3 back in late 2005 after putting in the pre-order. It's thanks to this little device that my audio listening habits changed forever away from physical media. Despite the ups and downs with various versions of the server software, these little boxes have certainly made access to my own music remarkably easy in a way almost unimaginable previously. My father has access to my music library across the city through his Touch and wherever I go in my travels, music streams to the PC/Mac as well. The stability of my server these days with >5000 albums is fantastic with uptimes of months (this is with Windows, and the only reason it goes down these days is because I install Windows updates every few months).
Sadly, the Squeezebox line was discontinued in August 2012 by Logitech.
Looking back, I'm still impressed by the technology put together by the Slim Devices team to get the infrastructure in place, and later by the Logitech team, especially with the Touch. Even today, the objective analogue audio output performance of the Transporter (released September 2006) remains superb, and as I look at all that can be done with the Touch (released April 2010) - the ability to handle 24/192 (EDO kernel), communicate with a number of USB DACs, and transport DSD64 via DoP - it's amazing what can be done with an Open Source architecture where the "community" is empowered to maximize the machine's capabilities. Have a look at the used prices of the Touch these days; it's a reminder of just how much value it offered audiophiles. The computer audiophile community needs more devices of this caliber; devices that can "raise the bar" without needing to be valued as luxury or "artisan" items, when it's just "good sound" and "good functionality" that I believe many of us are after.
Looking ahead, it's great to see ongoing development in the Squeezebox community towards replacement hardware and software - check out the communitysqueeze.org FAQ. Wonderful to see the DIY machines being put together out there (check out the picture gallery!). With the availability of small, low-powered but reasonably fast computers like the Wandboard and the already ubiquitous ports of client software like Softsqueeze/Squeezeplay/Squeezeslave, Squeeze Player (Android), and Squeezecast (iPod/iPhone), I have a feeling that I'll be running my Logitech Media Server (or some equivalent) for the foreseeable future...
... Now, if they could support multichannel FLAC/WAV and native stereo/multichannel DSD (I still think there's a need for better file formats than .dsf and .dff), I think we'd have all of audio covered. :-)
Vive le Squeezebox!
----
Well, it's summertime in the northern hemisphere and that means the kids are out of school. The heat wave has finally arrived here in Vancouver. Time for some vacation, BBQs, camping, lazy afternoons, general R&R, and of course good summer tunes (Katrina & The Waves' "Walking On Sunshine" is playing in the background currently). :-)
I'm also starting to go through Ethan Winer's 2012 book "The Audio Expert: Everything You Need To Know About Audio". I've only just started, and already it's a good read, written in an accessible manner for those interested in the science and technical aspects of audio without going deeply into the math. The Kindle edition (IMO a bit pricey) along with some mindless fiction like Dan Brown's Inferno will likely accompany me on the summer trips coming up.
As always, enjoy the music everyone!
Sunday, 30 June 2013
Friday, 28 June 2013
MEASUREMENTS: Do bit-perfect digital S/PDIF transports sound the same?
Using suggestions from this page, the Touch can be used to transport DSD to the TEAC as DoP wrapped in a 24/176 FLAC file through Triode's USB kernel. Neat hack - proof of bit-perfect transmission. Unfortunately the files are huge, so I'll likely await a more efficient solution. Note that this is NOT the test setup described below, just something cool. :-)
Let's talk about digital transports for a bit.
Most of us, I'm sure, remember those days when the only way to get digital data to an outboard DAC was through a CD transport. Although we can still resort to a CD reader, I suspect that many of us here have gone mostly over to the computer audio realm with data on hard drives or flash/SSD devices. Thankfully gone are the days when data read off the CD could be inaccurate, requiring realtime interpolation, or when playback was susceptible to mechanical failure of the CD drive mechanism (though hard drive failures and the need for backups present another challenge).
I have shown that bit-perfect data can be transferred and played back without any concern off a USB asynchronous interface (eg. the various Mac and Windows software players and laptops). While I have the Squeezebox Receiver still on hand, let us have a look at the effect of using different transport devices with S/PDIF interfaces objectively.
Remember to keep in mind that the S/PDIF interface, unlike packetized asynchronous USB (or ethernet), conducts its data transfer in a unidirectional serial fashion, formalized around 1985 when the AES3 (AES/EBU) standard was also laid down. As I think many of us have read, S/PDIF combines the data and clock signals using "biphase mark code", and it is this "feature" of the interface that has resulted in many an audiophile nightmare regarding timing issues - jitter concerns especially are well publicized. This is in large part the basis for the well-known 1992 paper by Dunn and Hawksford asking "Is The AES/EBU/SPDIF Digital Audio Interface Flawed?". (Of course, the topic of jitter can be very complex as described in the paper, going far beyond what we need to concern ourselves with here.)
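For readers curious what "combining data and clock" actually looks like, here's a minimal Python sketch of biphase mark coding - my own illustrative implementation of the general scheme, not code from any S/PDIF chip (real S/PDIF also adds preambles and channel-status bits not shown here):

```python
def biphase_mark_encode(bits):
    """Encode a bit sequence with biphase mark code (BMC) as used by
    S/PDIF: every bit cell begins with a level transition, and a '1'
    adds a second transition mid-cell. Returns two half-cell levels
    (0/1) per input bit."""
    level = 0
    out = []
    for b in bits:
        level ^= 1          # transition at the start of every cell
        out.append(level)
        if b:               # a '1' bit transitions again mid-cell
            level ^= 1
        out.append(level)
    return out

# Because the line never sits at one level for more than two half-cells,
# the receiver can recover the clock from the data stream itself.
print(biphase_mark_encode([1, 0, 1, 1, 0]))  # [1, 0, 1, 1, 0, 1, 0, 1, 0, 0]
```

That clock recovery is exactly why transport timing leaks into the DAC: the receiver's notion of "when" comes from this embedded signal rather than an independent reference.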
With that said, let's be practical and see what the results look like with the different devices using TosLink and coaxial interfaces (with comparison to asynchronous USB)...
Setup:
For this round of testing, I decided to take a break from the TEAC UD-501 DAC and go back to the ASUS XONAR Essence One for a bit. Although I have been listening to the TEAC a lot in the last couple of months, I still use the ASUS Essence One daily at my computer workstation, and it was just more practical to run these tests there. Despite my concerns around the upsampling feature of the ASUS, it measures well and sounds excellent.

Here's the hook-up:
* Transport device * -> Coaxial/Toslink/USB cable -> ASUS Essence One -> Shielded 3' RCA -> E-MU 0404USB -> 6' Belkin Gold USB -> Win8 laptop
Coaxial cable = 6' Acoustic Research
TosLink cable = 6' Acoustic Research
USB cable = 6' Belkin Gold
Transport devices tested:
1. Squeezebox 3 -> Coax / TosLink
2. Transporter -> Coax / TosLink
3. Receiver -> Coax / TosLink
4. Touch -> Coax / TosLink
5. Laptop -> CM6631A Async USB to S/PDIF -> Coax / TosLink
6. Laptop -> Async USB direct to ASUS Essence One
As you can see I've got the host of Squeezebox devices on the test bench along with the usual two ways to connect the computer's USB port to DACs (direct or through USB-S/PDIF converter).
I. RightMark Analysis:
Since there are so many devices/combinations, I'll show the results a few at a time to demonstrate what was found. Let us start with the results of the four Squeezebox devices. I decided to "max out" the capabilities of the SB3 and Receiver by using a 24/48 sampling rate:

Numerically, not much difference... From my subjective listening during the tests, I would agree with these numbers in saying that "it sounds like the Essence One DAC"; there are more similarities in the sound than subjective differences.
Let's have a look at the frequency response graph because there does appear to be some difference - here's coaxial interface only:
Hmmm, interesting! Small differences at the top end. Let's zoom into that top end and have a good look:
Notice that the shape of the curves suggests slightly earlier roll-off with some devices. The flattest, most extended frequency response (and possibly most "accurate") is the Transporter, followed by the SB3, then Touch, and Receiver. Remember, we are talking about only a 0.15dB difference between the Transporter and Receiver at 20kHz; not perceptible IMO, but since we're looking for evidence of a difference, useful to note.
Let's now include the TosLink measurements:
Even though we ran out of colors, it doesn't matter: there are still only 4 visible curves. There is no difference between coaxial and TosLink; for each device the two traces overlay essentially perfectly.
Let us now add the computer-USB interfaces (take away the TosLink since no difference):
As you can see, the flattest response curves come from USB direct and Transporter. Here's the ranking: Transporter, ASUS USB direct, SB3, Touch, CM6631A, and Receiver. We'll talk more about this later, just keep in mind then that frequency responses are *slightly* different between transports despite bit-perfect settings...
The rest of the RightMark graphs - no significant difference:
II. Jitter Analysis:
The Dunn J-Test is used to stimulate data-correlated jitter. To keep things manageable, I'll group the results into 16-bit and 24-bit side-by-side first; let's just look at the coaxial interface here:
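For those who haven't seen it, the J-Test stimulus itself is simple to construct. Here's a rough Python sketch of the general recipe (the exact levels and bit depths are my assumptions about the typical construction, not the specific test files used for these measurements): a sine at fs/4 combined with a tiny square wave at fs/192 that constantly toggles the low-order bits, maximizing data-correlated activity on the S/PDIF line.

```python
import numpy as np

def j_test(fs=48000, bits=24, seconds=1.0):
    """Sketch of a Dunn J-Test style stimulus: a sine at fs/4 plus an
    LSB-level square wave at fs/192 (250 Hz at fs=48k). The square wave
    forces worst-case data-dependent transitions in the serial stream."""
    n = int(fs * seconds)
    t = np.arange(n)
    tone = 0.5 * np.sin(2 * np.pi * (fs / 4) * t / fs)
    lsb = 1.0 / (2 ** (bits - 1))                      # one LSB at full scale
    square = lsb * np.sign(np.sin(2 * np.pi * (fs / 192) * t / fs))
    sig = tone + square
    return np.round(sig * (2 ** (bits - 1))).astype(np.int32)

s = j_test()
print(len(s))  # 48000
```

In the spectrum of the DAC's output, any data-correlated jitter shows up as sidebands spaced at multiples of that 250 Hz modulation around the fs/4 carrier.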
A. Squeezebox 3 (16-bit / 24-bit):
B. Transporter (16-bit / 24-bit):
C. Receiver (16-bit / 24-bit):
D. Touch (16-bit / 24-bit):
E. Laptop -> CM6631A USB to coaxial (16-bit / 24-bit):
F. Laptop -> USB direct (16-bit / 24-bit):
Objectively, it looks like the CM6631A USB-to-S/PDIF and USB direct 24-bit graphs are cleaner, and of the Squeezebox devices, the Transporter on the whole seems to have the least data-correlated jitter. Even at its worst, the sidebands around the primary signal for the SB3 are down around -120dB. Is this a problem? I doubt it, since auditory masking will easily render this inaudible (assuming one could even hear down that low around the 12kHz pitch). Furthermore, in theory the J-Test should create a "worst case scenario" for jitter which is unrealistic for real music.
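To put a -120dB sideband in time-domain terms, we can use the standard small-angle FM approximation (a textbook relation, not the exact formula from Dunn's paper): for small sinusoidal jitter, the sideband-to-carrier ratio is roughly 20·log10(π·f_carrier·τ_peak), so the implied jitter is tiny:

```python
import math

def jitter_from_sideband(sideband_db, carrier_hz):
    """Estimate the peak sinusoidal time jitter implied by a J-Test
    sideband, using the small-angle FM approximation:
    sideband/carrier (dB) ~= 20*log10(pi * f_carrier * tau_peak)."""
    return 10 ** (sideband_db / 20) / (math.pi * carrier_hz)

# Sidebands ~120 dB below a 12 kHz carrier imply only tens of picoseconds:
tau = jitter_from_sideband(-120, 12000)
print(f"{tau * 1e12:.1f} ps")  # ~26.5 ps
```

Tens of picoseconds of sinusoidal jitter is far below published audibility thresholds, which is why I'm not losing sleep over these plots.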
Here's the difference between coaxial vs. TosLink:
A. Squeezebox 3 24-bit, Coaxial vs. TosLink:
B. Transporter 24-bit, Coaxial vs. TosLink:
C. Receiver 24-bit, Coaxial vs. TosLink:
D. Touch 24-bit, Coaxial vs. TosLink:
E. CM6631A USB to S/PDIF 24-bit, Coaxial vs. TosLink:
In general, we can say that indeed TosLink is worse (remember however that TosLink is immune to electrical noise thanks to galvanic isolation, so there are some positives in this regard). Interestingly this is very clear with the Transporter! However, increased jitter with TosLink is not a given, because the SB3 and Receiver seem to behave in the opposite fashion and show fewer jitter artifacts with the TosLink interface.
Remember that all of these jitter graphs are indicative of the interface between the transport device connected to the Essence One DAC. The graphs could be different with another DAC since much of the result will depend on the accuracy of the DAC in extracting the clock information and what other steps it might take (eg. data buffering, reclocking) to further stabilize the timing - it's not just about the transport device.
III. DMAC Protocol
So, up to now we can see differences between bit-perfect devices with RightMark, and obvious differences with the J-Test. How do they sound? Let's see what the computer "hears". For this test, I am using the Transporter playback as reference against which all the others are being compared.
First, I must admit that I'm not as confident about these numbers as I am of the graphs and plots above, simply because it was really tough getting this done properly! From previous experience with the Audio DiffMaker program, results can vary depending on environmental factors like equipment temperature and subtle "sample rate drift" over time. With each transport measured, cables needed to be reconnected and settings changed, and for each condition I ran the test 3 times to get a sense of the "range" of results. Admittedly, I made an error with the 'USB direct' measurements and did not realize it until after the fact, so I have not included those results here (foobar was accidentally set to output 16-bit instead of 24-bit).
The bottom line is that the results suggest that each device "sounded" different according to the computer. Instead of the usual high "correlated null depth" like in my previous tests with player software around 80-90dB (similar to the Transporter tested against itself above), we're seeing numbers in the 60-80dB range between transports. The computer thought the Squeezebox 3 sounded the most different from the Transporter. Good to see that it was able to detect the Receiver playing 320kbps MP3 as "most different" (ie. lowest correlation) to provide a point of reference. A reminder, this measurement is logarithmic so the actual mathematical difference between the MP3 sample compared to the others is larger than what it might look like on the graph.
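The "correlated null depth" idea can be sketched in a few lines of Python. This is my own crude illustration of the concept, not Audio DiffMaker's algorithm - the real tool also corrects time alignment and sample-rate drift, which is exactly what made these measurements fussy:

```python
import numpy as np

def null_depth_db(ref, test):
    """Crude 'null depth': gain-match the test capture to the reference
    by least squares, subtract, and report how far (in dB) the residual
    sits below the reference signal."""
    gain = np.dot(ref, test) / np.dot(test, test)   # least-squares gain match
    residual = ref - gain * test
    return 10 * np.log10(np.mean(ref ** 2) / np.mean(residual ** 2))

# Two nearly identical "captures" null deeply; a real difference shows
# up as a shallower null.
rng = np.random.default_rng(0)
a = rng.standard_normal(48000)
b = a + 1e-4 * rng.standard_normal(48000)
print(round(null_depth_db(a, b)))  # ~80
```

A deeper null means the two captures are more alike; the 60-80dB range between transports above therefore indicates small but real analogue differences.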
Remember that this is a measurement of the difference between each device and the Transporter connected to the Essence One. There is no implication here of whether one sounds "better" than another since that would of course be the listener's subjective judgment call.
IV. Summary
Let me see if I can summarize this based on the results here along with what I know/believe over the months of testing as applicable... Q&A format:
Q: Do all bit-perfect transports sound the same?
A: Based on the results, not exactly. Even though bit-perfect (I have verified this with the Touch, Transporter, SB3, CM6631A, ASUS USB direct with ASIO), small differences in frequency response can be measured. Furthermore, jitter analysis clearly looks different between devices and this also varies between coaxial and TosLink interfaces (with TosLink generally worse than coaxial for jitter). Likewise, the DMAC test also suggests the level of audio correlation when playing musical passages is not as high as previous tests with bit-perfect software or decoding lossless compression. Within the Squeezebox family, not surprisingly the Transporter performed the most accurately with flattest frequency response and lowest coaxial S/PDIF jitter, although I was quite surprised by the stronger TosLink jitter.
Q: Why do you think the frequency response varies?
A: My belief is that this is not a jitter issue. The reason I say this is that there appears to be no difference between coaxial and TosLink even though jitter varies between the two interfaces as demonstrated by the Dunn J-Test. I believe this is the result of mild clock speed / data rate differences between the transport devices. Since the word clock has to be recovered from the S/PDIF signal, clock accuracy is dependent on the transport's internal clock - some transports may be timed a little quicker, some a little slower, and the DAC has to adjust to this (of course the E-MU 0404USB ADC measuring the audio has a part to play in setting where it believes the roll-off should be). This frequency roll-off variability is not seen with laptops connected to an asynchronous USB device, for example (that is of course the point of being asynchronous: the recipient is not time-coupled to the data sender, working off its own clock and telling the sender to speed up or slow down as necessary).
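The mechanism is easy to quantify: if the DAC is slaved to a transport clock that runs fast or slow by some parts-per-million, every reproduced frequency (including where the reconstruction filter's roll-off lands) scales by the same factor. A small sketch, with the 100 ppm figure purely as an illustrative assumption:

```python
def apparent_frequency(f_signal_hz, clock_error_ppm):
    """If the S/PDIF source clock is off by clock_error_ppm, a DAC
    slaved to it reproduces every frequency scaled by the same factor,
    shifting the measured roll-off point along with it."""
    return f_signal_hz * (1 + clock_error_ppm / 1e6)

# A clock 100 ppm fast moves a 20 kHz tone (and the roll-off) up by 2 Hz:
print(round(apparent_frequency(20000, 100), 3))  # 20002.0
```

Note how small the effect of a typical crystal tolerance is - consistent with the sub-0.2dB response differences measured here rather than anything gross.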
Q: But surely different/better/more expensive digital S/PDIF cables can help?
A: No. I don't think so. As I have measured and discussed before, digital cables make no substantial difference to timing/jitter as far as I can tell. Even though very long or poorly constructed cables may add to the jitter, the difference IMO is much less than what I'm showing here and as far as I can tell is irrelevant for a reasonable length of decently constructed coaxial/TosLink.
Q: Those jitter plots look nasty... I bet I can easily tell the transports apart!
A: Of course, anyone can claim anything over the Internet or in print, since there is rarely if ever any actual double-checking with sound methodology or formal peer review (print magazines are obviously not scientific journals). Although I have shown these measurable differences, as a (currently) 41-year-old male who works in an office environment, has generally avoided very loud concerts, and has a hearing frequency threshold around 16kHz, I do not believe I would be able to differentiate any of these bit-perfect transports in controlled testing with the same ASUS Essence One DAC.
Q: Surely you just need better gear to hear it!
A: The data-correlated jitter with any of these devices would be >100dB below the primary signal. The frequency response difference is less than 0.15dB at 16kHz (my frequency threshold). Unless there's some significant interaction that causes anomalies in the output significantly beyond what I measure here, these differences would be inaudible to me irrespective of the quality of the sound system. Of course, if you have younger ears and better hearing, this could be different. I believe speakers and headphones would introduce much more distortion and change to the frequency response than what I'm measuring here with a good modern DAC.
Q: Well, if that's the case, then I might as well go for the cheapest digital transport/streamer I can find, right?
A: Well, maybe, maybe not. When it comes to sound quality, I think a digital transport would have to be quite incompetent to sound poor (eg. non-bit-perfect, horrific jitter, or imagine if the frequency response rolled off way too early because of severe S/PDIF timing inaccuracies). Therefore, spending more on a digital transport is IMO not primarily about sound quality but rather the features and aesthetic "look and feel" you're after (eg. better remote, can handle higher sampling rates, more reliable, fits into the decor...). Sound quality IMO is better served by putting the money into good speakers/room treatments/amp/DAC. Back in the "old days" of CD spinners, better mechanics with higher reliability and accuracy just cost more money. Even then it's not a given; I remember spending five times more on a higher-model Harman/Kardon CD player 20 years ago that failed within three years, whereas an inexpensive JVC from Costco with digital out still runs fine today. I have not had occasion to try the "low end" devices like the <$100 media streamers (eg. WD TV) to see how those compare to the Squeezebox dedicated audio units.
BTW: If you're not aware, the Squeezebox devices by nature are asynchronous since they receive the data through WiFi or ethernet from Logitech Media Server and buffered with a decent amount of internal memory. You can see that the TosLink and coaxial connections have worse jitter than what's measured directly off the analogue outputs (eg. look at SB3, Touch, Transporter, Receiver jitter measurements).
As usual, feel free to comment or link to any good data you may have come across regarding this topic especially if conclusions are different from what I've presented.
Musical selection this evening:
Philippe Jaroussky - Carestini (The Story of a Castrato) (Virgin Classics, 2007) - amazing vocals and fascinating musical history. ("It's a man, baby!" -- Austin Powers)
Happy listening! ;-)
Monday, 24 June 2013
MEASUREMENTS: Squeezebox Duet - Receiver & Controller (Analogue Output).
A number of moons ago (December 2012), this blog was started to obtain data on ~320kbps MP3 vs. lossless audio. Near the end of data collection for that study, I started performing some tests and began posting them on the Squeezebox forum for discussion.
Over the ensuing months, I started posting data on the family of Squeezebox devices which I collected over the years as the foundation of my sound system at home. Up to this point, I have had the opportunity to measure ALMOST the whole family of products from the SB3 onwards...
Here she is:
The Controller comes with a nice weighted metal stand that doubles as a charger. The LCD screen is reasonably bright and easy to read. The buttons have a nice click, and selection is made with the turn wheel. One could ask for a smoother feel to the wheel, but as is, it's reasonably functional. Even though you can connect the Receiver via ethernet, the Controller obviously needs a functioning WiFi network.
The Receiver is a relatively non-descript black box with just a lit wireless icon on the front to indicate power & device status. Getting it connected up was quite simple and quick using the Controller - see here for the details.
A look at the back ports on these devices. From left to right on the Receiver: analogue stereo, TosLink digital, coaxial digital, Ethernet (10/100), and wallwart power (9V, 0.56A). You can also make out the 3.5mm headphone jack on the top of the Controller for audio output on the go.
I: Receiver Measurements
Internally, the Receiver uses the Wolfson WM8501, which is spec'ed at 100dB SNR, and although the chip can go to 24/192, the Receiver is limited to 24/48 (as are all the other Squeezeboxes except the Transporter and Touch).
Okay, let's get into it. First I wanted to have a look at the analogue outputs of the Receiver. Since this is a loaner unit, I'll try to extract as much as I can before sending it on the long journey home to Australia...
Let's pull up an oscilloscope reading - 1kHz 0dBFS square wave (24/44):
The yellow trace is the right channel, the blue the left. Notice there is some channel imbalance on this unit, with the left channel a bit louder: left channel peak voltage measures 2.47V, right channel 2.40V.
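For context, that voltage mismatch works out to a small fraction of a dB. A one-liner to convert (my own quick conversion, using the standard 20·log10 voltage-ratio formula):

```python
import math

def imbalance_db(v_left, v_right):
    """Channel imbalance in dB from the two peak output voltages
    (standard voltage ratio: 20 * log10(V1/V2))."""
    return 20 * math.log10(v_left / v_right)

# Left 2.47 V vs right 2.40 V on this Receiver:
print(round(imbalance_db(2.47, 2.40), 2))  # 0.25
```

So the measured imbalance is about 0.25dB - detectable on the bench, and (as noted in the listening impressions below) just barely noticeable on headphones when listening for it.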
The square wave itself looks nice. Minimum noise, good contours with minimal ringing (compare with the Controller below). From this, one would suspect that the usual RightMark analysis would show good results.
Here's the impulse response (as usual, 16/44 impulse recorded with the E-MU 0404USB at 24/192):
A standard linear phase filter response. Absolute polarity is maintained. We can therefore expect a sharp roll-off at Nyquist.
A. RightMark results:
The usual setup:
SB Receiver --> Shielded RCA --> E-MU 0404USB --> Shielded USB --> Win8 laptop
Firmware: 77
Receiver volume 100%.
Cables were the same for all these measurements (a 3' shielded RCA cable from Radio Shack, shielded 6' USB cable).
I tried both ethernet and WiFi. Since I could find no difference, I'll just present the WiFi data here. I noticed that the WiFi in the Receiver was able to achieve better signal strength than the Touch at the same location. 2 floors up from the ASUS RT-N56U router, my Touch got ~65% strength, while the Receiver was up at ~75% (Controller also about 75%).
Where I did the measurements, the Receiver was up around 85% WiFi signal strength.
As you can see, I measured the Receiver against the Touch all the way up to the max sampling rate of 24/48. Also, I included the 24/44 data from the Squeezebox 3/Classic as comparison. The graphs below are derived from the 24/44 data. Notice that we gained 4dB from 16 to 24-bits using the Receiver... Not much.
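The modest 4dB gain makes sense once you compare theory with the hardware: ideal quantization SNR grows about 6dB per bit, but the Receiver's DAC chip is spec'ed around 100dB, so the analogue stage becomes the ceiling long before 24-bit theory does. A quick sketch using the textbook formula:

```python
def theoretical_snr_db(bits):
    """Ideal quantization SNR for an N-bit converter reproducing a
    full-scale sine: 6.02*N + 1.76 dB."""
    return 6.02 * bits + 1.76

# In theory 24-bit buys ~48 dB over 16-bit, but with the DAC spec'ed
# near 100 dB SNR, the practical gain measured here was only ~4 dB.
print(round(theoretical_snr_db(16), 1), round(theoretical_snr_db(24), 1))  # 98.1 146.2
```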
Frequency Response:
Noise:
THD:
Relatively stronger odd-order harmonics up at 3, 5, and 7kHz compared to the Touch or SB3.
Stereo Crosstalk:
As you can see, a mixed bag. The Receiver's frequency response is similar to the Touch; both are flatter than the SB3 which tends to roll off the deep bass a bit more. The Touch clearly beats the other two in terms of noise performance (SB3 and Receiver about equivalent here), but scores lower with stereo crosstalk (not really an issue IMO). Total harmonic distortion was higher with the Receiver.
B. Dunn Jitter Test (16 & 24-bits):
16-bits:
24-bits:
Low jitter using this test in the analogue output.
II: Controller Measurements
The Controller is a neat device. As shown above, there is a metal charging base, and it's motion sensitive, so it will automatically wake when picked up (a few seconds of wait time if it's not in the charger). It's pretty good at doing what it's meant to do - serve as controller for the various Squeezebox units (not just the Receiver). The scroll wheel allows accelerated item selection, but these days I mainly use a tablet to control the Squeezebox devices, and it's of course quicker to type in search terms than to scroll through letters.
The Controller has a standard 3.5mm headphone jack up top. Although it can also play back audio, as far as I know this feature was implemented at a "beta" level only. You have to go into Settings -> Advanced -> Beta Features -> Audio Playback to turn it on. Because the unit has a little speaker inside, I also needed to download the "Headphone Switcher" app from Applet Installer, which adds an item in the "Extras" menu to turn the headphone output on. Eventually I was able to get the unit to automatically switch between the speaker and headphones after some fiddling and rebooting the device (honestly I don't know which step was the key - the nature of 'beta' I guess). I did notice some slowdown in UI responsiveness when streaming audio to the Controller.
According to the SB Hardware Wiki, the Controller has a Wolfson WM8750 inside. Understandably, this is a low power chip meant for portable devices. Data sheet lists DAC SNR up to 98dB. Maximum sample rate goes up to 24/48.
Setup:
Controller --> phono-to-RCA shielded Radio Shack cable (3') --> E-MU 0404USB --> shielded USB --> Win8 laptop
Firmware: 7.8.0-r16739
Controller volume 100%.
Here's what a 1kHz square wave, 0dBFS, 24/44 looks like under the oscilloscope playing off the phono plug:
In comparison to the Receiver above, it's clear that this isn't going to be "hi-fidelity". There's a bit of ringing and noise evident. Peak voltage is around 800mV and the two channels are reasonably balanced.
Here's the impulse response:
Upside down - the Controller inverts polarity. Standard linear phase digital filter.
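Polarity inversion is easy to spot programmatically from a recorded impulse response - the main peak simply points the wrong way. A small heuristic sketch (my own illustration, not part of the measurement suite used here):

```python
import numpy as np

def polarity_inverted(impulse_response):
    """Heuristic absolute-polarity check: look at the sign of the
    largest-magnitude sample of a recorded impulse response. A negative
    main peak suggests the device inverts polarity."""
    peak = impulse_response[np.argmax(np.abs(impulse_response))]
    return bool(peak < 0)

# An impulse response with a negative main peak, like the Controller's:
print(polarity_inverted(np.array([0.0, -0.02, -0.9, -0.1, 0.05])))  # True
```

Absolute polarity inversion is generally considered inaudible with most material, but it's still a departure from "wire with gain" behavior worth noting.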
A. RightMark results:
Comparison is made with the Receiver and Touch. Numerically, the thing that really sticks out is the massive intermodulation distortion up at 4%! I'm a bit surprised the program did not calculate a higher THD as well given how nasty the graph looks - this is why I post the graphs too... I checked and double-checked the testing to make sure settings were not clipping the signal. It made no difference - it is what it is!
No meaningful improvement with going to 24-bits.
Frequency Response:
Odd noise spikes from 5kHz up. Have not seen this before in my other tests. I did the test 3 times and this appears to be a real finding. Also, earlier roll-off down to -3dB by 20kHz.
Noise:
THD:
This looks bad but the score was about the same as the Receiver. RightMark is probably just looking at the odd and even harmonics for the calculations.
Intermodulation Distortion + Noise:
Ouch. Non-linearity is an issue.
Stereo Crosstalk:
As I said earlier - the Controller isn't high fidelity :-).
B. Dunn J-Test:
16-bit
24-bit
Very strong jitter! Many many nanoseconds of jitter :-). Again, this is not distortion from clipping.
III: Summary (Analogue Output)
Receiver: Reasonable analogue output from the device. It performs similarly to the old SB3 in terms of dynamic range and noise floor. I hooked up the Receiver to my bookshelf system upstairs ('vintage' Sony MHC-1600 from university days powering some Tannoy mX2's) for a few days to listen. Very enjoyable - got a chance to listen to the Into Darkness soundtrack, the recent HDTracks release of Cole Español, and stroll down pop memory lane with Samantha Fox's Greatest Hits :-). I pulled out the Squeezebox 3 to compare, and subjectively I agree that there's a bit more bass with the Receiver and Touch than the SB3. While that channel imbalance was measurable, I didn't find any gross anomaly with the speaker system, but I could hear the difference with the AKG Q701 headphones - for example, listening to where Ella Fitzgerald's voice was centered on Ella Sings Gershwin. I'd say it's subtle and would not affect my listening pleasure (it could also be placebo since I was specifically listening for this!).
Controller: Firstly, how does it sound? Well, perhaps surprisingly OK :-). It's low powered so I listened with a pair of JVC HAFXC80 IEM headphones. I would compare the sound output to what I hear off my Samsung Galaxy S2. If you have a Controller, have a good listen - that's what a high jitter device sounds like (I have never seen this many sidebands in all the testing so far!). The best way I can describe it is that the Controller sounds "distant" even with the volume pushed up, and slightly "hollow" compared to listening through the headphone out of the Touch. The percussion at the start of the Star Trek Main Theme from the "Star Trek: Into Darkness" soundtrack, for example, sounded less defined and spatially smeared, getting worse as the orchestral dynamics build up, as if there's some low-level static in the background. How much of this can be definitively attributed to the high jitter is hard to say since the unusual frequency response, noise levels, intermodulation distortion and headphone output limitations are all significant factors in the sound. I highly recommend just streaming 192kbps MP3 as this will improve the responsiveness of the Controller and you're not going to hear a difference.
"fordgtlover" wanted to know how well the Receiver serves as a digital transport - good question! This then will be the topic for the next installment along with comparisons with the Touch... Stay tuned... Something tells me this is going to be complicated and hopefully quite interesting!
Friday, 21 June 2013
MUSINGS: About Those USB Cable Tests...
Back in April, I posted my USB cable tests (note this was updated recently with the TEAC DAC & Belkin Gold results). To recap, basically I could not find significant analogue output differences between a few cables despite differences in construction and lengths - including one consisting of 2 cable extenders and totaling 17 feet in length. The analogue output from the DAC did not show significant change in frequency response, distortion, noise levels, or jitter whether the USB cable was used to feed an asynchronous USB-to-SPDIF device (the CM6631A), directly to an older adaptive isochronous DAC (AUNE X1), or directly into a modern asynchronous USB device (TEAC UD-501). Since it is these analogue waveforms that get transduced to sound waves, it's not a stretch to conclude that USB cables make no audible difference. My subjective evaluation of USB cables is consistent with the measured results - no audible differences in controlled tests (my own blind tests).
These days, we all use USB cables of different lengths and varieties for high-speed devices like hard drives and generally either it works or it doesn't. USB protocol sends data in packets which consists of not just the audio data itself (up to 1024 bytes at a time for high speed mode), but CRC check to detect errors, flow control, and also address information to direct the data to the appropriate device (remember, you can attach up to 127 devices to each host controller). The low-level details of this communication including timing are addressed by the hardware and generally outside of the purview of the software we install on the computer. Logically this means that fine timing issues (like data flow control and scheduling of the data packet delivery) would be outside of the effect of things like audio player software. Furthermore, with modern DACs, the machine will use its own internal memory buffer and very fine timing (like pico- and nanosecond jitter) will be derived from the accuracy of this internal clock.
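To illustrate the error detection built into every USB data packet, here's a sketch of the CRC-16 that USB data packets carry (polynomial x^16 + x^15 + x^2 + 1, bits processed LSB-first). The framing details are simplified here, but the principle stands: flip even a single bit in transit and the checksum no longer matches, so corruption is detectable rather than silently passed along.

```python
def crc16_usb(data: bytes) -> int:
    # CRC-16 as used on USB data packets: polynomial x^16 + x^15 + x^2 + 1
    # (0x8005, written 0xA001 in this reflected/LSB-first form),
    # initial value 0xFFFF, final complement.
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return crc ^ 0xFFFF

packet = bytes(range(64))                          # pretend audio payload
corrupt = bytes([packet[0] ^ 0x01]) + packet[1:]   # same payload with one flipped bit
assert crc16_usb(packet) != crc16_usb(corrupt)     # mismatch -> corruption detected
```

Note that isochronous audio transfers don't retransmit bad packets the way bulk transfers (e.g. hard drives) do, but the CRC still means the receiver knows when data arrived damaged.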
USB data transfer rate for audio is much less demanding than something like a hard drive. Whereas we can easily transfer >200Mbit/sec to the hard drive using a simple generic USB2 cable, sending standard 16/44 stereo audio to a DAC is only ~1.5Mb/s, 24/96 only needs ~4.6Mb/s, and 24/192 ~9.2Mb/s. If we go all out with the TEAC UD-501 DAC with DSD128 or 32/384, even that "only" takes up to ~25Mb/sec.
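The arithmetic behind those numbers is simple - raw PCM payload rate is just sample rate × bit depth × channels (this sketch ignores USB framing overhead, which only adds a little on top):

```python
def pcm_bitrate_mbps(rate_hz, bits, channels=2):
    # Raw stereo PCM payload rate in megabits/sec (ignores USB framing overhead).
    return rate_hz * bits * channels / 1_000_000

print(pcm_bitrate_mbps(44100, 16))   # ~1.4 Mb/s  (16/44 "CD" audio)
print(pcm_bitrate_mbps(96000, 24))   # ~4.6 Mb/s
print(pcm_bitrate_mbps(192000, 24))  # ~9.2 Mb/s
print(pcm_bitrate_mbps(384000, 32))  # ~24.6 Mb/s (32/384 worst case)
```

Even the worst case is a tiny fraction of USB2's 480Mb/s signaling rate - which is why any working cable has bandwidth to spare.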
Not unexpectedly, some message forums got a bit heated when the USB cable test post was published since obviously the results deviated from expectations or experiences of subjective evaluation or maybe the person is trying to sell something related. I have seen no good evidence from controlled tests demonstrating otherwise.
Shortly after that post, this set of measurements came out from one of the principals of Empirical Audio (I thought he promised to measure the AudioQuest Diamond USB cable?). Nice to see he measured the down-to-earth Belkin Gold anyhow. I will let you, fine readers, examine the data and determine if you think this correlates with audio quality, or just demonstrates minute differences between lengths of copper with little correlation to digital data transfer, much less the analogue output from a DAC! (Note that he's measuring 5m [~16.5 feet] lengths of USB cable to get those picosecond numbers.)
For this installment of MUSINGS, I wanted to have a look at a recent article from the July 2013 issue of Hi-Fi News where they did a "Group Test" looking at USB cables. Since I do not subscribe to this magazine, I want to thank "Mushroom_3" and "Julf" off the Slim Devices/Squeezebox forum for bringing up this article.
As you can see on the cover - "USB cable sound - Fact or fantasy? p43" was advertised. So from page 43-51 (with 2 pages of ads), they tested 10 cables.
Here's the first 2 paragraphs of the article (as usual, I reproduce only small portions out of fair use for the purpose of discussion, education, and critique):
That's a pretty gross generalization and assumption to start off with, isn't it? The only thing I think "every seasoned audiophile" knows is that without cables, there can be no sound - and that's assuming your system isn't a boombox :-). That's about the extent of the discussion as to whether USB cable sound could be "fantasy" (I think the better word would be 'inaudible'). From there on, it's all about oscilloscope measurements (they give data risetime measurements for the cables and use this and the eye pattern as their objective data), and minimal detail was provided as to how they conducted the "blind" subjective test - we do not know how many subjects participated, how they were selected, how the 'blinding' was accomplished, the duration between cable swaps, etc. Furthermore, we do not know how the participants scored the cables or what the statistical outcome was - it may be just a "blind" solicitation of subjective opinions on those 10 cables (a huge number to test properly!).
I appreciate that at least Paul Miller the editor spent a page talking about the technical bits of USB cabling on page 98. At least the physical cabling characteristics were described in detail, but strangely there was nothing on the packetized nature of the transfer or how audible differences would arise in such a system - IMO this would have been much more educational.
The comments below will hopefully make sense to those who have not read the article.
1. From what they wrote, the test setup went like this:
Laptop --> Test USB Cable --> Musical Fidelity M1 S-DAC --> coaxial cable --> Devialet D-Premier --> speakers
Why was the decision made to use the Musical Fidelity M1 S-DAC as a USB-to-S/PDIF converter? Seriously, you're testing USB cables but introducing a coaxial S/PDIF interface into the middle of the signal path for no clear reason when the M1 is already a fine DAC?! We know that a good USB asynchronous interface has lower jitter in general than most S/PDIF, so why not use the analogue output of the M1 directly to a preamp/amp? I would not be surprised if the Musical Fidelity is a better DAC technically than the Devialet (see below). There was no mention of what coaxial cable was used - surely this must be important since this is the digital cable directly connected to the DAC and these guys believe S/PDIF cables make a difference as per the first paragraph quoted above. How is it possible to test USB digital cables if there's potentially an even more jittery (yes, the dreaded jitter bogeyman yet again) digital coaxial cable/interface in the audio chain?
2. Why was the Devialet D-Premier used? I suspect they used the USB-to-S/PDIF interface because the Devialet internally resamples analogue to digital as per the discussion in the Stereophile review, plus it doesn't have a USB interface. Furthermore, the measurements for the D-Premier aren't all that remarkable for a DAC. It achieves a respectable 18-bit dynamic range (remember, the SB Touch can already do 17.5 bits) but as John Atkinson says - it's "not quite up to the standard set by the best-measuring standalone processors, such as the Bricasti M1, MSB Diamond DAC IV, NAD M51, or Weiss DAC202". I don't know about the M1 S-DAC specifically, but the Musical Fidelity M1 appears to outperform the D-Premier already (Stereophile data) and "offers performance that is close to the state of the art."
3. Why was the "grotty giveaway USB cable" measured on p.98 not part of the blind listening test? I wish they included a picture of this "grotty giveaway" - what makes it "grotty" anyways? Why not also tell us what that cable's data risetime is while you're at it? Doesn't this often seem like the case with these audiophile "shootouts"? They have all these high priced options (well, at least they included a £18 QED Performance Graphite) but neglect to include the real competition - something that is essentially free and works! Another idea - how about testing a reasonable quality "non-audiophile" USB cable like the Belkin Gold for <£5?
4. USB data risetime & eye pattern: is it even relevant to the analogue waveform coming out of the DAC? Are they measuring something which actually COULD have sonic impact? In terms of this article with the risetime derived from the eye pattern as the prime objective measure, what evidence do we have that this actually correlates with the subjective impression? So what if the £18 QED scores 12.8ns, £60 Kimber B BUS Cu scored 12.4ns, £139 Audioquest Carbon scored 11.9ns (hey folks where's the Diamond?), and the insanely priced £6,500 Crystal Absolute Dream scored 11.0ns? What does this risetime have to do with the final DAC analogue output anyways given the nature of packet digital transmission, asynchronous protocol, and a cable that can provide much more bandwidth than required for audio - especially since you're passing the bits off through a digital S/PDIF coaxial in the audio chain?! I question if taking "a good few months" (p.98) to develop this test was time well spent!
5. Why does there appear to be so little correlation between sample rate and audio quality through the USB cables? Surely, a poor USB cable with "slow" risetime should sound worse with 24/96 or higher bitrate music, right? Yet the majority of the subjective complaints focused on 'Hotel California' off Hell Freezes Over CD or Oscar Peterson's We Get Requests FIM K2 CD. Heck, even The Beatles' Abbey Road 24/44 barely taxes the USB interface. The only really "hi-res" track was the 24/192 Helge Lien Trio Natsukashi which was mentioned 3 times total in the whole subjective write-up. Surely, to get a sense of how well USB cables work, we need to grab some DXD and DSD128 material, right? If timing/jitter were that important, there could be big issues with DSD sampled in the MHz range, don't you think?*
6. In the subjective analysis conclusion, you see the following comments: the QED is described as "almost sounds louder", Transparent Performance's cable had a "rich and warm sound proving a little too luxuriant at times", with Wireworld's Starlight 7 the "woomph of air that would normally accompany the deepest bass was subjectively filtered away", and the Crystal Cable had "greater extension" - why not measure those things? Surely you can easily detect amplitude changes, differences in frequency extension, and tonal changes that add "warmth" in the DAC output, right? Yet, for the objective test all we get is a risetime measurement of a piece of wire, and the subjective testing was done through a second digital interface of unclear quality with an unclear blind testing protocol. Therefore, neither the subjective nor the objective results appear all that convincing.
Assuming we do acknowledge this article as having validity and not just being an example of a majorly flawed study (needless to say, the setup using a digital coaxial cable is a BIG problem IMO), there is one thing of value we learned in this piece for sure - a good reminder: don't buy the £70 Vertere Pulse D-Fi USB cable. Apparently it's constructed out of "twin coaxial cables" (err, impedance matching anyone?), has bad rise time (27.6ns), and sounds questionable with "softening of its extreme treble and loss of atmosphere". I wonder if connecting this cable to a hard drive might show slower transfer speed due to a high data error rate. Evidence that you can spend more and get significantly less quality than the 'freebie'?
Assuming we don't believe the listening tests are valid because of the methodological issues, then perhaps this article serves as an example of the unreliability of subjective listening. Isn't it possible they're all just listening to the same sound with the "jittery" coaxial digital cable and interface and coming up with different impressions? Since we have no concrete idea of how the "blind test" was conducted (especially what level of statistical confidence we can expect), it's quite possible the term 'fantasy' would be appropriate to describe the results (but verboten).
Enjoy the music folks... IMO, there is still no good evidence that USB cables make a difference to sound (well, at least with decent modern gear!).
My musical selection tonight: C.C. Colletti Bring It On Home (HDTracks 24/96 Binaural recording). Wonderfully spatial sonics and details from the ASUS Essence One + Sennheiser HD800! Have a listen to that title track.
* For the record, I do not believe there is any reason to think DSD would be affected by USB cable jitter to an audible degree with good modern DACs. Just wanted to throw out a thought which the jitter-fearing-audiophile might bring up one day.
Thursday, 13 June 2013
MEASUREMENTS: Part II: Bit-Perfect Audiophile Music Players - JPLAY (Windows).
"There's something happening here,
What it is ain't exactly clear."
-- Buffalo Springfield
Welcome to Part II of the "Bit-Perfect" roundup for Windows.
In this installment, I'll focus on JPLAY (most recent version 5.1) which has been highly promoted around many of the audiophile web sites like 6moons, EnjoyTheMusic ("First Step to Heaven = JPLAY"!), High Fidelity (Polish), TNT-Audio, recently AudioStream plus all the hoopla around the terribly misleading series of articles on computer audio by Zeilig & Clawson in "The Absolute Sound" in early 2012.
If you browse around the JPLAY website (or hang out on various computer audiophile forums), you run into suggestions that process priority, memory playback, task switching latency, driver type (eg. ASIO, WASAPI, Kernel Streaming) all have some kind of effect on the playback audio quality (the usual digital whipping boy for bad sound quality is of course the dreaded jitter). Sure, computers can be complicated, but is it possible for these factors to affect audio output significantly enough to hear, and commonly enough to worry about with a modern USB DAC like in my test set-up? Maybe, but like many subtle things in life "reported" to be true, it's hard to know with confidence unless systematically tested. To address these "issues", JPLAY even has the famed "hibernate mode" where the screen gets turned off, OS processes are halted, drivers turned off, etc. Since the keyboard is turned off, the computer only comes back to life after the music has completed playing or if you remove a USB stick during playback! Realize that this makes the computer even less interactive than a disc spinner!
Mitchco has already used the DiffMaker program to compare JPLAY and JRiver about a year ago but was not able to detect a difference. Likewise, there have been some heated discussions over on Hydrogen Audio regarding this program.
Well, let us put this program and various of its settings on the test bench and see what comes up...
Firstly, the setup - same as Part I:
ASUS Taichi (*running JPLAY/foobar*) Win8 x64 --> shielded USB (Belkin Gold) --> TEAC UD-501 DAC --> shielded 6' RCA --> E-MU 0404USB --> shielded USB --> Win8 Acer laptop
Remember that JPLAY is essentially a "playback engine" using its own buffering algorithm and runs as an Audio Service under Windows. In this regard, it is quite unique. By itself, it has a very bare-bones text-based interface. Therefore, for ease of testing, I'm going to be using JPLAY through foobar2000 as an ASIO output "device". Within JPLAY settings, one can then specify the actual audio device and which driver to use (eg. ASIO, WASAPI, Kernel Streaming...), along with specific parameters like buffer size. There are four "engines" - Beach, River, Xtream (Kernel Streaming only), and ULTRAstream (Kernel Streaming & Windows 8 only) - I am unaware of how they are supposed to differ. Note that for all these tests, I had "Throttle" turned ON which is supposed to increase priority to JPLAY (and hence diminish task priority of other processes). Volume control was turned OFF. According to the manual, there are even "advanced settings" called DriverBuffer, UltraSize, and XtreamSize which I did not bother playing with since I presume they're deemed less important if one has to go into the Registry to adjust. Thankfully, the trial version will play ~2 minutes of uninterrupted audio before inserting ~5 seconds of silence. This is enough time to measure the audio quality using my standard tests.
Here's the data for JPLAY at 16/44 with ASIO and the two "engines" that support ASIO - "Beach" and "River". When you see a number like "ASIO 2", that refers to the buffer setting (number of samples). The software recommends using the smallest buffer size, with "DirectLink" being 1 sample. I was not able to get the DirectLink setting working without occasional blips and errors once I started doing Kernel Streaming, but 2 works well for 16/44 and 4 was good for 24/96, so I standardized on those settings throughout. As you can see, I also measured with a buffer of 64 samples to see if that made any difference (I see from the manual that Xtream only works for buffer sizes up to 32 samples - it didn't complain when I set it to 64):
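To put those buffer numbers in perspective, here's a quick bit of arithmetic showing how little audio a 2- or 64-sample buffer actually holds - we're talking tens of microseconds:

```python
def buffer_latency_us(samples, rate):
    # Time represented by a buffer of N samples at a given sample rate,
    # in microseconds - illustrating just how tiny these buffers are.
    return samples / rate * 1e6

print(buffer_latency_us(2, 44100))   # ~45 µs for the 2-sample 16/44 setting
print(buffer_latency_us(64, 44100))  # ~1451 µs (about 1.5 ms)
print(buffer_latency_us(4, 96000))   # ~42 µs for the 4-sample 24/96 setting
```

A plausible reason for the occasional blips at the smallest settings: the OS has only microseconds to refill the buffer before an underrun.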
Note that I used the standard foobar2000 + ASIO audio output as the comparison measurement in the leftmost column.
Frequency Response:
Noise:
THD:
Stereo Crosstalk:
For some reason, WASAPI refused to initiate for me at 16/44. But it did work at 24/96, so I have included that here as well. Again, foobar + ASIO was used for comparison on the left:
Frequency Response:
Noise:
THD:
Stereo Crosstalk:
So far, no real difference between the JPLAY playback and foobar2000.
Now, let's deal with Kernel Streaming. It is with KS that we see significant CPU utilization by "jplay.exe". Instead of the <2% peak CPU utilization above, with low buffers like a setting of 2 samples for 16/44, I see peak CPU utilization of 13%, and with a 4-sample buffer at 24/96, peak CPU jumps to 16% with my laptop's i5 and the "Xtream" engine. The "ULTRAstream" engine chews up even more CPU cycles, another 1-2% above those numbers. The moment the buffer size was increased to 64 samples, CPU utilization dropped to 3% peak with 16/44 and 6% with 24/96. It looks like JPLAY only actually kicks in to maintain those tiny buffer settings when Kernel Streaming is used.
Starting with 16/44, here's Kernel Streaming with mainly the "Xtream" and "ULTRAstream" engines since these two cannot be used with ASIO. I included "Beach" in these measurements as well out of interest. The test on the far right with "hiber" refers to the use of the "hibernate mode" where the computer screen, OS processes, drives, etc. turn off during playback. Again, foobar + ASIO was used as the comparison:
Frequency Response:
Noise:
THD:
Stereo Crosstalk:
Here is 24/96:
Frequency Response:
Noise:
THD:
Stereo Crosstalk:
So far, despite all the changes in CPU utilization, different audio "engines" and buffer settings, I see no substantial change in the measurements that would suggest a qualitative difference in terms of the analogue output signal from my DAC.
JPLAY supposedly can support DSD playback. I did not test this function.
foobar2000 ASIO:
Beach ASIO 64-sample buffer:
River ASIO 2-sample buffer:
Xtream Kernel Streaming 64-sample buffer:
ULTRAstream Kernel Streaming 2-sample buffer:
ULTRAstream Kernel Streaming + Hibernate 2-sample buffer:
Essentially no difference in the J-Test spectra.
Recall that the 24-bit Dunn J-Test is done with a 24/48 signal. While doing this test using Kernel Streaming mode, something strange was found.
This is what the 24-bit Dunn J-Test looks like with foobar2000 + ASIO (notice primary signal at 12kHz rather than 11kHz with the 16-bit test):
The 24-bit LSB modulation signal is buried under the noise level. This is quite normal.
Here is Beach + ASIO:
Again, quite normal - we're still using the TEAC ASIO driver.
Look what happens with 24-bit J-Test Beach + Kernel Streaming (doesn't matter what buffer size):
Eh? What's with all the modulation signal bursting through like with the 16-bit test?!
My suspicion is that JPLAY isn't handling the last 8-bits in the 24-bit data properly... One possible scenario is where the last 8 bits got flipped from LSB to MSB, thus causing the LSB signal to show through at a higher level. With RightMark, this is what it looks like:
We've "lost" the last 7 bits of resolution at 24/48 with Kernel Streaming causing the dynamic range to drop to 102dB (17-bits resolution) instead of the usual >110dB (24-bits into the noise floor). I considered the possibility that this may represent some sort of dithering but why would it be applied to 24/48 and not 24/96?
Strangely, this anomaly did not show up at 24/96. I did not bother to check if 88/176/192kHz sampling rates were affected.
Here are measurements of a few settings in JPLAY in comparison to foobar ASIO as reference. I included the most "extreme" one that I could perform - ULTRAstream + 2-sample buffer + hibernate mode. As usual a couple of "sensitivity tests" with MP3 320kbps and slight EQ change in foobar for comparison:
The machine was able to correlate the null depth down to around 90dB with all the JPLAY settings and foobar suggesting no significant difference in sound in comparison to the reference foobar + ASIO playback.
1. 24/48 with Kernel Streaming appears buggy as shown with the 24-bit J-Test and RightMark result. Don't use it (as of version 5.1) if you want accurate sound. However, subjectively it still sounds OK to me since it's still accurate down to 16-bits at least. ASIO works fine. 24/96 is fine. I don't know if other sampling rates with 24-bit data are affected. For some reason I could not get WASAPI 16/44 to work with JPLAY even though it was fine with foobar2000.
2. Technically, JPLAY appears to be bit perfect with 16/44 and likely 24/96 based on my tests (of course we cannot say this for 24/48). Since the program claims to be bit perfect, this is good I guess.
3. I was unable to detect any evidence of sonic difference at 16/44 and 24/96 compared to a standard foobar set-up. RightMark tests look essentially the same. Over the months of testing, I see no evidence still that software changes the jitter severity with CPU load, different software, even different DACs (as I had postulated awhile back in this post). DiffMaker null tests were also unable to detect any significant difference in the "sound" of the analogue output from the TEAC UD-501.
4. Still no evidence that extreme settings like "hibernate mode" which reduces the utility of the computer makes any sonic difference. Of course, it's possible that this could make a difference with some very slow machines like a 1st generation single-core Atom processor with small buffer settings doing Kernel Streaming... But in that case, why not just increase the buffer size with Kernel Streaming (why all the "need" for low buffer settings and obsession over low latency for just playback?!) or just go with an efficient ASIO/WASAPI driver? I'd also recommend a processor upgrade if you're still rocking an old Atom!
Over the 3 nights I was performing these tests, I took time to listen to music with the various JPLAY settings - probably ~3 hours total. It sounds good subjectively (other than the brief interruptions every 2 minutes or so for the trial version). The i5 computer shows a little bit of lag with Kernel Streaming and low buffer size if I'm trying to do something else. I listened to Donald Fagen's The Nightfly and Peter Gabriel's So (2012 remaster) in 24/48 with Kernel Streaming and didn't think they sounded bad despite the issue I found (remember, the anomaly was down below 16-bits). Dire Strait's Brother In Arms sounded dynamic and detailed as usual. Likewise Richard Hickox & LSO's Orff: Carmina Burana (2008, 24/88 SACD rip) sounds just as ominous. (Almost) Instantaneous A-B'ing is possible in foobar changing ASIO output from the TEAC driver to JPLAY and I did not notice any significant subjective tonal, amplitude, or resolution difference using my Sennheiser HD800 headphones with the TEAC UD-501 DAC flipping back and forth a number of times.
Bottom line: With a reasonably standard set-up as described, using a current-generation (2013) asynchronous USB DAC, there appears to be no benefit with the use of JPLAY over any of the standard bit-perfect Windows players tested previously in terms of measured sonic output. Nor could I say that subjectively I heard a difference through the headphones. If anything, one is subjected to potential bugs like the 24/48 issue (I didn't run into any system instability thankfully), and the recommended Kernel Streaming mode utilizes significant CPU resources when buffer size is reduced (which the software recommends doing). I imagine that CPU utilization would be even higher if I could have activated the DirectLink (1-sample buffer) setting.
Finally, there's the fact that this program costs €99. A bit steep ain't it? JRiver costs US$50, Decibel on the Mac around $35, foobar2000 FREE and these all feature graphical user interfaces and playlists at least! Considering my findings, I'm unclear with what DAC or computer system one would find tangible benefits after spending €99 for this program.
As usual, I welcome feedback especially with objective data or controlled test results (any JPLAY software developers care to comment on how the audio engines were "tuned"?). I would also welcome independent testing to see if my findings can be verified on other hardware combinations (especially that 24/48 issue).
Music selection tonight: Paavo Järvi & Orchestre de Paris - Fauré: Messe de Requiem (2011). Lovely rendition of Pavane for mixed choir!
As usual... Enjoy the tunes... :-)
Welcome to Part II of the "Bit-Perfect" roundup for Windows.
In this installment, I'll focus on JPLAY (most recent version 5.1), which has been highly promoted around many of the audiophile web sites like 6moons, EnjoyTheMusic ("First Step to Heaven = JPLAY"!), High Fidelity (Polish), TNT-Audio, and recently AudioStream - plus all the hoopla around the terribly misleading series of articles on computer audio by Zeilig & Clawson in "The Absolute Sound" in early 2012.
If you browse around the JPLAY website (or hang out on various computer audiophile forums), you run into suggestions that process priority, memory playback, task-switching latency, and driver type (e.g. ASIO, WASAPI, Kernel Streaming) all have some kind of effect on playback audio quality (the usual digital whipping boy for bad sound being, of course, the dreaded jitter). Sure, computers can be complicated, but is it possible for these factors to affect the audio output significantly enough to hear, and commonly enough to worry about, with a modern USB DAC like the one in my test set-up? Maybe, but like many subtle things in life "reported" to be true, it's hard to know with confidence unless systematically tested. To address these "issues", JPLAY even has a mode where the screen gets turned off, OS processes are halted, drivers are disabled, etc. - the famed "hibernate mode". Since the keyboard is disabled as well, the computer only comes back to life after the music has finished playing, or if you remove a USB stick during playback! Realize that this makes the computer even less interactive than a disc spinner!
Mitchco has already used the DiffMaker program to compare JPLAY and JRiver about a year ago but was not able to detect a difference. Likewise, there have been some heated discussions over on Hydrogen Audio regarding this program.
Well, let us put this program and various of its settings on the test bench and see what comes up...
Firstly, the setup - same as Part I:
ASUS Taichi (*running JPLAY/foobar*) Win8 x64 --> shielded USB (Belkin Gold) --> TEAC UD-501 DAC --> shielded 6' RCA --> E-MU 0404USB --> shielded USB --> Win8 Acer laptop
Remember that JPLAY is essentially a "playback engine" which uses its own buffering algorithm and runs as an Audio Service under Windows - quite unique in this regard. By itself, it has a very bare-bones, text-based interface, so for ease of testing I'm going to use JPLAY through foobar2000 as an ASIO output "device". Within the JPLAY settings, one can then specify the actual audio device and which driver to use (e.g. ASIO, WASAPI, Kernel Streaming...), along with specific parameters like buffer size. There are four "engines" - Beach, River, Xtream (Kernel Streaming only), and ULTRAstream (Kernel Streaming & Windows 8 only) - I am unaware of how they are supposed to differ. Note that for all these tests, I had "Throttle" turned ON, which is supposed to increase the priority of JPLAY (and hence diminish the priority of other processes). Volume control was turned OFF. According to the manual, there are even "advanced settings" called DriverBuffer, UltraSize, and XtreamSize which I did not bother playing with since I presume they're deemed less important if one has to go into the Registry to adjust them. Thankfully, the trial version will play ~2 minutes of uninterrupted audio before inserting ~5 seconds of silence - enough time to run my standard measurements of audio quality.
I. RightMark 6.2.5 (PCM 16/44 & 24/96)
Let us start with JPLAY using the ASIO & WASAPI drivers. I think it is important to remember that the makers of JPLAY recommend using Kernel Streaming instead. My suspicion is that JPLAY essentially just passes data off to these ASIO and WASAPI drivers, so logically there would be little reason to believe measurements & sound quality should change at all. Furthermore, CPU utilization for the "jplay.exe" process with ASIO and WASAPI remained low during playback - on the order of <2% peak and usually <1% on the i5 processor. As we will see later, this changes dramatically with Kernel Streaming, where the program can actually get closer to the hardware and do direct processing in "kernel mode".

Here's the data for JPLAY at 16/44 with ASIO and the two "engines" that support ASIO - "Beach" and "River". When you see a number like "ASIO 2", that refers to the buffer setting (number of samples). The software recommends using the smallest buffer size possible, with "DirectLink" being 1 sample. I was not able to get the DirectLink setting working without occasional blips and errors once I started doing Kernel Streaming, but 2 worked well for 16/44 and 4 was good for 24/96, so I standardized on those settings throughout. As you can see, I also measured with a buffer of 64 samples to see if that made any difference (I see from the manual that Xtream only works with buffer sizes up to 32 samples - it didn't complain when I set it to 64):
Note that I used the standard foobar2000 + ASIO audio output as the comparison measurement in the leftmost column.
Frequency Response:
Noise:
THD:
Stereo Crosstalk:
For some reason, WASAPI refused to initialize for me at 16/44. But it did work at 24/96, so I have included that here as well. Again, foobar + ASIO was used for comparison on the left:
Frequency Response:
Noise:
THD:
Stereo Crosstalk:
So far, no real difference between the JPLAY playback and foobar2000.
Now, let's deal with Kernel Streaming. It is with KS that we see significant CPU utilization by "jplay.exe". Instead of the <2% peak CPU utilization above, with a low buffer setting of 2 samples at 16/44 I see peak CPU utilization of 13%, and with a 4-sample buffer at 24/96, peak CPU jumps to 16% on my laptop's i5 with the "Xtream" engine. The "ULTRAstream" engine chews up even more CPU cycles - another 1-2% above those numbers. The moment the buffer size was increased to 64 samples, peak CPU utilization dropped to 3% with 16/44 and 6% with 24/96. It looks like JPLAY's own processing only really kicks in to maintain those tiny buffer settings when Kernel Streaming is used.
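To put these buffer settings into perspective, it's worth remembering how little time a handful of samples actually represents. A quick back-of-the-envelope calculation (plain Python, nothing JPLAY-specific assumed):

```python
def buffer_latency_ms(samples: int, sample_rate: int) -> float:
    """Time spanned by one audio buffer, in milliseconds."""
    return samples / sample_rate * 1000.0

# JPLAY's tiny recommended buffers versus the 64-sample setting:
for samples, rate in [(2, 44100), (4, 96000), (64, 44100), (64, 96000)]:
    print(f"{samples:2d} samples @ {rate} Hz -> {buffer_latency_ms(samples, rate):.3f} ms")
```

Even the "large" 64-sample buffer spans under 1.5ms - a reminder that none of these settings come anywhere near an audible delay for simple playback, which is why the obsession with minimal buffers seems misplaced to me.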
Starting with 16/44, here's Kernel Streaming with mainly the "Xtream" and "ULTRAstream" engines since these two cannot be used with ASIO. I included "Beach" in these measurements as well out of interest. The test on the far right with "hiber" refers to the use of the "hibernate mode" where the computer screen, OS processes, drives, etc. turn off during playback. Again, foobar + ASIO was used as the comparison:
Frequency Response:
Noise:
THD:
Stereo Crosstalk:
Here is 24/96:
Frequency Response:
Noise:
THD:
Stereo Crosstalk:
So far, despite all the changes in CPU utilization, different audio "engines" and buffer settings, I see no substantial change in the measurements that would suggest a qualitative difference in terms of the analogue output signal from my DAC.
JPLAY supposedly can support DSD playback. I did not test this function.
Part II: Dunn J-Test for jitter
I did a whole suite of J-Tests with all the different audio "engines", either 2- or 64-sample buffers, ASIO/WASAPI/KS, and also tried the "hibernate" mode. For brevity, here's a selection of 16-bit jitter spectra using various settings:

foobar2000 ASIO:
Beach ASIO 64-sample buffer:
River ASIO 2-sample buffer:
Xtream Kernel Streaming 64-sample buffer:
ULTRAstream Kernel Streaming 2-sample buffer:
ULTRAstream Kernel Streaming + Hibernate 2-sample buffer:
Essentially no difference in the J-Test spectra.
Recall that the 24-bit Dunn J-Test is done with a 24/48 signal. While running this test in Kernel Streaming mode, I found something strange.
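For readers unfamiliar with how the test is constructed: the Dunn J-Test combines a tone at exactly Fs/4 (hence 12kHz at 48kHz, 11.025kHz at 44.1kHz) with a square wave toggling the least significant bit at Fs/192. A minimal sketch of that construction (my own illustration of the idea, not the actual test file I used):

```python
import math

def jtest_sample(n: int, fs: int, bits: int) -> int:
    """n-th sample of a Dunn J-Test-style signal: a near-full-scale tone
    at fs/4 plus an LSB square wave at fs/192 (192-sample period)."""
    full_scale = 2 ** (bits - 1) - 1
    tone = math.sin(2 * math.pi * (fs / 4) * n / fs)  # 12 kHz when fs = 48000
    lsb = (n // 96) % 2                               # LSB toggles every 96 samples
    return int(tone * (full_scale - 1)) + lsb
```

Because the LSB toggles coherently at a fixed low frequency, data-dependent jitter - or, for that matter, any mishandling of the low-order bits - shows up as clearly defined sidebands and spikes in the output spectrum.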
This is what the 24-bit Dunn J-Test looks like with foobar2000 + ASIO (notice primary signal at 12kHz rather than 11kHz with the 16-bit test):
The 24-bit LSB modulation signal is buried under the noise level. This is quite normal.
Here is Beach + ASIO:
Again, quite normal - we're still using the TEAC ASIO driver.
Look what happens with 24-bit J-Test Beach + Kernel Streaming (doesn't matter what buffer size):
Eh? What's with all the modulation signal bursting through like with the 16-bit test?!
My suspicion is that JPLAY isn't handling the last 8 bits of the 24-bit data properly... One possible scenario is that the lowest 8 bits got flipped around from least to most significant, causing the LSB modulation signal to show through at a much higher level. With RightMark, this is what it looks like:
We've "lost" the last 7 bits of resolution at 24/48 with Kernel Streaming, causing the dynamic range to drop to 102dB (~17 bits of resolution) instead of the usual >110dB (24-bit data reaching down into the noise floor). I considered the possibility that this may represent some sort of dithering, but why would it be applied to 24/48 and not 24/96?
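The "102dB ≈ 17 bits" figure follows from the usual rule of thumb of roughly 6dB of dynamic range per bit (more precisely, DR ≈ 6.02·N + 1.76dB for a full-scale sine). A quick sanity check of that arithmetic, plus what zeroing the bottom 7 bits of a 24-bit word looks like - a hypothetical model of the kind of mishandling suspected here, not JPLAY's confirmed behaviour:

```python
def effective_bits(dr_db: float) -> float:
    """Rough effective resolution from a measured dynamic range,
    using the ~6.02 dB-per-bit rule of thumb."""
    return dr_db / 6.02

def zero_low_bits(sample: int, n: int) -> int:
    """Zero the n least significant bits of an integer sample -
    a hypothetical model of losing the bottom of a 24-bit word."""
    return sample & ~((1 << n) - 1)

print(round(effective_bits(102), 1))      # ~17 bits, in line with the RightMark result
print(hex(zero_low_bits(0x123457, 7)))    # low 7 bits gone -> 0x123400
```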
Strangely, this anomaly did not show up at 24/96. I did not bother to check if 88/176/192kHz sampling rates were affected.
Part III: DMAC Protocol
Time for the machine listening test of 24/44 composite music using Audio DiffMaker. As shown in previous blog posts, this test seems quite good at detecting even relatively small changes like minute EQ adjustments, the difference between AAC and MP3, etc... This test is also similar to what mitchco did last year.

Here are measurements of a few settings in JPLAY in comparison to foobar ASIO as reference. I included the most "extreme" one that I could perform - ULTRAstream + 2-sample buffer + hibernate mode. As usual, a couple of "sensitivity tests" with MP3 320kbps and a slight EQ change in foobar are included for comparison:
The machine was able to correlate the null depth down to around 90dB for all the JPLAY settings, suggesting no significant difference in sound compared to the reference foobar + ASIO playback.
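The core of such a null test is simple: subtract the two captures and express the residual relative to the reference in dB. A bare-bones sketch of the idea (DiffMaker additionally time- and level-aligns the tracks before subtracting, which I've omitted):

```python
import math

def null_depth_db(reference, test):
    """Null depth: RMS of the reference over RMS of the difference, in dB.
    Higher numbers mean the two signals are more alike."""
    rms = lambda xs: math.sqrt(sum(x * x for x in xs) / len(xs))
    diff = [r - t for r, t in zip(reference, test)]
    return 20 * math.log10(rms(reference) / rms(diff))

# A 1 kHz test tone and a copy with a tiny error roughly 90 dB down:
ref = [math.sin(2 * math.pi * 1000 * n / 44100) for n in range(4410)]
tst = [x + 2.24e-5 for x in ref]
print(round(null_depth_db(ref, tst), 1))
```

A ~90dB null means any residual difference sits around 16 bits below the music itself - far below audibility under the signal.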
Part IV: Conclusion
Based on these measurements, a few observations:

1. 24/48 with Kernel Streaming appears buggy, as shown by the 24-bit J-Test and RightMark results. Don't use it (as of version 5.1) if you want accurate sound. However, subjectively it still sounds OK to me since it's still accurate down to at least 16 bits. ASIO works fine. 24/96 is fine. I don't know if other sampling rates with 24-bit data are affected. For some reason I could not get WASAPI 16/44 to work with JPLAY even though it was fine with foobar2000.
2. Technically, JPLAY appears to be bit perfect with 16/44 and likely 24/96 based on my tests (of course we cannot say this for 24/48). Since the program claims to be bit perfect, this is good I guess.
3. I was unable to detect any evidence of sonic difference at 16/44 and 24/96 compared to a standard foobar set-up. RightMark tests look essentially the same. Over the months of testing, I see no evidence still that software changes the jitter severity with CPU load, different software, even different DACs (as I had postulated awhile back in this post). DiffMaker null tests were also unable to detect any significant difference in the "sound" of the analogue output from the TEAC UD-501.
4. Still no evidence that extreme settings like "hibernate mode" which reduces the utility of the computer makes any sonic difference. Of course, it's possible that this could make a difference with some very slow machines like a 1st generation single-core Atom processor with small buffer settings doing Kernel Streaming... But in that case, why not just increase the buffer size with Kernel Streaming (why all the "need" for low buffer settings and obsession over low latency for just playback?!) or just go with an efficient ASIO/WASAPI driver? I'd also recommend a processor upgrade if you're still rocking an old Atom!
Over the 3 nights I was performing these tests, I took time to listen to music with the various JPLAY settings - probably ~3 hours total. It sounds good subjectively (other than the brief interruptions every 2 minutes or so with the trial version). The i5 computer shows a little bit of lag with Kernel Streaming and a low buffer size if I'm trying to do something else. I listened to Donald Fagen's The Nightfly and Peter Gabriel's So (2012 remaster) in 24/48 with Kernel Streaming and didn't think they sounded bad despite the issue I found (remember, the anomaly was down below 16 bits). Dire Straits' Brothers In Arms sounded dynamic and detailed as usual. Likewise, Richard Hickox & the LSO's Orff: Carmina Burana (2008, 24/88 SACD rip) sounded just as ominous. (Almost) instantaneous A-B'ing is possible in foobar by changing the ASIO output from the TEAC driver to JPLAY, and flipping back and forth a number of times with my Sennheiser HD800 headphones on the TEAC UD-501 DAC, I did not notice any significant subjective difference in tone, amplitude, or resolution.
Bottom line: With a reasonably standard set-up as described, using a current-generation (2013) asynchronous USB DAC, there appears to be no benefit with the use of JPLAY over any of the standard bit-perfect Windows players tested previously in terms of measured sonic output. Nor could I say that subjectively I heard a difference through the headphones. If anything, one is subjected to potential bugs like the 24/48 issue (I didn't run into any system instability thankfully), and the recommended Kernel Streaming mode utilizes significant CPU resources when buffer size is reduced (which the software recommends doing). I imagine that CPU utilization would be even higher if I could have activated the DirectLink (1-sample buffer) setting.
Finally, there's the fact that this program costs €99. A bit steep, ain't it? JRiver costs US$50, Decibel on the Mac around $35, and foobar2000 is FREE - and these all at least feature graphical user interfaces and playlists! Considering my findings, I'm unclear on what DAC or computer system would yield tangible benefits after spending €99 for this program.
As usual, I welcome feedback especially with objective data or controlled test results (any JPLAY software developers care to comment on how the audio engines were "tuned"?). I would also welcome independent testing to see if my findings can be verified on other hardware combinations (especially that 24/48 issue).
Music selection tonight: Paavo Järvi & Orchestre de Paris - Fauré: Messe de Requiem (2011). Lovely rendition of Pavane for mixed choir!
As usual... Enjoy the tunes... :-)