Saturday 27 July 2013

MEASUREMENTS: DAC "Waveform Peeping" - the -90.3dB 16-bit LSB Test...


When it comes to technological "toys", I've vacillated over the years between the accumulation of digital photography gear and audio stuff... As I'm sure many of you know, "pixel peeping" is the act of "using 100% crops and similar techniques to identify flaws that have no effect on the photograph under real-world conditions" (Google web definition). Back in the "old" days (like a decade ago), the act of pixel peeping wasn't all that unreasonable since the visible differences could be demonstrated on photo enlargements. When I was using my old Nikon D70 with 6 megapixels, sharpness at the pixel level was a significant consideration with moderate enlargements like 13"x19"; imperfections like moiré could be seen in the final product as well. Monitor resolution wasn't that high back then either, so fine details were easily obscured.

Fast forward to today and I'm now using the Nikon D800. At 36 megapixels viewed on a >2MP monitor, unless I'm printing huge enlargements, there really is little need to "zoom" down to the 1:1 pixel level to appreciate a high quality image... Sure, sometimes it's just fun to see how much detail has been captured, especially when evaluating different lenses or showing off each hair follicle, but for the most part, "pixel peeping" has become quite unnecessary.

Although in daily usage one might not need to "peep" anymore, any reviewer these days "worth their salt" publishing camera body or lens reviews would run the images through objective tests, including highly detailed "pixel level" tests or 1:1 image comparisons between cameras or lenses. Dynamic range, ISO-noise interaction, color accuracy tests, distortion characteristics (for lenses), and the effect of file formats (JPEG vs. RAW) of course all serve to complete the evaluation. The quality is so high these days among high-end cameras (SLR's, medium format digital backs...) that it is with these detailed tests that we can fully appreciate the qualitative differences between top contenders. Subjective opinions about the camera's touch-and-feel and user interface are important of course, but if you care about the potential image quality that can be captured, then objective tests are really, really important. If you haven't already done so, just have a look at the camera reviews on www.dpreview.com and see how much work actually goes into what I respect as proper reviews of well engineered equipment! Also of interest, Hasselblad is trying to market "exotic" cameras at high prices by appealing to aesthetics (just look at the responses to see how people feel about that!). [Here's another one.] Is this what happens when technology matures and companies have difficulty competing primarily on technological merits?

I've often wondered why in the audio world, objective measures have so often been left out as part of the review process - especially as it comes to line-level devices like DACs. Maybe it's because digital audio matured earlier and we're going to see the same outcome with cameras one day. Around some forums, the mere mention of objective measures seems to be scoffed at - as if objectivism with audio gear is either "obsolete" or the sole domain of "high end" manufacturers with arcane tests out of reach of mere mortals. I know I'm digressing into "MUSINGS" territory here, but IMO, a good review needs to dig into the gear's objective properties so the reader can truly appreciate how it compares with other similar gear in order to have an informed opinion and gauge value as a (hopefully) well engineered piece of technology... Let's get back on track then with some "MEASUREMENTS".

For me, one of the most interesting "waveform peeping" tests consistently done by Stereophile over the decades on digital gear has been the undithered 1kHz sine wave test at -90.3dBFS. This is one of the most "microscopic" tests of DAC performance. It's simple and the result basically answers the question "can this DAC accurately reproduce the least significant bit (LSB) in a 16-bit audio signal?" At a glance one can tell at least 3 things:
1. Is the DAC "bit-perfect" down to that last 16th bit? (Assuming everything upstream is set up properly, you should see something resembling the 3 quantization "steps".)
2. Is the dynamic range at least 16 bits? If not, the waveform becomes obscured by excessive noise.
3. Are there anomalies in the waveform morphology suggesting "DC shifts" leading to "tilting" of the waveforms (power supply related issues)? (For a good example of the effect of 60Hz low frequency noise, see the measurement of the Philips CDR880 Figure 7.)
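(As an aside, that odd-looking -90.31dB figure is simply the level of ±1 LSB relative to 16-bit full scale. Here's a quick sanity-check calculation; note that the ~2.8V full-scale output voltage below is just an assumed "typical" figure for illustration, not the spec of any particular DAC tested here:)

```python
# Why -90.31dBFS corresponds to the 16-bit LSB, and roughly what that means in volts.
# NOTE: the 2.8V full-scale output is an assumed/typical value, not a measurement.
import math

lsb16_dbfs = 20 * math.log10(1 / 2**15)   # 16-bit PCM: full-scale amplitude = 32768 counts
lsb24_dbfs = 20 * math.log10(1 / 2**23)   # 24-bit, for comparison
print(f"16-bit LSB: {lsb16_dbfs:.2f} dBFS")   # -90.31 dBFS
print(f"24-bit LSB: {lsb24_dbfs:.2f} dBFS")   # -138.47 dBFS

assumed_full_scale_volts = 2.8            # hypothetical full-scale analogue output
lsb_volts = assumed_full_scale_volts * 10 ** (lsb16_dbfs / 20)
print(f"~{lsb_volts * 1e6:.0f} microvolts")   # ~85 uV - the "~90 microvolt" ballpark mentioned below
```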

The Stereophile website archive provides some lovely examples of this test dating back to the late 1980's, such as the Philips LHH1000 from 1989 (check out Figure 5). How about the Naim NA CDS from 1992 (Figure 6) or the $8000 Mark Levinson No.35 DAC from 1993 (Figure 6, still not good)? By 1995, we saw excellent performance such as with the $9000 Krell KPS-20i (Figure 5). Within a few years, by 1998, reasonably priced gear like the California Audio Labs CL-15 CD player was capable of similar accuracy at the $1500 price point. Since the millennium, this level of performance can easily be achieved at the $1000 price point and below (eg. the Rega Apollo from 2006). These days, the little Audioquest Dragonfly can do a reasonable job USB-powered at <$250 retail.

On the whole, this test has demonstrated the progression of improved accuracy over the years. State-of-the-art DACs like the MSB Diamond DAC IV (Stereophile October 2012, not on website) and Weiss DAC202 (Figure 6) are great examples of what this level of accuracy looks like (as opposed to expensive gear of questionable technical ability which I will not mention). IMO, well engineered CD/DVD/SACD/Blu-Ray players and DACs these days claiming to be "high resolution" really should pass this test without issue. Nonetheless, there are recent devices apparently incapable of a low noise floor for whatever reason (eg. the Abbingdon DP-777, Figure 15; surprisingly, the recent Wadia 121 Decoding Computer, Figure 6, didn't fare too well either).

I was curious whether I could run a similar test using my simple test gear... After all, so long as the DAC and measurement device can reliably achieve >16 bits of dynamic range, one should be able to obtain a reasonably good set of measurements. So far, from what I've seen in the other tests, I should be able to reproduce this test with the E-MU 0404USB!

Here goes... Setup and procedure for the various DACs/streamers:
Test DAC --> shielded RCA --> E-MU 0404USB ADC --> shielded USB --> Win8 laptop

- I created an undithered 1.1025kHz sine wave at -90.31dBFS in 16/44 (see the sketch after this list for one way such a tone can be generated). This is what an "ideal" waveform would look like, with the usual Gibbs phenomenon (ringing) due to the bandwidth restriction.
- Green is LEFT channel, Blue is RIGHT channel. Notice the phase inversion between the channels.

- For comparison, I also created the equivalent at 24-bit quantization:

- I captured the above at 24/88 with Audacity using the E-MU 0404USB. From previous tests, the E-MU functions very well at 2x sample rates (88 & 96kHz) with optimal dynamic range. Although not as good as the high precision oscilloscope used by Stereophile, this should be adequate to allow relative comparisons between different DACs. I used the analogue preamp on the E-MU to boost the signal by about 18dB to give me "more" amplitude to capture.
- As you can see above, I decided to plot the channels overlaid and inverted to compare precision of timing and amplitude.
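By the way, if you'd like to replicate the test signal yourself, here's a minimal sketch of one way to generate an equivalent undithered tone with Python/numpy. This isn't necessarily how the file above was made, and the output file name is just an example:

```python
# Generate a stereo, undithered 1.1025kHz sine at -90.31dBFS, 16/44.
import numpy as np
import wave

SAMPLE_RATE = 44100
FREQ = 1102.5                      # 1.1025 kHz = fs/40, so the tone repeats exactly every 40 samples
SECONDS = 10
AMPLITUDE = 10 ** (-90.31 / 20)    # -90.31 dBFS relative to digital full scale

t = np.arange(SAMPLE_RATE * SECONDS) / SAMPLE_RATE
left = AMPLITUDE * np.sin(2 * np.pi * FREQ * t)
right = -left                      # phase-inverted right channel, as in the plots above

# Quantize to 16 bits with NO dither - rounding leaves only the -1/0/+1 LSB "steps"
pcm = np.round(np.column_stack([left, right]) * 32767).astype(np.int16)

with wave.open("undithered_-90dBFS_1102Hz_16_44.wav", "wb") as f:
    f.setnchannels(2)
    f.setsampwidth(2)              # 2 bytes = 16 bits
    f.setframerate(SAMPLE_RATE)
    f.writeframes(pcm.tobytes())
```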

Here are the results of this test on the various DACs I have around here:

TEAC UD-501 [2x BB PCM1795 circa 2009] SHARP filter:
16-bit undithered:


24-bit:

Clearly the TEAC has no problem with reproducing that least significant bit in the 16-bit signal. Also, obviously the resolution has improved significantly by going to 24-bits.

ASUS XONAR Essence One [2x BB PCM1795 c. 2009] (opamps upgraded to all LM4562):

Very nice... Notice a wee bit of channel imbalance - the left channel (blue) seems consistently louder than the right. Same internal DAC chip as the TEAC so similar level of performance expected.

Logitech Squeezebox Transporter [AKM4396 c. 2004]:

Nice! Not bad for a discontinued device released in 2006 by a computer peripheral manufacturer, eh? ;-)
Of course, the Stereophile review demonstrated this nicely already...

Logitech Squeezebox Touch [AKM4420 c. 2007]:
WiFi (only 30% signal strength 2 floors up from router!):

Ethernet:

24-bit:
Three observations:
1. Clearly the Touch is noisier than the better DACs above. It's still capable of >16-bit dynamic range though.
2. Some DC shift is evident - look at the upward slope with the 24-bit sine wave and compare to the Transporter above. Maybe this could be improved with a better linear power supply than the stock switching wallwart I used... Not sure if an improvement would be audible however.
3. No substantial difference between WiFi and Ethernet. (No surprise; just thought I'd have a look to see if WiFi added much noise down at this level.)
N.B. Remember that this is still a pretty good result - we are looking at a waveform down at -90dBFS, or ~90 microvolts! Nice correlation with what Stereophile found (Figures 5 & 6) in terms of the Touch being a 'touch' more noisy than better DACs.

AUNE X1 Mark I [BB PCM1793 c. 2003] (using CM6631A USB-to-Coaxial S/PDIF, ASIO driver):

This is what can be achieved by a <$175 DAC off eBay direct from China these days (I bought this unit in early 2012). Notice that it's able to produce a cleaner analogue output than the Touch, but it's also not quite up to the standard of the TEAC, ASUS, or Transporter. Notice both a slight channel imbalance and mild amplitude fluctuations (again, possibly due to the cheap wallwart). Hopefully the following zoomed out screenshots illustrate this well for comparison:
 AUNE X1 - left channel (blue) noticeably louder and notice the amplitude fluctuations over time.

Touch - Notice it's noisier, with unpredictable amplitude spikes occurring in both right & left channels.

TEAC UD-501 - more stable, clean, uniform waveforms in comparison.

As much as it's great to see the level of performance afforded by DACs these days, I'm also very impressed by this old E-MU ADC! As I have stated before, one of the reasons I put up these posts is to demonstrate that it doesn't take megabuck equipment to test audio gear objectively. A lot can be "known" about the performance of a piece of hardware rather than depending only on subjective "opinion".

The other test I would categorize as the equivalent of "pixel peeping" is the Dunn jitter test, where we're "peeping" into a small part of the audio spectrum around the 11 or 12kHz primary signal and scanning for sideband anomalies. IMO, neither the jitter test nor this undithered LSB test is really that important for audio quality. Random noise affecting the 16-bit least significant bit would sound like some form of dither (eg. like what happens when an HDCD with embedded LSB data is decoded by a non-HDCD player). Likewise, my feeling is that even a "moderate" amount of S/PDIF jitter (say 1ns) isn't going to intrude into my listening pleasure. (Maybe one could make the argument that the details and nuance of sound/music reside in these microscopic domains, but I have yet to see any proof...)

Assuming the digital player/DAC is meant to be faithful to the source signal and doesn't implement a DSP known to affect the LSB data, being able to measure and verify precision down to these levels is, I believe, a reasonable prerequisite for achieving high fidelity. It's more a test of how well the hardware was designed and implemented than of how it necessarily "sounds". Just like knowing whether a hi-res digital camera is capable of the resolution it claims... You might never need 36 megapixels for a slideshow or in print, but it's good to know that the camera is capable of delivering on the claims! Likewise, if I'm going to spend a good amount of money on high-fidelity gear, I'd certainly like to know that precision engineering went into it by the results of tests like this among others already discussed over these months.

Let's throw some nostalgia in. Here's what the MUSE Mini TDA1543 x4 NOS DAC looks like down at -90dB (using the CM6631A USB-to-S/PDIF coaxial interface):

Party like it's 1991! Ugly... Clearly it's incapable of accurately reproducing the 16-bit LSB undithered tone.

Zoomed out (24/44) - still ugly:

Using a 24-bit signal makes no difference since this is a 16-bit DAC and the lower 8 bits get truncated. The Philips TDA1543 DAC chip was introduced back in 1991 according to the spec sheet... Thankfully, it looks like DAC designs have improved somewhat since then, at least in this characteristic :-).
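For the curious, here's a rough sketch of why the 24-bit file can't help a 16-bit converter down at this level (assuming plain truncation of the lower 8 bits, which may differ slightly from what the chip actually does):

```python
# The -90.31dBFS tone quantized at 24 bits traces out a +/-256-count sine,
# but dropping the lower 8 bits leaves only the same coarse +/-1 LSB steps
# as the 16-bit file. (Truncation rounds toward negative infinity, slightly
# different from the rounding used for the 16-bit file, but the point stands.)
import numpy as np

fs = 44100
t = np.arange(fs) / fs
x = 10 ** (-90.31 / 20) * np.sin(2 * np.pi * 1102.5 * t)

pcm16 = np.round(x * 32767).astype(np.int32)      # 16-bit, undithered
pcm24 = np.round(x * 8388607).astype(np.int32)    # 24-bit, undithered

print(np.unique(pcm16))        # [-1  0  1]
print(np.abs(pcm24).max())     # 256 counts: the sine shape is preserved at 24 bits
print(np.unique(pcm24 >> 8))   # [-1  0  1] again - the extra resolution is gone
```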

A stroll down memory lane... The top 3 highest grossing movies of 1991: Terminator 2, Robin Hood: Prince of Thieves, Disney's Beauty And The Beast. Top 3 songs (Billboard): (Everything I Do) I Do It For You, I Wanna Sex You Up, Gonna Make You Sweat (Everybody Dance Now). Hmmm...  Good year :-).

------------------------------------------------------------------

Folks, most of the measurements above were done about 3 weeks ago, but I'm putting this post up from behind the "Great Firewall" while on vacation (hey, a >10hr plane flight gave me plenty of time to do some writing!). I know a few people are trying to get hold of me by E-mail. Unfortunately my VPN + Outlook setup is a bit finicky, so other than more important work-related matters as I check the E-mail every few days, I will likely not be responding until mid-August.

Time to go enjoy some good food... And snap some pictures of course... :-)


BTW: For those interested in some light non-fiction summer reading, consider picking up Chuck Klosterman's "I Wear The Black Hat: Grappling With Villains". An enjoyable, thought provoking social commentary.

6 comments:

  1. Hi,

    Your comparison with digital photography reminds me of an article in a well established British HiFi magazine where a regular contributor did a comparison of USB cables. This was about 12-18 months ago, before the audio USB cable thing really took off. He printed out a photo two times in large scale using different USB cables. He then proceeded to analyse each photo at a pixel level, and lo and behold, he found some very small, but discernible differences (discernible only with careful scrutiny and under large magnification).
    For sure, the article was offered as a curiosity rather than an attempt at a meaningful objective experiment. No conclusions were offered, but the implications were all but clearly stated - USB cables appeared to affect the output.

    I thought it was interesting, not least because photography as a hobby is largely devoid of pseudo science. But then again, not all pixels are equal, ....

    Thanks for the great articles. Your musings amuse me greatly, and the effort you put into them is acknowledged and appreciated.

    Regards,

    Bob

    Replies
    1. Interesting comment Bob!

      Wow! I'd love to have a look at this article! Something is fishy with that... Could make a good rebuttal post if I can get my hands on it.

  2. How would one describe 'bokeh' in audio terms and would something similar be present in other domains ? electromechanical acoustical ? and how would one measure that as Bokeh is understood and debatable/provable.

    Funny thing with Bokeh is it doesn't change with different USB cables.... ;-)
    Well perhaps on the printer mentioned above with all its mechanical tolerances, resolution, repeatability and other factors aside from the only deliberate change (USB cable)

    Replies
    1. Ah... Those thousands of $$$$ spent on just the right lens with that special magical bokeh :-).

    2. Yep, I'm sure it was just an issue of consistency with the ink cartridges, the heads that apply the ink, paper, etc.

  3. This comment has been removed by the author.
