If you look back at my writings and tests over the last few years, you will see a number of instances where I did measurements using different operating systems, computer hardware, and different bitperfect software (for Windows, for Mac, "audiophile" JPLAY), looking at things like computer CPU load and jitter. In the AudioEngine D3 review, for example, I demonstrated that even with disparate computer hardware and operating systems like Windows 8.1, Windows Server 2012 R2, Mac OS X "Mountain Lion", and "Mavericks", I was not able to show measurable differences in noise level, dynamic range, distortion, or jitter with that DAC, which operates up to 24/96.
The results are not surprising really; as a computer audiophile in 2015, it's quite likely that one would be using a high-quality asynchronous USB DAC, thus avoiding issues like the potential for significant jitter due to timing problems in the interface (more below). Yet I still get questions about whether different OSes make a difference, and there are programs out there like Fidelizer (check out this review, or this one) and even the well-publicized Audiophile Optimizer (through reviews like this one) claiming improvements; strangely, none of those reviews bothered to tell us exactly what DAC was used!
Could the OS and "optimizer" programs like these affect the output through the OS audio stack (e.g. Windows Mixer) or the motherboard headphone jack? Sure... I guess it's quite possible, since process scheduling, mixing, and dithering algorithms could change. But seriously, what audiophile interested in high-fidelity output, with dollars invested in a good digital source/pre-amp/amp/speakers/headphones, would be listening to motherboard audio or DirectSound through the Windows audio stack? Not many, I bet. As usual, despite the testimony from various places about how Windows 10 has improved sound, I believe we have yet to see any evidence that the sonic output of a decent modern asynchronous USB DAC using a bitperfect driver like ASIO or WASAPI changes with an OS upgrade (or the use of OS optimizations).
First, let's talk about Windows 10. Not only is this the "latest and greatest" Windows, there is the potential that this is "the last" Windows version for the foreseeable future. Maybe the situation will be similar to "OS X" in the Mac world, where that moniker has stuck since 2001. Windows 10 does add a very useful feature - multiple desktops - which I use regularly now on my main workstation. The "Start Menu" is back, replacing that un-desktop-friendly "Start Screen", which makes locating apps and installed programs much easier. Beyond that, it really hasn't changed the way I use my machines; I have not tried Cortana, and the new Edge browser shows promise but does not yet match the features of Firefox or Chrome.
Since I'm actually doing the upgrade, what better time than now to measure whether Windows 10 will change the sound of my DAC (the TEAC UD-501) connected to the HTPC machine before and after the upgrade?
I. The Hardware & Software
The test computer is the following (similar to the previous HTPC build article with slight changes):
Enclosure: Bitfenix Prodigy M microATX
Power supply: fanless SeaSonic SS-400FL2 400W
Motherboard: ASUS B85M-E/CSM - has HDMI with limited 4K (30Hz) capability
CPU: Intel Pentium G3220 (dual core, 3GHz, only 54W TDP; Haswell graphics features but slow 3D) - underclocked to 2.8GHz with a slight undervolt
CPU Cooler: CoolerMaster Hyper 212 Plus (total overkill but running fanless!)
SSD: SanDisk Ultra II 240GB SSD
RAM: 8GB Kingston DDR3 1600 (note the G3220 CPU will underclock this to 1333)
Normally I use the Corning Optical USB3 cable with this computer, for the reasons previously reported. To remove another variable for these tests, I switched back to a generic good-quality 12' shielded USB cable.
|The humble Pentium fanless HTPC. Generic (silver) USB2 cable shown.|
Windows 8.1 Pro (latest updates as of August 10, 2015) will be upgraded to Windows 10 Pro (with all updates as of August 11, 2015).
A couple of other tweaks thrown in the Windows 10 tests to explore software optimizations:
- Fidelizer 6.8 Demo + foobar
- Fidelizer + JPLAY 6.2 Demo using foobar
I trust these combinations will demonstrate enough about whether the Windows 8.1 to 10 upgrade changes the sound of a typical modern USB DAC. Furthermore, let's see if OS optimization with Fidelizer makes a difference, and now that it has been 2 years since I last looked at JPLAY, whether anything has changed... Note that because there are many potential settings in programs like JPLAY and Fidelizer (which of course appeals to the tweaking demographic), I'll measure a typical recommended setting to see if there is any evidence of a difference.
Set-up is as follows:
HTPC (Win 8.1/10) --> shielded 12' USB cable --> TEAC UD-501 DAC --> shielded RCA --> E-MU 0404USB ADC --> shielded 6' USB --> Win 8.1 measurement laptop

All test audio files were situated on the computer's SSD except where indicated for "streaming" over ethernet from my Windows Server 2012 R2 machine situated about 50 feet away (probably ~70' of ethernet cabling snaking through the walls and corners) in an adjoining room, with 2 inexpensive gigabit switches in between (D-Link DGS-1016A located by the ethernet central patch and TP-Link TL-SG1008D in the soundroom about 12' from the computer). All tests were performed with bitperfect ASIO drivers for both playback and recording unless otherwise indicated.
|System running DMAC test (see below) - Win 8.1 still - note TV screen with Win 10 advert bottom right.|
|And we're off with the upgrade...|
II. Windows 8.1 vs. Windows 10
First, let's start with a look at the OS change itself. As usual, here's the RightMark 6.4.1 Pro summary; I will just measure "high-resolution" 24/96 performance since 16/44 is no challenge for most DACs these days:
No difference at all...
Any evidence then of jitter differences?
Slight variation at the level of the noise floor. Nothing of significance IMO. As expected with the low noise TEAC DAC, it's very easy to see the jitter modulation LSB component in the 16-bit signal.
III. Let's add Fidelizer 6.8 Demo
|59 --> 51 processes. And 893 --> 705 threads after "Fidelization"... Ok, cool...|
IV. Let's Add JPLAY 6.2 Trial on top of Fidelizer and Windows 10!
Okay, let's now get truly "audiophile"! What if we installed JPLAY ("Optimize JPLAY for single PC setup" checked during installation) and tried it with some pretty "strong" settings with Windows 10 and Fidelizer activated?
|Kernel Streaming, "ULTRAstream" engine whatever that means, and DAC Link down to "~2.5Hz" from the default of "~10Hz"... Sounds fancy...|
Oh boy, we have a problem... Recall that in my testing back in June 2013, I noted JPLAY seemed to have a problem with Kernel Streaming at 24/48? It doesn't appear to handle the 24 bits properly; instead we end up with around 17-bit resolution at best... Well, guess what - I suppose nobody told the developers about this issue! I double-checked, and it looks like if I switch JPLAY to the ASIO driver this problem goes away. Here are the measurements with the "Classic" engine using ASIO compared to the "ULTRAstream" engine with Kernel Streaming:
Notice the worsened noise level with Kernel Streaming compared to ASIO. And here is the noise profile:
Considering we have gone from version 5.1 in June 2013 to 6.2 today, let's just say it's a bit "disappointing" that 24/48 is still not being managed correctly by a piece of software billed as "One player to rule them all: enhance sound quality of your player"! I fail to see how any of this can be construed as "enhancing" sound quality. And I'm of course sure many "golden ears" have been deprived of hi-res 48kHz audio enjoyment because of this bug over the last few years :-).
[BTW, my guess is that JPLAY flipped the "endianness" of the lowest 8 bits in a 24-bit sample to cause that anomaly on playback.]
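If a guess along those lines is right, the magnitude of the damage is easy to estimate. Here is a quick sketch (purely my own illustration, not JPLAY's actual code - I model the anomaly as scrambling the lowest byte of each 24-bit sample by reversing its bit order) that computes the resulting error floor relative to full scale. The corruption of the bottom 8 bits lands the effective resolution around 16 bits, in the same ballpark as the ~17-bit measurement above:

```python
import math
import random

def bit_reverse8(b):
    """Reverse the bit order of a single byte (one way a low byte could be scrambled)."""
    return int(f"{b:08b}"[::-1], 2)

def effective_bits(n=100_000, seed=1):
    """Corrupt the low byte of random 24-bit samples and estimate the
    effective resolution from the error power relative to full scale."""
    rng = random.Random(seed)
    full_scale = 2 ** 23
    err_pow = 0.0
    for _ in range(n):
        s = rng.randrange(-full_scale, full_scale)       # random 24-bit sample
        corrupted = (s & ~0xFF) | bit_reverse8(s & 0xFF) # scramble bottom byte only
        err_pow += (corrupted - s) ** 2
    snr_db = 10 * math.log10(full_scale ** 2 / (err_pow / n))
    return (snr_db - 1.76) / 6.02                        # standard SNR-to-bits rule

print(round(effective_bits(), 1))   # roughly 16 effective bits
```

Any corruption confined to the lowest byte produces a similar result regardless of the exact scrambling, since the error energy is bounded by the bottom 8 bits.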
V. Music Samples with Audio DiffMaker
Up to this point, the results are from synthetic test tones meant to explore the resolution of the audio output. One criticism I've heard about objective testing in general is that "nobody listens to test tones!" To address this concern, I will bring back the Audio DiffMaker protocol that I put together 2 years ago. The test consists of segments of 4 actual pieces of music spliced together as a composite lasting ~30 seconds (which I call the DMAC - "DiffMaker Audio Composite"). I played the DMAC audio file under each of the conditions through the DAC and recorded it with the ADC. The computer then calculates, based on what was "heard", the "null depth" - a value reflecting how much similarity there is between the "reference" and "comparator" recordings. Any change - frequency response changes, distortion, noise, severe timing issues - will be accounted for in this test! The greater the null depth, the "more alike" the two sound. Realize that with analogue output, it's impossible to be 100% the same... For example, as the DAC and ADC heat up, small variations happen, including "drift" in the samplerate; this is why many high-end DACs these days advertise TCXOs (Temperature Compensated Crystal Oscillators) to reduce the phenomenon. (While technically this is good, especially in studio settings where synchronization is important, the actual value in terms of audibility in home playback is, I suspect, questionable.)
Using my procedure and the E-MU 0404USB ADC, I can reliably achieve null depths of around 80-90dB with good inter-test reliability (average 80.2dB, standard deviation 3.9 based on previous work) for bitperfect lossless audio. Much of this depends, of course, on the ability of the DAC and ADC to play back and record with high fidelity. With 320kbps MP3 (well encoded with LAME 3.99), the null depth drops to ~65dB, telling me that the measuring device can "hear" approximately 15-20dB of difference between the lossy compressed audio and the original lossless source averaged over 30 seconds. Considering that humans are rarely capable of differentiating 320kbps MP3 consistently (and usually only with specific segments of difficult-to-encode audio samples) and echoic memory lasts <5 seconds, digital differencing test paradigms like this are IMO certainly better than the best human ability to recognize a difference between music samples.
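For readers who want the intuition in code, here is a minimal sketch of the null-depth idea (my own simplification - DiffMaker also performs level and time alignment, which I omit here): subtract the comparator from the reference and express the residual's RMS relative to the reference's RMS in dB.

```python
import math
import random

def null_depth_db(reference, comparator):
    """Null depth in dB: how far below the reference the residual (difference) sits.
    Assumes the two recordings are already time- and level-aligned."""
    def rms(x):
        return math.sqrt(sum(s * s for s in x) / len(x))
    residual = [a - b for a, b in zip(reference, comparator)]
    return 20 * math.log10(rms(reference) / rms(residual))

# Toy check: a 440Hz tone vs. the same tone with a tiny amount of added noise
rng = random.Random(0)
ref = [math.sin(2 * math.pi * 440 * t / 44100) for t in range(44100)]
noisy = [s + rng.uniform(-1e-4, 1e-4) for s in ref]
print(round(null_depth_db(ref, noisy)))   # a deep null, roughly 80dB
```

The deeper the null, the less residual energy there is once the two recordings are subtracted, which is exactly what "more alike" means in this protocol.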
Let's see then whether the machine can hear a difference between the different OSes, "optimizations", and playback conditions. All recordings were done at 24/96 (the original lossless DMAC audio is 24/44). The "reference" recording is Windows 8.1 using foobar, ASIO driver, default settings. Each condition was measured 3 times (except the MP3 conditions, measured only 2 times as a sensitivity check). Each data point you see is the average of the two channels' calculated null depths.
As you can see there are quite a number of conditions shown in the graph. "Win 8.1" and "Win 10" are just the standard playback results using foobar, ASIO driver, default settings off the local SSD storage. The "Stream" conditions are when I play the audio off the network through my Windows Server 2012 R2 machine 50' away over gigabit ethernet (just standard SMB file sharing). The "MP3" conditions are for "sensitivity testing" to see if MP3 "sounds" consistently of lower accuracy to the machine (they obviously do!).
The bottom line is that Win 8.1, Win 10, ethernet playback, Fidelizer, and JPLAY made no difference as far as I can tell in terms of what is likely to be audible. The variation is well within inter-test differences and generally stays around an 80dB null depth over the 30 seconds of the music composite for every condition except 320kbps MP3 (where it sits around the expected 65dB area). Notice that the chart is arranged in temporal sequence; that is, the Win 8.1 tests were done first, then Win 10, and the last to be done were the Fidelizer and JPLAY tests. Over time, as the equipment warmed up, drift appeared worse at first but then settled down. The biggest gap was between Win 8.1 and Win 10, during which time I let the machine spend a few hours downloading the OS, installing, etc. before I came back to run the tests. In retrospect, I probably should have given the system at least a couple of hours to warm up (instead of measuring in the 1st hour) - that might have made the Windows 8.1 data a little more consistent with the others, but the conclusion would be the same.
VI. Conclusions
I hope these experiments and results add to the discussion around operating systems and sound quality, using a "modern" DAC and standard computer source as a case study.
The bottom line is that I am unable to find evidence that OS changes make any difference to the analogue audio output from a modern asynchronous USB DAC, despite having already heard comments that somehow upgrading to Windows 10 makes things sound "better". Likewise, I see no evidence that software like Fidelizer does anything to the sound output, even though I can show that it "worked" to reduce the number of processes and threads of execution in the OS. Finally, and yet again, JPLAY demonstrates no clear ability to improve the sonic output, and in fact (again, 2 years after the last test) I see problems with Kernel Streaming in playing back 24/48 accurately. This is obviously worse than "no effect" in that it actually deteriorates the quality of playback. I seriously wonder how these developers test their software and what they're actually doing! (Beyond marketing and dreaming up terminology like "ULTRAstream", for example.)
I see that Audiophile Optimizer has a bit of a following, but there is no free trial available for download (you can contact them about some kind of trial version / 14-day policy). I certainly encourage others to perform an objective assessment of whether claims about Windows Server 2012 R2 and software optimization can make a difference (already with the AudioEngine D3 tests last year, I found nothing special with Server 2012). As usual, the developer does not provide any objective evidence, only testimony. Given the results here, I think it's fair to remain skeptical of the claims, and I would strongly suggest some kind of "try before you buy".
As I mentioned in the intro, Windows 10 itself brings us back to a more useful desktop computing experience. So far I like it. I have thus far upgraded 4 machines - 2 for work, the HTPC, and my main 4K home workstation. For the most part, the upgrades went smoothly. Ironically, the most trouble I have had so far is with Microsoft's own Surface Pro 3! I kept getting stuck at 18% during the install and discovered it had to do with MBR (Master Boot Record) issues. I found this link and followed the instructions by funnyfarm299 to run "bootrec /rebuildbcd" before it finally installed. I actually still have some issues with the Surface's WiFi connecting sporadically and at times getting kicked off the network - I hope MS fixes this ASAP; I think it has something to do with VPN software like Cisco AnyConnect in some situations. The only other issue was that a couple of drivers and programs did not migrate properly. For example, I had to manually direct Windows 10 to look in the "C:\Program Files\TEAC" directory to find the 64-bit DAC drivers after the OS update. JRiver 20 also needed a reinstall.
Anyhow, enjoy Windows 10 if you decide to install. "Win X" could stay with us awhile! And that's probably not a bad thing... I have been using Windows 10 with my other computers over the last few weeks and have not noticed any sonic difference to speak of among the various machines and different DACs (like the ASUS Essence One listening with headphones).
Disagree with my conclusions? Please experiment and if there's data to suggest I am in error, please leave me a comment and links! Thanks.
Since I will be away on vacation for a few weeks, I want to end off with a couple of "musings" and comments I have "heard" on line.
1. Let's talk about the phrase "bits is bits". If it is true that "bits are NOT bits", then what is it that is being transmitted? The standard answer is: bits (data) + timing. This is the reason jitter has played such a big part in digital audiophile discussions. Jitter refers to the slight timing inaccuracies in the electrical signal which may affect the arrival time of the data (usually from the computer/transport to the DAC). And yes, I agree - so maybe that means I'm not a "bits is bits" kind of guy after all?
However, technology is evolving and improving. And while timing is a factor, the amount of timing irregularity in a typical USB data transfer has limited effect on good modern DACs. Asynchronous protocols have decoupled data transfer from conversion timing to the point where, for all practical purposes, precise timing of the transfer from computer to DAC is no longer a significant contributor to the final analogue output quality (we're not talking about extreme data under-run situations of course). An asynchronous USB DAC can tell the computer to start and stop transmission based on its need to fill the buffer. The buffered data is then fed to the DA converters synchronized with the precise internal clock (not the USB clock). This is a significant advance over the "old" days of SPDIF clock recovery (see what happens with TosLink loopback here) and isochronous USB (see further tech details here); it was not difficult to show these timing anomalies with the Dunn J-Test, but these days, decent asynch USB DACs are essentially immune. Jitter anomalies then become a function of the DAC hardware itself. If we see bad jitter with an asynchronous USB DAC, the problem resides in the DAC, and unless we're dealing with driver/firmware bugs, it's time to buy better hardware... Forget fancy cables making a difference; likewise, OS upgrades, optimizations, and "audiophile" software are unlikely to "fix" anything. In fact, I would say that installing these "tweaks" is more likely to cause functional issues or introduce bugs, as we have seen.
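To make the decoupling concrete, here is a toy simulation (my own illustration - the function name and all the numbers are made up, not any real DAC's parameters): the DAC drains its buffer at its own fixed clock and asks the host for a packet whenever the buffer dips below a target, while the host's delivery latency is random. As long as the buffer never underruns, output timing is set entirely by the DAC's clock and the bus jitter never reaches the analogue side.

```python
import random

def simulate_async_dac(ticks=96_000, packet=96, target=480, max_delay=48, seed=0):
    """Count buffer underruns in a toy asynchronous USB transfer model.
    Each tick is one sample period of the DAC's own clock."""
    rng = random.Random(seed)
    buf, in_flight, underruns = target, None, 0
    for _ in range(ticks):
        if buf == 0:
            underruns += 1            # DAC has nothing to play this tick
        else:
            buf -= 1                  # one sample out, paced by the DAC clock
        if in_flight is None and buf < target:
            in_flight = rng.randint(1, max_delay)   # jittery host/bus latency
        elif in_flight is not None:
            in_flight -= 1
            if in_flight == 0:
                buf += packet         # packet lands; refill the buffer
                in_flight = None
    return underruns

print(simulate_async_dac())                 # healthy margins: 0 underruns
print(simulate_async_dac(max_delay=2000))   # pathological latency: underruns appear
```

With sane margins the random delivery times are completely absorbed by the buffer; only when latency grossly exceeds what the buffer can cover (the second call) does timing become audible as dropouts - which is the "extreme under-run" caveat above, not jitter in the usual sense.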
I would not be surprised to see, in the days ahead, discussion around jitter dissipate significantly, with more focus on parameters like noise and power-related phenomena. Not that these are necessarily problems in most circumstances; just that worries about audible jitter have run their course...
Now if the "bits are NOT bits" folks have any theories other than the one above, I welcome the opportunity to "hear" about them; even better, providing evidence would open up dialogue and improve our overall understanding.
2. I believe I have been accused by some (subjectivists) of trying to "prove everything sounds the same" these last few years. Recognize that many of my posts have shown changes and differences - from jitter anomalies, to questionable performance in certain DACs/ADCs (e.g. the Dragonfly, Tascam UH-7000), to room measurements... Sometimes very much unexpectedly (like the JPLAY KS 24/48 issue above).
Sure, I'm trying to prove/disprove various ideas/claims, but mainly for myself, out of curiosity, so that I may understand and explain when asked. I write because I've enjoyed this hobby for decades and, in the pursuit of "high fidelity", had reached a point where intellectual consistency and (I believe) honesty compelled me to experiment with and explore beliefs which can be empirically verified. (Yes, I've been there over the years with claims like green CD rim pens, Belt Rainbow Foils, freezing CDs, etc...) I remember talking to other audiophiles over the years, and there were so many questions, yet so few seemed to have any evidence to support what on many occasions appeared to be fanciful beliefs! In the process of writing, I have thought about the philosophical "boundaries" between the subjective and objective: the differentiation of statements and beliefs born of the "magisterium" of art (impressions, feelings, aesthetics, preference) from those which belong to science (design based on engineering principles, electrical energy, and sound waves governed by the laws of physics). Both have value, but they are not always concurrently applicable to the question at hand, and they differ in generalizability. It has certainly been enjoyable sharing these findings and ideas.
My hope has always been to demonstrate that even a "guy in his basement", without the need for thousands of dollars in audio analysers, can figure out for himself what is fact and what is hype. (Of course, a big thanks goes to all the guys/gals for the discussions, tips, and ideas to try along the way!) The fact that I cannot find differences some of the time is not because I'm "trying" to prove that none exist. Surely one must accept the possibility that there perhaps never was a difference to begin with, given the many dubious claims! Objective results may not correlate with things like level of enjoyment or personal satisfaction, but they can show what is factual, the magnitude of an effect, and fidelity to an ideal. When I put money down to buy audio products, I do so with the expectation that the claims made correlate with real benefits based on applied science (quality engineering); I don't know if anyone would openly accept that they buy audio gear knowing that they're after a placebo, spiritual, or magical outcome. In writing about my experiments in audio, I hope to add to open discussion and raise the level of discourse and knowledge among audiophiles. It is a shame that the "professional" audiophile press and spokespersons do not seem interested in answering basic questions like the topic of this post, nor in dispelling myths... Sadly, at times they even seem to be actively promoting bizarre beliefs and methods (like articles over the years by these authors, which got even more horrifying by the second article in the series).
As I said, I'll be on vacation to catch up on other interests for the next few weeks. Along the way, I'm sure to spend some good "quality" time with beloved albums in front of the sound system as well :-).
Enjoy the summer, enjoy the music everyone...