Something I have noticed over the years commenting on the audiophile hobby is how persistent certain themes tend to be. Just like the apparently never-ending arguments of "digital vs. analogue/vinyl", or "CD vs. hi-res", or "subjective vs. objective", there has been this mostly friendly banter between those who feel that essentially "bits are bits" vs. those who think there is significantly more to digital transmission than bit-accuracy.
Recently, seeing the article "Why the 'Bits is Bits' Argument Utterly Misses the Point" published by Upscale Audio compelled me to write this post to explore the topic further with a review of measurements and some demo tracks for readers to listen to themselves. I don't know how long the Upscale article has been on the site since there's no date or author listed; I was only made aware of it through the Darko.Audio Facebook page (it seems Mr. Darko felt the article was accurate, really?).
While the article claims that some people have "missed the point", let us examine their points and see if perhaps it might be the author(s) who are a bit too aggressive in making these arguments. After all, it is 2019, with decades of development in digital technology that impact our lives in more sophisticated ways than just audio reproduction. It's hard to imagine there are huge lacunae in our knowledge of digital communications and digital-to-analogue conversion of audio frequencies.
0. A preamble about digital...
First, remember why digital is "good". Digital data consists of discontinuous representations of information quantized as either "0" or "1" which we call the "bit". With higher speeds over the years and more "0"s and "1"s available, we can represent ever more complex information, from ultra-high resolution audio and video to bewildering amounts of "big data".

Since each bit is either "0" or "1", mechanisms are in place to ensure integrity, and protocols allow excellent accuracy ("perfect" even) for data transmission. The electrical signal for "1" is significantly different from "0", thus making it comparatively easy to transmit the data without guesswork when the bits are transported. For example, USB uses a differential voltage pair of wires with "1" being when the D+ line is ≥200mV higher than D-, and the converse, digital "0", being when D- is ≥200mV higher than D+ at the receiver end. That differential voltage of ~400mV between D+ and D- is not a trivial amount and provides good signal-to-noise ratio such that these days we can quite easily enjoy high-integrity, multi-megabyte-per-second data transfers with commodity consumer devices, even at very low cost.
[There is more complexity to the USB situation around modes of operation and speed identification, etc. especially with newer iterations. No point getting bogged down with this stuff; you can read some more here if interested.]
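To make the thresholding idea concrete, here's a toy Python sketch (purely illustrative; real USB involves NRZI encoding, bit stuffing, CRCs and more) of how a receiver can recover clean bits from noisy "analogue" voltages:

```python
import random

random.seed(1)

def slice_bits(d_plus, d_minus):
    # A receiver decides each bit from the sign of the differential voltage;
    # the spec's >=200mV requirement guarantees a healthy margin around
    # that decision point.
    return [1 if vp > vm else 0 for vp, vm in zip(d_plus, d_minus)]

bits = [1, 0, 1, 1, 0, 0, 1, 0]
# Nominal ~400mV differential swing plus 50mV RMS of additive noise:
d_plus = [(0.4 if b else 0.0) + random.gauss(0, 0.05) for b in bits]
d_minus = [(0.0 if b else 0.4) + random.gauss(0, 0.05) for b in bits]

assert slice_bits(d_plus, d_minus) == bits  # bits recovered exactly
```

Even with a rather pessimistic 50mV of noise riding on each line, the decoded bits come out exactly as sent. This is the essential "magic" of digital: the analogue imperfections get discarded at every hop.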
Remember that years ago there were some tests of USB cables using eye diagrams in Hi-Fi News & Record Review. These "eye diagrams" can tell us all kinds of information like signal-to-noise, jitter, rise time, bit period, etc. and whether all of these parameters over thousands if not millions of captured samples measure up to the specifications to ensure that bit errors are within the acceptable tolerances. Not just USB, but all digital interfaces (SATA, ethernet, S/PDIF, PCIe...) can go through these kinds of checks.
Sample USB eye diagram showing the ~400mV differential, 2-level Pulse Amplitude Modulation (PAM2).
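For the curious, here's a rough idea in Python of what an eye-diagram measurement boils down to (a simplified sketch with synthetic data, not how a real scope or compliance suite is implemented): overlay the waveform one bit-period at a time, then measure how "open" the eye remains at the decision point.

```python
import numpy as np

def eye_height(samples, samples_per_bit):
    """Crude eye-height estimate: sample the waveform at the centre of every
    bit period and measure the gap between the lowest 'high' and the highest
    'low'. A compliance mask test is a fancier version of this idea."""
    mids = samples[samples_per_bit // 2::samples_per_bit]
    highs, lows = mids[mids > 0], mids[mids <= 0]
    return highs.min() - lows.max()

# Synthetic 2-level (PAM2) signal: +/-0.2V with finite rise time and noise.
rng = np.random.default_rng(0)
spb = 20                                      # samples per bit period
levels = rng.integers(0, 2, 500) * 0.4 - 0.2  # random bits -> +/-0.2V
ideal = np.repeat(levels, spb)
noisy = np.convolve(ideal, np.ones(5) / 5, "same") + rng.normal(0, 0.02, ideal.size)

print(f"Eye height ~{eye_height(noisy, spb):.2f} V")  # still wide open
```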
Although USB data transfer consists of just 2 differential voltage levels, the article makes reference to more complex systems like Gigabit Ethernet with 5 levels (PAM5), and 10GbE which I wrote about last year with 16 levels (PAM16, referred to specifically in the Upscale article). They have even funkier eye diagrams than USB:
Of course this is complex, but the nature of technology is such that over the years, complexity becomes commonplace and easier to achieve with commodity components. As the signals get more complicated and speeds go faster, yes, things like cable specifications do need to be better. As long as compliance testing has been performed and the cable achieves the specified parameters, it's fine. Even if expensive to begin with, the "magic" of market forces including economy of scale eventually makes technology affordable.
Note that not all digital communications implement error correction. For example, if you're playing a video game online and transmitting using the UDP protocol over ethernet, you just want fast "realtime" data transfer with minimal latency to the server, and the occasional data error likely isn't going to be critical for gameplay. Back in 2016, I explored this with the ODROID-C2 machine, looking at ethernet errors using UDP, which were still very low without error correction across a home network. By the way, these days, other custom protocols can be implemented over UDP so that the server/client can be smart enough to selectively control which data is important and which packets with errors can be ignored.
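As a quick illustration of how "fire and forget" UDP is, here's a minimal Python sketch (the address and payload are made up for the example). Nothing below ever waits for an acknowledgement; detecting a gap is the most a receiver can do unless an application-level protocol on top asks for a resend:

```python
import socket

ADDR = ("127.0.0.1", 5005)  # arbitrary address for this example

def send(n=100):
    """Fire-and-forget sender: no handshake, no ACKs, no retransmission."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for seq in range(n):
        sock.sendto(seq.to_bytes(4, "big") + b" payload", ADDR)
    sock.close()

def receive():
    """The receiver can *detect* loss via sequence numbers, but recovering
    lost data would be up to a smarter protocol layered on top of UDP."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(ADDR)
    sock.settimeout(1.0)
    expected = 0
    try:
        while True:
            data, _ = sock.recvfrom(2048)
            seq = int.from_bytes(data[:4], "big")
            if seq != expected:
                print(f"Gap detected: expected packet {expected}, got {seq}")
            expected = seq + 1
    except socket.timeout:
        sock.close()
```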
For audiophiles, other than ethernet with TCP transfers, most digital audio transmissions actually have no error correction (even if errors are detected). The old S/PDIF interface (coaxial, TosLink) did not have error correction since it was a unidirectional flow from source to receiver, and USB data transfers to your DAC using today's asynchronous protocols do not implement error correction either (even when an error is detected, your DAC will not ask the computer/streamer to resend an erroneous data block). Like playing the video game online and not noticing the occasional error, the vast majority of the time you won't hear an issue with the audio stream either. However, when data errors are sufficiently numerous, as I demonstrated here with a very poor cable, you will notice the problem from non-bit-perfect data transmission.
I hope everyone is in agreement that digital therefore provides an exceptional level of error-free storage and data transfer; a level of "perfection" if you will. "Bit-perfect" copies of music and "bit-perfect" transmission of this data to one's DAC is simply to be expected and normal. These days, "non-bit-perfect" transmission to a DAC likely suggests malfunctioning hardware/drivers, unintended software settings (like running audio through an OS software mixer), or intentional data manipulation (eg. EQ or DSP).
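If you want to verify "bit-perfect" for yourself, one simple approach is to hash the PCM payloads of the source file and a digital capture of your playback chain (eg. a digital loopback recording, trimmed and sample-aligned first). A minimal sketch, with hypothetical file names:

```python
import hashlib
import wave

def pcm_sha1(path):
    """SHA-1 of just the PCM sample data, ignoring WAV headers/metadata."""
    with wave.open(path, "rb") as w:
        return hashlib.sha1(w.readframes(w.getnframes())).hexdigest()

# Identical hashes mean every single bit survived the trip through the chain.
print(pcm_sha1("source.wav") == pcm_sha1("digital_capture.wav"))
```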
With this preamble, let's get back to that "Bits Are Not Bits" Upscale article and consider why there are major issues with what they're saying / implying. "Bits are not bits" proponents generally agree that the signal remains "bit-perfect" but feel that other anomalies still significantly affect the sound.
I. Noise: "Digital is actually analogue" exaggeration...
Yes, the signal that represents the digital data does electrically manifest as an "analogue" waveform. Of course it does, and indeed the rising and falling edges of the signal are not perfect square waves; we can easily see this in those eye diagrams above. However, the reality is that transfer techniques can tolerate imperfections with significant margin such that ultimately the data remains error-free; the digital data remains "perfect".

This point about "there's no such thing as digital" was somehow made into a big deal, as I recall, by John Swenson back in 2013 in this interview. To this day, I still see the occasional reference to the idea as if this was some remarkable revelation. He said in that article (bold emphasis mine):
At a first glance let’s look at a voltage on a wire. It can have many different voltage levels on that wire, 0, 10, 300, -2.75, 13765.45 etc. This is the infamous “analog” realm, the voltage on that wire can be pretty much anything.
What digital does is quantize those values and say “anything below a certain value (the threshold) is low and anything above the threshold is high”. This is the fiction part.

While most of what he says is true, it's that little twist at the end suggesting "something's not right" that is problematic. Notice this is often how people start conspiracy theories ;-). So what exactly is fictitious?
Notice how in the article, from that questionable statement on, Mr. Swenson wanders into speculative territory suggesting that "noise" can now somehow magnify and cause problems, yet still below thresholds so the digital data remain error-free. He then proceeds with dragging in concerns about the ground plane and yet more noise that might or might not be problematic, resistance and inductance that might or might not be issues, then he throws in capacitance effects, and the "return current", and so on and so forth... Sure, maybe there are some really terrible and noisy USB DACs out there. But all of this piling on of speculation, without actual reference to specific devices or the magnitude of the issues he's talking about, does nothing but overwhelm the reader, thus directly sowing the seeds of fear and uncertainty! It's a mess of an article with no context provided, yet some writers in the audiophile community seem to blindly hold these speculations up as the words of some kind of enlightened, "brilliant" designer.
By 2015, much of this speculation made its way into the "one-port USB hub" known as the UpTone USB Regen, and more speculation can be found as "Swenson Explains". Despite lots of words, that page still provides essentially no details as to the magnitude of the problem he's trying to "fix", nor does he show evidence that the device even did anything to the sound from a USB DAC. Others examining the device and later variations apparently could show no improvement, or perhaps even slightly higher noise (yet still garnering positive subjective testimony).
You see... The problem here is not necessarily that the "bits are bits" people (like myself) deny the existence of noise in digital circuitry, or claim that the electrical waveforms are perfectly square. Rather, there has been no evidence for a long time now that what he describes should be of any concern to audiophiles using reasonable digital gear!
For example, over the years of listening and testing, I have never been able to show that the noise level was high with simple "USB-stick" DACs like the AudioEngine D3, GeekOut V2, SMSL iDEA, or even the AudioQuest Dragonflys (v1.2, Red, and the disappointingly-performing Cobalt from a few weeks back) despite their proximity to and use of a computer USB port for power and data. Furthermore, as I showed in late 2018, using even a USB TEAC UD-501 DAC from 2013, there was no severe worsening in noise from the DAC output whether I fed it with a low-power, "quieter", battery-powered Raspberry Pi 3 B+ or a mains-powered Intel i7 computer with nVidia GTX 1080 GPU, from audible frequencies up to 192kHz. At most, when the power-hungry CPU + GPU were running at 100% (who does this while listening to music?), only a few tiny noise anomalies could be seen! Here are a couple of previously published graphs to demonstrate this point:
As far as I can tell, there is no need for esoteric "audiophile" equipment or the USB Regen as filter / signal "regenerator" since noise levels are already excellent. Can John Swenson show us where it is he sees problematic noise in his digital systems (USB or otherwise)? Perhaps give us an example after all these years where the USB Regen actually improved the performance of even an inexpensive DAC?
So what about "noisy" network switches and the like, also mentioned in that article? As shown years ago, on a relatively complex home network like mine (you can see the network configuration with various devices here), ethernet noise with multiple switches and computers on the network remains minuscule - here's an example using a battery-powered Raspberry Pi 3 streamer with DAC:
(Don't worry about that 37kHz noise... It's a limitation of the Focusrite Forte, not to do with ethernet noise.)
Again, if there really is a problem, why doesn't the Upscale Audio article writer(s) demonstrate examples where digital cables show the problematic "antenna effect" that they claim? While the idea sounds fine, where is there evidence that more "shielding against gigahertz transmissions" will improve a typical DAC or streamer's output? Could it be that the engineers who produce modern DACs already know about these issues and have figured this out years ago?
BTW, speaking of blurred "digital as analog" ideas, remember that others in the audio media have made similar claims/comments such as UHF Magazine stating that "the CD can be said to be an analog disc" back in 2014. Yikes. Thankfully, I think audiophile magazine writing has improved somewhat since then and I haven't seen such types of comments in a while.
II. Timing: The jitter exaggeration...
Let's not spend too much time here since measurement after measurement of jitter with modern asynchronous USB DACs has demonstrated excellent performance these days. Ditto with ethernet-based DACs. Yet, jitter remains some kind of perennial beast that must be tamed in the eyes of various audiophile writers, magazines, and of course certain companies!

While jitter was worse back in the days of synchronous USB and with older S/PDIF equipment (much improved these days), there is no evidence that a cable is able to change jitter performance - it cannot. Jitter is a function of the sending and receiving devices themselves when using reasonable-length cables adhering to standard specifications. For synchronous USB, S/PDIF, and HDMI, the sending device's clock accuracy will affect jitter performance, while with asynchronous USB and ethernet, the governing clock is in the DAC itself.
Again, have a listen to the simulated jitter samples I posted last year. As you can hear for yourself, jitter has to be at massive levels before it can be heard. Unless you truly own terrible (or faulty) audio equipment, I doubt jitter will ever be audible. I would love to see results of a blind test with typical picosecond jitter showing otherwise.
III. Timing: The drift exaggeration...
This is mentioned in the Upscale Audio article a few times briefly. Drift refers to the idea that clocks do not keep perfect pace. True, clock oscillators are not perfect and will function at slightly different speeds. Again, like jitter, let's not get too concerned - why? Because we're still talking about tiny parts-per-million (ppm) levels of inaccuracy!

Have a look at this page "Clock accuracy in ppm" for an interesting overview of clock precision between typical crystals, TCXO ("Temperature Compensated"), OCXO ("Oven Controlled"), rubidium, and cesium atomic clocks. In the table describing the accuracy of the various types, notice that a typical crystal may have accuracy as poor as +/-100ppm. That sucks, right? But then again, that's an inaccuracy of only 8.6 seconds out of 24 hours! Do you think this temporal inaccuracy, which will lead to a very slight change in pitch, is going to be noticeable?!
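The arithmetic behind that claim is trivial to check, and while we're at it we can convert the same ppm numbers into pitch error (a quick sketch):

```python
import math

SECONDS_PER_DAY = 24 * 60 * 60  # 86,400 s

for ppm in (100, 200):
    drift_s = SECONDS_PER_DAY * ppm / 1e6
    cents = 1200 * math.log2(1 + ppm / 1e6)  # 1 semitone = 100 cents
    print(f"{ppm}ppm: {drift_s:.2f} s/day, {cents:.2f} cents pitch error")
# 100ppm: 8.64 s/day, 0.17 cents
# 200ppm: 17.28 s/day, 0.35 cents
```

So even a poor ±100ppm crystal amounts to less than 0.2 cents of pitch error - a tiny fraction of a semitone.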
We can actually measure the drift between two devices as hobbyists... Check out the new software DeltaWave Audio Null Comparator currently in beta (1.0.37b). Years ago, I had used Audio DiffMaker which worked well in certain situations but had its limitations. DeltaWave goes above and beyond DiffMaker's abilities by a big margin!
Remember earlier this year, I ran the "Do digital audio players sound different playing 16/44.1 music?" blind test? Well, we can load up the same track recorded from different DACs, and compare the temporal drift between the DACs (same highly accurate RME ADC used of course). Loading up the Stephen Layton "Chorus: For Unto Us A Child Is Born" track played back from the ASRock motherboard and comparing this to the Oppo UDP-205, we see this very linear graph of clock drift in DeltaWave:
Note: Y-axis is the # of samples of offset. Above 0, the "Compare" track is faster, showing the number of samples "ahead" of the "Reference". For this comparison, I had set DeltaWave to upsample to 192kHz.
If I just examined the drift of the Oppo UDP-205 itself compared to the "ideal" bit-perfect data, there's barely any drift as recorded off the RME ADC - <6ppm!
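By the way, for the DIY-minded, the essence of a drift measurement like this isn't hard to sketch (this is just the concept, not DeltaWave's actual algorithm): align the two captures near the beginning and near the end via cross-correlation, and see how much the lag has changed over the elapsed time.

```python
import numpy as np

def best_lag(ref, comp):
    """Lag (in samples) at which comp best aligns with ref."""
    xc = np.correlate(comp, ref, mode="full")
    return int(np.argmax(xc)) - (len(ref) - 1)

def drift_ppm(ref, comp, fs, win_s=2.0):
    """Relative clock drift between two recordings of the same material."""
    w = int(win_s * fs)
    lag_start = best_lag(ref[:w], comp[:w])   # alignment near the start
    lag_end = best_lag(ref[-w:], comp[-w:])   # alignment near the end
    elapsed = len(ref) - w                    # samples between the windows
    return (lag_end - lag_start) / elapsed * 1e6
```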
Are 30 and 6ppm variations like this audible? To help answer that question, let me create some demo samples for you to download and listen for yourself.
As audiophiles, we love Diana Krall :-). So to start, we have a 90 second clip from her song "Let's Fall In Love" off the album When I Look In Your Eyes. (As usual, samples used under "fair use" for the purpose of education... Please delete the files when done listening. If you like the music, please purchase it.)
Track 1 is the original 90 second segment of Ms. Krall with a 2-second fade out. Track 2 was manipulated with drift going back and forth between 100ppm and 200ppm (remember, typical low quality crystals may drift +/-100ppm).
If we compare the original with the "drifty" version, the Clock Drift graph looks like this in DeltaWave:
DeltaWave settings: upsampled to 192kHz.
0 - 10 seconds - no change
10 - 40 seconds - slowed clock by 100ppm (0.01%)
40 - 70 seconds - sped up the clock by 200ppm (0.02%)
70 - 90 seconds - slowed clock by 200ppm (0.02%)

This is significantly worse than the difference between a good DAC like the Oppo UDP-205 and the built-in audio from a computer motherboard!
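Conceptually, simulating a "drifty" playback clock is just re-reading the samples at a slightly wrong, piecewise-varying rate. Here's a rough Python sketch of the idea (my actual processing was done with other tools; this only shows how simple the concept is):

```python
import numpy as np

def apply_drift(x, fs, segments):
    """Re-read signal x as if the clock ran at slightly wrong speeds.
    segments = [(duration_s, ppm)], with positive ppm meaning a fast clock."""
    read_pos, t = [], 0.0
    for dur_s, ppm in segments:
        rate = 1.0 + ppm / 1e6
        n = int(dur_s * fs)
        read_pos.extend(t + np.arange(n) * rate)
        t = read_pos[-1] + rate
    read_pos = np.asarray(read_pos)
    read_pos = read_pos[read_pos < len(x) - 1]
    return np.interp(read_pos, np.arange(len(x)), x)

# The schedule for Track 2 per the list above (negative ppm = slowed clock):
schedule = [(10, 0), (30, -100), (30, +200), (20, -200)]
# drifty = apply_drift(original_samples, 44100, schedule)
```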
Focus particularly on the transition points at 40 seconds and 70 seconds. At 40 seconds, we're going from 100ppm slower to 200ppm faster - a change of 300ppm. At 70 seconds, the sound will go from 200ppm "fast" to 200ppm "slow", a sudden 400ppm change in speed/pitch! These kinds of "speed shifts" are beyond even the worst DACs out there.
Can you hear the timing difference? Can you hear the change during those transition points? How bad is it? If you're wondering what to listen for, drift will result in very minor pitch variation depending on amount of drift. A 200ppm error is something like 0.3% of a semitone shift in pitch. Would even the most "golden eared" audiophile detect this difference with the best gear and multi-thousand-dollar cables?
But wait! There's more :-).
Let's add some "nasty" jitter! Like I did with the simulated jitter samples linked in the previous section, let's use Yamamoto2002-san's WWAudioFilter (1.0.46.32) and throw in a few jitter sidebands - in total 5.5ns worth of cumulative jitter resulting in sideband distortion up to +/-5kHz from the primary signal. This is what the 16-bit J-Test would look like with this amount of sinusoidal jitter added:
You will not find this high amount of jitter in any half-decent DACs out there. Such a level of jitter should automatically disqualify this DAC from being called "high-fidelity" and objective reviewers would rightly criticize the performance (even if subjective-only writers don't notice!). Track 3 is the sound of Ms. Krall with this amount of jitter applied to the data.
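For those wondering what "adding jitter" means in practice, conceptually it's a tiny, periodic wobble of the sampling instants - phase modulation - which is what creates those symmetric sidebands around the test tone. A rough Python sketch (this is my own illustration, not Yamamoto-san's implementation):

```python
import numpy as np

def add_sinusoidal_jitter(x, fs, jitter_amp_s, jitter_hz):
    """Resample x as if the clock wobbled sinusoidally:
    effective sample time t_n = n/fs + jitter_amp_s * sin(2*pi*jitter_hz*n/fs)."""
    n = np.arange(len(x))
    t = n + jitter_amp_s * fs * np.sin(2 * np.pi * jitter_hz * n / fs)
    return np.interp(np.clip(t, 0, len(x) - 1), n, x)

fs = 44100
tone = np.sin(2 * np.pi * 11025 * np.arange(fs) / fs)  # J-Test-like fs/4 tone
jittered = add_sinusoidal_jitter(tone, fs, jitter_amp_s=5.5e-9, jitter_hz=3000)
# An FFT of `jittered` shows sidebands at 11025 +/- 3000 Hz (roughly -75dB
# for 5.5ns of jitter) that are completely absent from the clean tone.
```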
Finally, let's put the duo of fluctuating drift and jitter together. We have Track 4. Have a listen...
Not good enough, you say? Don't like Ms. Krall's music? Okay then, how about a hi-res big band style male vocal track instead?
Thanks to Dr. Mark Waldrep, I've also included similar manipulations with an excellent sounding sample from AIX Records; Steve March-Tormé's "On The Street Where You Live" from Tormé Sings Tormé. This is a true high-resolution 24/96 recording with excellent dynamic range, as usual with these AIX recordings. Have a listen to Tracks 5-8. Like with the Diana Krall sample, I've applied similar processing but with an even more intense +/-200ppm speed fluctuation every 10 seconds throughout the whole clip! Flipping back and forth between +200 and -200ppm timing variation results in the equivalent of a 400ppm sudden change each time. For Track 8, we have both the 400ppm fluctuation every 10 seconds along with 5.5ns jitter.
Here is the graph from DeltaWave showing the temporal drift pattern described above for "On The Street Where You Live" Tracks 6 and 8 (Track 7 is 5.5ns jitter added with no drift over the seconds):
DeltaWave settings: upsampled to 192kHz.
So, dear readers, based on what you hear with these temporal anomalies "baked in" compared to the original music, do you think you should be worried about ppm variation even worse than typical standard crystals plus jitter from maybe the worst of the worst commodity DACs?
These days we have companies selling products with very high quality clock stability featuring "femtosecond" accuracy for low jitter and "master clocks" with very low drift. Remember that clock drift can be an issue in production studios when multiple devices have to run in synchrony, especially with audio-visual projects. In that situation, having a master clock might make sense (here's a good article). However, when listening to music at home with a single DAC, would something like the dCS Vivaldi Master Clock (US$14-15k), which promises +/-1ppm accuracy, make sense? And obviously this raises the question of just how inaccurate the Vivaldi DAC's (~US$36k) internal clock must be to begin with if adding a Master Clock supposedly improves the sound!?
Furthermore, what do you make of the recent AudioStream interview with John Quick of dCS? Is there anything in the content of that interview that provides a cogent argument for owning a master clock (other than maybe bragging rights, or that the box looks cool)? Did the contents of that article clarify anything for you, or did it just make things more obscure?
[For the record, I have listened to dCS DACs before. The Ring DAC sounds good, and workmanship is top notch. But this doesn't mean that the claims around the Master Clock should not be questioned or one should not consider the cost-benefit.]
IV. Concluding thoughts...
If a non-audiophile who read the Upscale Audio "bits are not just bits" article were to come and ask you:

"Wow, audiophile friend! When did you realize that there were so many issues like 'noise' from digital cables, timing problems such as 'jitter' or 'drift'?"
What would you say?
Did you hear noise in the digital transmission between your CD transport and DAC over the years? Did you personally hear jitter? Have you noticed significant pitch differences between CD players and DACs, and thus become concerned about the accuracy of the crystal oscillators and drift?
Perhaps not surprisingly, when I have spoken to audiophiles over the years, they generally don't seem too concerned about these issues firsthand. Most couldn't really describe what jitter sounds like (again, have a listen here). I've never heard of an audiophile complaining of terrible drift. Rather, most would point at claims made by manufacturers or refer to online press and magazine articles as the source of their discontentment.
As you can see in the subheadings above, I used the word "exaggeration" to describe the noise and temporal anomalies being suggested by the Upscale Audio article. These are not the only exaggerations - the article also makes claims about SMPS noise which we have already addressed recently, and also points at digital filters, which of course can differ depending on how they were programmed and have nothing to do with digital data or transmission (a topic well discussed over the years). Like with other claims, the article just "hits and runs", mentioning these issues quickly with no actual examples or depth to the discussion. I find this style of article-writing rather unfortunate, irresponsible even, and endemic in much of the Industry-sponsored audiophile literature (here's another example from iFi/Abbingdon).
Overall, remember that digital is not like analogue audio, where noise is commonly a factor in the storage medium (eg. dirty, poorly pressed, damaged or warped vinyl LPs), or where the playback system has significant timing abnormalities (eg. wow and flutter of turntables are orders of magnitude worse than digital jitter or drift!).
When it comes to digital audio, if you hear a big difference between different devices, IMO, don't freak out about inherent "noise" in digital technology (any poorly designed equipment can have noise) or temporal error (jitter or drift) as the culprits; as discussed above, if there's an audible problem, it's much more likely to be with the design of the device itself.
As audiophiles, I agree that we should aspire to achieving the best fidelity we can, including to the point of ensuring that the resolution is beyond human perception. As such, I'm certainly not suggesting that we should be satisfied with 1ns cumulative jitter, or be happy with low-accuracy 50ppm clock oscillators (much less >5ns jitter and 100-400ppm/0.01-0.04% variance as in the demo tracks)! As I said in my SUMMER MUSINGS a month back, it's good not to become neurotic slaves of a hobby either, as extreme "subjectivists" blown away by all manner of testimony or extreme "objectivists" chasing after femtoseconds and down to the smallest of decibels. For me, it's about that balance of achieving a level of science and engineering for the quality of reproduction we desire that can be verified objectively, validated for ourselves with enjoyable subjective listening, and getting it done with reasonable value for the money. The wise audiophile also recognizes that there are commercial influences out there desiring to distort and misinform, most likely for financial reasons.
I therefore must agree with the aspirational "Most Interesting Man in the World" pictured at the start of this post. Based on what I have heard and found with today's gear, subjectively and objectively, indeed "Bits Are Bits" as far as the consumer/audiophile is concerned. If you don't like the sound, the issue has to do with the device itself, not some esoteric "bits are not bits" rationale. Only poor quality equipment would need any special attention to "noise" filters for digital data communication, exotic clocks for accuracy, or require anything more than normal digital cables.
"Stay thirsty my friends" for good music, and justifiably skeptical of vague audiophile articles.
Hope you're enjoying the music as we head into the latter half of August...
Addendum: August 18, 2019
With all this talk in the comments of HDMI and "snake oil", readers might be wondering what a "non-snake-oil" company and the types of technical information they provide can look like. Here's Exhibit A:
While it is unfortunate that Oppo no longer makes audio gear, that post is a beautiful example from 2017 of what honesty and technical competence look like when a company wants to engage and educate the customer.
Notice the technical background information and rationale provided. Notice the fact that they demonstrated with objective results how the goal was achieved (eye diagram, J-Test). While they understandably did not disclose the comparison Blu-Ray player they measured from another company, the fact that they compared the UDP-205 to another device is a good sign that they examined the competitors' products.
Notice also that there's a humility to the tone of the presentation. The HDMI jitter was reduced from 53.82 to only 50.67ps (a 6% reduction). No dramatic claims about improved "air", veils lifted, soundstage "opened up" or any such flowery talk. As Oppo said, jitter performance was already excellent, and while they were able to improve it even further, I highly doubt the engineers would have proclaimed these improvements to be massively audible! Such is the advanced level of digital performance these days. (Objectively, I was able to verify that the HDMI "Audio" output indeed performed with lower jitter when audio was sent to my Yamaha receiver.)
Would we ever see forthright discussion articles like this from John Swenson/UpTone, Mapleshade, AudioQuest, Nordost, Synergistic, Shunyata and countless others for their "digital" gear and cables? Would we ever get articles of this technical magnitude from the likes of specialist stores like Upscale Audio? I certainly would not hold my breath...
Remember, the Oppo UDP-205, when it was available up to early 2018, had an MSRP of only US$1299. How many pieces of cable from some of the companies listed above cost significantly more than that!? And with what justification?
Addendum: July 2021
For further reading on a specific device that makes certain claims, consider the AudioQuest Jitterbug FMJ and how useless it is.
Digital media isn't bit-perfect because of jitter, and jitter of modern devices with standard cables is still audible (and visible, although much less noticeable). I didn't believe that until testing the Mapleshade Vivilink 3 HDMI cable and doing listening tests. The Mapleshade was more transparent and a blind test was easy to pass with 100% accuracy for several people I tested, including non-audiophiles (they had a hard time putting the difference they heard into words, although the audiophiles did as well).
There isn't much point in arguing about this too much, as people will ultimately believe what they want to believe. I recommend testing the cable I mentioned and reporting back if you're curious enough.
Hi 739,
Hmmm, can't seem to find the "Mapleshade Vivilink 3" HDMI cable. In fact, has Mapleshade discontinued all HDMI cables? Wonder why?
True, there isn't much point spending lots of time arguing over what someone subjectively experiences. Whether it's hearing the music or even seeing the image (some people claim expensive HDMI "looks better"...) being different / better / worse, so be it.
Interestingly it looks like Mapleshade believed the Vivilink HDMI made images better back in the day - see quote here about "viewing tests":
http://www.jeremykipnis.com/Cables_%26_Wiring.html
I'll leave it to readers how they view Mapleshade and the advertising claims.
Since it looks like Mapleshade doesn't sell any HDMI cables any more, there's really no point. What I would suggest the next time you grab some friends over for a listen is to document the encounter, let us know what music was used, which HDMI devices, what cables, what the sound system consists of, take some photos, etc... Maybe post the event description with results somewhere. Obviously the performance of the whole system and the room makes a huge difference way beyond any single cable!
The Vivilink 3 is their latest revision of HDMI cable that isn't listed on their site yet; I ordered it via phone. They just launched a new website so some of their stock isn't listed yet.
Most cases of expensive HDMI looking better are placebo and that's why I had this cable blind tested hoping to rule that out. There's also possible bias in that people want their expensive purchase to be an improvement. I try to avoid that bias by only buying products that have a hassle free return policy, so I can audition products hoping to hear no difference so I can save my money. I don't get to keep my money as often as I'd like.
A better review of an older Vivilink HDMI is here: https://www.audaud.com/mapleshade-vivilink-hdmi-cable-with-plus-upgrade/
Unfortunately this is the only professional review I've seen of the cable, so if you're skeptical you could assume this is just a paid review.
The problem with leaving it to the readers is that most readers are too busy laughing at Mapleshade's multitude of seemingly ridiculous claims on their website that they would never stop to consider that any of those claims could possibly be true.
I may post my setup on AVSForum or somewhere with reviews and impressions eventually, but 1) I'm too busy enjoying music on my system and 2) I have several more upgrades planned in the near future, so I keep putting it off.
I'll post a very brief description of the results of the last set of listening tests my friends and I did with the Mapleshade HDMI in the next comment.
My system is a 5.1 setup that I use for both home theater and stereo music. The system consists of:
Ascend Sierra RAAL Towers (60hz crossover)
Rythmik FV18 aluminum cone driver subwoofers (x2)
Anthem Statement P5 Amplifier
Anthem MRX 720 AV Receiver
miniDSP DDRC-88A with Dirac Live room correction
HTPC connected to AV receiver over HDMI
We tested a standard HDMI and the Mapleshade Vivilink 3 from the HTPC to the AV receiver. We played FLAC CD rips from the HTPC over WASAPI, bypassing the Windows mixer.
For general impressions, the consensus was that the bass had more impact and was more articulate, and highs were cleaner and less fatiguing on the Mapleshade. Of the test tracks we tried, we found the most easy to hear difference in the first few seconds of Bikes by Lucy Rose. I have a hard time describing the difference myself but the way the notes of the acoustic guitar harmonize is different, and the difference, while subtle, is easy to hear.
Looks good 739,
Nice system and great that you're enjoying the sound!
How much are the Vivilink 3 going for?
I'll give "Bikes" a try (I see this is off the Like I Used To album). Indeed would be great if you get a chance to post the experience up and of course will be good if Mapleshade could discuss the design and provide some science-based rationale for the benefits of the cable - only then IMO can companies selling exotic/expensive cables get past their reputation of engaging in "snake oil" sales.
I paid around $200 for the Vivilink 3 (with "plus" upgrade). For my own personal budget this hits a sweet spot of something that I am willing to pay to audition something and return if it doesn't meet expectations (I'd never pay for a $7000 HDMI cable or even a $1500 one, it doesn't make sense to pay for a cable more expensive than most of my speakers combined, even if I fully intend to return the cable).
Mapleshade does provide some brief discussion of the science behind their design. Again, I don't believe this provides anything that will sway anyone who has already made up their mind on the subject (auditioning and doing listening tests would be the best way to sway someone in my opinion), but since you mentioned it, according to Mapleshade:
Conventional high-end HDMI cables have thick, energy-absorbing plastic insulation and excessively heavy-gauge, slow responding conductors. This adds serious jitter to digital video data streams, just like it distorts audio data and signals. Our Vivilink is dramatically different, using the same design ideas that make our audio cables so astonishingly clean and detailed: thinner all-copper conductors, minimalist insulation with better dielectric materials than any famous high-end video cable. Then we add in the same chemical and thermal treatments used for audio wires; they take image clarity and sound quality up another sizable notch.
The Oppo article is a good read and I agree that it would be nice if more companies would offer a similar level of honesty and transparency, but I'm not sure if it's fair to expect that standard from every company. My understanding is that Oppo is a larger company, while Mapleshade, for example, is basically one man who designs all of the products, and a handful of others that work in sales, etc. And even if Mapleshade provided some data to demonstrate that their Vivilink has less jitter than competitors, skeptics would assume that Mapleshade stretched the truth in their favor somehow (which is not an unreasonable thing to do at all; snake oil is a real problem in the audio industry so some level of skepticism is healthy). The ideal scenario is that a reputable third party (or several, to reduce the possibility of paid reviews) tests their cable and publishes the results of objective measurements as well as blind subjective listening tests.
I'm not a reputable third party, nor have I provided any objective measurements (if you have any ideas on how I can do so without spending too much money on specialized tools, feel free to suggest any), but I have provided subjective observations of multiple users tested with blind listening tests, and I think this is the best I can do for now.
I have thought about ways I could best capture the differences of HDMI video, such as using an HDMI capture card, but the one I have doesn't do lossless video, it automatically encodes to compressed H264 so it wouldn't be the best for this test. I have taken some photos on my phone which I believe do show the subtle differences in picture quality (mainly in vividness and accuracy of colors), but photos are certainly unreliable for a proper test, and I could be accused of just boosting the colors on one of the photos (or the difference in images could be due to something completely unrelated to the swapped HDMI cable, such as slight differences in lighting). If you do end up auditioning the cable, I would be very interested if you could come up with a good way to objectively demonstrate the difference (or lack thereof, if that's what you find) in picture.
Hi Archimago,
Mapleshade make some, err, unusual looking cables. They offer a 30 day return policy so if you've got some time to waste you could test some and then return them if they don't perform. They do make some very strong claims, implying an obvious audible improvement. I wonder if the ribbon construction makes them susceptible to interference.
https://shop.mapleshadestore.com/Digital-Interconnects_c_63.html
Not going to bother to debate anything here BUT, I intrinsically distrust a business that sells cork & rubber Isoblocks, like I have under my furnace, for $36-$52.
I see the exact same thing at Home Depot for $6-7.
As I think most will concur (there are always a few holdouts), bit transmission is (indeed) error-free - I've seen tests proving several DAYS worth of completely error-free transmission over a standard Belkin USB cable. That kind of accuracy or lack thereof simply cannot affect audio playback.
HOWEVER - again, as you note in your post - there are OTHER factors that can impact playback, including the two you mention here; i.e., jitter and clock drift. I listened to your test files (sighted) and noted differences that are audible - in particular, the tracks that have combined jitter and drift sound decidedly "unsettled" / not quite right. The issues are subtle (to my ear, at least), but they ARE there.
The last track (the tone with timing variation every few seconds) makes drift quite audible - as a musician, hearing the tone go sharp every few seconds is painful ;)
Thanks for providing the files!
That said, these differences are not "night and day" (to me, at least), as some would claim. More along the lines of, "Something's not QUITE right - wonder what it is?".
Thanks for the response, man!
Remember, I purposely created the amount of timing error to be many times worse than today's digital gear (even the really cheap stuff!) so that those with acute hearing (especially those with musical training and pitch acuity) can potentially experience the difference.
From there, we can then perhaps put into perspective the magnitude of the effect without need for dramatics. Indeed, not "quite" right subjectively and if you listen in a very quiet room with good headphones, you might be able to hear the sine wave imperfections every 5 seconds with Track 9.
With my ears, the differences are subtle at best, but objectively when measured clearly abnormal :-).
BTW, anyone want to try an ABX comparing Tracks 1 vs. 4, or Tracks 5 vs. 8?
Speaking of painful pitch shifting, things were much worse in the 70’s when pianist Glenn Gould started using a lot of analog tape splicings to construct his « ideal » performances from different takes. Tape speed precision was not so great…Here is an example where pitch goes up suddenly at around 1:04 (and at other places too, especially at the very end).
https://youtu.be/TU6v_ujRink
I love this pianist and he was of course capable of playing without a hitch, but he was too early for this type of studio work that is so easy today digitally.
I had to rush through it a little. Once I started keeping beats, it was easy to detect the change in tempo. Jitter? Not so much.
File A: 05 - Steve March Torme - On The Street Where You Live (Original).flac
SHA1: 12774242e4099690d138a7ceda00899bd5ce24ff
File B: 08 - Steve March Torme - On The Street Where You Live (Drift + 5.5ns Jitter).flac
SHA1: 4def2204c53ded1079ffd8c4dc6a59e5dc69e7cf
Output:
DS : Speakers (USB Audio DAC )
Crossfading: NO
15:04:08 : Test started.
15:05:09 : 01/01
15:05:56 : 01/02
15:06:30 : 01/03
15:07:16 : 02/04
15:10:04 : 03/05
15:11:17 : 04/06
15:11:58 : 05/07
15:12:32 : 06/08
15:12:57 : 07/09
15:13:43 : 08/10
15:13:43 : Test finished.
----------
Total: 8/10
p-value: 0.0547 (5.47%)
Listening setup: Foobar2000 - FiiO E7 - NAD HP50 in office environment (background chatter, etc)
In digital TV, when the receiver has a sufficient signal level to adequately decode the data, the image is perfect and having a stronger signal is useless; no matter the power supply of the TV set, no matter the price of the antenna and its cable, when "bit perfect" is achieved, the displayed image is exactly as transmitted. If the signal goes under that "bit perfect" threshold level, the image then gets badly "pixelized" (or totally lost) and you don't need "golden eyes" to see that it's obvious something got wrong with the image. AND... you do not lose subtle image resolution details, nor do you see "snow" in the background that hides something like we used to see on analog receivers affected by interference. So, the analog content being re-built at the receiver, why would a non-perfect digital audio transmission create analog-like problems (losing subtle details) while a non-perfect video data transmission creates only digital-like problems (image, no image)? Isn't digital transmission, digital transmission?
Yes. :-)
Can't disagree.
Incorrect comparison, taking into account the heavily compressed TV signal. A standard 8-bit HD picture requires a 2.94 gigabit/s signal rate. In real TV broadcasts it is about a few tens of Mbit/s or even less.
It seems that the ghost of Harry Pearson continues to haunt the "audiophile" world.
Upscale Audio has been around for years, and are dependable woo merchants. Overpriced tubes and boutique audio jewelry still have their followers, and Upscale has labored mightily to keep the fever high.
Some people will always succumb to audio insecurity. Upscale and others of their ilk will always be there to feed that insecurity, and, not coincidentally, make a good buck off it.
Hi jsrtheta,
Indeed the ghost, if not legendary status, of HP lives on in the audiophile world and the media that feeds it.
I personally have no problem with tubes and certainly tube amps can sound great. Just not "sold" that tubes are as great as some idealize them to be of course given their limitations of relative inefficiency, noise, performance shift over time, and life expectancy.
I guess it's all within the capitalist spirit that they can charge whatever the market will bear for old, rare tubes...
I just started following your blog as I am just recently getting back into 2 channel sound after a many year foray into surrender sound. I’m wondering if the “Bits Are Bits” would apply to CD transports. I assume the laser mechanism for reading the bits off a CD has to be quite the delicate piece of engineering, but once the bits are read properly, what else is involved that could possibly account for the ridiculously high prices I see on some transports? Of course perusing the professional on line publications and on line forums you can find all kinds of differing opinions with unsubstantiated claims. I’m an electrical engineer and we have a favorite saying when somebody makes unsubstantiated claims, “show me the data”. Sadly there is very little data backing up claims in the audiophile world. Why is that?
Hey there Unknown,
Indeed, I would consider "bits are bits" applicable to CD transports as well. While we could see bit errors with CD reading, in which case devices will tend to interpolate, the temporal performance of the audio playback is fine. In fact, jitter is minimal with most CD players I've checked out (and I'm not even talking very expensive "hi-end" stuff either!). This is to be expected since CD players have data buffers which are especially important for "shock protection" like the portable Discman models.
Check out the jitter performance of a few CD players over the years; nothing to worry about:
Sony Laserdisc: http://archimago.blogspot.com/2016/04/retro-measure-1994-sony-mdp-750.html
Sony SCD-CE775: http://archimago.blogspot.com/2018/04/retro-measure-2001-sony-scd-ce775-5.html
Oppo UDP-205 looking at jitter as DAC and CD playback: http://archimago.blogspot.com/2018/06/measurements-oppo-udp-205-part-3-jitter.html
Why is there little data to back up audiophile Industry claims? I suspect they figured out long ago that it's easier to sell to consumers' imagination and emotions (all subjective of course) than doing the harder work of providing reality-based evidence/data. Doing this might work for a little while but inevitably leads to an unhealthy consumer base built on weak foundations...
Thanks for the reply and links.
Wish I could edit that. Surround sound, not surrender.
I thoroughly enjoy reading your articles, as you are able to nicely articulate and provide empirical data that supports views that I've long held (far better than I ever could).
I've always been amused by the talk of audiophile ethernet cables. The notion that data being transmitted about a home (using the TCP/IP protocol with error checking) will somehow 'degrade' sound quality and introduce 'noise' unless you have a $200 ethernet cable baffles me. If this logic were true then any data downloaded from the internet to a local hard drive would also be riddled with these errors and they would forever be baked into the file. It simply doesn't work like that! An article explaining in more detail how network error checking works would be excellent.
Perhaps in extreme circumstances where network signal loss was so bad (I'm thinking audio playback over poor WiFi) and error correction did not have sufficient time to re-transmit a corrupted packet before it was sent to the DAC for playback this could occur. Or perhaps if for some reason a network protocol without error checking was being used. However in any wired network without faulty components this should not be an issue and data should be transmitted and arrive error free time after time.
For a time I had a Roon endpoint, an older Intel NUC, connected over Wireless N. The system had a custom install of Debian minimal - as an experiment and to stress the network I upsampled all audio to DSD128 (the highest rate my DAC would support). I monitored the network interface from the command line and after hours of playback there was zero packet loss and zero re-transmits required.
Good job with the DSD128 Roon streaming test Bikkitz,
Looks like you had a good wireless connection!
The way I see it, so much of the audiophile world regurgitates the same insecurities about how bad things could sound if it weren't for tweaks and esoteric (expensive) things like their ethernet cables, special USB cables, various "filters", or even "audiophile ethernet switches".
The moment one asks for evidence why they hold such beliefs, suddenly the silence becomes deafening.
IMO, the audiophile hobby must go beyond such nonsense. It's only right that individual consumers and indeed the body of audiophile consumers be encouraged to ask such questions and companies unable to explain even the basics of what they claim be noted as suspicious (and IMO best avoided).
Yes - especially considering the NUC only had wireless N capability and it was still fine. I was trying to establish from my comment that even DSD128 (which required about 7MB/sec in bandwidth from memory) could be transferred reliably on a wireless N adapter without dropped packets. Any good wireless AC and definitely any wired connection will be more than adequate - especially for 16/44 PCM which requires far less bandwidth.
Quick correction - DSD128 consumes just less than 1.5MB/sec - not 7 as I stated.
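For anyone double-checking the math on that:

```python
dsd128_rate = 128 * 44100            # 5,644,800 one-bit samples/sec per channel
bytes_per_sec = dsd128_rate * 2 / 8  # 2 channels, 8 bits per byte
print(bytes_per_sec / 1e6)           # ~1.41 MB/sec
```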
I remember a guy in a UK hifi magazine who, out of curiosity, decided to compare different audiophile USB cables with printers. Seriously, he made a number of prints of photographs and sought to examine them for different types of printing artefacts. This was like something out of Monty Python, and even writing about it just now feels that way, but it was a serious article (I think it was HFNRR). If I remember, the conclusion was that he thought he could detect differences but wasn't sure, and anyway, music was much more complex than the mere printing of pictures.
ReplyDelete"Oh darn, look at the jitter in that photo"!!!
LOL,
If anyone has a copy of this article, would love to have a peek :-).
Hilarious stuff and indeed it would make a wonderful Monty Python skit. Obviously it's fine if a writer did this as an experiment to "double check", but to think that a presumably serious magazine writer would even think this way and expect differences to be seen appears to be a gross failure in understanding how these things work!
I hope the audiophile USB cables at least worked to specifications for his computer/printer... I've heard that some audiophile cables are even more error-prone and outside of specifications.
Archimago, compliments on another great article!
I have performed some experiments to assess the audible consequences of changing individual bits in a piece of music. I am a statistician and work with software (called R) that allows me to manipulate large amounts of data so I was tempted to play around with the data from a piece of music.
I took a ~2 second mono wav file (16-bit 44.1kHz), converted it to a file containing only 0s and 1s, and then started introducing random 0s and 1s. Then I reconverted this file to a wav file again and listened to the result.
One of the original wav files contains 78798 datapoints (1.78 sec x 44100 Hz) in digital format this represents 1260768 individual 0s and 1s (16 x 78798).
I randomly overwrote ONE single bit of these 1260768 bits with either a 1 or a 0. In order to avoid one-off results in either direction I generated multiple copies. Some of the wav files I generated had clear audible artefacts like a pop or a click. Others had no audible differences. In the latter cases a 1 might have been replaced with a 1, or a change happened in or close to the Least Significant Bit (LSB) and this would only have resulted in subtle differences.
To assess whether low or high frequencies were affected differently I used one audio fragment with a drumbeat and a second with a female voice. Both files were affected exactly the same by changing single or multiple bits.
These experiments show that random errors, like a change in only 1 out of a million bits, can result in an audible difference. This means IMO that if a digital cable is not good and introduces random errors the listener will hear it immediately. The experiments also show that minute random changes (errors) in the digital domain result in very audible artefacts like plops and clicks and not in subtle changes like hearing more “air” around an instrument.
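For anyone who wants to replicate this without R, the procedure is easy to sketch in Python as well (the file name here is just illustrative):

```python
import random
import wave

import numpy as np

# Flip one randomly chosen bit in a 16-bit mono WAV file.
with wave.open("clip.wav", "rb") as w:
    params = w.getparams()
    pcm = np.frombuffer(w.readframes(w.getnframes()), dtype=np.uint16).copy()

i = random.randrange(pcm.size)   # random sample...
bit = random.randrange(16)       # ...and a random bit: 0 = LSB, 15 = MSB
pcm[i] ^= np.uint16(1 << bit)

with wave.open("clip_flipped.wav", "wb") as w:
    w.setparams(params)
    w.writeframes(pcm.tobytes())
# Flips near the MSB give an audible click/pop; flips at or near the LSB
# are inaudible - the same pattern as described above.
```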
Archimago, I am willing to share the wav files if you want to upload them. I used excerpts from commercial recordings; perhaps I should repeat the experiment with public domain recordings.
Hi AudioDutch,
Wow, nice experiments. Not surprised that even single bit errors can introduce audible clicks and pops when introduced into the wrong places! :-) Even more disturbing if the stream is compressed and the error affects more than a single bit.
If you have the files available, I'd certainly like to have a look and listen as well and perhaps put a link for those interested. Just E-mail me at the address above!
This is really interesting! Would be nice if you sent it to Archimago so he perhaps could publish them here. And also the unaffected source files as well for some null testing :)
Great experiment! Makes total sense, and to me it's the best evidence I have seen that bits are bits...
I'm so sad that it took me until now to actually find this blog. I wish I'd been reading it all along; now I just have a lot of reading to do. ;-)
Keep fighting the good fight of rationality and common sense against the audiofoolery, and I'll be here to cheer you on.
Incidentally, while I haven't seen it mentioned in the above discussion, your comment "While we could see bit errors with CD reading, in which case devices will tend to interpolate" needs a response. The way all CD transports I'm aware of work is that they use the CIRC error correction inherent in CD encoding to restore any incorrectly-read bits to the stream. If a bit error is noted, it is corrected (not interpolated) before passing the data along. If there are too many bit errors to correct, the Red Book standard indicates that the player should mute. Interpolation actually only applies in that gray area of bit error count between correction and muting, which is pretty small. From test results I've seen going back to 1984-85, not many players interpolate more than a few samples before muting, so it wouldn't be a continuous process. Just FYI.
Also,
Delete"If a non-audiophile reads the Upscale Audio "bits are not just bits" article were to come and ask you:
"Wow, audiophile friend! When did you realize that there were so many issues like 'noise' from digital cables, timing problems such as 'jitter' or 'drift'?"
What would you say?"
I would say "Your question presupposes facts not in evidence.: ;-)
Archimago, love the blog and sink a bunch of time into the comparison test articles. Love to see where the point of diminishing returns hits its limit.
Though I have found somewhere that clock drift/difference. When streaming audio via multicast to multiple playback devices. Such as a whole house au
(Huh, no way to edit the above post?)
....whole house audio via multicast or synched start. The clock differences between devices mean that by ~3min in, the sound is noticeably out of synch between rooms. By 5-7min, it will drive you insane.
The solution I chose was to use an open source program that buffers the audio for a bit, and uses tcp packets to keep time to a server. It does skew the audio here and there to keep it in synch. But I have never noticed that...
Of course the other option is a distribution amp... but what fun is that?