For the 2016 edition, go here.
For years we've been worried about the "dreaded jitter". However, we know that these days, with asynchronous interfaces like USB and ethernet, there's nothing to be concerned about. Sure, we can see jitter anomalies with old S/PDIF, but I doubt anyone needs to avoid that interface for fear of audible issues, assuming otherwise decent gear. For years, and still to this day, various "practitioners" of audiophilia hang on to beliefs around cables of all sorts; yet assuming normal hook-ups with decent quality cabling (and even with poor quality cables), we are typically hard pressed to find evidence of audible differences.
Then we have beliefs that bit-perfect streamers sound different, really? We also hear of esoterica like folks who think lossless file formats sound different, seriously? How about the folks who think that computer OS's make a difference, or that software players sound markedly different (assuming it's all bit-perfect of course and sent to the same DAC)...
Feel free to browse this blog for discussions and thoughts around countless other audiophile items of faith. Today, let's address an audiophile "myth" that has gained prominence over the last few years among those trying to sell things and those who advertise said "things".
As per the title, today, let's explore this "myth" of the detested ringing with digital filtering and audio playback since I've been posting a bit of a series on this topic over the last little while.
I. A Little History... How in the world did we get here?

Anti-imaging filters in digital-to-analogue conversion have of course been with us since the start of digital audio. They're needed to reconstruct the continuous nature of the audio information encoded in our digital files and to prevent unintended frequencies from seeping through. Let's stroll through the archives of Stereophile over the years to see how a focus on filtering has crept into the audiophile mindset as a topic of discussion and debate. As I've said in the past, despite my criticisms, at least Stereophile is the most technical of the mainstream audio magazines easily found in North America, and good stuff can be gleaned especially in the measurements section of their reviews.
Audiophiles had jumped on the idea that CD's specs were not up to perceived expectations shortly after the introduction of the format (the ideas were reviewed in this 1986 article). These days, all kinds of people claim they "knew from the start" that CD fared poorly against vinyl LPs in terms of sound quality. Sure... We're in an era where there's a retro-coolness to owning turntables and LPs so unless it's documented somewhere years ago, it doesn't count :-). All I remember about the early 80's digital audio was marveling at the beauty of the discs themselves thinking how "cool" lasers are. A few years later, reading the excellent review of the first Sony CDP-101 in an old copy of Stereophile I found, I remember how I "needed" one of these by the time I reached my teens!
As I re-read the articles now and the follow-ups, including J. Gordon Holt's later comments (like this one), it's hard not to have respect for a man who stood his ground when he saw real progress and could clearly describe how and why the changes represented a step forward in fidelity. Clearly the CD format was a step forward compared to the obvious deficits of vinyl. From the start, questions about dynamic range, the importance of dithering, and uncertainties about antialiasing/anti-imaging filtering had already been posed. Remember though that those were the very early days of digital, when most consumer CD players and later DACs were not truly capable of 16-bit resolution, before digital upsampling and oversampling, and before the use of delta-sigma modulation or the complex hybrids of multibit and bitstream we see these days. Even then, CDs sounded good!
Some of the earliest devices that focused our thoughts on digital filtering were devices from Theta and Wadia. As far as I can tell, Theta devices performed standard linear phase digital filtering with DSP chips of the time (starting with the DS Pre in 1988, with measurements of the DS Pro Prime by 1991). Wadia went a different direction with their filtering algorithm - like this review from 1990 of the Wadia Digimaster X-32 "digital processor". Check out the Measurements section and marvel at the use of atypical (at that time) slow roll-off filtering with clearly measurable roll-off before 10kHz and strong imaging artifacts above Nyquist. Interestingly, the square wave result looks like they used an intermediate phase filter. All this being done by 36MIPS of processing power (not bad back in 1990)! (For comparison, remember that the humble Raspberry Pi 3 today is capable of about 2400 Dhrystone MIPS, and 190 Linpack double precision MFLOPS - all for <$40US - isn't Moore's Law great?)
Over the years, Wadia continued to release new versions of the Digimaster - such as this 4th version of the Digimaster 2000 Mk.2 by 1996 and the 27ix by 1999. As you can see, they continued with the tell-tale slow roll-off filter but the square wave in the latter review suggests that they went back to a linear phase symmetrical filter. By 1999, technological advancements allowed the Wadia device to be capable of 96kHz input and true >18-bit resolution.
One does wonder whether the filter designs back then were truly what the engineers intended or simply the result of compromises made due to computational limitations. Just the same, even if known to be suboptimal, there's a market to be made by being "different" even if these filters were actually not any better in terms of fidelity.
Another interesting device worth mentioning was the Meitner IDAT (Intelligent Digital Audio Translator) DAC from 1993 (square waves also shown here). This "intelligent" device used four Motorola DSP56001 processors (90MIPS, baby!) and would flip between FIR and IIR filtering. Hmmm, I don't see a point to doing this these days, but I wonder how the detector would select which filter and whether distortions would be introduced by the switching; presumably it could switch mid-playback and perhaps multiple times in a song...
The Japanese "major" consumer electronics companies got into marketing of filter tweaking only to a small extent from what I could tell. For example Pioneer had their "Legato Linear" / "Legato Link Conversion" spline-based interpolation in the early 1990's (eg. Pioneer Elite PD-65 in 1992, mentioned as a feature with Pioneer PD-S 507 in 2000). I have not come across measurements of this filter.
In the late 90's and early 2000's, there were claims put out by companies like Audio Note and Sakura Systems that digital reconstruction filters were unnecessary. "1x oversampling" was the rallying cry for Audio Note, and Sakura hailed the low complexity of Wadia's 13-tap filter and Luxman's DA-07 "Fluency" filter (released in 1988; go here to download a paper on the interpolation method, measurements for the 2011 Luxman DU-50 here), claiming that Non-OverSampling (NOS) was the way to go. Yes, NOS DACs do sound different, but if aliased, squared-off "stepped" digital sound with some high-frequency roll-off is what you want in 44.1kHz playback, then those devices are for you. These days, we still see NOS devices like the MHDT DACs, Computer Audio Design devices, and the Pro-Ject DAC Box, typically still using the old Philips TDA154x chips hailing from the early 1990's (you can of course get cheap versions on eBay like this one I measured awhile back).
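For a sense of what skipping the reconstruction filter means objectively, here's a minimal numerical sketch (my own illustration, not a measurement of any of the devices above). A zero-order hold models the NOS "stairstep" output: for a 10kHz tone sampled at 44.1kHz, the unfiltered output leaves a strong ultrasonic image at 34.1kHz, along with the mild treble droop mentioned above:

```python
import numpy as np

fs, L, n = 44100, 8, 4410      # 44.1kHz source, analyzed on an 8x finer time grid
f0 = 10000                     # 10kHz test tone (exactly 1000 cycles in n samples)
x = np.sin(2 * np.pi * f0 * np.arange(n) / fs)

zoh = np.repeat(x, L)          # NOS-style zero-order hold: no anti-imaging filter
spec = np.abs(np.fft.rfft(zoh)) / (n * L / 2)   # normalized amplitude spectrum

fund  = spec[f0 * n // fs]             # bin at the 10kHz fundamental
image = spec[(fs - f0) * n // fs]      # first image at 44.1 - 10 = 34.1kHz

print(f"10kHz fundamental: {20*np.log10(fund):5.1f} dBFS (note the sinc droop)")
print(f"34.1kHz image:     {20*np.log10(image):5.1f} dBFS")
```

The image sits only roughly 10dB below the fundamental: exactly the kind of ultrasonic garbage a proper anti-imaging filter exists to remove.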
By 2005, we have Ayre and their C-5xe Universal Disc Player providing intriguing selectable digital filter settings labelled "Measure" and "Listen". Later they released their white paper with the introduction of the MP upgrade of devices by 2009 (also the same year as the QB-9 USB DAC). The "Listen" filter is the familiar slow roll-off, anti-imaging reconstruction filter used in the PonoPlayer among other Ayre-designed gear.
By 2006, I remember reading this Stereophile article and becoming curious about this whole digital filter business. But although Ayre had done something similar earlier as noted above, it wasn't until 2009, with the Meridian 808.2 and their intriguing use of the term "apodizing" (which for that filter really just meant "minimum phase"), that I sense the whole idea of using digital filtering as a differentiating factor marketed to consumers at large started to take hold of the audiophile psyche. I remember asking about this by 2012 in the Slim Devices forum as I read through the material and started to wonder just what the big deal was about! For a while I played with custom upsampling using Viruskiller's settings on my Squeezebox Transporter as documented on that Slim Devices thread.
Fast forward to this decade, and we see an evolutionary progression where impulse response measurements are typically included in the suite of objective assessments (including here on this blog). Many DACs will allow the user to switch settings (including high end consumer players like the excellent Oppo UDP-205 UHD Blu-ray player). There's also a move to standardize the filter characteristic (à la MQA for compatible devices).
And that, friends, brings us to today... In 2018. After all these years, do we have a clear concept of what we want in a digital filter? Do we understand whether there's actually anything to worry about? Specifically, are there "ringing" or time-domain issues we need to worry about with an orthodox linear phase sinc filter - supposedly the "ideal" time-domain filter for low-pass, antialiasing/anti-imaging duty?
I think if you've read through my last 2 posts (here, here, plus the one in 2016, plus the blind test in 2015), you will have seen that on the whole, I believe having digital filtering is important (no NOS for me), but let's not "make a mountain out of a molehill"! In fact, as you saw last week, my preference is clearly for high quality linear phase filters, which are more frequency and phase/time coherent than the minimum phase filters employed by Meridian, Apple, MQA, and Ayre.
II. Demonstrations - with actual, real, music! :-)

Today, let's get into the real world! I'll provide impulse response graphs for correlation only, no synthetic square waves like last week... Let's look at actual music and the magnitude of supposed issues like the much detested "ringing".
If we are to pixel-peep, let's zoom into the leading edge of perhaps the most dynamic sound recorded on digital - a cannon explosion on the CD layer of Telarc's 2001 1812 Overture SACD. Let's look at the waveforms after running through SoX upsampling using different anti-imaging filter settings:
Notice the impulse response graphs to the right. To the left, you see the original 16/44 data up top - I've zoomed in enough to reveal the actual sample points. And then below are the 3 upsampled/filtered/antialiased waveforms with a basic cubic interpolation underlaid to provide a time-domain comparison after upsampling to 352.8kHz. What do we see? The filter with the least temporal change is the linear phase filter. Even with the impulsive leading edge of a cannon blast!
Using the same filter steepness, the minimum phase variant with its non-linear frequency-phase/time relationship creates a slightly different upsampled waveform. And you can see at the bottom that I've included my intermediate phase suggestion from last week which in this situation with an excellent recording, only shows a very slight temporal change from the "ideal".
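As an aside, the temporal difference between these filter families is easy to sketch with SciPy. The filter below is a generic FIR lowpass for illustration (an assumption on my part, not SoX's actual coefficients): a linear phase filter spreads its impulse-response energy symmetrically around the peak, while the minimum phase conversion pushes nearly everything after the onset, which is exactly why the upsampled waveforms above diverge the way they do:

```python
import numpy as np
from scipy import signal

# Generic stand-ins for an anti-imaging filter (not SoX's actual coefficients)
h_lin = signal.firwin(127, 0.45)      # symmetric taps: linear phase, constant delay
h_min = signal.minimum_phase(h_lin)   # minimum phase counterpart: energy front-loaded
                                      # (SciPy's homomorphic method approximates the
                                      # square root of the magnitude response)

def energy_before_peak(h):
    """Fraction of impulse-response energy arriving before the main peak."""
    peak = np.argmax(np.abs(h))
    return np.sum(h[:peak] ** 2) / np.sum(h ** 2)

print(energy_before_peak(h_lin))   # substantial energy precedes the symmetric peak
print(energy_before_peak(h_min))   # far less energy before the onset
```

Remember though: as the music examples show, with band-limited input this "pre-peak" energy never actually manifests as visible ringing.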
How about another example? This time of a snare drum. Snare drums are a great test of temporal accuracy. Here's a good sounding free snare sample. I normalized the peak to 100%, zooming in on the highest peak in the "attack" (this whole waveform shown below is only 7ms). SoX -b 95 used again to upsample to 352.8kHz. As above, I've underlaid the cubic upsampling waveform for temporal comparison:
Feel free to try this with other "impulsive" sounds such as cymbal clashes (yes, I've looked at this as well, but not shown). Looking at the samples above, do you see any extra ringing anywhere whether it be "acausal" pre-ringing or "likely masked" post-ringing even with a steep filter (95% passband) used?
NO! NOT WITH GOOD RECORDINGS.
The key here is to remember that within a properly bandwidth limited signal where all the frequencies are below Nyquist, a linear phase FIR filter actually does not create ringing regardless of the impulse response appearance. As I have said in the previous weeks, any decent recording will follow this rule. And if it does, then the ideal filter to use is clearly a linear phase, sharp filter that can reconstruct all the frequencies in the audio data with essentially ideal temporal resolution.
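This rule is easy to verify numerically. In the sketch below (using SciPy's resample_poly as a stand-in linear phase sinc upsampler, not the exact SoX kernel), a properly band-limited signal is sampled at 44.1kHz, upsampled 8x, and compared with the mathematically ideal waveform. Away from the buffer edges (where any FIR filter has startup transients), the reconstruction error is tiny; no ringing is added:

```python
import numpy as np
from scipy import signal

fs, L, n = 44100, 8, 4410

def tone(t):
    # "Legal" signal: every partial well below Nyquist (22.05kHz)
    return (np.sin(2 * np.pi * 997 * t) + 0.5 * np.sin(2 * np.pi * 5000 * t)
            + 0.25 * np.sin(2 * np.pi * 15000 * t))

x44   = tone(np.arange(n) / fs)            # the 44.1kHz "recording"
x352  = signal.resample_poly(x44, L, 1)    # linear phase sinc upsample to 352.8kHz
ideal = tone(np.arange(n * L) / (fs * L))  # what perfect reconstruction would draw

trim = 500 * L                             # skip FIR startup/end transients
err = np.max(np.abs(x352 - ideal)[trim:-trim])
print(err)   # tiny compared with the ~1.75 signal peak: no added ringing
```

Swap in partials above Nyquist (or hard-clip the signal) and this error blows up, which is precisely the point about "indecent" recordings below.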
And that folks is the "myth" we need to say goodbye to in 2018! Linear phase, steep "brick wall" type antialiasing/anti-imaging digital filters performed with high precision, and with no intersample overloading, do not "ring" with good recordings that only contain "legal" frequencies below Nyquist. Sure, some people might prefer minimum phase slow roll-off filters because they sound different (as per Ayre's "Listen" filter), but technically if you care about time domain performance and frequency domain accuracy of 44.1kHz playback, you would go with a high precision, reasonably sharp, anti-imaging linear phase reconstruction filter (as per Rob Watts' video linked 2 weeks back).
But there is a slight issue we have to be aware of! What if the music we listen to is poorly engineered and not properly bandwidth limited? Sadly, these days with loud, dynamically compressed, "limited", and all-out clipped music, there are quite a number of "indecent" recordings out there.
They aren't hard to find. Just look for tracks in your music collection with low DR (typically DR5 or less), then have a look for squarish waves. Here are a few examples of pixel-peeping into rock/pop songs and finding "problematic" waveforms in the FLAC CD rips:
"So what?" you say... Well, let me show you what happens when we put a segment like P!nk's song through a steep digital filter, making sure we avoid intersample overload (as above, I include the Cubic Interpolation comparison):
Voilà, your "ringing" as the digital filter negotiates the "illegal" parts of the waveforms. For context, that whole waveform above is only 4ms of the Pink song!
Underwhelmed by the severity of ringing? :-)
You see, in real music, even with egregious examples of degenerate waveforms like these, the intensity of the ringing is never as impressive-looking as the impulse response plots shown in DAC measurements.
As you can see circled in pink for the linear phase filter (with a 95% passband), the "acausal" pre-ringing is actually very tiny. In fact, this P!nk sample is one of the worst examples of pre-ringing; most of the time it's not even as noticeable as above! With the minimum phase filter we see that the post-ringing is more intense and notice what happened to the double "flat tops" due to the post-ringing.
I also present the same waveform using the "-p 45" intermediate phase suggestion which again I think is a great compromise that clearly reduces the pre-ringing and still maintains the "flat tops" quite well due to much less intense post-ringing; at the expense of some very mild phase shift primarily with higher frequencies of course.
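To put a number on how big this ringing can actually get, here's a synthetic worst case (again a sketch using SciPy's resample_poly as the linear phase upsampler; an assumption, not SoX's exact kernel). A hard-clipped sine mimics the squared-off "illegal" waveforms above; after sinc upsampling, the reconstructed waveform overshoots the flat tops by typically just a few percent of full scale:

```python
import numpy as np
from scipy import signal

fs, L = 44100, 8
t = np.arange(4410) / fs
# "Illegal" waveform: hard-clipped 1kHz sine with flat tops, like a loud modern master
clipped = np.clip(1.4 * np.sin(2 * np.pi * 1000 * t), -1.0, 1.0)

up = signal.resample_poly(clipped, L, 1)   # linear phase sinc upsample to 352.8kHz
trim = 200 * L                             # ignore FIR edge transients
overshoot = np.max(np.abs(up[trim:-trim])) - 1.0

print(f"ringing overshoot above the clipped tops: {overshoot:.3f} of full scale")
```

Note that this overshoot is also why a few dB of headroom in the upsampler matters: the "ringing" is small, but it does poke above the clipped level.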
In some situations such as the Californication sample (zoomed to about 5ms so we can see the ringing better!):
Notice how in this sample, even though it's poorly bandwidth limited, linear phase filtering changed the waveform only very slightly and there was actually no meaningful ringing added. In fact, as you can see, going minimum phase altered the waveform much more significantly, even if it's primarily post-ringing.
This is an example of a situation where using minimum phase filtering gained nothing in the process and in fact worsened fidelity. There was no pre-ringing to suppress, and all it did was cause temporal delay and "blurring" of the frequency-time relationships. Again, it's hard not to be underwhelmed by how little there is to be concerned about when it comes to "ringing"!
III. So... In Summary...

While ringing can indeed happen with playback (so it's not entirely a "myth"), here are the key points:
1. Generally there is nothing to worry about with ringing. It only happens with poorly mastered music - typically tracks with strong dynamic compression, low DR, and a propensity to clip.
Strictly speaking, these types of recordings are not worthy of a "high fidelity" audio system anyway! Surely as audiophiles, I hope we're not spending thousands of dollars just for listening to The Black Keys, Justin Bieber, P!nk, Beyoncé, Miley Cyrus, Imagine Dragons, Bruno Mars, The Weeknd, Taylor Swift, Lady GaGa, etc. without throwing a bit of acoustic, jazz, and classical into our musical repertoire. The truth is that one would never be able to appreciate the nuances and dynamics that high quality equipment can convey when fed essentially compromised audio material that doesn't even need 16 bits of resolution, much less 24. This is not a judgement against the value of the artists or genres (hey, I love my rock and pop!), but rather an observation that high fidelity hardware can deliver far more resolution than this kind of "software" demands.
Ringing is seen only with albums that contain poorly bandwidth limited waveforms. In general, classical albums and acoustic recordings will not have this problem. No surprise then that even with very steep filters, our blind test back in 2015 did not show a clear listener preference between linear and minimum phase filtering using good recordings. As shown above, even when ringing happens, it's not of strong amplitude and is typically inaudible as discussed before.
2. If you typically listen to good quality recordings - you know, most classical, vocal and jazz recordings, stuff from DCC, Telarc, Channel Classics, MoFi, Analogue Productions, AIX, Wilson, Chesky, classic RCA Living Stereo, etc... Stick with a high quality digital filter that is linear phase, relatively steep so that it doesn't roll off in the audible spectrum, and provides overhead protection against clipping/overload. Something like Rob Watts/Chord's accurate high-tap sinc filter, or piCorePlayer with the "Chord-like" SoX settings from a couple weeks back, will be best for these recordings from the perspective of highest fidelity. As I said in that previous post, I don't think a very steep filter is necessary; something like a 95% passband with SoX is more than good unless you are sure your hearing acuity extends beyond 21kHz :-).
In this light, I actually cannot help but think that when companies introduced certain "new" filtering ideas, these steps actually worsened fidelity. For example, though intriguing at first, these days I consider the Meridian "apodizing" steep minimum phase filter from 2009 a step backward in terms of fidelity. Sure, it created a stir in the audiophile community, maybe sold a few Meridian units, and put the idea of switchable filters on the feature list of many DACs; but it caused more time-domain issues than it solved. All because it dazzled even those in the audiophile press with its lack of pre-ringing in the impulse response? Here's a quote from that review:
"The brevity of these preresponses that occurs with high sample rates has led some commentators to decide that this is why high sample rates produce better sound quality, and not the octave extension in bandwidth itself. On the face of it, however, this improvement seems paradoxical: the preresponse comprises ringing at half the sample rate (called the Nyquist Frequency), which, even with CD, is above the limit of human hearing. But the preresponse does appear to have an audible effect, perhaps because the human ear/brain system acts as a wavefront detector rather than as a spectrum analyzer."

The most fascinating part is the piece that I italicized. Notice the ambivalence: the sentence before expresses factual recognition that the ringing is at Nyquist, but right afterwards he takes it as some kind of subjective "truth" that the effect "does appear to have an audible effect" with no evidence for such a thing whatsoever! Who said they could hear this, and with what music? Probably not Mr. Atkinson himself, because earlier in that quoted paragraph he referenced the 2006 article where his own listening test did NOT clearly show evidence of an audible "preresponse"! Such is sadly the state of the audiophile mainstream media... Full of contradictions and an unwillingness to be critical of claims no matter how unsupported. Of course Meridian wants us to believe pre-ringing is audible to sell their goods; but do we as audiophiles need to believe this, and does the media take any responsibility for educating, clarifying, and investigating before promoting such ideas?
BTW, I think Apple's switch to minimum phase playback in iPhones and iPads (compared to the classic iPod) also doesn't help fidelity - there's no point getting picky playing typical AAC iTunes/Apple Music lossy audio in any event.
3. If you listen to lots of 44.1kHz modern pop/rock/electronica/loudly remastered CDs and likely will run into anomalous waveforms, go have fun with my "Goldilocks" intermediate phase setting from last week. IMO, it's fine for all that I listen to and incorporates small tweaks that may not be "ideal" but that I do not believe any human would be able to notice. The intermediate phase setting will further reduce any risk of audible pre-ringing (assuming we're still worried after all I've said and shown here!), phase anomaly will be minimal compared to a steep minimum phase filter, imaging suppression is excellent, and there won't be any issue with intersample overload in actual music with the 4dB overhead as suggested.
4. If you listen to 88.2/96+kHz high resolution, don't worry about the filters, for the obvious reason that any acausal ringing from a linear phase filter would be way beyond human perception. Of course, if we're buying 88/96+kHz music, it had better be well engineered! Even then, I'd just stick with linear phase, and ensure the passband isn't set so low as to cause roll-off below 20kHz - something is very wrong if one ever sees this. The most important aspect of a good digital filter for high samplerate music is to make sure it doesn't overload when the DAC upsamples!
5. In an ideal world, all recordings would be properly bandwidth limited so nobody could point to ringing during playback! If that were the case, then just follow point 2 above.
Folks, it comes back to the quality of the recording, mixing, mastering. Fooling around with filtering in the hardware is at best just an honest attempt to plaster over poor recordings (and at worst a marketing attempt to sell something "different" without admission that it likely compromises fidelity). Support what you can to encourage better quality engineering and studio techniques. An end to the Loudness War would of course go a long way, support Dynamic Range Day, don't buy crappy recordings, etc...
Have fun listening everyone and let's stop getting excited/worried/panicked about "the detestable ringing" in 2018. With that, I think I'll take a break from talking about filters for a bit. I've probably covered the topic enough for awhile.
Time to just enjoy the new albums I got over the holiday season... And time to get back to the real day job... :-)
[BTW, if you're wondering why this is Myth #260, this is my 260th blog post! I must have missed a few other ones here and there along the way :-). Almost a full 5 years into the blog with regular updates, my how time flies...]
For audiophiles, here's an interesting presentation on MQA and DRM by Drs. Christoph Engemann and Anton Schesinger. The slides are also available as PDF. No question MQA implements a form of "conditional access" even if it doesn't prevent the ability to copy files at this point.
I'll leave you to review the information provided by these gentlemen.
Until next time, wishing you all happy listening!