Over the years, I've heard some people wonder whether there really is a point to 24-bit DACs. After all, there is little if any evidence that "hi-res" 24-bit music actually sounds any better - in fact, you might recall that way back in 2014, here on the blog we ran a blind test and the results did not show a significant audible benefit among respondents. More recently, with last year's "HD-Audio Challenge" from Dr. AIX, very few people were able to hear benefits from "hi-res" audio (no surprise of course!).
But just because "hi-res" music is not necessarily perceptible, let's not "throw the baby out with the bath water" :-). These days, modern DACs are capable of 24-bit audio as a standard feature and there are situations where this is clearly beneficial. Remember that 24 bits allow a digital dynamic range of 144dB, which is clearly more than enough for any audio reproduction for human consumption. In fact, no DAC analogue output is capable of the full 24-bit digital resolution; the maximum achievable these days is around 22 bits of effective resolution, and a very good DAC provides something like 20 to 21 bits. Unless you've replicated the "world's quietest place" which measures at around -20dBA SPL (the Brownian motion limit at room temperature is around -23dB SPL), our homes typically have a noise floor of around 20-50dB SPL already. If we consider that the "threshold of discomfort" is up at ~120dB SPL, that means we effectively have something like 100dB of maximum range we would ever use.
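Where does that 144dB figure come from? Each bit contributes about 6dB of dynamic range - a quick sketch, just for illustration:

```python
import math

# Dynamic range of an N-bit quantizer: DR = 20 * log10(2^N) ≈ 6.02 * N dB.
# This is where the familiar 96dB (16-bit) and 144dB (24-bit) figures come from.

def dynamic_range_db(bits: int) -> float:
    return 20 * math.log10(2 ** bits)

for bits in (16, 20, 24):
    print(f"{bits}-bit: ~{dynamic_range_db(bits):.1f} dB")
```

Run it and you'll see ~96.3dB for 16 bits, ~120.4dB for the 20 bits a good DAC actually achieves, and ~144.5dB for 24 bits.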
Remember that while short bursts of high-amplitude sound can be OK, prolonged exposure for hours at >85dB SPL will damage hearing. Taking these things into account, realistically, something like 60-70dB of dynamic range should be good enough even in relatively quiet rooms. Conveniently, this is also the limit of the vinyl LP, which is why good-sounding LPs with credible dynamic range are possible, whereas cassette tape, sitting at around 50dB (improved with Dolby NR), typically would not be acceptable as a "high fidelity" medium.
16-bit digital audio, with 96dB of potential dynamic range, is easily more than good enough in home and mobile scenarios (hence results like the aforementioned negative 16 vs. 24-bit blind tests). Remember that the perceived dynamic range of 16-bit audio in the audible spectrum can be further improved by >10dB with noise-shaped dithering, depending on the algorithm used.
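As a rough illustration of what dithering does (a minimal sketch, not any particular product's algorithm): before rounding to 16 bits, a small amount of triangular (TPDF) noise is added so the quantization error becomes benign random noise rather than distortion correlated with the signal. Noise shaping goes further by filtering that error toward the less-audible high frequencies, which is not shown here.

```python
import random

def quantize_16bit(sample: float, dither: bool = True) -> int:
    """Quantize a sample in [-1.0, 1.0) to a 16-bit integer code."""
    scaled = sample * 32768.0
    if dither:
        # TPDF dither: sum of two independent uniform values of ±0.5 LSB each
        scaled += random.uniform(-0.5, 0.5) + random.uniform(-0.5, 0.5)
    return max(-32768, min(32767, round(scaled)))
```

With `dither=False`, low-level signals simply truncate to the same few codes over and over (audible as distortion); with dither, the rounding decision is randomized and the average tracks the true value.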
So, is there a point to going 24-bits? Sure! While 16-bits may have adequate ability to retain all the dynamics we'll ever need on playback, it does not have much "overhead" if we start manipulating the data and want to ensure fine details are preserved. And it's precisely when we start manipulating the digital data with signal processing as end-users that it's very nice to have the 24-bit DAC around. Here are a few examples:
1. Digital volume control. Having a large dynamic range means that there are more bits to play with when we start reducing amplitude during playback without sacrificing quality. One could reduce a 24-bit signal's amplitude by 48dB in the digital domain and still maintain all the detail in the original 16-bit audio. I specifically said "digital domain" because no DAC is perfect; the effective resolution of each DAC will vary once we consider the analogue noise floor. As discussed above, in the real world, a good DAC can provide around 20 bits of resolution, which still allows >20dB of volume reduction of a 16-bit signal before we encroach on the noise floor...
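The 48dB figure can be seen directly with integer arithmetic. This is just a sketch - real volume controls use arbitrary gains, not only powers of two - but a power-of-two gain makes the point cleanly because it is an exact bit shift:

```python
# A 16-bit sample carried in a 24-bit word has 8 "spare" bits below it.
# Attenuating by 2^-8 (about -48.2dB) is then a lossless shift.

def attenuate_in_24bit(sample_16: int, shift_bits: int) -> int:
    word_24 = sample_16 << 8      # align the 16-bit sample to the 24-bit MSBs
    return word_24 >> shift_bits  # each bit of shift is ~6.02dB of attenuation

s = -12345                        # arbitrary 16-bit sample value
assert attenuate_in_24bit(s, 8) == s   # ~-48dB and every bit survives
```

Shift by more than 8 bits and original LSBs start falling off the bottom of the 24-bit word - which is exactly the "running out of overhead" situation described above.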
Just to demonstrate this, suppose we take the RightMark 16/44.1 test signal and attenuate the amplitude by 25dB; here's how the waveform appears in Adobe Audition:
As you can see, that's a rather significant reduction in amplitude if we were to listen. What if we then run that test signal through the analyzer, comparing playback as undithered 16/44.1, dithered 16/44.1, or simply a volume-reduced 24/44.1 signal, through my Oppo UDP-205's DAC and captured back using the RME ADI-2 Pro FS?
Here are some results to consider:
|"Tri-Dither" = triangular dither, "Tri-NS Dither" = triangular dithering with noise shaping.|
Finally, on the right side is the measurement of the same -25dB signal, but this time processed and fed to the same DAC as 24 bits. Notice numerically the significant difference when we allow the signal to be reproduced in 24 bits. Here are some composite graphs:
As expected, noise shaping shifted noise into the higher frequencies while reducing the noise floor where human hearing is more sensitive. I wasn't quite expecting the higher distortion results with noise shaping though; presumably this is the effect of the feedback algorithm on the artificial test signal and would not be an issue with actual music. But what's of course most important is the "real world" demonstration that even with a -25dB drop in amplitude, the 24-bit DAC reproduced the original 16/44.1 signal at essentially the same quality level with full dynamic range and very low levels of distortion.
This is good news if you're using software-based volume control on a computer or on devices like the Raspberry Pi feeding into high-quality 24-bit DACs. Realize of course that the result above is no surprise. As shown previously, we know the Oppo UDP-205's internal ESS ES9038Pro DAC is capable of at least 120dB dynamic range, which is 20 bits of measurable resolution. Dropping a 16-bit signal by 25dB is equivalent to attenuating the 16-bit LSB down by about 4 bits, to around the 20th bit. As expected, the DAC has high enough resolution to handle this.
So, if you're using a 24-bit digital volume control, there's no need to be neurotic about loss of quality if you're feeding the audio data into a good 24-bit DAC. Note that if you're routinely dropping the digital volume >20dB, you really should be re-evaluating the amplifier gain and speaker efficiency for your room.
2. Volume normalization. Like above, this is another form of changing volume in the digital domain. These days, I have batch-analyzed all my FLAC and MP3 files to embed ReplayGain tags in the metadata using the excellent dBPowerAmp. For my library, I've found that ReplayGain's default target of EBU R128 -18LUFS (for more info on LUFS, see here) works well and I have tagged the files for both Track and Album gain.
As you know, due to the Loudness War, depending on the mastering and age of the recording, there will be a large range in average loudness between tracks and albums. I very much enjoy jumping between tracks, but it sucks when going from a low-average-amplitude classical piece to an insanely dynamically-compressed DR4 rock track risks hearing loss and blown tweeters because the volume was set too high. Volume normalization is the answer, and one of the most universal ways to do this is to have a program like dBPowerAmp analyze the track and embed metadata tags that tell compatible playback software how loud the music is. Most modern audio playback software can handle these, including Foobar, JRiver, Roon, WinAmp and Audirvana. ReplayGain is handled on many phone players as well, and by library software like Logitech Media Server.
By targeting -18LUFS average loudness, the software will automatically turn up the loudness of quiet tracks and reduce the volume of loud tracks. Because the range can be quite large, it's not unusual to see -10dB applied to compressed stuff (these images are taken from my Logitech Media Server library with ReplayGain playback turned on):
That's Jason Mraz's recent dynamically compressed pop album Know. with a DR7 score. In order to achieve an average album loudness of -18LUFS, almost 9dB of attenuation will need to be applied to the songs.
In comparison, some "first pressing" CDs from back in the 80's particularly could use some volume increase:
Here's an early release of Rod Stewart's Greatest Hits showing that ReplayGain would increase the amplitude by almost +3dB to target an average loudness of -18LUFS for this album.
There are a couple of nuances that are good to be aware of. First, ReplayGain can be applied to Tracks or Albums. If the player finds consecutive tracks from the same album being played, it can use the Album setting and keep the ReplayGain value the same through the album - this is good for live albums, for example, where you don't want to hear loudness discontinuities as one track merges into the next! Track normalization is what's used with streaming, mixing different songs from different albums while maintaining the same volume setting. In LMS, the "Smart Gain" setting will determine this for you, and in Roon, select the "AUTO" setting when volume leveling:
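A hypothetical sketch of that "smart" decision logic (the function and names here are mine for illustration, not LMS's or Roon's actual code):

```python
def pick_gain(prev_album, current_album, track_gain_db, album_gain_db):
    """Use Album gain for consecutive tracks from one album, else Track gain."""
    if prev_album is not None and prev_album == current_album:
        return album_gain_db   # preserves relative levels across the album
    return track_gain_db
```

So during shuffle play across a library, `prev_album` rarely matches and Track gain applies; play an album straight through and the Album gain locks in after the first track.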
|My "Target Volume" for Roon. I see that by default in Roon 1.6, they're using -14LUFS. IMO that's fine for mobile audio, but I'll stick with -18LUFS for my high-fidelity setups. If I were mainly a classical guy, I'd go for -20LUFS with the main rig.|
This old Flim & The BB's TriCycle album has an amazing DR18 dynamic range. Notice that to achieve a -18LUFS average loudness, the software would have wanted to add 5.76dB. But because of the strong dynamic range, with peaks extending up to -0.06dBFS, the software could at best add that tiny 0.06dB without clipping the audio.
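The TriCycle example boils down to a simple rule: apply the gain needed to reach the target loudness, capped by the available peak headroom. A sketch of that logic (the -23.76LUFS "measured" value below is just back-calculated from the +5.76dB figure, not a real measurement):

```python
def replaygain_db(measured_lufs, peak_dbfs, target_lufs=-18.0):
    """Gain to reach target loudness without pushing peaks past 0dBFS."""
    ideal_gain = target_lufs - measured_lufs  # e.g. -18 - (-23.76) = +5.76dB
    headroom = -peak_dbfs                     # dB left before clipping
    return min(ideal_gain, headroom)

print(replaygain_db(-23.76, -0.06))  # 0.06dB applied, not the "ideal" 5.76dB
```

A loud, compressed album works out the other way: the ideal gain is negative, headroom is irrelevant, and the full attenuation is applied.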
3. Other DSP - EQ, room correction. Of course, a good 24-bit DAC is important once we start getting even more sophisticated with DSP - things like adding EQ, crossfeed, room correction. Remember that internally, DSP engines typically use 32-bit or 64-bit precision to perform the math; the result is then converted back to 24 bits and sent to the DAC. Roon is excellent when it comes to telling us what's going on behind the scenes, for example:
That's what it looks like tonight as I play the CD rip of George Michael's Faith through ReplayGain volume leveling, upsampled to 176.4kHz, processed through the headphone crossfeed DSP then back to 24-bit playback to my ASUS Xonar Essence One DAC using the bit-perfect WASAPI driver. Notice that Roon internally processes the audio in 64-bits.
We can of course have the DSP engine dither down to 16 bits and it'll still sound pretty good, but with a 24-bit DAC, a higher level of precision will be maintained from the DSP output through to the DAC. One could debate whether the difference is audible, of course, even though objectively we would have no problem showing the difference.
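A minimal sketch of that kind of pipeline (not Roon's actual code): do the gain math at high precision, then scale and round once to 24-bit integer codes at the very end. Python floats happen to be 64-bit, which conveniently stands in for the DSP engine's internal precision.

```python
def process_to_24bit(samples, gain_db):
    """Apply a gain in 64-bit float math, then quantize once to 24-bit codes."""
    gain = 10.0 ** (gain_db / 20.0)   # dB to linear, computed in float64
    out = []
    for s in samples:                 # samples assumed in [-1.0, 1.0)
        x = s * gain                              # 64-bit intermediate
        x = max(-1.0, min(1.0 - 2.0 ** -23, x))   # clamp to 24-bit full scale
        out.append(round(x * 2 ** 23))            # single final quantization
    return out
```

The point is that all the intermediate math (leveling, EQ, crossfeed...) happens before the one rounding step, so quantization error isn't accumulated at each stage.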
As you can imagine, the difference between 16 and 24 bits comes down to the extra precision those 8 bits provide. Manipulation of the data, like volume attenuation even to a significant degree (say -25dB), will not result in loss of low-level detail, and subtle nuances will be passed on to a good hi-res DAC after DSP manipulation. Of course, audio engineers have been using 24 or even 32-bit audio in the professional setting for ages for the best audio quality.
Despite these benefits, remember that in the audiophile world there are companies still proud of their "standard-resolution" 16-bit products and some even demand very high prices for them (Audio Note products for example and the various NOS DACs based on the old Philips TDA154X chips). Presumably these companies agree with the science that suggests that straight 16-bit CD resolution conversion is adequate and they do not expect the end users to significantly manipulate the 16-bit data in a way that could make the resolution limit audible. I personally am not of the camp that would forego readily accessible technological improvements like 24-bit resolution.
Speaking of volume normalization, I think it's good to consider that normalization is an important component of streaming services these days. As we know, loud, dynamically compressed music results in "wimpy" sounding recordings. That sucks.
There has been hope that the Loudness War can be won because of streaming services like Spotify, Apple Music, Tidal, YouTube and the like. I suppose if we look at the last few years, at least many new recordings are not as dire as those DR4 and lower masterings back in the early 2000's. But that's not saying much when DR6-8 still seems very common. I guess we'll see if any real trend develops towards more dynamic music.
As I mentioned above, I've been using the default ReplayGain target of -18LUFS and this works pretty well in general for my collection of rock, pop, jazz, and classical with a small serving of EDM, rap, and country. With a target of -18LUFS, there should be plenty of headroom and I have seen high dynamic range albums (DR13-DR15 material) not require any adjustment at all.
However, notice that the streaming services are not unified in the average loudness used, though they're at least reasonably close. Spotify and Tidal aim for -14LUFS with normalization turned on by default (although I've seen mention of Spotify targeting -12LUFS), Apple Music targets -16LUFS, and YouTube is loudest at something like -13LUFS. I'm not sure what Qobuz does for loudness normalization, or if, like Tidal, it embeds metadata for the player software to use; I suspect it's the latter approach. IMO, embedding loudness metadata in the stream makes the most sense: it allows the user to control nuances like Track vs. Album gain, or whether limiting should be applied to those soft, very dynamic tracks when the user still wants to hit a certain loudness level during shuffle play (even though this would affect sound quality).
It looks like the AES published "Recommendations for Loudness of Audio Streaming and Network File Playback" back in 2015. Worth checking out... They suggest a target between -16 and -20 LUFS. Short programming (like commercials) should not be >5 LU louder than the target so our ears aren't suddenly blasted by ads - I think this is still too high; maybe +3 LU is generous enough! And peaks should not be above -1dB "true peak" to limit clipping/overload.
For other ideas, Ian Shepherd reported on the Eelco Tidal research with some good discussion of preference for using Album normalization even when listening to a mix of songs/albums.
There's also an ongoing online petition to "Bring Peace to the Loudness War" if you want to add your voice. They're almost at 10,000 signatures.
To end, remember that the "Do digital audio players sound different? (Playing 16/44.1 music.)" blind test is still running (just getting started!). Please look at it, have a listen, and submit your thoughts...
I have not looked at the results so as not to bias what I might say. So far, I do know that I have almost 30 responses which is pretty good given that this is only into week 2. I'd love to see >100 responses when this is all said and done.
I think it's important to remember that audiophilia has its share of myths, opinions, rumors, and actual truths. Navigating through this and coming out as rational audiophiles demands some "work" and experience - not just visiting your local dealer, reading magazines and noting what "that audiophile writer said", or buying stuff without some rationale for why. This blind test, IMO, is a rare opportunity where I have done the "hard work" for you so that you can speak from experience on a simple but significant question with far-ranging implications! Your opinion on "the sound" of different devices might be validated, or you might be shocked by what you hear or don't hear, especially once I reveal the test procedure and devices used...
Let's be honest. This kind of blind testing will never be published by equipment or DAC manufacturers even if internally they've done the testing themselves. Furthermore, do you think an audiophile magazine would ever run a blind test and report prospectively on the final result? To do so would likely not help with equipment sales, and magazines have trouble with direct equipment comparisons, especially if a much less expensive device ends up trumping a luxury product.
Let's not be passive audiophiles. Submit your experience with this listening test! You have until the end of April.
I'll be away a bit over the next few weeks, so long postings like this might be more sporadic - we'll see. Enjoy the music, folks!