I don't have an MQA-capable DAC myself (and honestly, owning one is not high on my list of priorities), but a friend happens to have the Mytek Brooklyn, which is fully MQA-native and able to decode all the way to 24/384. Furthermore, he has the use of a professional ADC of excellent quality - the RME Fireface - with which to make recordings of the output from the DAC.
|Image from Mytek. Obviously very capable DAC!|
1. Can we show that hardware-decoded MQA is closer to an original signal beyond the 88/96kHz decoding already done in software?
2. Can we compare the hardware decoder with the output from the software decoder? How much difference is there between the two?
Procedure: As you can imagine, in order to best objectively assess hardware decoding of MQA beyond the 88/96kHz which the TIDAL software is able to do, we would have to start with a track that is "true" high resolution with ultrasonic content and a low noise floor. We would also want access to both the original native-resolution (>96kHz) file and the TIDAL version so we can compare against pure software decoding. To "capture" the sound, we can use the same DAC and record the track in the actual original resolution, then have TIDAL software decode and record that, and finally record the DAC output using the "full" hardware/firmware MQA decoding from the DAC.
Because of these requirements, my friend and I decided on using the freely available 2L track "Blågutten" from the Hoff Ensemble's Quiet Winter Night available on the 2L demo page as well as on blu-ray. As you can see on the demo page, the original DXD (24/352.8) file is available and would represent the "purest" and "original" resolution for the DAC which can be used as reference. Recall that a similar technique was used last year when I looked at the Meridian Explorer 2 DAC.
There are, however, a few caveats to note about how these recordings were done which may or may not be significant.
1. The RME Fireface 802 ADC used to capture the DAC output is capable of up to 24/192kHz. Therefore, although we can play back the "original resolution" 24/352 DXD and the MQA hardware decoding is of the same claimed resolution from the Mytek Brooklyn, all comparisons will be done at most at a 192kHz samplerate.
2. The measurements are taken from the Brooklyn's XLR balanced outputs for lowest noise and best resolution. However, to avoid overloading the ADC, the Brooklyn XLR was jumpered to +4V with -5dB volume. On the Fireface ADC side, sensitivity for the XLR input was set to +4dBu. See settings below:
|Mytek Brooklyn settings - notice running firmware 2.21. (Latest firmware at publication is 2.22 which was only a display bug fix and should not impact the sound.)|
|RME Fireface 802 settings & mixer.|
Alright then, with these formalities out of the way, let's analyze the results...
Results: Unlike my previous post comparing hi-res downloads to MQA decoding on TIDAL sourced directly from a digital rip, remember that these tests are dependent on the quality of the ADC/DAC used; this is of course why I spent time above describing the procedure. As such, the signal to some extent will show limitations of the DAC/ADC process including noise floor characteristics, which I will try to point out where I can.
My friend was extremely meticulous with these recordings. For the purpose of answering the basic questions posed above, I'm going to start with making some comparisons between 4 primary recordings:
A. Brooklyn DAC playing the original 24/352.8 "DXD" track - "Reference"
B. Brooklyn DAC playing the Tidal software MQA decoded version - "Software MQA"
C. Brooklyn DAC playing as a native hardware/firmware MQA decoder - "Hardware MQA"
D. Brooklyn DAC playing the Software decode then further hardware/firmware MQA decode - "Soft-Hard MQA"
Notice the "Soft-Hard MQA" recording. According to my friend, the Brooklyn DAC will recognize this partially decoded version from TIDAL at 88kHz (MQA adspeak calls this the "MQA Core" output) and then the DAC will decode and presumably upsample internally all the way to 24/352.8. The MQA indicator on the Brooklyn turns red (normally only blue and green) when it does this.
One by one then, let's try to answer a few questions... Note that I subjectively listened to each track so I already had my own impressions, but to really help you appreciate what I "heard", let us objectively run Audio DiffMaker for each comparison to actually show you just what the "residual" difference is over the first 30 seconds of the music. I think it is only in this way that we can understand the magnitude of similarity or difference in the sound even if subjectively you're just going to have to listen for yourself.
For the record, here is the DiffMaker setting used - essentially the default settings:
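To make the "null" idea concrete, here's a minimal Python sketch (numpy only, with toy signals rather than the actual recordings) of what a difference test boils down to: gain-match the two captures, subtract, and report the residual's RMS level in dB relative to the reference. Note this is just an illustration of the principle - the real DiffMaker also performs time alignment and drift correction, which this sketch omits.

```python
import numpy as np

def null_test_db(reference, test):
    """Gain-match 'test' to 'reference' (least-squares), subtract,
    and return the residual's RMS level in dB relative to the
    reference.  (DiffMaker also time-aligns and corrects for clock
    drift; this toy version omits that.)"""
    gain = np.dot(reference, test) / np.dot(test, test)
    residual = reference - gain * test
    rms_ref = np.sqrt(np.mean(reference ** 2))
    rms_res = np.sqrt(np.mean(residual ** 2))
    return 20.0 * np.log10(rms_res / rms_ref)

# Toy example: the "test" capture differs from the "reference"
# only by a tiny amount of added noise -> a deep null.
rng = np.random.default_rng(0)
reference = np.sin(2 * np.pi * 1000 * np.arange(48000) / 48000)
test = reference + 1e-4 * rng.standard_normal(reference.size)
print(round(null_test_db(reference, test)))  # about -77 (dB)
```

The deeper (more negative) the residual, the more alike the two captures are - keep that in mind when comparing the RMS figures in the plots below.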
1. How much difference was there between the "Reference" playback and "Software MQA" from TIDAL?
Here's the spectral frequency plot of the "difference":
As you can see, below around 60kHz in the spectral frequency plot, there's really very little difference! This tells us, as I described last time, that MQA indeed "works" as advertised by creating high-resolution output from the MQA file to a certain extent. Even more interesting is that the amplitude of the difference between the actual DXD playback and the software MQA decode averages around -70dB RMS right off the bat! Folks, this level of difference is low, especially considering that most of it is just the ultrasonic noise above 60kHz!
For the moment, I hope we can all agree that noise above 60kHz isn't necessarily desirable nor audible, so I'll come back to it later. Let's ignore that ultrasonic noise and downsample the 192kHz file to 96kHz (iZotope RX 5) to see just how well frequencies up to 48kHz "null":
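For illustration, the downsample step could be sketched like this in Python, with scipy's polyphase resampler standing in for iZotope RX 5 (the tone frequencies here are made up for the example):

```python
import numpy as np
from scipy.signal import resample_poly

fs_in = 192_000
t = np.arange(fs_in) / fs_in                   # one second at 192kHz

# Toy capture: audible-band content plus ultrasonic "noise"
audible = np.sin(2 * np.pi * 10_000 * t)       # 10kHz tone
ultra = 0.5 * np.sin(2 * np.pi * 60_000 * t)   # 60kHz, above 48kHz

# Halve to 96kHz; the polyphase anti-alias filter removes
# everything above the new 48kHz Nyquist limit before we null.
y = resample_poly(audible + ultra, up=1, down=2)

print(len(y))  # 96000 samples -> 96kHz
# The RMS of y barely differs from the audible part alone:
# the 60kHz content is gone after the downsample.
```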
For perspective, let me show you what the equivalent plot/amplitude results looks like if we ran that reference recording through an MP3 320kbps LAME 3.99 CODEC and differenced it:
Obviously we can see that the MP3 version is less accurate compared to the MQA decode above. For those who have tried to blind-test the sound (like what we did a few years ago), I believe LAME MP3 encoding at 320kbps would be "transparent" compared to lossless 16/44 for the vast majority of folks. Compared to the "Software MQA" decoding though, the difference is a significant 15dB as represented by the difference in average RMS power in the "null" file (remember, the dB scale is logarithmic).
This IMO is meaningful in terms of recognising how subtle any effect is! If this is all you remember from this blog post, it might in fact be enough :-).
2. How much difference was there between the "Reference" playback and "Hardware MQA" from the TIDAL stream?
Again, nothing but noise >60kHz, so let's just look at the stuff up to 48kHz:
You can see in the spectral frequency plot that indeed the hardware decoded output is even "less different" than the software decoding above. Likewise, the RMS power calculated for this "null" file is around 6-7dB lower on average! Very impressive. That's a good sign that the hardware decoding does indeed add to the "accuracy", at least up to a 48kHz audio frequency.
3. How about the hybrid "Soft-Hard MQA" where the first step in the decoding is done with TIDAL software to 24/88 then the Mytek Brooklyn hardware decodes all the way to DXD?
As I mentioned above, there is an interesting decoding option which is software decoding with TIDAL first (MQA Core) then the Brooklyn takes it from there. As above, let's start with the spectral frequency plot of the "difference" with the full 24/192 recording:
Yet again, there's nothing but noise above 60kHz, so let's just focus on the difference up to 48kHz with a high quality 24/96 downsample using iZotope RX 5.
If you compare those RMS power results, the pure "Hardware MQA" decode with the Brooklyn is still marginally lower (more "accurate"). Considering that we're running the signal through a DAC/ADC step, I'd say this very small difference is likely insignificant and this "Soft-Hard MQA" decode combination is essentially identical to the "Hardware MQA" quality based on the objective results.
4. Okay... So if there's a difference between "Hardware MQA" and "Software MQA" decodes, can we compare it in the frequency domain?
Sure... Suppose we take a look at the same point in the music around 9 seconds in (there's a waveform peak in there that makes it easy for me to set my marker within milliseconds), the frequency FFT looks like this between the different recordings (this is the actual Fireface 24/192 recording, not the DiffMaker "difference" file of course):
So what do we see?
The yellow plot is the original "Reference" DXD file from 2L which I presume (but have no way of knowing) is what was fed into the MQA encoder. In green, we have the "Software MQA" TIDAL decode which we know "unfolds" up to 88kHz samplerate only, hence from ~44kHz on in the FFT, it follows the noise floor of the ADC. Of interest is that up to 43kHz or so, the software decoding seems to be the closest following the DXD original but this is just one time point and doesn't necessarily generalize across the track.
Both the violet and blue traces are the hardware decoding options using the Mytek Brooklyn's native decoder - either direct ("Hardware MQA") or going through the initial TIDAL software decode to 88kHz first ("Soft-Hard MQA"). Indeed, both tracings seem to have some "extra" content over the TIDAL software decode all the way to about 75kHz when I play the recording back and monitor in realtime. However, interestingly the ultrasonic signal is not the same level as the DXD "Reference" downloaded from 2L.
Notice that from about 60kHz up, the "Reference" signal shows quite a bit more noise than the hardware MQA decodes. This is the source of all that high-frequency noise difference in each of the comparisons (which I filter out when downsampling to 24/96).
There are a couple of possibilities as to why there's all this 60+kHz noise. First, perhaps the 2L DXD download is not from the same source as was used for the MQA encoded version. The second option is that the MQA process filters down the ultrasonic noise in the encoding or makes assumptions about what the noise floor should be; sure, there's some content way up there above 45kHz, but it doesn't bear a very strong resemblance to the original DXD file. Only the folks at 2L can tell us whether the DXD download is indeed what was used to produce the MQA track.
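For reference, the kind of windowed FFT overlay used above can be sketched as follows - a synthetic one-tone "capture" stands in for the actual recordings, and the ~9-second offset and FFT length are illustrative choices, not the exact analysis settings:

```python
import numpy as np

fs = 192_000
offset_s, n_fft = 9.0, 65536  # slice ~9s in, 64k-point FFT

def spectrum_db(x):
    """Hann-windowed magnitude spectrum (dB) of one slice of x,
    taken at the same time offset for every capture compared."""
    start = int(offset_s * fs)
    seg = x[start:start + n_fft] * np.hanning(n_fft)
    mag = np.abs(np.fft.rfft(seg)) / (n_fft / 2)
    return 20 * np.log10(np.maximum(mag, 1e-12))

# Synthetic 10-second "capture": a 1kHz tone at -6dBFS.
t = np.arange(10 * fs) / fs
capture = 0.5 * np.sin(2 * np.pi * 1000 * t)

spec = spectrum_db(capture)
freqs = np.fft.rfftfreq(n_fft, d=1 / fs)
print(freqs[np.argmax(spec)])  # peak lands close to 1000 Hz
                               # (bin spacing is ~2.9 Hz)
```

Taking the slice at the same sample offset in every recording is what makes the overlaid traces directly comparable.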
5. What is the difference between "Software MQA" and "Hardware MQA" decoding?
Obviously not much :-). The ultrasonic noise difference this time, especially from 75kHz up, is the result of the rising noise floor from the ADC as seen in the FFT above. So then, if I filter that stuff out in software with a low-pass filter at 60kHz, this is the difference that's left:
As expected, if you zoom in on the spectral frequency plot, you'll see a transition zone where the software decoder rolls off just above 40kHz. Everything above that presumably will be "extra" stuff the hardware decoder was able to reconstitute until of course the 60kHz low-pass limit.
Notice that the average RMS amplitude of the 24/192 "null" file is down at -90dB or so (very very soft).
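The 60kHz low-pass step can be sketched like so - a scipy Butterworth filter stands in for whatever filter one prefers, and the toy "difference" signal levels are invented for the example:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 192_000
# 8th-order Butterworth low-pass at 60kHz, applied zero-phase.
sos = butter(8, 60_000, btype="low", fs=fs, output="sos")

def rms_db(x):
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

# Toy "difference" file: a very soft audible-band residual buried
# under louder ultrasonic hiss at 80kHz.
t = np.arange(fs) / fs
residual = 3e-5 * np.sin(2 * np.pi * 5_000 * t)
hiss = 1e-3 * np.sin(2 * np.pi * 80_000 * t)
diff = residual + hiss

print(round(rms_db(diff)))                    # about -63: hiss dominates
print(round(rms_db(sosfiltfilt(sos, diff))))  # about -93: only the
                                              # audible-band residual left
```

This is why the RMS figures quoted above depend so much on whether the ultrasonic noise has been filtered out first.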
6. Out of curiosity, what about comparing non-decoded MQA and DSD from the 2L download site for "Blågutten"?!
Using a similar overlay of the FFT from the same spot in the music as above, here it is with all the different encoding formats 2L has graciously been able to provide of this music:
As you can see, the undecoded MQA drops off at 22kHz with a slightly elevated "noise" level above 20kHz (we've seen this before when I looked at MQA last year). DSD64 shows the usual high level of noise shaping ultrasonic hash above 25kHz. And likewise DSD128 does the same but one octave up from about 50kHz. Obviously, multi-bit PCM is free from these noise floor limitations - of course, we are looking at ultrasonic noise and I'll leave you to ponder if this holds any audible significance when handled properly with filters. It looks like we'll need DSD256 in order to achieve a low noise floor up to 100kHz to compete with PCM 192kHz.
7. Ok... This is all well and good with a hi-res 2L recording, but what about more typical "hi-res" recordings like popular rock albums which probably originated in analog and mastered louder with dynamic compression?
Glad you asked. I asked my friend to record Led Zeppelin's "Good Times Bad Times" (from Led Zeppelin, a DR8 track) for a comparison. Although he recorded it in 24/192, since it only decodes to 96kHz and everything above that is noise, I downsampled the recordings to 24/96. Remember that I examined the track a few weeks back comparing the MQA decode with the 24/96 high-resolution download and found them to be very similar.
Here's the "difference" between "Hardware MQA" and "Software MQA":
Interesting. As you can see in the spectral frequency display above, the difference between hardware and software decoding is more evident with this recording than with the 2L track above (item 5). Interesting, isn't it, that a loud, pseudo high-resolution track originally laid down on analogue tape can actually demonstrate more of a difference than a pristine hi-res recording done in DXD. Realize though that the average RMS power difference is still quite soft at below -70dB, with much of this in the ultrasonic frequencies.
Here's a hint as to why there are differences:
Above is another look at the "Good Times Bad Times" comparison but in the frequency domain at the same point in the song. We can see more clearly the "difference" between software and hardware decoding in the 35-45kHz range. It looks to me like the digital filtering parameters of TIDAL and the Brooklyn firmware differ a little, resulting in an earlier roll-off with the Brooklyn.
Also, we see a small amount of noise around 55kHz with the Brooklyn, suggesting that when fed a 24/96 signal from TIDAL, the anti-imaging filter used is probably stronger than the upsampling algorithm for MQA decoding when fed the 24/48 MQA data. Mathematical precision is another possibility, especially the potential for intersample overload in the louder segments. If this is true, the louder the audio track with more samples approaching 0dBFS, the more noticeable the effect of the filtering differences will be. Finally, a third possibility is that this extra noise is due to the Mytek's "processor noise" from the decoding (I doubt this is the case).
Conclusions: Time to wrap this up. This is what I learned in this exercise...
1. As I noted in my previous post comparing high-res downloads using the same mastering as TIDAL's MQA software decoder, I can say that MQA does "work" as claimed to reconstruct material >22/24kHz with reasonable accuracy. It uses the bits below the noise floor to reconstruct the high frequency material above the 22.05/24kHz "baseband" Nyquist frequency. Hardware decoding, as explored with the track "Blågutten", indeed reconstitutes ultrasonic frequencies beyond the 44.1/48kHz limit of TIDAL's "MQA Core" software decoder (corresponding to 88.2/96kHz sampling rates).
2. Although I cannot be sure the MQA encode of "Blågutten" is based on the same DXD sample available, the reconstituted waveform above 44.1kHz from the Mytek Brooklyn does not seem to strongly correlate with the ultrasonic noise of the DXD. Looking at how the MQA technique is supposed to "fold" down, the higher octaves seem to have fewer bits to work with; perhaps this is just a reflection of the lossiness, where each higher octave above the baseband becomes less accurate (more lossy) with the decoding.
3. When comparing TIDAL software MQA decoding with hardware decoding from the Mytek Brooklyn, the output is in fact very, very similar. Sure, the Brooklyn hardware/firmware decode does seem more "accurate" than the TIDAL software decode, but I'm just not convinced that I can hear a difference.
4. The Brooklyn is capable of further decoding the 24/88.2 (and presumably also 96kHz) output from TIDAL's software decoder. This tells us there is some kind of fingerprint in the software decoder's output such that the DAC can still detect that it's coming from an MQA source and, if needed, proceed to "decode" or "render" it (as the forthcoming AudioQuest Dragonfly Red/Black supposedly will). Other than the visual (red) indicator on the display, the final decoded output appears to be very similar to the direct 24/44 hardware/firmware decode with the Brooklyn.
Up to now, I have not discussed my subjective impressions of the various recordings in any detail. Folks, I think the Audio DiffMaker data speak for themselves! With such deep correlated null depths, I was simply not able to ABX differentiate the TIDAL software decoding from hardware decoding using Mytek's Brooklyn with any consistency. The technique is proprietary, so we don't know what special customizations have been implemented for this DAC. Whatever it is, the effect is obviously very small and subtle, to the point where a >US$1500 professional ADC operating at 24/192 is unable to capture much of a difference. In fact, when I play back one of those digital difference files (like the difference between the original DXD and TIDAL software decode in #1 above) on my ASUS Xonar Essence One through headphones with the headphone amp volume jacked up to 100%, I can barely hear the small signal above the background noise! With any normal music signal, doing this would be intolerably loud through my good ol' pair of Sony MDR-V6 workhorse headphones. I fail to "hear" how such low level differences (remember, the dB scale is logarithmic) will make any difference, especially when real music is playing and low level subtleties become further masked.
If anything, it is actually the louder, dynamically compressed Led Zeppelin track that seems to show a larger difference between TIDAL software decode and Brooklyn hardware decode. I suspect this speaks to how the filters handle the peaks in a compressed DR8 recording (the term "Authenticated" as referring to sounding "the same as in the studio" always was meaningless IMO). Remember that a desktop PC would be much more capable than typical processors on a DAC when it comes to upsampling quality and I'm sure if MQA wanted to, the TIDAL software decoder could be very precise and better able to handle intersample overloads than the DAC firmware algorithm. I would not make any assumptions that the software decoder need be in any way inferior to a hardware implementation.
By the way, yes, I did ask my wife to sit down with me one evening in the basement soundroom to have a listen to the various MQA decodes (Raspberry Pi 3 CRAAP™ config --> TEAC UD-501 --> Emotiva XSP-1 --> dual Emotiva XPA-1L monoblocks --> Paradigm Signature S8v3 speaker + SUB1)... Let's just say she lost interest after 5 minutes with a sense of indifference and we decided to watch another episode of The Young Pope :-).
Based on what I found last time and now evaluating the output from an actual high quality MQA-decoding DAC, I can commend MQA for creating an interesting compression CODEC for streaming that works to unobtrusively embed data below the noise floor and reconstructs the first unfolding to 88/96kHz "MQA Core" quite well (I do have some reservations for the octaves above this). However, I see no evidence that whatever temporal "de-blurring" is being performed is audible. This is interesting considering that I'm evaluating one of the 2L tracks here which should be able to benefit from the full capability of the MQA algorithm given that it was recorded in very high-resolution and information like the microphones and impulse response measurements should be obtainable. Furthermore, these 2L demos were much ballyhooed a year ago at CES2016 as benefiting greatly from the MQA process.
Is any of this surprising? I don't think it should be since modern DACs are extremely accurate already, and honestly, I was not holding my breath despite the usual cheerleading hype articles like this. Remember, as I expressed months ago in my article on room correction, if we truly want time-coherent sound, we must take into account the speakers and room interactions. The only way to do this is through customized measurements in your room. For years the home theater crowd have known this and DSP-based calibration systems (like Audyssey) have been built into receivers.
Here's one last demonstration/comparison. If I use Acourate to create a room correction filter (as I did here) and run the 24/192 DXD reference file through the convolution DSP using JRiver 22, what do you think the "difference" will look like when I compare the room corrected output with what I fed in?
That's what frequency and time domain correction for the sound room looks like! The effect of smoothing out frequency peaks and valleys becomes obvious in A/B testing. Time domain improvements in soundstaging and the 3D experience of the "space" the recording takes place in are noticeable. This "difference" can be objectively demonstrated, with the amplitude statistic clearly showing that the sonic impact of the DSP results in much higher RMS power in the residual file. Whatever MQA is supposedly doing is extremely minor compared to this level of correction, of course.
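For the curious, the convolution step itself is conceptually simple. Here's a hypothetical sketch with a made-up two-tap "correction" impulse response - a real Acourate filter has tens of thousands of taps derived from in-room measurements, so the numbers below are purely illustrative:

```python
import numpy as np
from scipy.signal import fftconvolve

fs = 192_000
t = np.arange(fs) / fs
audio = np.sin(2 * np.pi * 440 * t)    # toy input signal

# Made-up correction IR: pass the signal through unchanged, plus one
# negative tap meant to cancel a 5ms reflection (960 samples @ 192kHz).
ir = np.zeros(4096)
ir[0] = 1.0
ir[960] = -0.2

corrected = fftconvolve(audio, ir)[:len(audio)]
residual = corrected - audio

# Even this mild toy correction leaves a residual around -17dB RMS --
# orders of magnitude "louder" than the -70 to -90dB MQA nulls above.
print(round(20 * np.log10(np.sqrt(np.mean(residual ** 2)))))
```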
Well everyone, I think that's all I have to say about MQA (famous last words?). At the end of the day, there are really no surprises here. In fact, how could there really have been (unless one still believes in advertising hyperbole)? From start to finish this was always a mechanism of compressing "high-resolution" PCM for streaming. High-resolution was never all that audible to begin with so we can't expect the partially-lossy-compression technique to sound much different. In fact, we should be very suspicious if it did sound remarkably different from the high-res source it's derived from! As for the "de-blurring", who knows what they were referring to. Maybe it's just about their minimum-phase upsampling (Yippie... Remember the results of this blind test?) or maybe there really is some background DSP going on during encoding utilizing ADC/DAC impulse responses. In any event, I'm not seeing (or hearing) much impact. Sure, one could criticize that I'm testing based on recordings made by the RME Fireface and that somehow the MQA benefits have been lost going through that ADC. If this is the case, I would submit that the difference would truly be insignificant given the quality of this recording device!
Considering the little difference Mytek's Brooklyn hardware decoding made, I certainly would be in no hurry to upgrade my DACs specifically for MQA. In fact, I'm still of the opinion that software decoding is the way to go, as discussed previously. This is of course in no way a reflection on the impressive recordings my friend made of the Brooklyn DAC; clearly it's a very accurate device, allowing me to compare the files with such deep correlated null depths using the excellent RME Fireface ADC!
Lately, I've heard that the spectre of DRM has resurfaced. A couple of years ago, when I first wrote about MQA, I did wonder about "copy protection", including the possibility that the word "authenticated" is more about security than sound quality. Well, it seems that the good folks poking around in the MQA software decoder have found that the software is capable of decoding various forms of "scrambling". Although we have not seen severe sonic degradation thus far - that is, the MQA files I've come across so far sound pretty good and are close to CD resolution without an MQA decoder - future files may sound worse by design when played back on a standard non-MQA DAC.
Just to be clear, I do not want to sound hysterical or come across as alarmist; if anything, I'm personally rather indifferent about MQA these days and am writing on it only since it's the topic du jour. I'm not saying the record labels or MQA want to or would do this. It's just that they could. Sure, one would still be free to back up the music files perfectly (some folks seem to think that this in itself makes MQA not a DRM scheme), but one would still need to use MQA-capable hardware or software decoding for high fidelity (especially if the MQA scrambling purposely makes the undecoded sound unacceptably poor); hence the freedom which we enjoy now with "flat" high-res files would be constrained.
Bottom line for me: so long as there's standard PCM around for the music I enjoy, at this point I personally have no need for, nor concern about, MQA based on my listening and objective comparisons. Sure, if you think high-res streaming is important, then it has its niche with TIDAL as intended.
Thanks again to my friend who provided the "virtual" use of his Mytek Brooklyn DAC, RME Fireface ADC, and his time to make these meticulous recordings!
Have a great week ahead everyone... Over the years, I've really enjoyed Arvo Pärt's compositions. This past week, I've been listening to Adam's Lament (2012). Another beautifully deep, moving, and spiritual choral and orchestral experience.
As usual, I hope you're all enjoying the music :-). I would of course love to know your thoughts on the MQA "sound" if you've taken time to do some controlled comparisons...
Addendum: I just noticed a statement on the MQA website I linked to in the text above. They still claim that an MQA file when played back undecoded sounds "superior":
"Widely agreed by mastering communities to be superior to CD"? That's actually news to me based on discussions I've had "off the record"...
Also check out the mumbo-jumbo, rather than proper scientific references, behind "MQA draws on recent research in auditory neuroscience, digital coding..."
As for Bob Ludwig's testimony:
No, the Nyquist-Shannon theorem did not get turned on its head (although Nyquist and Shannon may be rolling in their graves)... Bob Stuart just didn't explain to Mr. Ludwig how it actually works, I presume; not exactly "mind blowing" stuff here once you take some time to think about it. "Exact analogue", eh? What does "exact" mean again?
Oh yeah, let me remind everyone that minimum phase filtering with no "very unnatural" pre-ringing has been heard probably by everyone at some point for years; this remarkable music playback device is called the iPhone (and the iPad).
Referring to The Absolute Sound magazine!!! Bob L., please say it isn't so! Of course, an MQA ad just had to end with that "revolution" word again :-).