Friday, 22 January 2016

MEASUREMENTS: MQA (Master Quality Authenticated) Observations and The Big Picture...

I. Preamble:

As expected, MQA articles are popping up at the usual audiophile sites. So far, what has been disseminated to the public about the technology remains rather nebulous beyond the basic core ideas. Probably the best technical description comes from John Atkinson's article back in Dec 2014, which focuses on the "encapsulation" process of reconstructing high-frequency content. Then there are the Robert Harley articles from May/June 2015, where more attention seems to be paid to supposedly important time-domain factors and MQA's role (alas, the PDF link I referenced in this post got taken down!).

Since CES2016, I see that articles continue to focus on subjective experience, and there's a chorus of testimony about the greatness of MQA (see here, here, here...). Notice the vacuousness of interviews like this. Testimony is fine and is what it is. I'm sure the MQA folks are smart... And I'm sure Mr. Stuart is a great guy... Yes, clearly Mr. McGrath can show off some well recorded material... But it is perhaps surprising how little has been said about what the algorithm is actually doing, or, better yet, how little technical clarification has been offered on simple questions. In sum, there is little to suggest that the science supports making the music sound "much" better, despite vague claims that research in neuroscience backs the benefits of this technology (Meridian/MQA, care to reveal which neuroscience papers you're talking about?!).

I posted 7 questions on the comments section to this Stereophile article before CES; for completeness, here they are:
--- Some questions to ask, Jason...

When you talk to these guys, could you ask a few questions to clarify what we should expect?
1. Does this MQA format have any DRM component that we as consumers need to be aware of?

2. Do encoded MQA files provide anything more than 16-bit resolution? Even if we accept improvement to time-domain accuracy, are the PCM files essentially dithered down to 16-bits... (Looks that way based on previous reports.)

3. Please try to find more information on this "de-blurring" algorithm. Other than the most purist recordings, it's unclear how this is even possible given that most studio projects mix different tracks done by all kinds of equipment not to mention DSPs and effects processors involved in the production chain.

4. Can you confirm that if a person does NOT have MQA decoding capability, the file still retains *full* 16-bit (and presumably 48kHz sampling rate) resolution when they play back the data?

5. Clarification on "lossless" please. Clearly frequencies >24kHz are not losslessly compressed in the usual way we think of "lossless", right?

6. Are there different "container" sizes used in MQA? It looks like the typical container is the audio data packed into (losslessly compressed) 24/48. Is that the bitrate at which they anticipate streaming audio will be delivered? If other data rates are possible, do all devices, including the Meridian Explorer 2, have the ability to decode all potential data rates?

7. I would be very interested in your listening impressions to 24/48 lossless FLAC vs. MQA (presumably also delivered at 24/48) in an A/B comparison using level matched *same source mastering* since the data rate would appear to be about the same for both. (CD-resolution 16/44 would be about 30+% lower bitrate, right?)

As much as it's interesting to test out new technology, remember to represent the interests of the consumer as well, given what appear to be rather hyped-up claims, many of which seem to stretch the truth. Even if some of the claims are true, we also need to evaluate the value of the "improvement", since much of it would likely have only a subtle effect even with an excellent sound system (remember, standard 24/48 FLAC files are more than CD quality and sound awesome already!).

I trust these are relatively straightforward questions which, as an interested audiophile aiming to be an educated consumer, I personally would find useful for understanding the process and whether the claims make sense. Remember, we are dealing with science applied through engineering; it's not unreasonable to ask how something works. Not being able to explain the workings of the CODEC would be rather unusual and, in the worst-case scenario, suggestive of pseudoscience. Question #1 about DRM is important, as I think we have a right to know, from a privacy perspective, if anything is embedded to track these files. I would also hope the suggestion about comparing MQA with an actual 24-bit file makes sense if one were to A/B compare. Note that these questions are based on thoughts I had about MQA almost exactly a year ago.

Although I may have missed something, as of this blog post, I'm not sure if anything more has been posted about MQA to shed light on the technical aspects.

II. A Look at 16/44-sourced MQA:

Despite the lack of answers so far, as inquisitive hobbyists, I think we can still take some time to find answers for ourselves... Now that the 2L music label has posted freely available MQA-encoded files, perhaps we can "have a look under the hood". Particularly interesting for me was this item off the download page:

Well, well, well... We have a sample here (a Carl Nielsen piano composition recording) which supposedly originated as 16/44 according to the description, and is now encoded in MQA as a 24/44 file. Indeed, in MQA it got bigger. No surprise, since MQA is only supposed to provide bitrate reduction for high-res recordings/streams. If I download these two files and convert the original "DAT" WAV to FLAC at level 8 compression, I get a file size of 30MB (an excellent 67% compression). Do the same with the MQA file and we see a file size of 76MB (only 46% compression) - the MQA file is 2.5x the size of the "original" 16/44! Right off the bat we see that not only has the file grown unnecessarily large for what began as 16/44, but for some reason the efficiency of compression went down significantly! Whatever is added in those lower 8 bits of MQA obviously diminished the ability to compress losslessly.
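As a sanity check on the arithmetic: the compression percentages quoted here are simply one minus the ratio of compressed to uncompressed size. A trivial sketch (the sizes below are round illustrative numbers, not the exact 2L downloads):

```python
def compression_pct(compressed_mb: float, uncompressed_mb: float) -> float:
    """Percentage saved by lossless compression (higher = better)."""
    return 100.0 * (1.0 - compressed_mb / uncompressed_mb)

# A 16/44 WAV of ~90MB shrinking to 30MB as FLAC is ~67% compression,
# in line with the figure quoted above.
print(round(compression_pct(30, 90), 1))  # -> 66.7
```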

Loading the file into an audio editor, we can confirm that the MQA process did not change the amplitude of the recording - this makes A/B/X testing viable without volume adjustment:

Although there are many details we cannot look at because of the proprietary nature of the format, we can at least look at the "top" 16 most significant bits of the MQA file to see how much resolution from the original 16/44 file was maintained. We can do this with Audio DiffMaker, which helps align the samples and perform a digital subtraction. Simply select about 30 seconds of the file (the program runs into memory issues with long audio segments >1 minute) and run it through to see how much "correlation depth" the program finds:
30 seconds selected for Audio DiffMaker.
Result from Audio DiffMaker.
Approximately 77dB null correlation, which as expected tells us that the original "DAT" file and the MQA file will sound very similar. For some context around "null correlation" and accuracy of reproduction, running the original "DAT" recording through the LAME 3.99 MP3 encoder at various bitrates shows the following:
     MP3 320kbps - 98.5 / 88.4
     MP3 256kbps - 85.0 / 87.1
     MP3 192kbps - 78.9 / 75.1
     MP3 128kbps - 58.0 / 70.8
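For those wanting to try this at home without Audio DiffMaker, a simplified stand-in for this kind of null measurement can be sketched in a few lines of numpy. To be clear, this is my own simplified definition of "null depth" (reference RMS over difference RMS, in dB), not DiffMaker's proprietary "correlation depth", and it assumes the two signals are already time-aligned and gain-matched:

```python
import numpy as np

def rms(x: np.ndarray) -> float:
    return float(np.sqrt(np.mean(np.square(x))))

def null_depth_db(ref: np.ndarray, test: np.ndarray) -> float:
    """How far the difference signal sits below the reference, in dB.
    Assumes ref and test are already time-aligned and gain-matched
    (Audio DiffMaker performs that alignment automatically)."""
    diff = ref - test
    if not diff.any():
        return float("inf")  # bit-identical signals: perfect null
    return 20.0 * np.log10(rms(ref) / rms(diff))

# Toy example: a 1kHz sine vs. a copy perturbed by low-level noise
rng = np.random.default_rng(0)
t = np.arange(44100) / 44100.0
a = np.sin(2 * np.pi * 1000 * t)
b = a + 1e-4 * rng.standard_normal(len(a))
print(round(null_depth_db(a, b)))  # around 77, similar to the null measured above
```

With real files you would first load matching mono channels (e.g. via a WAV reader) and align them; DiffMaker automates that alignment step.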

Interestingly, the amount of correlation between MQA and the original WAV is mathematically about the same as the amount of change seen when you take that WAV file and encode it as 192kbps MP3. Realize, of course, that 192kbps MP3s sound very good; this provides some perspective on the relatively subtle change between the original 16/44 file and this 24/44 MQA file when played back without a decoder.

Now to better define the subtlety, we can take the digital difference file and examine the amplitude of the difference:

Again, more evidence that the difference is subtle; the maximum RMS power of the "difference" is only down at -64dB with an average running around -75dB difference for these 30 seconds. That means we're "listening" for differences down below the 10th bit in a 16-bit signal in order to notice the change... Like I said, subtle. (Please do not misconstrue my comments above comparing the variance between the files. I am not implying that MQA sounds like MP3 - clearly the algorithm is not based on the lossy techniques used in psychoacoustic compression... Just that the original "bit-perfect" file has been altered in the bits here and there to a similar magnitude as running the file through the LAME encoder at 192kbps.)

What of the lower 8 bits of data in the MQA-encoded file, then? If I strip off the top 16 bits and amplify what's underneath by 48dB, here's what I get:
Basically, it just looks like random white noise! Since the original file is bandwidth-limited to 22kHz, is this what the "encapsulation" looks like when devoid of any high-frequency material to fold down? Realize that such random noise is impossible to compress losslessly. This is why the MQA file was poorly compressible and why the MQA file size was 2.5x the original 16/44 compressed as FLAC.
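Both observations, that the bottom byte looks like white noise and that such noise defeats lossless compression, are easy to sketch. Viewing the bottom 8 bits of a 24-bit word is equivalent to roughly a 48dB boost (2^8 = 256, and 20*log10(256) ≈ 48.2dB). The snippet below uses synthetic samples rather than the actual MQA file, and DEFLATE (zlib) as a stand-in for FLAC's lossless coder:

```python
import zlib
import numpy as np

def low8(samples_24bit: np.ndarray) -> np.ndarray:
    """Bottom 8 bits of each 24-bit sample (a ~48dB 'zoom' onto the floor)."""
    return (samples_24bit & 0xFF).astype(np.uint8)

def compressibility(data: bytes) -> float:
    """Fraction saved by DEFLATE at max effort; ~0 means noise-like data."""
    return 1.0 - len(zlib.compress(data, 9)) / len(data)

rng = np.random.default_rng(0)
noise_low_bits = low8(rng.integers(0, 2 ** 24, 100_000)).tobytes()
silent_low_bits = low8(np.zeros(100_000, dtype=np.int64)).tobytes()

print(compressibility(noise_low_bits) < 0.01)   # True: random bits won't compress
print(compressibility(silent_low_bits) > 0.9)   # True: constant data compresses away
```

In other words, if MQA really is filling the bottom byte with noise-like "encapsulation" data, the poor FLAC efficiency observed above is exactly what one would expect.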

Finally, one last observation I noticed; if you look at quiet parts of the music and examine the noise floor, this is what you see:
Quiet part of the music selected...
Noise floor in the quiet part.
Interesting to see a change in the noise floor, which is usually determined by the ADC's noise-shaping modulator or by the dithering algorithm used when converting 24-bit to 16-bit audio. It looks like the MQA algorithm lowered the high-frequency noise a small amount. I am, however, a little suspicious that maybe the MQA encoder was actually fed a 24-bit file, and what we're seeing is the result of a slightly less aggressive noise-shaped dithering algorithm.
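The noise-floor plots here come from an audio editor's FFT view; for readers who want to reproduce this, a minimal numpy equivalent (averaged Hann-windowed periodograms over a quiet passage; the scaling is approximate and the input array below is hypothetical, not the 2L file) might look like:

```python
import numpy as np

def noise_floor_db(x: np.ndarray, fs: int, nfft: int = 8192):
    """Averaged spectrum of a passage, roughly in dBFS per FFT bin.
    x is mono with full scale = 1.0; returns (freqs_hz, magnitude_db)."""
    win = np.hanning(nfft)
    hop = nfft // 2  # 50% overlap between segments
    segs = [x[i:i + nfft] * win for i in range(0, len(x) - nfft + 1, hop)]
    power = np.mean([np.abs(np.fft.rfft(s)) ** 2 for s in segs], axis=0)
    # Scale so a full-scale sine at a bin centre reads ~0 dBFS
    mag_db = 10.0 * np.log10(power * 4.0 / np.sum(win) ** 2 + 1e-30)
    return np.fft.rfftfreq(nfft, 1.0 / fs), mag_db

# Example: broadband noise at roughly a 16-bit dither level
rng = np.random.default_rng(0)
quiet_passage = 1e-4 * rng.standard_normal(5 * 44100)
freqs, floor_db = noise_floor_db(quiet_passage, 44100)
```

Running this on a quiet passage of each file and overlaying the two curves reproduces the kind of comparison shown in the screenshots.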

III. A Quick Look at a 24-bit DXD-to-MQA File:

It's harder to draw any conclusions with high-resolution source files fed into MQA since, without a proper decoder, we cannot assess the actual final output. However, I want to show you a noise floor FFT similar to the one above; this time with the Arnesen: Magnificat DXD (24/352) sample vs. the corresponding MQA:
Confirmation that the original DXD and MQA are the same amplitude (<1dB difference).
Quiet portion selected...
Hmmm... What does the noise floor look like? (Note: DXD downsampled to 44kHz for comparison.)
Question #2 in my Stereophile comments post was basically about this. As far as I can tell, MQA is a 16-bit PCM format. Therefore, if you feed it anything with >16-bit resolution, the extra dynamic range is lost. Remember, the lower 8 bits of the 24-bit MQA file are used for the "encapsulation" process which, as best I can tell, is essentially a lossy ultrasonic frequency reconstruction scheme. The FFT above seems consistent with this impression... Basically, the excellent 24-bit DXD noise floor has been dithered up by the MQA algorithm into what looks like noise-shaped 16-bit audio.

By the way, if you use the old Adobe Audition 3, that noise floor looks like the same kind of dithering achieved with a 2.5-bit dither depth and the "Noise Shaping C3" setting. That's a fair amount of dithering.
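For those unfamiliar with what a "dither depth" setting controls, here is a minimal sketch of plain TPDF dithering when requantizing to 16 bits. This is the textbook technique only; Audition's "Noise Shaping C3" additionally filters the error spectrum, and the dither amplitude below is the common 2-LSB peak-to-peak default rather than any particular "2.5 bits" setting:

```python
import numpy as np

def dither_to_16bit(x: np.ndarray, rng=None) -> np.ndarray:
    """Requantize float samples (full scale = +/-1.0) to 16-bit integers
    using 2-LSB peak-to-peak TPDF dither; an editor's 'dither depth'
    setting scales this amount up or down."""
    rng = rng or np.random.default_rng()
    lsb = 1.0 / 32768.0  # one 16-bit quantization step
    tpdf = (rng.random(x.shape) - rng.random(x.shape)) * lsb  # triangular PDF
    return np.clip(np.round((x + tpdf) / lsb), -32768, 32767).astype(np.int16)

# A -90dBFS sine (about 1 LSB at 16 bits) survives as tone-plus-noise
# instead of collapsing into correlated truncation distortion.
t = np.arange(44100) / 44100.0
tiny_sine = 10 ** (-90 / 20) * np.sin(2 * np.pi * 1000 * t)
out = dither_to_16bit(tiny_sine, np.random.default_rng(0))
```

The dither raises the broadband noise floor slightly in exchange for linearizing the quantizer, which is exactly the kind of trade-off visible in the FFTs above.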


IV. Impressions & General Thoughts:

So, assuming what 2L posted on its download site is representative of the "final" MQA encoding, and assuming that the Nielsen 16/44 "DAT" source is what was fed into the MQA encoder, here are a few observations (as noted above, I'm not sure about the latter since I speculate they might have fed a 24-bit signal to the MQA encoder for the Nielsen track):

1. Feeding a 16/44 source into the MQA encoder with 24/44 output results in a non-bitperfect PCM audio file in the most significant 16-bits (which is what a non-MQA DAC would play).

2. I assume the changes in Point 1 are therefore the result of this "de-blurring" process being talked about, supposedly representing some kind of time-domain improvement claimed to make these files sound better even with non-MQA DACs. Note that what I have shown with this file suggests the difference is subtle, based on the digital subtraction technique (77dB null correlation depth, maximum RMS amplitude -64dB for the 30-second sample tested). Though not shown, if we difference the loudest portions, the amplitude difference only rises to around -52dB at most. I compared the two with my Sennheiser HD800 headphones using the upgraded ASUS Essence One and managed a "17.2% chance of guessing" in a quick-and-dirty ABX focused on a short passage, just with standard DirectSound output at the 16/44 setting (oops, forgot to tell foobar to use ASIO!). Evidence that it's audible, but not an obvious difference.

Subjectively, I cannot say that I preferred the MQA version [is this the first time anyone has claimed this!?]. What I focused on was the general gestalt of the piano sounding tonally a little "sharper" and "clearer"; the analogy I thought of was the audio equivalent of a "sharpness" control on a TV set making the edges a little more defined and contrasty, though not necessarily natural. Interestingly, I played the samples for my wife downstairs in my main system: HTPC, TEAC UD501 DAC, Emotiva XSP-1 preamp, dual Emotiva XPA-1L amps, to Paradigm Signature S8v3 speakers (no digital room correction, bit-perfect ASIO driver, balanced interconnects). Without any idea of what she was listening for, she clearly preferred the original "DAT" 16/44, describing the MQA version as sounding "synthetic" in ways similar to my own description from the ABX testing... Well, what can I say, I can't disagree with my better half's ears - younger, better looking, and she plays the piano as well :-).

3. Even though there is no audio information >22kHz in a 44kHz recording, MQA seems to be putting something into the lower 8 bits of the 24-bit file. This area is supposed to contain "encapsulated" high-frequency data, yet in the MQA file based on a 16/44 source, the "information" in this portion of the file appears to be stochastic noise that is non-compressible by FLAC (0% compression when normalized to 100%!). Seems inefficient. For all the bandwidth saved in streaming a "high-resolution quality" file compressed down by 50+% using encapsulation (for example a 24/88+ file squashed into 24/44 or 24/48), if one were constantly streaming 24/44 MQA and most of the music originated from 16/44, would there actually be any net reduction in bandwidth used?

4. At this point, there's not much we can say about the output when MQA is used to encode/decode high-resolution material with ultrasonic extension. Clearly, with only an extra 8 bits to play with sampled at 44/48kHz, I don't see how this is capable of fully reconstructing the potential spectrum >22/24kHz other than in limited circumstances. Therefore I believe it is lossy in this regard. What I believe is more important is that, based on everything I have seen, MQA appears to be a 16-bit PCM format. I don't understand why this is not mentioned anywhere, nor have I seen the "journalists" simply ask about it. We already have wonderful recordings containing >16-bit resolution and tons of DACs capable of very low noise floor performance. Whether one believes the extra bit-depth resolution is beneficial or not (I guess Meridian does not), it looks like we'll be saying goodbye to that extra dynamic range when converting to MQA. Again, this is by definition lossy, isn't it!? (Of note, if you visit the download store, 2L currently sells their MQA files at a 15% discount compared to other hi-res versions. A special offer for early adopters? Lower storage costs? Or perhaps recognition that the content quality is lower?)

Let's think about the big picture. If we consider sound quality, I really have my doubts whether, as a whole, this helps. For one, is it even technically possible to accurately "de-blur" many albums these days unless it's a specialty label like 2L with meticulous recording techniques, a few well understood DACs, little processing, etc.? (See Question #3 in the comment up top.) Second, Meridian likes to talk about "end-to-end authentication" as if this improves sound quality, bringing home what supposedly was heard in the studio. I find this somewhat silly when clearly MQA cannot address the main issues with fidelity - the speakers/headphones and room acoustics. The digital signal path, which anyone can quite easily keep "bit-perfect" these days, is not where degradation typically happens. Time-domain inaccuracies with speakers and headphones are orders of magnitude larger than actual ADC or DAC timing variances (you know, the microscopic impulse response graphs folks like to make a big deal over) - you just have to look at the difference in step response between speakers, for example, to see the milliseconds of variability. This is obvious when auditioning speakers with clearly very different sound signatures, compared to the at-best subtle changes between digital filters as demonstrated a number of months ago with the digital filters test we did on this blog.

Then there's that term "authenticated"; clearly a concept of importance in the MQA acronym. For audiophiles with objective leanings, I feel "we" already have a technical definition of accuracy. The gear should be neutral and resolving (based on objective technical characteristics). These characteristics can be measured and verified in our own sound rooms. And an objectivist would hope the studio recording/mixing/mastering the music and the sound engineers followed best practices to achieve the intended quality and have gear accurate enough to not add unwanted coloration or distortion. Even this does not "lock" an objectivist into only one "authenticated" sound since he can deliberately "tune" the sound to taste using technology. For example, by using EQ, room correction filters (see here and here which includes time-domain correction), or purposely reprocessing with software like HQPlayer.

So what is the meaning of Meridian/MQA wanting to convince us that the final sound should be "authenticated" if it's not really capable of ensuring actual sound quality? Well, all that is left, in my opinion, is a mechanism that locks consumers into a specific digital processing chain which controls the flow of data (thus "authenticated" as in "controlled"). IMO this whole mechanism is like a Rube Goldberg contraption for processing audio signals, with supposed benefits (smaller size for hi-res, putative improvement in time-domain performance) but at the same time apparently introducing technical limitations (being locked into 16 bits of dynamic range is a significant one, I think) and, of course, the loss of freedom to use any DAC for the full experience. Honestly, I don't see anything all that positive except for Meridian/MQA benefitting from control over the closed ecosystem they create, with potential royalties through software licensing and on the hardware decoding side. A beautiful business model for generating revenue, to be sure! All they need to do is create the hype, hope consumers bite, and have manufacturers implement yet another feature...

Sure, I would be curious to have a listen to an MQA-decoding DAC. Who knows, maybe I'd be impressed. But I honestly feel this is merely a "solution" looking for a problem. Yes, the ideas are clever, but I fail to see any "revolution" in the making...

I'm curious, for those with the MQA-capable Meridian Explorer2 DAC... How does it sound with and without the decoding? On a side note, if the digital processing can be done on this little DAC, which has been out for close to a year, I assume the algorithm would be relatively straightforward to implement in computer software, like what we can do with HDCD decoding these days in dBpoweramp (legality issues, of course, if not sanctioned by Meridian/MQA). Anyone know what DSP chip is in the Explorer2?

Final thought: for hi-res streaming, what's wrong with losslessly compressed 24/48? It uses basically the same data bandwidth as MQA, plays back universally, likely compresses better than MQA with upsampled 16/44, and retains arguably everything needed for human consumption (no need to be afraid of up/downsampling these days). If one still felt the need for the "true" high-res 24/88+ experience, then just pony up and buy the HDTracks/Pono/ProStudioMasters/Qobuz/etc. version.
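For concreteness, the raw (uncompressed) PCM bitrates behind that comparison are simple arithmetic; lossless compression then typically cuts these roughly in half, depending on content:

```python
def pcm_kbps(bits: int, rate_hz: int, channels: int = 2) -> float:
    """Raw stereo PCM bitrate in kbps (before any lossless compression)."""
    return bits * rate_hz * channels / 1000.0

print(pcm_kbps(16, 44100))  # -> 1411.2  (CD)
print(pcm_kbps(24, 48000))  # -> 2304.0  (the typical MQA container rate)
print(pcm_kbps(24, 96000))  # -> 4608.0  (raw 24/96 "true" hi-res)
```

This is also roughly consistent with the "30+% lower bitrate" ballpark for 16/44 vs. 24/48 mentioned in Question #7 up top, once lossless compression of each is factored in.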


I hope this stimulates some thought and exploration. As usual, feel free to drop me a note. If you have more details about MQA, feel free to let me know or correct where I may have been mistaken!

Have a good week ahead folks. Enjoy the music...

February Update: Make sure to check out the follow-up post: Meridian Explorer2 Analogue Output - 24/192 PCM vs. Decoded MQA, where actual decoded MQA was analysed.

Not until I was almost finished with this article did I run into recent posts from Miska on Computer Audiophile. Also, some analysis from Mans here. Worth having a look!

Addendum 2:
Some more videos to consider:
I think there really needs to be some clarity on what the terms "perfect", "lossless", and "exact" mean to Mr. Stuart... For example, are we talking about "perceptually lossless" or just "lossless"?

Ah yes, the dreaded and horrifying ringing! Close cousin to the dreaded and horrifying jitter!

Another 11 minutes... All the talking points for those who missed them over the last year - "revolutionary British technology", "neuroscience", "authenticated" with a lit LED to prove it! :-) Hmmm, a software decoder in the Android phone; interesting.

And from the Pono playbook - TESTIMONY:

Some less good looking, less well known, and just as old celebs.

Ooooo... Did someone say "vinyl" there!? "Epic!"

Addendum 3: (January 29, 2016)
Alright, so it looks like enough questions are being posed on forums and in places like this that Meridian/MQA/mainstream audiophile sites are starting to post some objective information.

AudioStream just posted this page with some graphs of 2 of the songs made available by 2L (the Nielsen piano piece, and Gjeilo North Country II).

1. Interesting to see that in both samples, the CD noise floor is superior to the non-decoded MQA up to about 15kHz (the most important part of the audio spectrum); basically a reflection of the dithering techniques being used, and possibly some MQA encoding effect in the lowest bits of the 16-bit word. No big deal, but interesting.

2. Confirmation that the MQA encoder was fed a 24-bit master for the Nielsen piano piece that was labelled as sourced from "DAT 16/44" on the download site. I had noted this suspicion already.

3. The only good thing I see in these graphs is that decoded MQA appears capable of >16-bit resolution in the Gjeilo sample - at least when it comes to the noise floor. It'll be interesting to confirm once someone digitizes some decoded material for comparison...

Here's a serious question. Does anyone think the mainstream press would ever publish listening tests (probably non-blind) that say anything other than that decoded MQA sounds awesome, like the "Morten's Notes" results? Did any mainstream press reviewer ever question the minimal difference between 320kbps MP3 and lossless, say that loud, dynamically compressed HDTracks "hi-res" isn't worth it, or that maybe hi-res music isn't exactly necessary nor audible on a portable device like the PonoPlayer (never mind them attributing sonic differences to the unusual filter settings used with 44kHz playback)? I think it's pretty well a given that you know exactly how Stereophile, The Absolute Sound, and their related sites are going to be reviewing this scheme...

MQA-like? Block Diagram - from PatentScope


  1. Hi Archimago!

    Well, I've read some articles about MQA and have similar questions. I didn't read the addendum links (I'll do that another time; it's 01:30 AM here, and I have stuff to do on Saturday morning ;)
    But to me it looks like MQA will be the next HDCD or SACD/DVD-A. Too much marketing for too little in return (assuming we will have something in return). It's just a way to resell the entire catalog of the labels one more time in the new ultimate "ultimate" remastering process, and a way of locking us into a closed format (software is becoming more open source than ever... didn't they get this?).

    Digital audio is passing through an unstable period (hi-res audio is far from having a standard, physical media will die or not, MP3 files are being sold in "diamond" 96/24 packages, and I read some days ago about labels "watermarking" downloaded files with sonic compromises in some files... oh boy).

    Anyway, from what I've read, MQA will have some success with Tidal users. But will it be worth it? Time will tell.

    Have a nice weekend!

    1. Hey there VK. Wow. That's some late night audiophile reading in your time zone on a Friday night :-).

      Yeah, I think it's a given that MQA will launch on TIDAL - it better after all the ballyhoo!

      As you can see from my writing here, I have concerns. Although it's very early on, I'll make a prediction here and see if I'm right over the next couple of years... I suspect MQA will be *less* successful than HDCD or SACD in terms of how long the technology lasts in the eyes of the consumer. HDCD grew from 1995 to 2001, peaking around 5000 CDs, then faded out by 2005. So let's say it grew for about 6 years. SACD came out in 1999, and disks were readily available until about 2007 or so around here. SACDs from small labels are still being released, and of course the whole DSD download thing is an offshoot. As of 2016, SACDs have survived for at least around 16 years.

      But in 5 years do we think there will be many streaming services using MQA and download sites selling them? I don't think so.

      MQA is a burden. You're basically sitting with an ugly 24-bit file with at best 16-bit resolution. It demands a new DAC to extract anything more than standard CD resolution. Even if you get a decoder, you're not getting more than 16 bits. I don't know what the licensing cost will be, but it's there - probably a hassle for small high-end manufacturers who would likely prefer to do their own thing.

      I think from a sound perspective, the most interesting part of MQA is the DSP algorithm Meridian is using to "de-blur". This could be sold as a plug-in for studio production use. I honestly could not care less about the fancy lossy "encapsulation" process. It's intellectually interesting as a talking point with potential "wow" factor, but I doubt it adds anything of significance to the actual sound. My experience has been that bit-depth is more important than sample rate... Remember, below Nyquist, it is bit-depth that determines time-domain accuracy.

      I seriously think MQA is going to have a hard road ahead because of the hard sell. Plus I think the consumer is going to quickly figure out that there is really nothing much in it for him/her.

    2. Haha, I was awake at 01:30 AM because I had a very late dinner, so it's better to stay awake for some time before going to bed ;)

      Obviously MQA will not hit the mainstream market (and barely the audiophile market, maybe a niche of a niche of a niche).

      And about the "de-blur" feature: how many times have we heard about other algorithms that "recover the lost sound" or "clean up" our recordings? We all know that we cannot change what was already recorded. Sure, we can improve sound quality with some clever remastering, but don't we already have this tech? Is the MQA algorithm so much better than the others we already have, which don't require an end-to-end closed system?

      In the meantime, the most important thing - the quality of the masters - is never discussed by the big players...

    3. Indeed, that is why I think if we look at these separately, there might or might not be merit to each of the techniques used... Quite possibly the de-blur is nothing more than fancy DSP techniques that have come before. As I said in the text, neither I nor my wife thought the MQA version of the piano piece sounded better than the original, so I already do not understand all the superlatives heaped onto the sound quality of this process.

      All about mastering quality - I concur as per last week!

    4. Well, that could be the real "advantage" of MQA - an advantage for the industry - should it be possible to implement "smarter" watermarking.

      If those watermarks were not audible when using new decoders, that would also create the need for new gear. Who knows what their plans are - I do not.

  2. Time will tell. I see MQA as a smart way to compress hi-res data into a 24/44.1 file. I am sure audiophiles will have a blast commenting on the sound quality, as the marketing people behind MQA are really pushing this as a new dawn in digital audio. Thanks for your analysis, but the interesting part will be how an MQA decoder unfolds all the data into a 24/192 file or stream. What bothers me about MQA is all the secrecy and the closed-source nature of the algorithm. While not strictly DRM, having an unknown party signing my FLAC files does not resonate well with me. Again, time will tell.

    PS 1. When speaking to MQA people at the show, if memory serves me, they mentioned that MQA does not use ALL 8 bits and in fact uses much less.

    PS 2. MQA "hides" all the encapsulated data beneath the noise floor, so any artifacts observed in the high frequencies should be something else. If this is a result of MQA DSP, then that is no good.

    1. Yes indeed time will tell... But it's fun to make predictions and speculations based on what we anticipate :-).

      I anticipate there will always be a bit of secrecy around "trade secrets" on how they do the DSP of course.

      My sense is that if they claim there is audible "de-blurring", then the effect will have to be present in that top 16/44 part of MQA. If all that happens is in the "audio origami" part of the spectrum >22kHz, then it's all just ultrasonic manipulation and IMO nobody is going to be impressed!

      I included in the "Appendix" the block diagram from Meridian's patent that seems relevant. Notice that the top 13 bits seem to be the least "molested", but along the way there are opportunities for this de-blur algorithm to be implemented. I would imagine the de-blurring could just be a DSP plugin applied early on to the original signal.

      This block diagram is the "Rube Goldberg Machine" incarnate. Again, what is the point of all this complexity? All I see is a needlessly complex system that adds some "de-blur" DSP + lossy ultrasonics. The "de-blur" DSP is what we actually hear and can ABX, while the lossy ultrasonics are IMO worthless simply because they are "ultrasonic"!

      I believe Meridian tried too hard to impress people with this "origami" stuff and ended up sacrificing what is most important in hi-res - BIT DEPTH. If they had at least retained *true* 18-bit or 20-bit resolution plus whatever fancy lossy ultrasonic encoding scheme, and just been honest about that, IMO this would not be so objectionable. (In this regard, I think HDCD got it right in what they were trying to achieve, at a time when technology was even less sophisticated!)

      One more thing: part of the constraint with Meridian/MQA is that they tried to maintain compatibility and ended up with this monstrosity. IMO they could have just started fresh. Imagine a SuperMP3/AAC/Vorbis/Opus scheme which encodes up to 1Mbps and decodes to 24/192 with a "Super Duper Certified Golden Ear"(TM) psychoacoustic model that tries to keep as much detail as possible, giving preference to 20Hz-25kHz and the top 18-20 bits of a 24-bit input, with extra bits apportioned to ultrasonics above 25kHz as appropriate. In the decoding, there would be plenty of room to apply a gentle filter from 40kHz up if desired. Something like that would be fantastic for high-resolution streaming IMO while saving massive bandwidth (since with VBR, a 16/44 source could easily be streamed at 320kbps), but then they would have to fight the loss of compatibility and the negative PR around being openly "lossy"...

      (On reflection, if one were to propose a new lossy CODEC, why not go all the way and allow the lossy stream to potentially also include hi-res multichannel material? Allow the max bitrate to go up to 2-3Mbps when >2 channels are used, for the day when multichannel streaming and Internet speeds allow... Make it open-source and have one final lossy scheme to rule them all.) :-)

      Let's see...

  3. I was going to point you at Miska's and my graphs, then I saw you'd already found them. Suffice it to say, I am not impressed.

    1. Thanks Mans... Yeah, didn't look like you were impressed :-)

  4. Hi Archimago, first off - awesome blog, I must say!

    I have been very thrilled about MQA but, like everyone else, have basically found zero technical specs on what it actually does to the files, while many sites rave about it.

    Reading your post makes me very sceptical about MQA. I was hoping it would be masters from before dynamic range compression or something of the like, but oh boy does it seem I was wrong.

    Using a Lyngdorf amp myself (TDAI-2170), what is your take on their "ICC" (inter-sample clipping correction)? And have you tried their room correction (RoomPerfect; McIntosh has licensed it for use in a few of their products), and if so, what's your take on that?

    Love your technical layouts and I recommend your blog to all my audiophile nerd friends :)

    1. Hi "Unknown". Thanks!

      No question the most important thing to have is a good master! After all garbage in, garbage out... And 2L is a great label to show off some fantastic sounding albums for Meridian.

      It's a two-edged sword of course, since 2L recordings begin as good quality 24-bit audio; taking that excellent material, processing it with whatever DSP, and then encoding it into this MQA package might actually work against Meridian if listeners start wondering whether there has actually been degradation of the pristine natural recording (which is what I thought when I listened to the piano recording).

      Sorry I don't have much to say about the Lyngdorf amp and their room correction technique as I don't have access to the hardware. I actually have heard the McIntosh MEN220 on one occasion at a showroom and thought it improved things. What is your take based on your experience?

      Yup, when doing processing like up/oversampling, especially with today's compressed recordings with minimal if any headroom, "intersample overs" happen not infrequently. Hence the need for "ICC" as Lyngdorf calls it. Based on the CDP-1 page online, I see they provide an extra 3dB overhead similar to the Benchmark DAC2 HGC with 3.5dB headroom per their specs. I think 3dB is usually adequate although I have run into something like 5dB on occasion with crazy DR1 albums (think Iggy & The Stooges "Raw Power" 1997 remaster)!
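
      To illustrate that last point, here's a little numpy sketch (a textbook worst case, not anything specific to Lyngdorf's implementation): a sine at fs/4 with 45 degrees of phase offset has all its samples at about 0.707 of full scale, so "mastering" the samples up to 0dBFS means the reconstructed waveform actually peaks about +3dB above full scale - exactly the headroom that ICC-type processing needs to provide.

```python
import numpy as np

fs = 44100
n = np.arange(64)

# Sine at fs/4 with 45-degree phase: every sample lands at +/-0.7071...
x = np.sin(2 * np.pi * (fs / 4) * n / fs + np.pi / 4)

# "Master" it so the SAMPLES just touch 0 dBFS (common with loud releases)
x /= np.max(np.abs(x))

# Ideal 8x bandlimited interpolation by zero-padding the spectrum
L = 8
X = np.fft.rfft(x)
Xp = np.zeros(L * len(x) // 2 + 1, dtype=complex)
Xp[:len(X)] = X
y = np.fft.irfft(Xp, n=L * len(x)) * L  # reconstructed waveform

peak_dbfs = 20 * np.log10(np.max(np.abs(y)))
print(round(peak_dbfs, 2))  # ~ +3.01 dBFS intersample over
```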

    2. Thanks for the quick reply.

      I really hope MQA turns into something very good though; just having masters unaffected by the loudness war would be more than enough for me to jump on the train.

      I have not specifically tried out 2L's library but have quite a collection of 24-bit music from HDtracks among others; of course I double check the DR rating on the album, as I know some HDtracks 24/192 albums actually sound worse and have a worse DR rating than the original release from way back. Will try some 2L recordings asap!

      I'm fully sold on RoomPerfect from Lyngdorf. It counters some major problems in normal setups where, for example, you place your speakers way out from the back wall; this results in a more even bass response but much worse timing in the bass. With RoomPerfect you place the speakers up against the wall (and woofers in corners, also up to the wall, if you use subs; 2 preferred of course), resulting in much better timing and, with the correction, a very good response.
      I've tried different room correction programs like Dirac and Audyssey XT32 but nothing has come even close to what RoomPerfect did for me.

      I'm not as technical as you, but if you ever get a chance to demo one and examine it at home, I know I'd love to read what you think about it.

    3. Great Peter. Glad to hear RoomPerfect is doing the job for you :-).

      I've been using Acourate for my room correction since last summer and it has been working out great as well as written in a previous post! A bit more involved to get the correction filters done accurately but well worth the effort...

      Yeah, as for MQA, I think at the end of the day it does come back to the importance of a good master. No matter what compression scheme or DSP is used, room correction can improve the sound of the source in the room as you've experienced, but I believe that to do it right, it must be customized for the *speakers* and *room* to be worthwhile.

  5. Thought-provoking as always, but also part of the second wave of speculative preemptive debunking now in progress following the promotional launch/demo hype. I'm still as skeptical of the skepticism as I am of the hype and the suspiciously glacial pace of the rollout. Meridian is running out of runway to get this out there wowing the audiophile masses and driving some kind of painless adoption process, or all the cynicism is going to look prophetic.

    Meanwhile I don't think SACD/DSD are DOA yet by any means at least in terms of amazing sound. Though all of the above still needs to step up if trying to convince people they need something besides well-mastered 24/96 PCM to live happily ever after in the digital playback realm.

    1. Right. Interesting way to think about this!

      Agreed. SACD/DSD not dead. But after all these years of DSD I'm still looking for a polished product with a modern file format and tools like a compression CODEC available to all who want to maintain a serious library. It's too much of a mess out there... I still think fixing this foundational issue is important to at least make it more acceptable...

  6. We can only evaluate MQA properly once a "live" file encoder becomes available. A musical signal is not good material for analysis due to its complicated spectral content.

    1. IMO yes and no. The only way to test the de-blur algorithm is by listening to the music which we can start to do. Also we can already determine noise floor characteristics as I noted with the 24-bit sample.

  7. Wait, you're "already unimpressed" with undecoded MQA? Don't you think you might want to hear decoded MQA before getting all unimpressed? I know Bob said undecoded would be better than standard, but that's certainly not what all the hubbub is about. All this trashing of something people haven't listened to is tiresome.

    1. True about the caveat that we're not able to listen to fully decoded MQA yet. But given what we know about digital audio and what we can reasonably assume, what we hear in the undecoded top 16/44 is likely the main effect of MQA.

      Ultrasonic origami is unlikely to add much. And to make decoding efficient, other than upsampling filter tricks and this origami reversal, I doubt the decoder is doing much more, in order to keep the process simple. I do not expect MQA decoding to be as processor intensive as H.265. :-) A good example of this I think is that a firmware upgrade to the Auralic Aries is supposed to allow MQA decoding. The Aries came out in 2014 with a quad Cortex A9 @ 1GHz - not bad, but considering all the other stuff the CPU has to do, the MQA decoding algorithm likely cannot be too complex. (I certainly doubt the Aries has enough horsepower to run FIR-based digital room correction filters on 24/96 audio smoothly, for example.)
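
      As a back-of-envelope check on that last parenthetical (the 65,536-tap filter length and the constant in the FFT estimate are just my assumptions for a full-range correction filter, not specs for any actual product):

```python
import math

taps = 65536          # assumed FIR length for full-range room correction
fs = 96000            # 24/96 playback
channels = 2

# Direct-form convolution: one multiply-accumulate per tap, per sample, per channel
direct_macs = taps * fs * channels
print(f"{direct_macs / 1e9:.1f} GMAC/s")   # ~12.6 GMAC/s - hopeless on a 1 GHz Cortex A9

# FFT-based (overlap-save) convolution scales ~log2 instead of linearly;
# very rough estimate: a few ops per sample per log2 of the FFT size
fft_ops = 8 * math.log2(2 * taps) * fs * channels
print(f"{fft_ops / 1e6:.0f} Mops/s")       # orders of magnitude less, at a latency/RAM cost
```

      Partitioned FFT convolution makes this feasible in principle, but with latency and memory trade-offs - so the point stands that heavy DSP isn't free on a little streamer CPU.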

      If I am correct that at best this is capable of 16 bits, at a time in history when "studio master" 24-bit quality audio is already easily available, should I be impressed?

      Willing to change my mind of course...

  8. Oh boy... why did you post these videos? :P

    That's the same garbage marketing all over again - "remove harshness", "epic", "game changer", "we do not hear in digital so... bla, bla", "hear the music as if you were in the studio"... People still fall for that? Well, I guess they do; that's why they make things like MQA, Pono, 384/32 DACs...

    Man, this really makes me feel upset. The audiophile industry is clearly going the WRONG way! It's not about sound quality anymore. It's just about making money recycling old stuff and promising sonic paradise over and over, decade after decade.
    I know, everyone needs to make money, and I'm not against it at all (I work to make money too :). But the AUDIOPHILE industry was supposed to make money selling REAL audio breakthroughs and not more empty promises.

    BTW, I don't want to sound harsh to anyone. This is just my point of view. And I hope that someday someone brings all that sound improvement that they always talk about but that never really arrives.

    Have a good week! Or better, a good week for everyone!!

    1. No worries VK... Just drink up the Kool-Aid! See how happy and shiny those young, good looking, stylish people were in the "First Time Reaction" video - doubly so in slo-mo!? That could be you! :-)

      Of course, if the industry were about sound quality they'd be doing the right thing at this moment - getting out there themselves and encouraging the general public to demand better quality recordings. Let's wait and see how many other audiophile companies are actually interested in implementing this!

  9. Interesting article ... only just read it!

    Wanted to comment on one thing. If I read you correctly, you are suggesting that because the data contained within the lower 8 bits appears to be stochastic noise and is incompressible, that it is somehow 'inefficient' (although I'm not clear on what you mean by that). It seems to me that any **efficiently** compressed data would - if you did not have the knowledge to decompress it - be indistinguishable from incompressible stochastic noise, no? So there is potentially something relevant encoded within those 8 bits.

    That aside, I share your general skepticism, and look forward to learning more about it.

    1. Hi Richard,

      Yes, it's true that efficiently compressed data already looks random and will allow essentially no further compression...

      But remember the situation we're dealing with here! This is a 16/44 source file. There should be no noise or other data encoded in those lower 8 bits, so it would be nice if that portion took up little extra bandwidth.

      Since there is so much 16/44 audio data out there, every time one of these songs is streamed, it would use up significantly more bandwidth than necessary. I think this is significant both for a site like TIDAL and for the user streaming to mobile devices with potential data caps; hence my concern about "inefficiency", as in the inability to scale down when there's nothing to "encapsulate".
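
      A quick Python demonstration of the point (random bytes standing in for those noise-like lower 8 bits):

```python
import os
import zlib

n = 65536
noise = os.urandom(n)      # stand-in for noise-like LSBs - incompressible
zeros = bytes(n)           # what an untouched 16/44 source *should* carry down there

c_noise = len(zlib.compress(noise, 9))
c_zeros = len(zlib.compress(zeros, 9))
print(c_noise, c_zeros)    # noise stays ~64 KiB; zeros shrink to almost nothing

# Raw payload penalty of a 24-bit container vs the 16-bit source
print(24 / 16)             # 1.5x, i.e. 50% more data before lossless packing
```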

    2. The whole thing is bizarre. If the source file is 16/44 then the additional bandwidth overhead of MQA serves no purpose whatsoever. But if the source file is higher resolution, then MQA allows either the original hi-res content, or a 16/44 version of it, to be extracted with a minimum of processing.

      MQA seems to rest on three pillars: (i) the majority of users, being consumers of the 16/44 content, have no problem committing to a 50% higher (or more) streaming/downloading bandwidth to deliver it; (ii) the DSP required to decode MQA represents some sort of implementation advantage compared to the DSP required to convert 'conventional' hi-res PCM to 16/44; and (iii) the status quo of existing PCM file formats is somehow the limiting factor in the ability of the industry to deliver hi-res audio to mainstream consumers.

      Please let there be more to it than that!

    3. :-) Yup. Good summary. As we all know, there's no "free lunch". To maintain compatibility with a standard DAC, they're committed to that top 16-bits or so (as far as I can tell). And there are repercussions and a "price to pay" as always.

      As I wrote in the Squeezebox forum today, it's basically "1-2-3" in a nutshell:
      1. Meridian wows the "audiophile" with fancy DSP, supposedly improving the impulse response. This part is audible, as I heard in my ABX test. Note that this could have been applied to any incoming stream, as demonstrated with the 16/44 "DAT" sample. The reality is that this DSP could be implemented at the studio level if they wanted.

      2. The scheme happens to include this "encapsulation" packaging to sell to Internet streamers. Sadly the process seems to limit everything to dithered 16 bits. And of course all that information above 22/24kHz is lossy compressed (at best a subtle benefit, if you believe ultrasonics add something), and since this has to take up space, it occupies the lowest 8 bits of the 24-bit container.

      3. We as audiophiles/music lovers lose our "right" to use any DAC we choose if we decide to accept Meridian's scheme and control... Ooops, I mean their benevolent "authentication" with a friendly and funky lit LED to remind you of the "privilege". :-)

      I too would be happy if the above were not the case.

  10. A 17.2% chance that you were guessing --i.e., the probability that you would get at least 7 out of 10 'right' by just tossing a coin on each trial (17.2% = you'd get that 'success' rate on average ~17 times in 100 tests of 10 trials each) -- is not very compelling evidence that a difference was heard. It's well short of the standard (though arbitrary) 'significance' threshold of 5%. The evidence might firm up with more trials, or not.

    In any case should it be this difficult? Meridian is basing a lot on its claims for the audibility of 'temporal blurring'. If that's not truly a big audible deal, then in theory their whole house of cards collapses. Though in practice, since most consumers will be listening 'sighted'....

    And yes, this reminds me of HDCD and SACD all over again...a solution in search of a problem.
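
    For anyone who wants to verify that 17.2% figure, it's just the one-sided binomial tail assuming a 50/50 guess on each of 10 trials:

```python
from math import comb

trials, successes = 10, 7

# P(at least 7 of 10 correct by coin-flipping): one-sided binomial tail at p = 0.5
p_value = sum(comb(trials, k) for k in range(successes, trials + 1)) / 2**trials
print(p_value)  # 0.171875, i.e. the ~17.2% quoted above
```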

    1. Correct. It's not a "slam dunk" difference from what I heard and this is consistent with differences below -50dB to -60dB or so on the differencing test. Maybe on other tracks it would be significantly more...

      Again, this brings up the question - just how "bad" are ADC and DAC converters anyway? I really don't believe they're as anomalous as Meridian claims or suggests!

    2. BTW:

      Regarding SACD/DSD - at least this allowed audio lovers to listen to a different type of encoding technique. Whether one likes it or sees major limitations, it was different from the 16/44 we all listened to up to that point in the late 90's. I suspect these days, studios would be using DSD128/256 for archiving if they're still into a pure DSD chain.

      Regarding HDCD - You know, I was genuinely impressed by the ingenuity when I first read about this back in the mid-90's! In the world of biological systems, HDCD is like a "commensal" organism. It lived in the PCM stream but didn't hurt it. It existed in the LSB (16th bit of the 16-bit word) and for the most part acted as dither, but in some situations like with a proper decoder using features like Peak Extend did provide some noticeable benefit IMO.

      I fear that MQA is more of a "parasite", using the biology analogy, in many circumstances. It makes 16-bit source material bloat to 24 bits unnecessarily. True 24-bit hi-res material loses the resolution beyond 16 bits from what I see. And it places a burden on consumers with the need for another decoder/DAC/etc...

      As you noted, it all hangs on whether you like their DSP which supposedly "de-blurs" (and to a much smaller extent IMO whether you think their upsampling filter in the reverse "origami" process actually sounds any better as they reconstitute the encoded ultrasonic material).

    3. I think it's very hard to know if the Peak Extended HDCDs were really 'benefiting' from PE, because the proper comparison is not to the same HDCD without decoding -- it's to a full-range 16-bit mastering.

      IOW, yes, decoded Peak Extended HDCDs typically (but not always!*) had greater dynamic range than their undecoded states. But did it achieve anything that couldn't have been achieved using 'normal' Redbook mastering tools (including noise-shaped dithering, where you could achieve DR up to ~120dB)?

      *I've seen at least one case where even though Peak Extend is showing as 'enabled', it was not actually employed. i.e. The track is simply reduced ~6dB when decoded, without any peaks being extended, so no gain in DR.**

      ** There are also numerous cases where the HDCD flag is 'on', indicating that Keith Johnson's HDCD ADC was used, but none of the features of HDCD (peak extend, low-level gain control, transient filter switching) were enabled. Here, you're 'only' getting the benefit of the (admittedly first-rate) converter.

    4. Right. Very fair comments about HDCD and PE. I've had my fair share of HDCD LED "on" but no difference; just the HDCD carrier encoded into the data.

      At least with HDCD, it did not affect the 16/44 PCM stream in a significant way since it was buried in dither and posed no burden on the consumer. But with MQA, without the decoder, one would literally be streaming worthless bits.

    5. HDCD without a decoder gives soft-clipped peaks. A plain 16-bit master probably sounds better.

    6. Good point on this limitation of those disks using PE. In the last year, I've slowly converted my few old HDCDs with soft-clipping/PE to "flat" 16-bit or 24-bit files using dBPowerAmp to "correct" the potential anomaly.

      For more details for folks curious about this, Illustrate has a good thread:

    7. Re your reply at 08:40 Archimago. A p-value of 0.172 is not a 'slam-dunk' difference, it's not a difference at all. This really is quite important. Even a p-value of 0.05 merely indicates that the result might happen 1 time out of 20 purely by chance, and it's the highest level at which we should begin to consider the result as evidence.

      You got a negative result and we can't conclude anything from it. If several other people repeat the experiment and also get negative results, then we can begin to conclude there's nothing there. (Though from your coverage of this it doesn't look like there's anything there in the first place.)

  11. A few years back I used hdcd.exe to decode (in software) and archive any track in my collection that had peak extension with 'enabled' status. I didn't bother converting any of the other HDCD track, because the only 'benefit' would be a -6dB renormalization, i.e., any clipping would still be there -- though rethinking it now I suppose there could be the possible benefit of avoiding intersample overs on playback. But I can use replaygain settings in my player (Foobar2k) for that.

    1. What I normally do is have dBPowerAmp decode to 24-bits, check the peak levels in foobar. Then apply gain to the nearest 1dB to get the level optimal for the whole album. If there are no peaks that encroach into that top 6dB space, then I just use the original 16-bit rip... Doesn't really deserve any special treatment.

      If I think the decoded album deserves to be in 24-bits (rarely if ever), then I'll just archive as 24-bits, otherwise, I just run them through my usual iZotope RX MBIT+ dither setting and store them as 16-bits. Good enough I think! :-)
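
      For the curious, that last dithering step amounts to something like the sketch below - plain TPDF dither down to 16 bits. (iZotope's MBIT+ adds proprietary noise shaping on top, which I'm not attempting to model here.)

```python
import numpy as np

rng = np.random.default_rng(0)

def dither_to_16bit(x):
    """Quantize float samples in [-1, 1) to int16 with ~2 LSB peak-to-peak TPDF dither."""
    lsb = 1.0 / 32768.0
    tpdf = (rng.random(x.shape) - rng.random(x.shape)) * lsb  # triangular PDF, +/-1 LSB
    y = np.round((x + tpdf) * 32767.0)
    return np.clip(y, -32768, 32767).astype(np.int16)

# A digitally silent input comes out as low-level dither noise, not dead zeros
out = dither_to_16bit(np.zeros(1000))
print(out.dtype, out.min(), out.max())
```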

  12. MQA with Digital Volume Control or Digital Room Correction?

    Could someone who has an MQA-certified DAC check this: what happens if I reduce the level of the digital signal coming from Tidal or any other playback software by, say, just 1 dB? Nearly all playback software still sounds good if you reduce only 1 dB or so in the digital domain. But in this case, will the MQA light still be lit and will the MQA stream still be "unpacked", or will this end up as the same 24-bit/44.1kHz digital stream as when using a non-MQA-certified DAC?

    My thinking is that digital room correction then also won't work in a way that benefits from MQA unpacking, and all files, no matter what bandwidth they ever had in the MQA encoding process, will end up at just 24-bit/44.1kHz, with mainly only 13 bits of in-band audio information.

    My point is that less-compressed master files at a real 24-bit/96kHz from start to finish (in a truly lossless codec) will do a more practical job in our homes, especially when you plan to use, or already use, digital room correction (in the digital path, within or after the playback software and before the DAC).


    1. BTW: Thank you for the informative block diagram of the MQA process. It helps a good deal, especially since we haven't yet seen any measurements showing the de-blur effect, or any other measurements of this de facto lossy codec (from a technical point of view).

    2. Appreciate the note Juergen.

      Agree, I wonder if anyone "in the know" can comment on the tolerance for *any* kind of change to the digital data even if it's just a minor volume adjustment.

      I don't think it'd be surprising if the answer is "no" in that any kind of detected "violation" from "bit perfect" MQA will result in the LED turning off and the de-encapsulation process prevented.

      Considering the recent withdrawal of MQA from the firmware upgrade for devices like the Auralic Aries (where there's potential for digital decoding and output without a "certified" DAC), I imagine Meridian will insist not only on bit-perfect decoding but ALSO on use of a specifically sanctioned DAC to maintain their claim of "end to end" "authentication".

      Sounds like ANY digital processing in between is out. Room correction will have to occur with yet another ADC --> DRC processing --> DAC step. IMO, this is silly and restrictive of course!
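
      Here's a trivial numpy illustration of why I'd expect this (just the general principle - I obviously don't know MQA's actual detection scheme): attenuate 24-bit samples by only 1dB and essentially every sample's bit pattern changes, so nothing that depends on exact bits could survive.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fake 24-bit audio samples
x = rng.integers(-2**23, 2**23, size=100000, dtype=np.int64)

# A "gentle" 1 dB digital attenuation, re-quantized back to integers
gain = 10 ** (-1 / 20)            # ~0.891
y = np.round(x * gain).astype(np.int64)

changed = np.mean(y != x)
print(changed)  # ~1.0 - virtually every sample's bits are different
```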

  13. Realistically, "de-blur" can only be some sort of time-domain treatment.

    A digital filter with a flat frequency response but with a phase lag tuned around a particular frequency can be easily created (in filter-design-speak) by using a single pole-zero pair, each having the same phase (i.e. resonant frequency), but whose magnitudes are the inverse of each other. Quite elaborate phase responses can be concocted using this principle. Sorry for being rather technical about it - but at BitPerfect we have played with these ideas, and it is a bit of a Pandora's box.

    I have no idea if MQA does something along those lines. I do know that MQA's proponents like to talk about "Minimum Phase" filters as being sonically ideal. Even if you have no idea what "minimum phase" actually means in that context, it is easy to persuade yourself that it is surely a good thing. Anyway, in principle at least, if you know what the phase response of a particular audio chain is, you could conceivably implement a phase correction in a DSP to modify the net phase response of the audio chain into something that approximates what a "minimum phase" audio chain would have sounded like. Is this what is meant by "de-blurring"?

    All that is pure speculation on my part, and I have no idea if MQA operates on that kind of thinking. But if it did, you can imagine a whole forum's worth of questions that it would prompt.
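
    To illustrate, here is a quick numpy sketch of that pole-zero pair idea (arbitrary pole radius and tuning frequency; purely illustrative, nothing to do with MQA's actual filters): a complex pole pair with zeros at the conjugate-reciprocal positions - same angle, inverse magnitude - gives an exactly flat magnitude response, with all the action in the phase.

```python
import numpy as np

fs = 44100.0
r, f0 = 0.9, 5000.0                      # pole radius and tuning frequency (arbitrary picks)
theta = 2 * np.pi * f0 / fs
pole = r * np.exp(1j * theta)

# Denominator from the conjugate pole pair; numerator = reversed coefficients,
# which places zeros at the conjugate-reciprocal locations (same angle, magnitude 1/r)
a = np.poly([pole, pole.conj()]).real    # [1, -2r*cos(theta), r^2]
b = a[::-1]

# Evaluate H(e^jw) = B/A on the unit circle
w = np.linspace(1e-3, np.pi - 1e-3, 1024)
E = np.exp(-1j * np.outer(w, np.arange(3)))   # [1, e^-jw, e^-2jw] per frequency
H = (E @ b) / (E @ a)

print(np.max(np.abs(np.abs(H) - 1.0)))   # ~0: magnitude is exactly flat (allpass)
phase = np.unwrap(np.angle(H))
print(np.ptp(phase))                      # ~2*pi of phase shift, concentrated near f0
```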

    1. Thanks for speculation on the technicals as to how this DSP might work. :-)

      Plausible indeed!

    2. Hi Richard

      From a 24-bit/96kHz song from 2L, I created a minimum-phase downsampled 24-bit/44.1kHz version and a linear-phase downsampled 24-bit/44.1kHz version of that song, and compared them to the MQA (non-decoded, and so also 24-bit/44.1kHz) version from the 2L website.

      The MQA (non-decoded) version sounded too different from both of the other versions I created; there must be something more (different) going on than "pure" minimum-phase filtering.

      Also, the resolution of the MQA (non-decoded) file is not as high as the downsampled 24-bit/44.1kHz versions, and it is much noisier (as the whole origami puzzle has to fit into the area below the "noise floor").

      Maybe I will find the time in the next few days to pick out a specific impulse in the song and compare the "time smearing" of that impulse in the MQA (non-decoded) version with the MP and LP versions of the "regular" downsampled files.

      And as said above, just some "basic" measurements from the MQA team would be great.


    3. Sound of non decoded MQA:

      But I would like to add my impression that even though the non-decoded MQA file doesn't have the resolution of the 24-bit/44.1kHz PCM file, it does sound "pleasing" and not fatiguing at all. It is "easy" to listen to, even though some tiny information, like room reflections, is missing. It sounds a bit in the direction of the DSD version (but just a bit).

      The 24-bit/44.1kHz and 24-bit/96kHz versions sound more similar to the 24-bit/192kHz version (in terms of tonal balance and how the sound projects into the room) than the non-decoded MQA file does. So something was clearly done to the file, even compared to the 192kHz version.


    4. Thanks Juergen. No question, whatever the DSP is doing does result in audible changes which are of course measurable.

      As usual with DSP, the ultimate question is whether we as listeners like it and find it a meaningful extension/benefit to the "high fidelity" experience.

      Looks like in a week or so Meridian will finally release an official firmware for their DACs... Hopefully some folks will be able to add to the conversation with objective results on how the decoded output sounds and measures; especially with the less expensive Explorer2.

  14. The thing about "minimum phase" is that the phases of all the frequencies go into the "minimum-ness" calculation. For the most part, digital audio filters are of a low-pass type, whose purpose is to attenuate frequencies outside the audio band. Most (but not all) of the phase linearity occurs at frequencies where the filter is doing stuff. So, in effect, a "minimum phase" filter is minimizing the phase shifts at inaudible frequencies. I tend to get preferable results by focusing on the phase response within the audio band.

    Now it may well be the case that out of a bunch of filters the one with "minimum phase" is the one that sounds best. But that does not necessarily mean that a different filter with a "non-minimum" phase response cannot sound better.

    There are other arguments that you hear on the subject of the shape of the impulse response, based on arm-waving notions that if it shows "pre-ringing" then you will hear the music before it actually begins. This is behind the term "time smearing". For the same reasons that the phase errors tend to build up around the frequencies where the amplitude response is changing most rapidly, the pre- and post-ringing that you see in an impulse response is dominated by those same frequencies which are therefore usually inaudible.

    Arguments concerning phase response and impulse response are simply different expressions of the same issues. The two are inextricably linked by ugly mathematics which, incidentally, neatly express Jagger's Law: "You can't always get what you want. But if you try, sometimes ... you get what you need."

    1. Of course, I SHOULD have typed: "Most (but not all) of the phase non-linearity occurs at frequencies where the filter is doing stuff."

  15. Hey, Archimago, now you can pose questions, both those that Serinus didn't pose and others, to Bob Stuart, who's agreed to answer them (well, I guess at least look at them) over at Computer Audiophile.

    BTW I very much enjoyed reading this piece on MQA. Thanks.

    1. Thanks! I saw the post and looks like folks already posing lots of good questions.

      The real question is how will Mr. Stuart answer and how deep he'll go into it... Hopefully we'll see soon.

  16. I think that "authentication" will eventually mean DRM. That's why it will be appealing to labels.

    1. Yeah. That's a worry. Hence the first question being what mechanisms are there for DRM - quite possibly embedded in the system already that we might not know about or activated yet...

  17. Nice addition to that on AudioStream:


    1. Thanks Juergen for this link! This info is useful for me, because I can't get the blue light on my MQA-capable Meridian Explorer2 when listening to these 2L MQA files... :(
      I use the Audirvana Plus trial version and have blamed Audirvana so far... but I have just seen news about a new firmware being released on the 4th of February which enables MQA decoding capability in the Explorer2. This confuses me, since the Explorer2 entered the market more than a year ago without MQA capabilities... or I have misunderstood something in this "noisy" MQA-related world.

    2. Thanks Juergen. Added a little "Addendum 3" segment to the write-up. Decoded 24-bit content at least looks better in the noise floor. As I said in the little note, I'd be surprised if mainstream "listening tests" could ever report anything different from what "Morten's Notes" would suggest as the order of preference.

      Considering that with the Meridian Prime decoder there was a preference for MQA over the original DXD, the implication is that MQA's DSP has value in making the sound "better" than the full original DXD; that "correcting" the microscopic timing differences between ADCs and DACs is more important than non-lossy-compressed full spectrum audio. We'll see not just what the reviewers say, but ultimately what the MARKET says!

      Armand - Sounds like you'll have some fun comparing on Feb 4th :-). Let us know how it goes!

    3. Archimago, I will definitely update you! But I must admit I failed one of your blind tests comparing MP3 with a CD rip (Pink Floyd - Time). So I need to train my ears.. :)

    4. :-) Armand.

      Actually, if my suspicion is correct based on the listening test I posted, I think the MQA DSP "de-blur" effect will be more obvious than hi-bit MP3 vs. FLAC.

      The question really is "Do you like what Meridian/MQA has done to the sound?"

      Have a great listen!

      [And of course IMO, the truly big question is whether the market wants to support a parasitic "origami" "compression" scheme that in itself does not improve sound quality for the hundreds of millions if not billions of excellent DACs already in existence!]

  18. You wrote: "The question really is Do you like what Meridian/MQA has done to the sound?"

    That isn't always the whole story. Bear in mind that the 'loudness wars' problem arose because some people decided they "liked what it did to the sound".

    Be careful what you ask for ... :)

    1. :-) Indeed.

      Pure subjectivity without mindful evaluation of what's technically "proper" or "best practice" is often the path to madness... Sadly, various subjectivist websites promote this mindset exclusively.

  19. Hi Archimago. First comment here. Found your blog a few months ago and have been catching up with your older posts. Lots of good, common sense, stuff!

    Like you, I'm finding the MQA premise dubious. It seems to be limited to 16 bits, so you are sacrificing bit depth that might actually be useful in exchange for content above 22kHz which few can hear.

    I think the whole hi-res industry is a shambles. It would have been much better to have upgraded the CD standard to 20 bit x 64 kHz. 20 bits because 16 is too tight and 24 is a waste of space (is there any material with content below the 20th bit, and are there any DACs that can resolve it?), and 64 kHz because there is material with > 22 kHz content so it should be available for the few that can hear it, plus it allows much gentler filters than 44.1 does. 20x64 would take only about 1.8x the space of 16x44 uncompressed and using FLAC or ALAC you would not be wasting space on absent content, likely above 20 kHz with a lot of material.
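
    Checking my arithmetic (uncompressed stereo bitrates, before FLAC/ALAC packing):

```python
cd = 16 * 44100 * 2       # Red Book: 1,411,200 bits/s
prop = 20 * 64000 * 2     # proposed 20-bit/64kHz: 2,560,000 bits/s
hires = 24 * 96000 * 2    # common "hi-res": 4,608,000 bits/s

print(round(prop / cd, 2))    # ~1.81x the CD rate
print(round(hires / cd, 2))   # vs ~3.27x for 24/96
```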

    On another topic, I hope you can lay your hands on a Chord Mojo and test it. I'm tempted, but could it be too good to be true? There are few reviewers I trust these days.


    1. Thanks for the comments Otto.

      Indeed, 20/64 would be great as a music carrier. And yes, this whole "hi-res" monkeying around with different samplerates and PCM vs. DSD nonsense, now with the promotion of proprietary encoding like MQA, is ridiculous. As you and Miska suggest, there's no need to stick with the "standard" bit depths and samplerates. A 20-bit depth ("2 bytes and a nibble") would be great for consumer consumption! Likewise, there's a huge spread between 48kHz and 88kHz which could be exploited to reduce the data bitrate without worrying about filtering side effects.

      Of course, this chaos has allowed all kinds of hardware to be produced with tons of flexibility! (I guess the irony is then with all these potential PCM/DSD options to choose from, why do we need MQA!)

      And with this flexibility comes devices like the Mojo, right? Relatively small, light, rechargeable battery, USB/SPDIF inputs, 2 headphone outs, and all the samplerates one could ever wish for, up to 32/768 and DSD512 (for which one is unlikely ever to find a single commercially available recording)!

      Yeah it looks great and certainly could be all that a mobile DAC ever needs to be... I am intrigued as well and will have to see if I can borrow a sample unit around here. Not sure I'd want to buy one as I have no need :-).

    2. Thanks. I have no need for a portable DAC as my iPhone sounds fine (with RHA phones, not the Apple ones), but I don't have a serious DAC in my main system and it looks like the Mojo will do fine for that (and as a mobile DAC *if* I want). I like products that do more than one thing, especially if they do them well.

      I stream music from a NAS using a DroidPlayer (this seems to be the same thing). It sounds surprisingly good even using its own DAC but it has an S/PDIF socket so I'd like to see if an external DAC would be a big improvement. I don't see much to compete with the Mojo even ignoring its portability.

    3. 16/44.1 is perfectly fine for music reproduction.

    4. Yes, of course 16/44 is fine for music reproduction!

      But is it "perfectly fine" and how do we define "perfect" anyhow!? And is it fine for all situations and every DAC? (Remember, we got stuff like NOS DACs out there!)

      The stuff we're talking about in most of my blog posts are targeted at the "perfectionist" audiophiles and the minutiae just beyond the limits of 16/44... For example, the aliasing potential of various filter settings, whether "ideally" there's benefit to going beyond 16-bit quantization, etc.

      I said some things about this back in the day (2013):

      Well, I see it's Feb 4th today and the promised MQA firmware is out... Would be great to hear what owners think!


  20. <<16/44.1 is perfectly fine for music reproduction.>>

    Oh good, I'm glad that's all cleared up then.....

  21. Stuart has replied, sort of, to this blog post.

    1. Yup. Thanks for the note Steven. I saw that :-).

    2. Have you noticed Stuart's allegation that you blocked him from posting corrections? Has he tried to contact you?

    3. Since you asked, Pelmazo... He did leave a message on the board here, but it was deleted. I did not block it or do the deletion. I actually know what he wrote because Blogger sent a copy of the message to me.

      Based on my reading of the post, I did not get the impression that much of what I have written is being contested, and I would be happy if more info were presented. His message did not point to any measurements or objective results beyond the FAQ.

    4. That's unfortunate. I'd have liked to have read Stuart's response to what has been a generally skeptical reception (at least on this board).

      I'm not sure how I'm supposed to interpret what you wrote. Is Blogger able to delete posts all by its lonesome (I don't think so)? Or are we to suppose that Stuart deleted the post himself? The impression that comes across is that Stuart is miffed because he believes you are blocking him, and for your part my reading of your post is that you are denying this.

      My suggestion: Since you have a copy of Stuart's original post, if he thinks you blocked it and insists he did not delete it himself, then he won't mind if you offer to re-post it for him :) With his permission, of course.

      I know, I know, I know .... I'm sowing mischief!

    5. Hi Richard,

      All I can say is that I did not block anything at any time. In fact, as you can see I don't moderate the messages at all. The only time I ever erase any messages is when I see a clear case of spamming or if the writer posted something then deleted it and I just want to clean up any "residue".

      I do not know if Blogger erases messages on its own. If so, then this must be an example.

      Now as for the message itself... I have it opened up right now in another window and I can tell you that it was thoughtfully composed and did not appear rushed or incomplete. Out of respect for whatever reason resulted in the deletion, instead of posting the whole note I will just summarize a few points which I don't think would be contentious:

      - There may have been different versions of encoded files released so noise floor could have varied.

      - He's grateful to 2L for releasing the test bench. There's potentially "cross-checking" that needs to be done to make sure the versions released are from "consistent sources". (I of course agree and thank 2L for allowing us to compare the different resolutions.)

      - Acknowledgement of working on questions posed - I assume he's referring to the eventual Q&A posted in Computer Audiophile.

      - ** The main objection to the results: He feels the noise floor is equivalent to 22-bits below 10kHz and "always below the noise-floor of the recording".

      - Regarding the PS Audio blog post: the development decoder was not updated to production version and "would have been operating in 'bypass'" mode so he felt Mr. McGowan and company were not experiencing the benefits of MQA.
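      For context on the "22 bits" point above (my own back-of-envelope, not from Mr. Stuart's note): the ideal SNR of N-bit quantization is roughly 6.02N + 1.76 dB, so a noise floor "equivalent to 22 bits" would imply something in the neighbourhood of -134 dB relative to full scale in that band:

      ```python
      # Rough rule of thumb: ideal quantization SNR for an N-bit system
      # (full-scale sine wave) is about 6.02*N + 1.76 dB.
      def ideal_snr_db(bits):
          return 6.02 * bits + 1.76

      for bits in (16, 17, 22, 24):
          print(f"{bits}-bit: ~{ideal_snr_db(bits):.1f} dB SNR")
      # 16-bit ~98 dB, 17-bit ~104 dB, 22-bit ~134 dB, 24-bit ~146 dB
      ```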

      That's pretty well the gist of it... I trust there's no "mischief" here and most of these points are covered and expanded on in the Q&A post on CA.

      Unlike recent political events such as the "debates" among Republican candidates :-), I trust there's nothing like a bar brawl going on here. As I noted, Mr. Stuart's response was well composed and gentlemanly throughout - as I would of course expect from him in discussions that are basically about academic and empirical matters.