Saturday 3 February 2018

MUSINGS/MEASUREMENTS: On "blurring" and why MQA probably worsens transient smearing.

Yes, I know. The last time I specifically addressed MQA was supposed to be "FINAL final" :-). But I got curious again, especially after the recent exploration of various filters suggested to me that there actually is a case to be made that "blurring" can be seen with minimum phase filters. The question is, what do we "know" or "believe" MQA is able to do, and can we demonstrate that MQA even "de-blurs"?

Let's talk and think about this for a bit...

I. Intro...

Let us mull over the question "What is blurring?" as the problem MQA claims needs to be resolved. MQA appears to suggest that "blurring" is ubiquitous in the world of PCM, so presumably as consumers we should be able to notice the problem, and I would imagine we should at least have some general concept of the cause. Alas, as far as I can tell, the answer to that question is not so clear.

In the last couple of years, I have been in touch with some knowledgeable individuals whom I would have thought should be "in the know"... Alas, I notice a collective shrug when I bring up this question. Of course, nobody claims that every microphone, ADC, DAC, DSP plug-in, or playback system operates at the same resolution. Comparatively, some will have poorer time-domain performance, demonstrable when recording, manipulating, or playing back signals with rapid transients. But is there some kind of clear "blurring" that is inherent in PCM as a digital representation of analogue waveforms? And is there some kind of fundamental "correction" that could be applied to studio productions for consistently "better" playback? I certainly have not heard an affirmative answer to these questions from the people I spoke to...

And yet here is MQA claiming that this is possible.

While I know many of us have already read Bob Stuart's Q&A on "Temporal Errors in Audio" and "Temporal Blur in Sampled Systems" from 2016, where he lays out some facts, figures, and claims, a clear outline of what the "de-blur" algorithm actually does remains obscure.

MQA accepts that "temporal blur" is not "quantised at the sample rate" and that 16/44 already resolves to 220ps. Rather, what "temporal blur" is seems much harder to define. Notice the first sentence of his "What we mean by temporal blur..." section: "There is no standard measure for temporal blur but we believe our use of the term is clear and intuitive." No standard measure yet the use of the term is "clear"? Is that even possible in audio engineering these days?

From there, the Q&A article proceeds with claims about how audible "fine details" seem to be "on timescales as short as 5µs". But since we already agreed that 16/44 resolves to 220ps - is there a problem!? Then he starts talking about limitations of the audio chain and how temporal errors within one component can limit the ultimate resolution of the rest... Sure. But he seems to be telling us that in post-processing, the loss of microsecond temporal definition can be fully repaired - that you can overcome the resolution limit of that link in the audio production/playback chain after the fact!

[As an aside, I have heard of variations to that "220ps" figure. Perhaps the best laid out explanation of "time uncertainty" in a 16/44.1 system is this calculation in Kahrs & Brandenburg (Applications of Digital Signal Processing to Audio and Acoustics, 1998). Their final figure is 110 ps. The point is that we're talking picosecond values here!]
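As a sketch of where picosecond figures like these come from (my own back-of-envelope calculation, not MQA's or Kahrs & Brandenburg's exact derivation): ask how small a time shift of a full-scale sine at the Nyquist frequency would move the signal by one 16-bit quantization step, given the sine's maximum slew rate.

```python
import math

# Back-of-envelope timing resolution of a quantized sampled system:
# the smallest time shift of a full-scale sine at frequency f that moves
# the signal by one quantization step, using the sine's maximum slope.
def timing_resolution(bits, f):
    # max slope of sin(2*pi*f*t) with peak amplitude 1 is 2*pi*f;
    # one LSB over the -1..+1 range is 2 / 2**bits = 1 / 2**(bits - 1)
    lsb = 1.0 / 2 ** (bits - 1)
    return lsb / (2 * math.pi * f)

dt = timing_resolution(16, 22050)   # 16-bit, Nyquist of 44.1 kHz
print(f"{dt * 1e12:.0f} ps")        # about 220 ps
```

Depending on whether you reference one LSB against the peak or the peak-to-peak amplitude you get 220 ps or 110 ps, which is presumably where the two published figures come from; either way, we are deep in picosecond territory.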

For the hardy reader who turns the page and considers the section on "Temporal Blur In Sampled Systems", the article series then meanders into idealism like how perfect low-pass "brick wall" filters are impossible (is perfect anything possible!?), appears to try to disparage fs/2 (Nyquist frequency) as either "tight engineering or lazy thinking" (???), and later on lays the groundwork for why he thinks the linear phase filter "smears" due to the "ringing". While he agrees "the human listener may not be able to hear the ringing frequency", he still insists "we are nonetheless sensitive to the overall envelope" referencing his own article on MQA. Within that paper, I do not see any testing referenced regarding "ringing" with actual listeners showing that this "overall envelope" is audible.

As best I can understand, the rest of that Q&A section jumps back and forth between criticizing "brick wall" filters, asserting that current systems are imperfect, at times seeming to justify that aliasing is okay ("The existence of aliasing is not evil—it is after all the right signal reproducing at the wrong frequency..."), eventually ending in vague discussions around the importance of "complementary" filters in the encoding and decoding - which presumably is what MQA thinks they can do "better" than current systems. Within this mash-up of incompletely argued ideas, is it any wonder that "blurring" remains a topic of controversy?

Let us now spend a few moments exploring potential meanings of what "blurring" could be referring to, and if possible, explore whether MQA helps fix the problem...

II. Potential attributions and meanings for the term "blurring"...


A. Blurring as ringing, especially long "acausal" pre-ringing as "seen" in an impulse response waveform.
Let's not spend much time on this. As I've already discussed recently when making suggestions about using an intermediate phase filter, concerns about pre-ringing when looking at impulse responses are at best neurotic (as in obsessive-compulsive) and at worst irrational. MQA of course obliges with an impulse response that looks like this, with no pre-ringing and a short tap length:

Remember that "ringing" is not a problem with properly anti-aliased audio recordings, and when it does happen during playback with a good quality linear phase anti-imaging reconstruction filter, it's typically because one is playing a hyper-compressed or clipped album that's not really mastered to sound "natural" or of "high fidelity".

In my opinion, to keep picking at ringing as an issue falls squarely in the "fear, uncertainty, and doubt" strategy of trying to persuade audiophiles/music lovers to accept something which superficially might look reasonable but ultimately is of little consequence. This is in the same ballpark as those "stair-stepped" representations of digital vs. smooth analogue waveforms; "intuitively" this might look correct, but in fact it is a misrepresentation in the vast majority of situations.

Given the linked articles already published, I really have nothing else to say about "ringing". It's not an issue inherent in properly mastered PCM, pre-ringing when the filter is supplied with a Dirac impulse is a response to being fed the "illegal" waveform, and I already talked about using a mild intermediate phase filter to suppress the ringing without much side effect if desired.

In the December 2017 Stereophile article by Jim Austin, it is claimed that the "de-blurring" is not simply about the filter used. Fine. But he still offers no details as to what it is!

B. Blurring as phase anomalies / group delays through the audio chain.
In the articles above, especially when examining the nature of steep minimum phase filters, we started discussions on another way of looking at "blurring". We discussed the fact that minimum phase filters create inherent phase shifts and this is something I want to focus on deeper this time in relation to MQA.

As a brief recap, remember that minimum phase filters (the "Tested filter" in the diagram below) result in typical graphs like this one off the SRC comparison page:

We see that there is a non-linear phase shift (relative temporal relationship) as the frequency increases.

These temporal shifts (they can be called phase deviations measured in degrees or group delays measured in microseconds) will create time-domain effects which can be seen in the waveforms of actual music as I presented previously:

To drive this point home, here's what happens when we "stack" 5 samplerate conversions using minimum phase as compared to the same manipulations in linear phase. I basically took a 24/96 150Hz square wave and ping-ponged this signal between 192kHz and 384kHz conversions in SoX using a very high quality 95% bandwidth filter setting:

Every time a minimum phase DSP process is used (even something as straightforward as 192-to-384kHz sample rate conversion without attempting to affect frequencies), those temporal shifts become more prominent, eventually reducing the "slope" of the leading edge of the steep transient as you see above. This is why, in general, it's better to utilize linear phase filters in audio processing (unless of course there truly is audible pre-ringing in extreme cases) because they do not cause "transient smearing".
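For anyone who wants to see this on their own machine, here's a small Python/SciPy sketch (my own illustrative filters, nothing to do with MQA's actual designs): build a linear phase lowpass FIR, derive a minimum phase version of it, and compare group delay. The linear phase filter delays all frequencies equally; the minimum phase one delays the treble relative to the bass, and cascading two such stages doubles that relative delay - exactly the "stacking" effect described above.

```python
import numpy as np
from scipy import signal

fs = 44100
h_lin = signal.firwin(255, 20000, fs=fs)                   # linear-phase lowpass
h_min = signal.minimum_phase(h_lin, method='homomorphic')  # minimum-phase version

w, gd_lin = signal.group_delay((h_lin, 1), w=2048, fs=fs)
_, gd_min = signal.group_delay((h_min, 1), w=2048, fs=fs)
_, gd_min2 = signal.group_delay((np.convolve(h_min, h_min), 1), w=2048, fs=fs)

lo = np.argmin(np.abs(w - 1000))    # bin nearest 1 kHz
hi = np.argmin(np.abs(w - 19500))   # bin nearest 19.5 kHz (passband edge)

# Treble delay relative to bass, converted from samples to microseconds:
print("linear phase:   ", (gd_lin[hi] - gd_lin[lo]) / fs * 1e6, "us")  # ~0
print("minimum phase:  ", (gd_min[hi] - gd_min[lo]) / fs * 1e6, "us")  # treble lags
print("2 min-ph stages:", (gd_min2[hi] - gd_min2[lo]) / fs * 1e6, "us")  # ~double
```

Note that group delays simply add when filters are cascaded, which is why each additional minimum phase stage in the chain makes the relative treble lag worse.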

You can imagine for example if an ADC used minimum phase filtering to capture the signal, in the studio the engineers used minimum phase EQ'ing, then performed minimum phase downsampling, and finally when the audiophile plays the signal at home with their minimum phase DAC filter, those temporal shifts are exacerbated with each step. Could MQA be trying to improve this form of "blurring"?

Who knows exactly what they're doing... But we do know MQA uses minimum phase filtering itself.

Recently, I spoke to a friend who works as a professional audio engineer with decades of experience and he was gracious enough to examine the digital filters on actual MQA DACs. He has access to measurement equipment and it's always good to have someone else independently review test results :-).

Using his Audio Precision gear, he was able to verify that both the relatively inexpensive Meridian Explorer 2 and the higher end Mytek Brooklyn DAC utilize the same MQA filter design, using the "Reis Test" also employed by John Atkinson in Stereophile to demonstrate filter effects. Note the very long transition band. The upsampling algorithm is likely performed by the DACs' XMOS microcontroller (both 44.1kHz and 96kHz shown):

As discussed previously, MQA uses a weak anti-imaging reconstruction filter that will allow quite a bit of distortion to pass beyond the Nyquist frequency.

While we've seen graphs like the ones above already such as when Stereophile measured the Mytek, notice that we have not seen published measurements of group delay introduced by the MQA filter. Voilà, detailed measurements of each of the 4 filters available for the Mytek Brooklyn DAC using a combination of my friend's AP gear and FuzzMeasure on the Mac for group delay plots.

Fast Roll-Off (linear phase):

Slow Roll-Off (linear phase):

Minimum Phase (fast roll-off):

MQA (minimum phase, slow roll-off) - notice the "overload" with the 20kHz 0dBFS tone:

I'm perhaps over-inclusive here with all these charts and graphs, but they do tell a rather complete story about the different filters available to users of the Mytek Brooklyn DAC.

The group delay plots confirm that with the linear phase filters, there is no delay between bass, mid, and treble - no "temporal smear". In comparison, both the minimum phase and MQA settings do indeed add a delay to the treble frequencies. Specifically, for the MQA filter, by 18kHz there is about 40µs of delay compared to <1kHz (not as bad as the sharp roll-off minimum phase filter at 100+µs). In other words, the MQA filter itself creates temporal smearing instead of "de-blurring"!

Also if you have a look at the square wave oscilloscope tracings in the center of each cluster of filter measurements, you can see the rise time measurement. For the linear phase filters (fast or slow roll-off), it's about 20µs, fast roll-off minimum phase has 34µs, and MQA sits at an intermediate of 24µs. Another indicator that from a time performance perspective, linear phase reconstruction filtering maintains faster transients and less smearing. Though not shown here, temporal smear is also present when the filter is applied to 88.2 and 96kHz playback (we can explore this perhaps another time).
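Those rise-time figures can be sanity-checked from first principles: for an ideally band-limited ~20kHz channel, the 10-90% rise time of a step works out to roughly 0.45/fc ≈ 22µs, which is right where the linear phase filters land. A quick sketch (my own illustrative filter, not the Mytek's actual one):

```python
import numpy as np
from scipy import signal

fs = 8 * 44100                           # simulate an oversampled output stage
h = signal.firwin(511, 20000, fs=fs)     # hypothetical ~20 kHz lowpass
step = np.concatenate([np.zeros(2000), np.ones(2000)])
y = signal.lfilter(h, 1.0, step)         # filtered step response

def rise_time_10_90(y, fs):
    """10%-90% rise time of a step response, in seconds."""
    i10 = np.argmax(y >= 0.1)            # first sample crossing 10%
    i90 = np.argmax(y >= 0.9)            # first sample crossing 90%
    return (i90 - i10) / fs

rt = rise_time_10_90(y, fs)
print(f"{rt * 1e6:.1f} us")              # on the order of 20 us
```

In other words, the ~20µs numbers for the linear phase filters are about as fast as a 20kHz-bandwidth system can be; the minimum phase and MQA filters give up some of that.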

I suppose MQA might argue that these temporal distortions introduced by the reconstruction filter are accounted for in the encoding step and therefore neutralized (they'll have to explain if this is true). What can I say? It seems like a lot of work to perform this kind of "Rube Goldberg"-esque manipulation when the bottom line is that a studio desiring to produce high resolution audio recordings could just use some high quality microphones, an excellent ADC, and maintain linear phase processing throughout to minimize transient smearing (remember that linear phase processing does introduce latency, so I'm not saying this is always the best choice; the audio engineer/artist will need to decide what sounds best). On the consumer side, use a high quality linear phase reconstruction filter (again, something like the Chord or high quality SoX settings), and you've ensured the highest quality time-domain performance any human being could ask for - standard PCM capable of delivering a resolution measured in picoseconds!

C. Blurring as a more subjective phenomenon?
There is no standard measure for temporal blur but we believe our use of the term is clear and intuitive. 
-- Bob Stuart (August 2016)
What is happening here is that the encoder (using system metadata and/or AI) resolves artefacts that are obviously different in each song according to the equipment and processes used. When these distracting distortions are ameliorated then the decoder can reconstruct the analog in a complementary way.
-- Bob Stuart (June 2016)
For completeness, this last interpretation of "de-blurring" is obviously not a technical one but rather taking up MQA's invitation to think "intuitively". Could MQA "de-blurring" just be a DSP that analyzes the incoming audio signal and applies some kind of algorithm that either "cleans up" the perceived "distortions" or "excites" the sound? Some kind of "harmonic exciter" that tightens and "brightens" the frequencies; making them sound "faster" with more "air". Remember stuff like Harman's "Clari-Fi"? Of course not, because audiophiles don't use stuff like this :-).

There is talk of a "white glove" approach to some audio tracks with impulse response measurements for each piece of gear in the production chain informing the algorithm in the "de-blurring" system. Maybe in these cases, they can use the impulse responses to determine how much group delay was introduced as the data was processed along the way. But if so, this would get very complicated very fast for multitracked recordings (all kinds of microphones, different ADCs used, etc...) with various kinds of EQ and other effects added. Hard to imagine studios putting much time and labor into doing this for a significant number of albums in their archives assuming it's even possible!

Rather, what's most likely is that MQA essentially "batch converts" masters given to them by the labels. Without the individual impulse responses and detailed studio information, the "AI" system then must guesstimate what the music "would" or "should" sound like if idealized (based on whatever "metadata" as per the statement above).

So how "accurate" would this be? Unless proven otherwise, I remain skeptical that any DSP technique would be that "intelligent" in deconstructing time-domain "artefacts" in each track of a complex recording, figuring out how many microseconds of "blur" each instrument/voice was subjected to and at what frequencies, and be able to reconstruct an almost magical version of the music at the end that accurately resembles the "studio sound" (whatever that was!).

[Of course, if batch encoding to MQA, claims of the sonic output being "what the artist/producer/engineer intended" would be rather meaningless! But that's nice that the blue/green light turns on :-).]

III. Concluding thoughts...

While not fully satisfactory, these are some ideas as to what "blurring" could mean in the context of MQA's claims. Here are then a few conclusions we can draw and be quite confident in:

1. The whole "ringing" business in a linear phase filter is irrelevant and does not contribute to transient smearing or "temporal dispersion" (whatever technical term you want to apply to "blurring") in a practical sense. Yes, MQA doesn't use a filter with pre-ringing in the impulse response. So what? We cannot just look at a short impulse response with no pre-ringing and claim that it's "better" for time-domain performance based on appearance. In fact, taken to the extreme, those characteristics limit both time and frequency-domain accuracy.

2. If we believe "blurring" is similar to "transient smearing" as seen in the use of minimum phase filtering (actually, any non-linear-phase digital reconstruction filter), then the MQA filter worsens this - it actually "blurs" rather than "de-blurs"! This was demonstrated using the Mytek Brooklyn DAC. The MQA filter at least is not as severe as a steep minimum phase filter when it comes to the amount of group delay it introduces, but isn't this still a bad thing when MQA is marketed as sounding "exactly how they were recorded in the studio"? Perhaps, just like the fact that MQA is partially lossy, a word like "exact" probably refers to "perceptually the same" rather than to some form of Platonic ideal.

MQA could claim that somehow they have accounted for the group delay in the encoding step perhaps by purposely introducing an inverse temporal shift! But doesn't this then purposely "blur" the audio for those listening without an MQA decoder/filter? Yet they claim that even when not decoded, we're supposed to hear the benefits of "de-blurring" (as per the 2L Test Bench comments)! What a mess...

Speaking of the Mytek Brooklyn DAC and its filters again, my friend kindly also measured the distortion (THD+N) at 44.1kHz of the various filters using his AP gear:

Notice the MQA filter actually worsened the THD+N from the DAC! And this increase in distortion is not unique to the Mytek Brooklyn: it's also present with the Meridian Explorer 2, although not as obvious due to the lower resolution of the device. The fact that the filter cannot be switched to another one in the Explorer 2 is rather unfortunate:

So... Not only does the MQA filter introduce phase shifts/group delays/temporal "blur", but it is "leaky" as a reconstruction filter allowing imaging/"upward aliasing" (as Bob Stuart terms it), it easily overloads (as can be seen with the 20kHz 0dBFS tone on the Mytek), and it also measurably worsens overall distortion!
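For readers unfamiliar with the THD+N numbers being thrown around: the measurement is conceptually simple - play a pure tone, capture the output, and compare the energy at the fundamental against everything else (harmonics plus noise). A minimal sketch of the math (the general technique, not the Audio Precision's proprietary implementation):

```python
import numpy as np

def thd_n(x, fs, f0):
    """THD+N of a captured sine: RMS of everything except the fundamental
    (harmonics + noise) relative to the RMS of the fundamental itself.
    Assumes coherent sampling (integer number of cycles), so no window."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    k = round(f0 * len(x) / fs)          # fundamental bin
    fund = spectrum[k]
    rest = spectrum[1:].sum() - fund     # skip DC, drop the fundamental
    return np.sqrt(rest / fund)

# quick check: a tone with a -40 dB 2nd harmonic should measure ~1% THD+N
fs, n, f0 = 48000, 4800, 1000
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f0 * t) + 0.01 * np.sin(2 * np.pi * 2 * f0 * t)
print(f"{thd_n(x, fs, f0) * 100:.2f}%")  # ~1.00%
```

A real measurement would also band-limit the analysis (e.g. 20Hz-20kHz) and typically notches the fundamental in the analogue domain for better dynamic range, but the ratio being reported is the same idea.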

Some objective results to consider regarding the MQA filter (@ 44.1kHz):
- Treble frequencies are delayed up to 50µs by 20kHz.
- Transient rise time is slowed by 4µs.
- THD+N as measured by the Audio Precision gear goes up to 1% by 20kHz.
I find it hard to believe that this is the kind of reconstruction filter an audiophile interested in achieving the highest playback fidelity should accept or at least feel confident about, much less desire to see standardized across numerous devices and applied to large numbers of so-called "hi-res" albums from the big labels! Objectively, the MQA filter is the worst quality filter of the 4 offered by the Mytek Brooklyn. For this reason, I believe DAC makers must make sure the MQA filter is not used except when decoding MQA data as intended!

For example, it's good to see that Stereophile withheld a full recommendation when they reviewed the Aurender N10 recently because the device was inappropriately using the MQA filter. This is especially bad when applied to non-MQA 44.1kHz sample rate material.

3. At the end of the day, despite all the hoopla around "white glove" encoding and impulse response measurements of the production chain, "de-blurring" might just be a DSP process that is meant to subjectively "improve" sound. Who knows how well this all works. The algorithm is proprietary and under wraps. We don't know how "intelligent" it is. And as far as I can tell, MQA has never taken the time to explain things better nor truly demonstrated the potential for consumers to appreciate.

To MQA: Why not release 24/192 pre- and post- "de-blur" audio tracks for customers to have a listen to? Even better, have 2 "de-blur" versions - one using standard "batch" de-blur and the other with a "white glove" approach to show the level of improvement achievable. This will provide an opportunity for audiophiles to assess just the claimed time-domain improvements apart from worries about the MQA filter, bit-depth reduction of 24-bit material, and the whole "origami unfolding"/data compression pieces. Surely you must have access to high quality demo recordings with permission to distribute. This should not be difficult after all this time since much of the processing must already be done or highly automated (maybe just use one of the 2L test tracks)!

To close off this main blog topic in the hopes of not having to talk much more about MQA in the days ahead, I just want to address the "analogue to analogue" claim. It's as if invoking the word "analogue" excuses all kinds of clear limitations evident in the digital analysis of MQA! Over the years I have heard MQA supporters parrot this claim as if significant. For example, the phrase "MQA starts with the analog signal in the studio and ends with the analog signal on playback" in this The Absolute Sound article from May/June 2016. Oooohhhh... Sounds impressive, right?

Anyone who's still unsure of what I'm talking about can watch this video from way back in RMAF 2015 (October 2015, Chris Connaker did a good job with the interview and moderation, allowing the guests to talk quite freely in those "early days" of MQA's introduction when much less was known about it):

Have a listen to what Mr. Stuart says at 35:45. MQA is not about delivering lossless digital, "We're trying to bring the analogue sound to the analogue in your room." He claims this whole system is "beyond lossless at quite a profound level". He even makes an unusual claim about the benefit of the "physicality of vinyl" somehow satisfied by MQA!?

Folks, all we need to say about "analogue to analogue" is this... Beyond the electrical DAC output (which doesn't seem to be all that similar comparing certified MQA DACs), what further benefit does MQA provide to the "analogue sound" in your room? MQA cannot "certify" your analogue preamp. It cannot ensure your amplifier is of sufficient dynamic range or "speed". It cannot improve your speaker's frequency response or time-domain qualities. And it certainly cannot provide room correction to improve anomalies!

So what claim does MQA have about the "analogue in your room" as the sound waves enter your auditory canal? Absolutely nothing special.

MQA wants to impress upon us that MQA is a "philosophy". However, IMO, the only philosophical paradigm here manifested as a "business model" is that MQA wants to be involved in everything from the production side, to audio data encoding for the record labels, to "authorizing" (aka "authenticating") the playback with your computer software or DAC firmware. In each step, there are perhaps some licensing fees to be extracted (this is the essence of Linn's allegations). As we've explored the technical side of MQA, at each point we see limitations compared to current lossless high resolution file formats (like FLAC, or APE, or ALAC) - even suggestions that it worsens transient smearing as discussed in this post... Yet the company still basically promises qualitative perfection!

I remain amazed by the steadfastness of the audiophile media in promoting this failure of a "philosophy" as if it's "the next big thing" towards higher resolution or can benefit the consumer. That's even without thinking about the DRM potentials. For a magazine like TAS recently to claim that a DAC that does not decode MQA is "obsolete" is both ridiculous given MQA's limitations (summarized here) and premature considering that there's barely any interest in MQA beyond the debates among a small group of audiophiles after more than 3 years since its introduction. I suspect many audiophiles at this point recognize that they're simply chasing after the wind with MQA and will eventually be disappointed if they place their faith in this myth of a "better" encoding format.



If you've been around the message forums, you'll know that the long-running thread on Computer Audiophile titled "MQA is Vaporware" started by Rt66indierock in early 2017 is now an unwieldy 300+ pages! Another discussion, a 50+ page thread on the Steve Hoffman Forum, was unceremoniously deleted from existence a couple of weeks back, to at least some dissatisfaction I suspect.

In regard to the Hoffman thread deletion, it's worth thinking about censorship and the lack of transparency when something like this happens. Even though a forum moderator might have the right to pull the plug, there is still something to be said about the time and effort contributed by the membership of the forum. While it's within their rights, it might not be just. To close a thread to further comments if felt to be unfruitful, or to delete inappropriate, rude commentary, is reasonable. But IMO it should not be done because of disagreements, expressed unhappiness, or because someone argued a certain viewpoint. A society that values truth (even if it "offends" some) and the search for answers (even if the results contradict the agenda of others) must also fight for freedom of speech. No, MQA battles will not change the world at large, but even in our small corner of the Internet, my sense is that to completely wipe away many well-reasoned thoughts and comments is obviously draconian and speaks to some kind of insecurity. Hopefully the thread can be reinstated at some point, even if closed to further commentary, for the sake of posterity.

Around the same time as the MQA thread deletion, the Hoffman forum also pulled the plug on a very interesting poll question: "Do you think High res audio is an audible improvement over CD quality sound?". Here's a screen grab of the results up to January 12, 2018 with 813 votes (multiple selections allowed):

It is encouraging to see some sanity out there, with mastering recognized as being very important. Personally I would have voted "Maybe" with "Only if mastering is different/improved", which I think is consistent with my previous blog commentary on this topic.

Years ago I wondered if music labels could "keep it as simple as possible" and release two mixes/masterings of albums to target different purposes and listeners (i.e. a "Standard Resolution" dynamically compressed version for radio/lossy streaming and an "Advanced Resolution" master with retained dynamic range for those who value high fidelity). Obviously, not all music needs both versions. This would be similar to a movie studio these days releasing an SDR (Standard Dynamic Range) movie for Blu-ray and an HDR (High Dynamic Range) remaster for UHD Blu-ray. The difference is obvious, especially with higher quality screens. I believe only when there is clear differentiation between an elevated mastering standard and a typical release can there be any trust and "value" in logos and certifications as representing a product that's "better". Only then will anyone be excited about owning the higher quality "definitive" version of their favourite artist's music at a higher bitrate. Only then can the record labels be seen as doing something more than just selling the same thing yet again. Needless to say, IMO, MQA has no part in encouraging higher quality music production.

The way it is right now, depending on the music, I believe it may be possible to hear the difference with a recording captured and produced in 24/88+. These would have to be modern digital recordings. I stopped buying yet another version of Kind of Blue a long time ago - even if Peter Qvortrup thinks recording quality peaked in the 50's and 60's and loudspeaker performance peaked out in the 30's and 40's - LOL!

I'm not sure what degenerated in that hi-res Hoffman poll thread since by now, I suspect most audiophiles have had plenty of opportunities to hear high-res tracks and form their own opinions. Again, what a shame that open discussions were shut down.

Have a great week ahead everyone! Hope you're all enjoying the music...

Addendum - February 14, 2018
Looks like the Steve Hoffman MQA thread has been resurrected but closed to further comments as expected. Happy Valentine's Day.


  1. Let me do my BS imitation here:

    Human hearing can respond to as little as 5 microseconds of blurring. It however is not capable of infinitely steep response. Using a minimum phase filter that introduces 4 microseconds of blurring slows down the transients to near the ability of human hearing to respond within that 5 microsecond window. If you will, it humanizes the signal transients in a way that makes it easier and more natural sounding to the human ear. Leaving the ear to be slower by only a microsecond allows it to sound as if it were blurred by only 10 meters of air. It really is quite elegant.

    1. Hi Dennis. :-)

      Nice work rationalizing! Clearly you've been brushing up on the latest neuroscience to gather that insight into the benefits of "euphonic" encoding.

      But then if we look a little closer, we'd also have to rationalize whether it's OK that a 15kHz tone starts something like 15us later than a 100Hz tone if they were meant to be time coincident. I'm sure there are other great rationale...

      Folks, ultimately remember that we're talking about microseconds here. I'm certainly not freaking out about what MQA claims is audible or not (just like I find those who hear amazing improvements between standard PCM and decoded MQA rather hysterical). Just that I don't see a point in promoting something that actually seems to *degrade* the potential already inherent in what we have.

      Seems bizarre if not disingenuous for anyone to promote MQA in this light as "the next great thing"!

  2. Bravo Archimago,

    Your presentation and analysis is very clear, and I agree with virtually all your conclusions.

    I am an electrical engineer and a wannabe audiophile. My wife says I am a geek. But I bow down to your geekness :-) You, sir, are an Uebergeek!

    I love listening to recorded music. But, I am constrained by a budget and a recognition that my ability to hear is not perfect and getting worse with each year that passes.

    I listen daily to music on equipment ranging from portable bluetooth speakers to a system with separate components. I can hear audible differences in these systems on all music. "Everybody" who listens to both of them can hear the clear differences.

    But, when I listen to the same song on my top system from digital versions encoded with various forms of compression/encoding methods, somewhere between 192VBR MP3s and Redbook audio I can't reliably hear a difference. Without A/B comparisons, 192VBR MP3s sound pretty darned good! (to me anyway) Maybe me 10 years ago, or other people, can hear the differences at even higher resolutions and sample rates. More power to 'em.

    I participated in your MQA poll and simply could not hear a difference. It is so clear that MQA is a DRM scheme. Audio quality capture/reproduction beyond the limit of human perception is already possible with non-MQA means. Your articles have provided several examples. MQA is not solving a problem...Not one that I have anyway.

    If (as you have pointed out) we could apply better techniques to correct loudspeaker performance, THEN we would have something. If we could get better source files, we would have something. The next level of performance improvements will be with DSP corrected speakers.

    Keep up the great work of analyzing products and exploring the nuances of music reproduction.

    1. ABX listening tests tell you something, especially when there are obvious differences. But when talking about MP3/AAC compression, for example, in longer listening sessions you'll realize that there is something missing in their sound, especially at 256 kbps VBR or lower. This is not meant to promote some new voodoo (like MQA surely is), but in my experience, even though I usually did not detect MP3/AAC in ABX tests and actually liked them at first listen, later I realized they simply feel different from, especially, 24/48 FLACs.

      By the way, AAC is technically better than MP3, so if I do some lossy compression, e.g. for mobile or friends' use, I usually use AAC at 320 kbps VBR.

    2. Greetings James!

      I'm just a guy who likes his tech and enjoys using it as a pastime and distraction in the evenings after a day at the office doing the day job and once the kids go to sleep :-). Even more fun when I can contribute and meet interesting people like the audio engineer friend who provided me with most of the measurements in this post!

      Thanks for participating in the MQA test! Yup, what you describe is typical for the majority of us. 192kbps MP3 sounds quite good and for most music will be hard to distinguish from lossless. By 256 to 320kbps, I wouldn't want to do a blind test for fear of embarrassment and having my audiophile membership card taken away (actually, I think the "high end" guys already repossessed my card a few years back :-).

      I certainly am looking forward to more developments in the DSP space to see what kind of techniques come down the pipeline to tickle our aural fancy.

      Have a great time with the music, man!

    3. BTW, the best AACs I've heard are those I've bought on iTunes, since Apple has strict guidelines for studios and they are usually encoded from 24-bit masters. They're usually 256 kbps but quality is usually very good; in some cases they are clipped even less than CDs (as a result of those encoding guidelines). While FLAC is of course preferred, when talking about distributing lossy music online, iTunes is OK :)

  3. My issue with the deblurring is not specifically a technical one but a logistical one.

    Let's say that deblurring does exist and, from my understanding of MQA, they can correct this because they will know what ADC was used in the studio.

    They also talk about “remastering” but I get the feeling that this term relates to the deblurring process and not remastering in the traditional sense.

    MQA signed up Warner Music Group in May 2016 and Universal in February 2017.

    In Jan 2017, MQA titles started to appear on Tidal, with the initial release being ~500 albums, and we are up to ~7000 now (with duplicate releases at different sample rates).

    I have no clue what the mix is relative to how many are Warner and how many are Universal.

    Back to my logistical quandary.

    If the MQA provenance story is to be believed, to remaster an album for MQA requires (at a minimum):

    - Someone determining the location of and creating a copy of the digital master
    - Someone researching what ADC was used for the recording session
    - The MQA engine needs to be prep’ed with the deblurring metadata for the album
    - The MQA engine needs to be run (maybe several times for different sample rates)
    - If they are to be believed, someone (artist/producer) needs to sign this off
    - The new MQA digital images need to be sent to Tidal to be uploaded

    Let's assume (to make the math simple) that both Warner and Universal signed up in May 2016 (~1.5 years ago).

    There are ~2000 working hours in a year, so 1.5 years is 3000 hours.

    It's hard to believe that the end-to-end MQA process takes, on average, as little as ~2 hours per album.

    Yes, my calculation is based on a single man-year, but MQA doesn't have hundreds of employees, and even if they threw 20 people at this, a process like this still doesn't scale.
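    To put rough numbers on the scaling argument above, here is a small sketch (my own arithmetic, using the comment's figures of ~7000 albums, ~1.5 years and ~2000 working hours per person-year; the headcounts are hypothetical):

```python
# Person-hours available per album under different (hypothetical) headcounts,
# using the rough figures from the comment above: ~7000 albums in ~1.5 years,
# ~2000 working hours per person-year.

def hours_per_album(staff, albums=7000, years=1.5, hours_per_year=2000):
    """Average end-to-end time budget (hours) per album for a given headcount."""
    return staff * years * hours_per_year / albums

for staff in (1, 5, 20):
    print(f"{staff:2d} staff -> {hours_per_album(staff):4.1f} hours per album")
```

    Even with a (generous) 20 people working on nothing else, the budget comes out to under nine hours per album for every step listed above, ADC research and artist sign-off included.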

    Therefore, I have to believe that no deblurring specific to the recording-session ADC is generally happening, and that they are just batch processing these with a generic algorithm… which basically blows a huge hole in the MQA provenance story.


  4. As a follow-up, I just crunched the numbers.

    There are 8610 MQA albums on Tidal which, when you eliminate duplicates due to different resolutions, gives us 7406 unique albums.


    1. Wow. Awesome work with the number crunching, Peter!

      Interesting to know that we're looking at around 7500 albums on Tidal these days. From what I see on the Wiki and elsewhere, Tidal offers something like 48M tracks, so if an average album has 12 tracks (conservatively), that's about 4M albums. This means only 0.2% of the Tidal library is MQA encoded...

      Does that look about right to you guys?
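      As a quick sanity check of that estimate, here's the arithmetic spelled out (my own sketch, using the round numbers above):

```python
# Fraction of the Tidal library that is MQA, from the round figures quoted:
# ~48M tracks, ~12 tracks per album (conservative), ~7500 MQA albums.

tidal_tracks = 48_000_000
tracks_per_album = 12
mqa_albums = 7500

total_albums = tidal_tracks / tracks_per_album   # about 4 million albums
mqa_fraction = mqa_albums / total_albums         # share of the library in MQA

print(f"~{total_albums / 1e6:.0f}M albums; MQA share ~{mqa_fraction:.2%}")
```

      That works out to roughly 0.19%, i.e. about one album in five hundred.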

      Peter, I agree... There's really no reasonable expectation that MQA would be able to take a final mastered 24-bit 44/48/88/96/176/192 album, figure out all the ADCs used along the way (not to mention possibly examining the microphones, DSP plug-ins, etc. and their time-domain qualities), do a customized deblurring, and then even have someone listen to the darn thing to make sure it sounds good - much less ask the actual engineer or artist to "sign off" on the sound quality!

      The vast majority must be batch processed. The only thing "authenticated" is the unique cryptographic stamp encoded in each track so the green/blue light turns on. ;-)

      One final comment about the complexity of music these days. We must remember that a modern pop/rock song might have something like 6 tracks for drum mics, maybe 2 guitar tracks, a bass track, a lead vocal, maybe 2-4 backing vocals, 2-3 keyboard/synth tracks. Like I said this gets very complicated, very fast! For MQA albums like Bruno Mars and Beyonce, does anyone honestly think the MQA encoding was able to accurately "repair" time domain errors in every one of those tracks that eventually got mixed down into 2 channels; much less account for whether minimum phase EQ was used here and there, or if the ProTools plug-in "blurred" the sound, etc.!?

      At best, I imagine a "simple" acoustic recording like those from 2L with minimum processing might benefit from enough data about the recording chain to plug into the "de-blur" DSP. But I seriously hope MQA doesn't think music lovers would be so naive as to think what they're proposing would pass judgment without lots of us scratching our heads!

  5. Archimago, great work. This knowledge helps us focus on what is important for digital audio and de-emphasize things that do not help. The whole industry should focus on less loudness war, distributing music in lossless FLAC as close to the studio master as possible, etc. Not creating new solutions that, under deeper analysis, create more problems than they solve.

    1. Absolutely Honza.

      Which is why if we take a step back and look at MQA, it doesn't make sense at all why they want to create this compromised "new solution"... That is, except for the potential for DRM which I can imagine would be quite an interesting proposition. "Distributing music in lossless FLAC" as per status quo is perhaps undesirable.


      1. "High resolution" files, though advertised as such, are no longer truly hi-res (something like, at best, 17 bits to 24kHz + lossy ultrasonic reconstruction to 48kHz). The labels don't have to release their "crown jewels" as per Spencer Chrislu. The "crown jewels" can then be resold to you again down the road of course - "Audiophiles, you've heard MQA hi-res, but step right up and we'll now give you ULTRA-HI-RES with the FULL 24-bits!!!"

      2. The music now has the cryptographic signature embedded. Other than the green/blue light, we don't know what else can be done. In any event, "tagging" is now in place.

      3. Computer playback software and MQA firmware on your DACs will now be unified under one decoding algorithm. This means future "versions" can be implemented with software/firmware updates. It is worth commenting on the power this provides if they wanted to implement new "features" down the road. Certainly if I were a corporation, I'd just love to have my company's proprietary codec on every device and potentially dictate how and what files get "unfolded" to higher resolution.

      Of course MQA is a business and if those points above could get them more clients, that's what businesses do...

      I don't know if the above points are what may be the "solution" to the problem of piracy being discussed behind closed doors (it's certainly plausible, right?). But if so, as a consumer, I would have much preferred if the record labels and MQA came out and just said so instead of trying to pass off this Frankenstein's Monster of an audio codec as if it offers better sound quality!

  6. What if... looking at the playback part of a chain (MQA is said to be a whole chain) is not the proper way to go about testing MQA?

    A bit similar to the old cassette days where one could record in dbx or Dolby B or C or S or HX or High-Com. In order to get the original signal back you had to play it back through the same specific decoder.
    Playing either a non-encoded signal through a decoder, or an encoded signal without a decoder, simply does not yield proper results.

    It looks like 'we' (us mortals, not being Bob Stuart) are only looking at part of a chain and conveniently forget, or can only guess, what happens in the encoding part.

    What if the 'secrets' in the encoding (actually sampling) part, aside from the elaborate 'recording characteristic voodoo' and subsequent folding/compression/encoding thing, simply consist of applying the reverse phase response of the playback filter within the MQA encoding process?
    Without affecting frequency response of course.
    Would that not be a very simple way to get back to 'linear phase' without the (inaudible to me) pre-ringing and absent post-ringing, while maintaining the measured 24us risetime and encapsulating that in a 44.1/48kHz 'stream'?

    What would a squarewave and needle pulse look like on digital sample level inside the chain when it went through the encoder?
    What would the analog end result squarewave and needle pulse look like passing through the entire chain and its phase response?

    I assume all of this has been suggested elsewhere on forums/websites etc. already, but I am not following this whole MQA stuff (except here) as it does not pique my interest.

    The biggest real world issues in audio are on the transducer, recording and mastering side. Making a proper sounding recording is not an easy feat.
    The reproduction side can still be improved upon by the consumer, the first part is out of our hands.

    Of course I suspect Bob Stuart could easily come up with 'proof' and a 'technical explanation' if he wanted to, but it looks like he would benefit more from building a myth people tend to buy into than from spilling the beans.
    Why has Bob Stuart not shown us before-and-after waveforms or given us mortals comparative material? Fake news?

    1. Hi Frans,

      Indeed there are significant pieces that we need to know about the encoding side which we likely will never have access to - like the idea of a reverse phase manipulation in the encoding itself. The logical question of course is "Why bother having a minimum phase reconstruction filter to begin with?" if promising accuracy in the temporal domain?

      You raise good questions to ask of Bob Stuart. Questions I would love to hear the answers to as well! What does a square wave look like "analogue to analogue" from an ADC to DAC through MQA processing (compared to standard modern studio gear)? What is the difference? And of course where is the improvement?

      "The biggest real world issues in audio are on the transducer, recording and mastering side." Agree.

  7. After watching the RMAF video, it seems clear that MQA was created in order to solve a problem for the music industry... to fold together all the available codecs and resolutions into one format, which would essentially encapsulate all options in one file. This would save the music industry all the trouble and expense of maintaining the 57 variations on a file that were necessary to serve customers. I would guess that the claim, aimed at consumers, that sound quality was improved by MQA was a secondary but very important marketing angle.

    1. Hi Martin,
      Yeah, we could attribute the underlying rationale simply to "keeping it simple". But as I suggested, why not just two SKUs: one serving those who need standard resolution and another for those desiring a "higher" standard? I suspect there is more to it than this given the gradual decline in the music industry...

      The sad part I think is that "they" apparently are deciding to use a highly compromised solution like MQA technically. And I think it's unfortunate that MQA is using such poor advertising, with fundamentally questionable claims that are hard for the "audiophile public" to swallow.

      As I said in the Hoffman forum before the thread was deleted, I would have much preferred that the music industry came out and said something like this:

      "Alright music lovers. We are an industry, we want to make money, artists need to make money, so to combat piracy, we need to institute some form of copy protection like what the movie industry has done with AACS.

      By 2020, all hi-res lossless releases will be encrypted with the HonestPlay encoder instead of FLAC or other lossless format. It will be state-of-the-art and it will maintain full resolution - 24-bits, high samplerate, etc... And for streaming it can be used to maintain the security of lossy encoding while providing excellent quality.

      We will license this codec to all hardware manufacturers and software developers at very reasonable rates. All the major recording labels are signed on. We expect the next generation of DACs and devices later this year will incorporate the authentication features and you will see a blue light or 'HP' indicator on your new device to ensure full resolution decoding. Our Chief Technical Officer will now take the podium for the details and take any questions you may have..."

      I'm sure people will be unhappy, there will be forum debates, the pirates will be sharpening their coding skills to find the crack... But at least if that's what they want to do, it will be honest and transparent; perhaps even respectable.

  8. Wow, great analysis, gives a lot of food for thought. Once again for all those who have downloaded pure PCM files in their original format, MQA is still a lossy codec. That said from a streaming perspective, from what is presently available, for the size of file, and the sound quality it delivers it still sounds great and I am tickled and happy for its existence. If vendors can stream music on a platform that sounds better, at a comparable price, I am ready to jump ship. Doctorrazz

    1. Hi DrrAzz,

      And that does bring up an important point about streamed music. What is the quality we actually "want" or even "need"?

      I can speak for myself in that the music I "love", I will "own" as a CD, SACD, HD download, and those songs all live on my personal music server for relaxed, attentive, nighttime listening on the main sound system. I use streaming (usually from Spotify) as background music, kitchen music, jogging music, stuff I listen to on the subway (Skytrain here in Vancouver), etc...

      Seriously, of all the time music is streamed, how much of it actually is "consumed" in a situation that can benefit from hi-res anyway!? If very little, then why not just stay with 16/44 lossless at best. Forget about promises of "hi-res" because rarely is it consumed in a fashion where the nuances make any difference. And certainly stop with the inconvenience and extra cost to the consumer by increasing unnecessary data bandwidth use and proprietary decoding - remember, MQA streams suck up 30% more data bandwidth than just a straight 16/44 FLAC off Tidal.

  9. Hi Arch
    It still amazes me how some can't see through the marketing spiel and lack of evidence from the purveyors of MQA. The talking points are obviously directed at a section of the community that believes in certain audiophile myths. As you point out, any timing errors of, say, 16/44 are insignificant compared to the timing errors and phase shifts from mechanical analog recording or playback; perhaps a comparison of time smearing/phase etc. between 16/44 PCM from a $100 DAC and a top line turntable/cart and reel-to-reel would be enlightening.

    It is sad that the MQA thread disappeared from the Steve Hoffman (SH) site, though hardly surprising. Anything which challenges audio woo does not last long on that site. I was following that other thread you referenced, i.e. the difference between CD and hi-res, and it was a good robust discussion. It wasn't just the thread that disappeared; several posts magically disappeared while the thread was going, mainly the ones that dared to suggest expectation biases and placebo effects could be the cause. There was one guy, who sounded like he was an engineer, who posted about a double blind test conducted in a top Melbourne studio in the early 90s in which they could not tell the difference between three CD production masters and their master dubs, two analog and one DAT. This contradicted their host's earlier assertions that CDs cannot faithfully reproduce a master (whereas he claims a vinyl record can... unbelievable).

    I used to participate on that site but I got tired of having my threads/posts deleted when I used science and technical arguments in debates. I once challenged one of their moderators on a deletion and he basically told me that this is a site for analog, not opinions about digital - why don't they put that up front as a rule?

    The forum is still useful for finding music and better masterings of an album as there are some very knowledgeable people there. I usually choose masterings where there is a broad consensus of being "best" and avoid individual suggestions that claim it is good because it is "analog sounding"; they typically sound veiled with no top end.

    1. Thanks for the note and comment about the "can you hear hi-res poll" thread, Prep.

      Yeah, I always liked the Hoffman site for music recommendations and it's nice to see the viewpoints in the hardware section as well. It is unfortunate that "audio woo" with no evidence or apparent basis in reality does get pandered there and scientific/objective talk tends to get biased with deletions and thread closures. I appreciate Hoffman's work but I think there's a need to re-calibrate beliefs there for sure.

      Good point about comparing time-domain between analogue and digital... It would be no contest of course! A number of months ago, I posted this comparing vinyl to digital:

      Clearly, at least on my system, we can easily see the interchannel 3.150kHz temporal anomaly with the vinyl playback, the much higher noise floor, 2% harmonic distortion with vinyl, obvious wow & flutter... There's simply no comparison in terms of fidelity; microsecond-level smearing even with sharp minimum phase filters is child's play compared to the anomalies in the world of analogue :-).

      Speaking of wow & flutter... Here's my comparison between turntable and digital:
      Technics SL1200:

      Roksan TMS:


    2. Hi Arch

      Thanks for your comments. Yes I have read your tests previously, they are always evidence based, well argued and you have a very engaging writing style.

      Of course, the evangelists would respond that your tests would be different if you used a higher-end TT and cart, but it would still not match PCM on any test of accuracy. And there is the value equation: how much does one need to spend to get close to the accuracy of a CD player? In any event, no matter how good the TT, it will still face the real-world limitations of the vinyl record.

      I know from my own experience that diminishing returns quickly set in after a certain point. My analog gear costs far more than my digital front end, with higher maintenance too. I have been progressively upgrading my TT over the decades to the point where today it would cost around $12k to get another to compare it with (it is a totally upgraded Linn LP12 running a top line EKOS arm and a Benz Wood SL MM cartridge into a top line Yamana pre-amp, circa 90s). It sounds great, but one thing I noticed is that with each significant upgrade, the closer it sounded to good digital, yet it can't quite match the sound quality of my mid-range NAD CD player playing a well-mastered CD.

      Sorry for being a bit off topic, but given that a lot of MQA albums were originally made from an analog source, I wonder why Stuart never gave any thought to the challenge of going back to the source tape machines and analog consoles to correct the variations in phase, distortion and time-domain errors, which are orders of magnitude more significant than PCM's, if he really believes in time smearing as he defined it.

    3. Hi Prep,

      "It sounds great but one thing I noticed is that with each significant upgrade, the closer it sounded like good digital, yet it can't quite match the sound quality of my mid range NAD CD player playing a well mastered CD." Ouch :-)!

      Them's fighting words for the hardcore vinyl audiophiles! Regarding how analogue-sourced material is processed, it's certainly a fair question and one for MQA to tell us whether any effort is made at all to correct analogue time distortions (along the lines of the Plangent Process perhaps).

      From an objective perspective, we can also look at the wow & flutter measurements Michael Fremer performed with very expensive gear like the recent Technics SL-1000R ($20,000):

      Ultimately there is no way that time domain accuracy can be achieved with vinyl close to even the least expensive of digital devices. I don't expect those who are longtime believers in vinyl superiority to necessarily change their tune or switch preference, but I do hope that at least they can appreciate and acknowledge the facts.

  10. I don't understand, WHY they convert 24/44 masters to the MQA domain for Tidal?
    They put some noise in the 19-20 kHz area. Fine, that works for "unfolding" 24/96 masters, but for 24/44, what's the point? On the Tidal desktop player we get an upsampled 24/88 PCM stream with bad aliasing and lots of clipping points.

    1. Correct.

      There is IMO no point to the conversion of 24/44 or 24/48 to MQA. It obviously will not save storage or improve bitrate. And as you noted it causes all kinds of side effects along with reduction of actual bit depth.

      Other than the "authentication" piece, the only excuse MQA has to do this is to claim something or other about "deblur" I suppose. Again, I see no evidence that they're able to do this - especially with the pop albums that are released as 24-bit 44/48kHz!

  11. We're so far into the digital format era, yet the industry cannot keep up and actually provide -23 LUFS NON-LIMITED audio material to the public with embedded ReplayGain tags that would give the user the CHOICE to compress/limit according to their liking.
    Why is that? Is it due to the poor adoption of RG? Then think of something else dammit! :)

    How depressing is the future really? I played back an 18-year-old album (!!) which has been limited to oblivion (DR6) and, while there was some nostalgia factor, I would not consciously pop this one into my CD transport for any other reason.
    A new generation is growing up in a music world which cannot appreciate the intrinsic importance of transients and dynamics, regardless of genre.

    Gotta play some Michael Hedges now, sorry for the ramblings.
    And of course, hats off to Archimago for the article.

    1. By the way great Sir, can I somehow contact you regarding something?
      I would type it here, I just don't want to stray too far off-topic.

    2. Hey SUBIT,
      Agree with the above.

      Feel free to PM me on Computer Audiophile. We'll chat...

  12. A major MQA exec recently argued to me on Facebook that MQA is a "point of view"... I'm not sure what that means, except another way to hide behind bullshit. I continue to not authenticate my mastering via the MQA processing on any releases with my work... which is being done in bulk by the labels at this point.

    1. Hello Brian!

      Thanks for visiting and dropping a note about this. Yeah, for a technology and engineering firm, MQA/Meridian seem to be using a lot of vague terms like "point of view" or "philosophy". I'm not sure if we're supposed to be impressed or somehow feel more secure with this awareness!

      So... When these labels convert, can the MQA exec tell us *who* is doing the job and in what way these individuals are providing "qualitative authentication" rather than just technical "cryptographic authentication" of the files???

  13. With respect to the Hoffman thread deletion, I think you'll agree it was becoming a trash fire by the end. Lee kept provoking a fresh round of gang stomping, then that truly unpleasant derail into the actions and character of deceased individuals. I personally would have deleted just those last nasty posts and then locked the thread, but I try not to second guess mods. They've got a hard job.

    1. I didn't see how that thread ended but, as you say, wouldn't it have been preferable to just delete the nasty posts, just as they deleted the not-nasty but science-based ones, and lock down the thread?

      You are right though that the mods probably do have a hard job, but the majority of them are biased. I don't know if it still happens today, but in the past, when someone started a thread about the best release of a certain album on CD, another would inevitably thread-crap it with glib comments like "vinyl is the best" and get a free pass. Yet when someone thread-crapped a best vinyl pressing thread with "CDs are better", that person would be locked out of further posting on the thread and more than likely have their post deleted. That happened to me once on a thread where some members were having wet dreams about how analog tape is way superior to any digital mastering process. I butted in and simply asked the innocent question of why that would be, when 24/96 exceeds even the best of the best 30 ips machines on any objective measure, and I was locked out with the post deleted. It happened all the time to many people and sadly, like me, they no longer participate but just lurk to get music information.

    2. As a mod on another board, I'd have done exactly what Prep said - removed only the offending posts.

    3. Yup. Agree with the above. I am thankful that over the years here, you guys have been very civil and well-mannered.

      At most I would only consider deleting rude, personal comments.

      Sure, the MQA thread with LeeS was deteriorating, but deleting a few comments here and there and then locking the thread would not have been hard, I think. But why delete the thread entirely? What is there to fear in hosting those messages, or in a forum allowing those thoughts to exist as a representation of those who participate? Of note, as I recall, there were many participants in that thread, so it wasn't like only a handful of guys were talking back and forth. The only purpose I can see to doing this is to censor, or somehow reduce the impression that opposition towards MQA is strong. That opposition can already be seen easily among the self-identified "audiophiles" (witness all the opposition at places like Computer Audiophile, the Stereophile webpage or TAS) and on broader forums like Steve Hoffman where many general "music lovers" also participate.

  14. Here it goes:

    1. Hey Raiker,
      Yeah... I saw that earlier this week. Nice - Lavorgna asked Bob Stuart the question in 2016 and again recently, but didn't get much of an answer from the looks of it.

      Seriously, would the official answer from MQA be anything different?

      Yes, what we're seeing currently is that the digital signatures are being used as "authentication" to playback through a decoder and turn on a blue/green light. But it doesn't take a genius to recognize that "authentication" is a pre-requisite for broader forms of control. Mr. Lavorgna may not be a "conspiracy theorist" but I think it's just as bad to fail to be prudent and consider the big picture and maybe ask the harder questions. Worth doing IMO before suggesting that this file format should be entrusted with widespread adoption...

      I believe he should have asked Mr. Stuart:
      Can you assure all consumers that there will in fact *never* be any form of copy protection, an update to the system that could lead to sharing of file usage data, or even further reduction to sound quality when played back in a non-decoded fashion?

      And can you comment if MQA and its partners have ever considered these possibilities?

      I think the answer to that would be much more interesting!

      Ultimately, the MQA file format is a closed system that provides a layer of security that hides the underlying "hi-res" data from free access. MQA retains the rights as to how they deal with music labels, manufacturers and software companies. There's nothing wrong with this - they have a right to market the scheme and make profits. However, in a world where we have enjoyed the freedom to play our music across devices, losslessly convert to whichever format we choose, and have access to the full resolution, clearly MQA is a backward step for consumers.

      That loss of freedom makes MQA inherently different from FLAC, WAV, AIFF, APE, ALAC, etc. Whether we call this "management" of our digital rights/privileges/freedoms or use another term, it certainly does reduce creativity, competitiveness, and presents an inconvenience to customers.

      In my mind, since I think the "format" is simply objectively a compromised container that degrades sound quality, I would not be interested in supporting it even if they did release it freely!

  15. Now I can get the Tidal files and have done a little experiment
    (use Google Translate from Russian)

  16. Excellent, in-depth look! I was really interested in finding a source for the picosecond figure for quite a while; I'd heard it mentioned on a forum and it's not too hard to test in an editor like Audacity (or at least, some figures close to it, certainly far shorter than the microsecond scale!)

    Your articles go into such great detail and are a joy to read -- thank you for writing them! (and for your unrelenting and thorough analysis of the articles at hand)

  17. A 50us delay between low frequencies and high frequencies could change the sound. It is enough to reverse the polarity of a 10kHz waveform compared to linear phase. So basically, the 5kHz to 10kHz range of music has all its first harmonics (10kHz to 15kHz) more or less reversed or out of phase - you can bet this is audible in the way transients sound (provided you have fairly linear-phase tweeters in that frequency range, which is not a stretch).

    Some may prefer the reversal of the phase of harmonics above 10kHz, but this is hardly high fidelity. It is not how the instrument would sound naturally, with natural timbre.
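    The polarity claim above is easy to check arithmetically: a fixed delay dt corresponds to a phase shift of 360 × f × dt degrees, so 50us is a quarter cycle at 5kHz and exactly half a cycle (a polarity inversion) at 10kHz. A minimal sketch of that arithmetic (my own illustration, not MQA's math):

```python
# Phase shift accumulated by a pure tone delayed by a fixed time offset.
# A 50 us delay is half a period at 10 kHz (period = 100 us), i.e. a
# 180-degree shift - a polarity inversion relative to a linear-phase response.

def phase_shift_deg(freq_hz, delay_s):
    """Phase shift in degrees: 360 * frequency * delay."""
    return 360.0 * freq_hz * delay_s

delay = 50e-6  # the 50 us group-delay figure from the comment
for f in (5_000, 10_000, 15_000):
    print(f"{f / 1000:4.1f} kHz -> {phase_shift_deg(f, delay):5.1f} degrees")
```

    Note how the shift grows linearly with frequency, which is exactly why a constant delay between lows and highs cannot be a simple polarity flip across the whole band.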