Wednesday 11 March 2020

MUSINGS: "Which measurements matter?" Importance of considering the complete audio chain. And a listen to room DSP filters...



Very busy these last few days so I thought let's address an interesting discussion question I've seen over the last while about measurements and audibility. Here's a forum comment from tapatrick over on Audiophile Style the other day:

I think you have done a good job in your blog post of covering all the relevant areas involved from music production to listening. And maybe the conversation is now concluded. If only everyone would acknowledge and respect each other's 'intent' as you put it then there would be a lot less misunderstanding. I am none the wiser which measurements matter but I have to say I'm now clear that measuring and analysing equipment outside of listening leaves me cold. I will leave that to others more qualified but I will keep an eye on developments...
Hi tapatrick,

I appreciate the post. There is something you highlighted in the comment that I think is worth spending more time thinking about; this is of course about the question of measurements and how these correlate with subjective experience.

As per the blog post last time, I think it's important to remember that those 3 parts to audio - PRODUCTION, REPRODUCTION, and PERCEPTION - each will have a role to play when we conceptualize the question "Which measurements matter?" precisely because the measurements we can perform on the hardware can only inform us about that piece of REPRODUCTION gear, and not necessarily on the full "Audio Trinity".

As I said last time, I think we are blessed these days with very low distortion, controlled frequency response, low noise, minimal crosstalk, and minimal time-domain anomalies, such that the majority of reasonably priced gear (I'm not talking about the cheapest of the cheap of course!) can already be said to "sound good" when used and set up appropriately. Anomalies in distortion, noise, crosstalk, and frequency response are, I believe, rather straightforward to correlate subjectively when they're large enough. The problem, I think, is that the majority of audiophiles are not doing their own measurements and so have not had the opportunity to correlate objective results with subjective audibility themselves; the only remedy is experience, and that requires some work on your part rather than relying on the opinions of others. Further compounding this, the "high-end" Industry keeps telling us that there are all kinds of significantly audible differences that measurements cannot catch! I do not believe this is true in 2020 for most devices.

I know in a thread I came across recently, folks asked "Which measure correlates with soundstage?". A great question which leads us down the path of a complex perceptual phenomenon! Let's use that to consider the role of measurements and subjective impressions...

From start to finish, music recordings are a product of those three major parts above, so the answer to many such questions will necessarily be much more complex. A concept like "soundstage" is not a phenomenon that can be answered by examining the REPRODUCTION components (the products we as consumers purchase) in isolation. A review claiming "this $2000 cable will give you a bigger soundstage than that $100 one" is therefore meaningless.

Instead, we have to ask ourselves what elements we need to consider for a complex perceived phenomenon like the "soundstage". First, we must always remember the importance of the PRODUCTION side. Since any subjective quality one might experience must first be encoded in the source recording, soundstage quality must first be captured by the recording and production techniques. This should be no surprise and is in fact the topic of the 1981 John Atkinson article "The Stereo Image" in which he wrote about various recording techniques like multi-mono stereo, binaural, the Blumlein mic technique, and spaced omnis (does anyone write technical audiophile articles like that in mainstream magazines anymore!?). Here's a more recent discussion on Reddit about microphone positioning to create soundstage. These days, like it or not, recordings are often made close-mic'ed and "dry" so they can be mixed to produce a soundstage artificially. Furthermore, digital tools can be used to adjust front-to-back depth; check out this article from Sound On Sound. As usual, "garbage in, garbage out" if the PRODUCTION is done poorly.

As for our PERCEPTION of soundstage, binaural hearing obviously requires that both our ears work properly in order to provide the amplitude, frequency, and temporal cues for the brain to decipher. For example, an important part of hearing acuity has to do with one's frequency response - have a look at an audiogram.

Here's one from a family member a number of years ago to check hearing about a year after a severe ear infection with fluid effusion behind one of the ear drums:


This is an example of a typical pure-tone audiogram, used primarily as a screen of the speech frequency spectrum, approximately 500Hz to 4kHz, and typically tested up to 8kHz. For reference, "normal" hearing thresholds for kids can go up to 15-20dB, while for adults they can be up to 25-30dB on the graph. By the time this audiogram was done, the infection had cleared and hearing was subjectively reported as back to "normal" by the individual.

As you can see, the "O" line represents the right ear, and the "X" is the left ear. If you've ever had your ears tested, you'll notice just like the graph above, typically our ears are not perfectly equivalent. In the graph above, the left ear overall is more sensitive than the right consistently by about 5dB or so (indeed the ear infection a year back affected the right ear more).

With the audiogram above clinically within "normal" limits, the family member is back to usual life including doing musical performances, and her brain/mind has obviously adjusted to the interaural discrepancy such that subjectively "everything sounds back to normal".
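Out of interest, the simple "within normal limits" screen described above can be sketched in a few lines of Python. The thresholds below are made-up illustrative numbers (not the actual audiogram shown), and the 25dB adult cutoff is just the commonly quoted screening figure, not clinical guidance:

```python
# Sketch: screening pure-tone thresholds against a "normal" adult cutoff.
# Hypothetical numbers for illustration only - not a clinical tool.
SPEECH_FREQS = [500, 1000, 2000, 4000]  # Hz, the core speech range

def within_normal_limits(thresholds_db_hl, cutoff=25):
    """True if every speech-frequency threshold is at or below the cutoff (dB HL)."""
    return all(thresholds_db_hl[f] <= cutoff for f in SPEECH_FREQS)

right_ear = {500: 15, 1000: 15, 2000: 20, 4000: 25}
left_ear = {500: 10, 1000: 10, 2000: 15, 4000: 20}
print(within_normal_limits(right_ear), within_normal_limits(left_ear))
```

Notice that even when both ears "pass", nothing in such a screen captures the ~5dB interaural asymmetry discussed above.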

But here's the tough question worth thinking about... If this person were an audiophile (and I bet many audiophiles out there have audiograms worse than this), would you have any reservations about her reports of how she perceives the "soundstage", given that we do need both ears to "work" well for "accurate" perception? What if she used phrases like "immersive soundstage", described a pair of speakers as "effortless", or reached for even vaguer subjective words like "magical"? How would we determine whether she provided an accurate testimonial (review) of what was heard? If she were to apply to write "professionally" for a well-known audio magazine, what criteria would be used to gauge her aptitude beyond an ability to write entertainingly and a familiarity with audiophile lingo? Would a person with such an audiogram still be eligible as a "Golden Ear" deemed capable of differentiating cables and bit-perfect audio streamers?

Unlike measurements of speakers, where we can easily capture data over thousands of points and speak with some authority about concepts like high fidelity or "accuracy", the measurement of human hearing, at least clinically, is nowhere near that level of detail. I bet if we measured the hearing acuity of audiophiles, we would be able to point to a number of individuals and wonder: "So Mr. Golden Ear Audiophile, are you sure you can hear a wider soundstage using this USB cable with that kind of audiogram, as you claimed in that review!?"

We can calibrate our measurement devices to verify objective performance, but is the human "instrument" calibrated, with adequate insight into his or her own hearing? With what confidence, then, should we treat testimonials? For the "more objective" audiophiles, the answer, I think, needs to be one of healthy skepticism unless the person is a trained listener who can demonstrate listening skills.

Finally then, when it comes to REPRODUCTION equipment and a higher-level phenomenon like "soundstage", we have to think about whether the source device is "bit perfect" and distortion-free, whether the amplifier is accurate with precise channel balance, and whether the speakers are likewise well-balanced across the audible spectrum, able to reproduce the dynamics without distortion, and have adequate time-domain performance (ideally precisely time-aligned). Furthermore, we need to consider the room quality, plus setup choices like how far apart the speakers are and details like tilt and toe-in appropriate for the device. Of course, the audiophile should then be seated in the "sweet spot". If we do all these things, then we will simply recover (REPRODUCE) the "soundstage" that was embedded in the PRODUCTION, hopefully PERCEIVED with high-quality ears and mind!

In summary... My experience has been that measurements already correlate nicely with sound quality once you appreciate and account for the production quality, have your gear and room reasonably sorted out, and recognize the perceptual limitations of your own ears and mind. As far as I can tell, the folks who disagree most with this are those attached to the audiophile Industry who seem to think there are still significant amounts of "magic" out of reach of measurement instruments (including null testers and software like Audio DiffMaker or Paul's excellent DeltaWave). This is how they justify unusual products and illogical claims while of course never providing evidence (often deferring to circular reasoning because the "right measurements" have not been discovered!).

[Speaking of evidence-free, unjustified products, consider this recent review of the UpTone EtherREGEN. By all means, evaluate the claims in this wordy white paper. May I humbly suggest that there are better ways to spend US$640 than on a rather odd ethernet switch which IMO cannot change the jitter of one's DAC output to any significant degree? I would suggest waiting for Mr. Swenson to release his measurements first ("I look forward to the measurements data to back up the claims, which Swenson says he will publish soon.") if you're still interested in pulling out the credit card. As far as I am aware, he never released measurements for his UpTone USB Regen despite statements to that effect years ago as I recall.]

Alas, as a hobbyist, the only way to confirm whether what I say is true is to start doing your own measurements and experience it for yourself.

--------------------

The other night, it was great to have Mitch Barnett (mitchco) over to catch up on life, enjoy dinner together with our significant others, and of course listen to some music on the system!

As I had promised Le Concombre Masqué on the thread "Boundlessly abrasive offer to DSP work gratis" (which I believe has since been deleted from Audiophile Style), I listened to some FIR filters he had created using his rePhase-based technique, working from sweeps captured in my room with REW and the miniDSP UMIK-1 microphone.

Here's what my speaker/room pink noise graph looked like spatially averaged using the Moving Mic Measurement (MMM), recorded from the main listening position with all the furniture in place:


As expected, a fair number of room modes can be found in the lower frequencies below 300Hz.
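For readers who want to estimate where their own modes should fall, the axial (single-dimension) mode series is simply f_n = n·c/(2L) for each room dimension. Here's a minimal sketch using made-up room dimensions (not my actual room):

```python
# Sketch: axial room-mode frequencies below 300 Hz.
# The 6 m x 4 m x 2.5 m room below is a hypothetical example.
C = 343.0  # speed of sound in m/s (~20 degrees C)

def axial_modes(length_m, f_max=300.0):
    """Axial mode series f_n = n * c / (2 * L), listed up to f_max."""
    modes = []
    n = 1
    while True:
        f = n * C / (2 * length_m)
        if f > f_max:
            break
        modes.append(round(f, 1))
        n += 1
    return modes

for name, dim in [("length", 6.0), ("width", 4.0), ("height", 2.5)]:
    print(name, axial_modes(dim))
```

Tangential and oblique modes add further peaks and dips between these, which is why measured responses like the one above look so busy down low.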

I provided Le Concombre with 13 sweeps taken from around the sweet spot as shown here based on the Dirac Live 2 manual with additional data for further right and left:

Out of interest, here is the left-channel frequency response from sweeps 1-9, which encompass the "sweet spot" (or "main position") and the "box" extending about 12" to either side and 6" above and below that position.

Orange line = "sweet spot".
As you can see, moving the microphone around that main position within a 2-foot x 1-foot box results in significant changes (1/6 octave smoothing applied). While objectively obvious, I would suggest not getting too obsessed with the idea that moving a speaker 1" to one side or the other suddenly makes a massive change to the sound quality. I suppose if your speakers are really close to room boundaries, moving 1" further out might have some audible effect, but let's not get too dramatic!

Thank you Le Concombre for creating two filters for me to try based on Harman RR and RR1 loudspeaker curves:


Listening to a few "audiophile" tracks like Stevie Ray Vaughan's "Tin Pan Alley" (Couldn't Stand The Weather), Patricia Barber's "Regular Pleasures" (Verse), and Yello's "Planet Dada" (By Yello: The Anthology) in Roon, it wasn't hard to differentiate the sound of Le Concombre's two Harman curves from an Acourate FIR filter I made, and from one Mitch created from the same REW sweeps using Audiolense.

There's no point describing for you what I heard when I can show you the frequency response differences using a white noise signal averaged over about 30 seconds: [Subwoofer off.]

White noise. No DSP vs. My Acourate FIR vs. Le Concombre's RR curve vs. Mitch's Audiolense. Mitch tells me there was an issue with his Audiolense filter not importing my mic calibration properly so it could be significantly improved. 64k-point FFT.
I'll let you imagine the sonic differences as an exercise. Let's just say what Mitch and I heard using these filters were consistent with the graphs. Overall, I preferred the sound of my Acourate filter. Perhaps I'm just biased :-). Nonetheless, this was a nice demonstration of the power of DSP in changing the sonic character of one's system to taste. To me, subjective preferences play a role in which "house curve" I would choose.
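For the curious, the averaged white-noise measurement in the graph above works by chopping the ~30-second capture into 64k-sample segments, windowing, FFTing, and averaging the power across segments. This is the general segment-averaging idea (à la Welch), not the exact implementation in my measurement software:

```python
import numpy as np

def averaged_spectrum(x, fs, nfft=65536):
    """Average the power spectrum over consecutive nfft-sample segments
    (Hann window, no overlap) - the same idea as averaging a ~30 s
    white-noise capture with a 64k-point FFT."""
    win = np.hanning(nfft)
    n_seg = len(x) // nfft
    acc = np.zeros(nfft // 2 + 1)
    for k in range(n_seg):
        seg = x[k * nfft:(k + 1) * nfft] * win
        acc += np.abs(np.fft.rfft(seg)) ** 2
    psd = acc / n_seg
    freqs = np.fft.rfftfreq(nfft, 1.0 / fs)
    return freqs, 10 * np.log10(psd + 1e-20)

# ~30 s of synthetic white noise at 48 kHz stands in for the real capture
fs = 48000
x = np.random.default_rng(0).standard_normal(fs * 30)
freqs, db = averaged_spectrum(x, fs)
```

The averaging is what tames the wild bin-to-bin variance of a single noise FFT; with ~21 segments in 30 seconds, the per-bin uncertainty drops to around 1dB, enough to compare filter curves meaningfully.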

Objectively and subjectively, clearly all of these filters improved the low-frequency response. A song where one can hear this important improvement is the "classic" audiophile track by Rebecca Pidgeon, "Spanish Harlem" (The Raven) - the amplitude of the bass line should stay relatively smooth and not bounce around when room correction is "tuned in".
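That "doesn't bounce around" impression can be roughly quantified: take the measured response and compute the dB standard deviation across the bass band. This is my own illustrative metric with synthetic data, not something from REW or Acourate:

```python
import numpy as np

def bass_smoothness_db(freqs, mag_db, f_lo=40.0, f_hi=300.0):
    """Standard deviation of the response (in dB) across the bass band.
    Lower = smoother bass; a crude single-number stand-in for 'the bass
    line doesn't bounce around'."""
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return float(np.std(mag_db[band]))

# Synthetic example: flat (corrected) vs. +/-6 dB swings (untreated modes)
freqs = np.linspace(20, 500, 481)
flat = np.zeros_like(freqs)
bumpy = 6 * np.sin(freqs / 15.0)
print(bass_smoothness_db(freqs, flat), bass_smoothness_db(freqs, bumpy))
```

Walking the bass line of "Spanish Harlem" up and down is essentially doing this comparison by ear, note by note.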

Alright folks... That's all I have for now in the audio world.

Recent events around health care and the financial markets have been tumultuous and approaching life-changing proportions in many parts of the world. This means I've got to deal with things around here more important than audio as well :-). As a result, although I have some things I want to show you guys, I probably won't be able to update the blog as regularly until things settle down over the next number of weeks and maybe months...

Stay safe. Stay healthy. Remain rational. And above all, enjoy the music!

PS: Remember if you find yourself having time due to all the shut-downs and social distancing these days, give the Harmonic Distortion Blind Test a try! Still collecting data until end of April.

11 comments:

  1. Archimago, thanks for a very informative post! These topics do resonate with my interests very much.

    Regarding the soundstage phenomenon, if we correlate it with the illusion of "being there", then good objective parameters are IMO absence of strong reflections on the ETC curve, and the IACC metric (as displayed by Acourate for example). There are some tracks from "The Best of Chesky Classics & Jazz and Audiophile Test Disk, Vol. 3" that can be used for checking the quality of reproduced soundstage by ear, for example track 28 where a recording of a drummer running around you can be perceived very naturally on a proper setup.

    I've also been educated about the moving microphone technique recently at AES Academy talk by Charles Sprinkle. There is an AES paper on this: http://www.aes.org/e-lib/browse.cfm?elib=19477. Note that you need to be careful when looking at the data above 10 kHz, it will largely depend on the directionality and calibration of the microphone, see the article for details. BTW, they call it MMA (Moving Microphone Average), not "MMM".

    Also, the MMA technique obviously does include a lot of reverberant energy of the room, so I would use it with caution for equalization. Following Dr. Toole, I would first ensure that the direct sound of the speakers is smooth, and use MMA more for low frequency room correction and also for backing up shelving tone control settings to compensate for too "dry" or too "live" room.

    Stay healthy, for sure!

    ReplyDelete
  2. Hi Archimago.

    In my opinion, as important as frequency response, low distortion and of course healthy ears are for convincing sound REproduction, good soundstage creation is mostly in the hands of the record PROduction. It can come from natural miking as in most classical, baroque, and jazz music, or from artificial construction in software using multiple tracks, as in most pop music.

    For REproducing the created soundstage with stereo speakers, the most important criterion is crosstalk cancellation, something we get with headphones but stuck inside the head.

    To have it as a normal soundstage necessitates processing to eliminate that stereo crosstalk due to room reverberation.

    I recently bought a 2019 MacBook Air, and I was amazed at the soundstage produced with two (very) small speakers when putting the laptop on my lap. Sound comes from very wide out of the speakers, and even to both sides with 5.1 source!

    Investigating, I found that Apple has patents for the audio controller included in their T2 chip that do just that: projecting sound into space and doing crosstalk cancellation:

    https://www.slashgear.com/apple-patent-makes-macbook-audio-sound-like-its-coming-from-other-places-02604903/

    You can read the patent here:

    http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&co1=AND&d=PTXT&s1=10524080&OS=10524080&RS=10524080

    I hope in the future they make that technology available for use with any speakers, it is that good in my opinion. If you can get your hands (and ears) on a 2019 MacBook Air (probably the Pro too…) listen to this collection of surround tests on YouTube and be amazed:

    https://www.youtube.com/watch?v=PvnlpPWAcZc

    ReplyDelete
    Replies
    1. Gilles, thanks for an interesting observation! I think part of the trick with the laptop is that the speakers are small and close to your ears, so you mostly hear direct sound from them. In fact, even on "big" speakers that do not engage the room too energetically, it's possible to hear a soundstage that spans wider than the speakers.

      There is a nice paper by S. Linkwitz about recording and reproducing spatial details in stereo, which also mentions cross-talk cancellation setups: http://www.linkwitzlab.com/TMT-Leipzig'10/TMT-Hearing%20spatial%20detail.pdf

      Delete
  3. The big flaw in Archi's logic is that only with "near perfect" hearing balanced between the ears will any reports of soundstage be believed as valid (with the usual dollop of arrogance about "golden eared" etc.)

    This is the rabbit hole that many objectivists have descended into. No ears are perfect & balanced between them - just emerging from the rabbit hole to look at the differences in pinnae between ears & between people would reveal the error in this logic.

    Auditory perception is not a measurement device & it's misguided to treat it as such - the majority of its function is in the brain making the best sense (in as short a time as possible) of the nerve signals coming from the auditory mechanism. By & large it does a reasonably good job, when combined with signals from the other senses, of interpreting the physical world (otherwise we would have become an extinct species a long time ago).

    ReplyDelete
  4. This is rather old stuff. I am amazed that Apple could still file a patent...

    Ambisonics and Ambiophonics, as well as binaural, do crosstalk manipulation.
    No surprise to hear a sound further left or right than the left or right speaker.
    The first commercially available product I am aware of and have heard is from Lexicon (done by Greiner): the Lexicon CP1 and CP3 Sound-DSPs. The setting was called "Panorama".
    A soundbar from Weiss implements this too. Not yet on the market, but shown at Munich High End 2019 (crosstalk cancellation with the same effect).

    ReplyDelete
    Replies
    1. You are right about the fact that ambisonics is rather old, I remember playing with that algorithm at least 10 years ago, and binaural is even older. I know ambisonics requires putting speakers close together, so the connection with laptops makes sense.

      I was maybe over-enthusiastic because this is my first laptop having always worked with desktop workstations, and I didn’t expect such good sound from tiny speakers.

      The iFixit teardown talks of elongated speakers but shows no photo, so I guess they are more of a membrane type, which also helps make the output less point-like.

      I also re-read the patent more carefully and it seems aimed at augmented reality games rather than hifi, that is, the possibility to position outside objects at a fixed position inside the soundscape. The current T2 chip probably only implements crosstalk cancellation.

      I still believe correctly reproducing a recorded soundscape is impossible with standard two-channel stereo in a room. Optimizing the frequency response for a sweet spot relates only to the quality of the sound, not its correct positioning.

      I also tried, very long ago, a brute-force way of experimenting with crosstalk cancellation: putting the speakers in two adjacent rooms separated by a wall and listening while standing right at the wall's end (with an ear in each room, so to speak...), and the resulting stereo image was really astounding!

      Delete
  5. I've been using room EQ for about 6 years and have easily spent more than a hundred hours experimenting (which also includes experimenting with phase alignment). I ended up with a house curve that's very close to Harman's RR1 curve without even knowing about Harman's publications. So I'm a bit surprised you seem to prefer that much more sub (if my interpretation of your measurements is correct).

    In my experience phase alignment also makes a big difference. It leads to a more solid and forward bass. If you didn't implement this yet it might be worth trying out, as an alternative to allowing a higher bass level in your house curve.

    ReplyDelete
  6. These debates are pointless IMO, as the brain interpolates, "evens out" differences between our ears, and probably does many other interesting things of which we know very little. There's a lot going on after the sound has engaged the receptors in our inner ears, especially in the brain. If you know anything about how incredibly complex our brains are with whatever they deal with, you would suspect, like me, that our ears act only as microphones. The signal processing belongs to the brain.

    ReplyDelete
  7. Listening to Mr. Acker Bilk's "The First Time Ever I Saw Your Face", a recording from the '60s: an extremely beautiful tune which sounds absolutely fabulous (he plays the clarinet).

    I was introduced to his music in early teenhood, and I've loved his music ever since. Who cares what measurements might say of this recording when it can bring tears to one's eyes?

    ReplyDelete
  8. I refer you to Audio, about 30 or more years ago and Bert Whyte.

    ReplyDelete
  9. Hey Arch, it was awesome meeting up, watching some 5.1 Picard, eating Indian cuisine and interesting convos with the families. It was also fun listening to a variety of tunes, some 5.1, others 2 channel, switching filters and comparing. Like you say, sounds just like the charts show. Speaking of which, the mic cal issue has been solved and you have a new set of filters, definitely brighter than what is shown on the chart. Let me know how it sounds.

    Thanks again for your hospitality and stay healthy!

    Kind regards,
    Mitch

    ReplyDelete