Without a doubt, the Raspberry Pi computers are the current reigning champions in popularity - 5 million units sold were announced back in early 2015. The recently released Raspberry Pi 3 looks excellent: plenty of support, a reasonably fast quad-core Broadcom BCM2837 64-bit ARMv8 CPU at 1.2GHz, decent but not impressive 1GB RAM, OK 10/100Mbps ethernet, and convenient 802.11n WiFi with Bluetooth 4.1. The list price should be only about US$35, but due to a worldwide shortage there's currently a bit of a markup (a very common supply-demand issue, of course).
But looking around, another little computer caught my attention: the US$40 ODROID-C2 from Hardkernel, a South Korean company, just released in March 2016. I was able to get it here in Canada from Diigiit Robotics, but I see that they're now out of stock and, like the Pi 3, prices have become elevated. Here's the block diagram for the C2:
All technical details including schematics can be found here.
As you can see, it's a reasonably powerful little unit that can easily outrace the Pi 3 computationally (some benchmarks here compared to the Pi 3): a quad-core Amlogic S905 CPU with 64-bit Cortex-A53 cores running at 2GHz, 2GB of DDR3 RAM, gigabit ethernet, and a video subsystem capable of HDMI 2.0 4K/60fps output with hardware decoding for H.265 (750MHz Mali-450 "pentacore" GPU). Unlike the Pi 3, this unit does not have built-in wireless communications. For flash storage, a micro-SD card (UHS-1 speeds supported) can be used, or the faster eMMC 5.0 module (400MB/s interface). There's also a built-in IR receiver for remote control of features like volume adjustment.
So I figured... Why not? Let's get one of these little guys, the goal being to run a simple Volumio streamer on it (I think we'll be seeing RuneAudio on this device soon also). Here's what I got:
The ODROID-C2 with clear plastic case. Note how small this computer is - about the form factor of my Costco card :-). Around the same size as the Raspberry Pi...
I didn't bother buying a power supply because I have many 5V USB wallwarts already, and this machine can be powered either through a small 2.5mm outer/0.8mm inner miniature barrel jack or a standard micro-USB cable. The specs suggest a 2A power supply, but I have read that unless you hang a bunch of power-hungry USB peripherals off of it, you can generally get away with less. In my case, a 5V/1A USB power supply has worked well.
Notice that this machine is passively cooled with a good-sized heatsink (on a relative basis, of course). As much as I enjoy using fast computers, I cannot tolerate noise in the sound room, so if this is to be an audio streamer or (one day) a low-power home theater media player, it will need to be silent. The other big draw for me was the gigabit ethernet. Even though my media room is just next door to my wireless router, every room in my home is linked by gigabit ethernet, and even with great wireless signal strength, I want high-resolution music streaming to be as robust as possible - no audio/video-phile should have to tolerate stuttering and dropouts with any media playback in 2016!
As with most things in life, there are some "cons" to this little inexpensive computer. First and foremost is the fact that this is not a Raspberry Pi, which has the largest community of developers and contributors of any of these SBCs - and with that community come the latest bug fixes, firmware updates, and hardware support. For example, I have heard great things about the HiFiBerry DAC card and "Digi" S/PDIF output. The ODROID-C2 does have I2S pins and also has the HiFi Shield (US$39) option based on the TI/BB PCM5102 DAC, which I did not order because I already have many good DACs around here. It's capable of audio data up to 32/384. The measurements on their web page look excellent, by the way!
Given that the ODROID-C2 is a new product and parts with the ARMv8-A architecture only started showing up in the last year, there will be some growing pains on the OS side for the time being. This is certainly where the Pi 3 has the upper hand... However, the ODROID folks already have Ubuntu 16.04 LTS beta images available (even though the kernel is still at 3.14 LTS), and I hope in the days ahead they'll get the Amlogic S905 CPU patches into the "mainline" Linux kernel. (There's a comment from the ODROID folks about mainlining into Linux kernel 4.4 starting in May - hope this comes to pass, folks!) It has been more than a decade since I've personally bothered to download and compile my own Linux kernel, so I hope the Linux gurus have fun playing with the new hardware!
Getting it up and running was easy: make sure you have a good micro-SDHC card, or use an eMMC module (more expensive) if you want better performance. I've been using these Samsung EVO+ microSD cards for cameras lately:
It works well, is reasonably fast (UHS-1 speed), and is very inexpensive (I got the 32GB card for less than CAD$20).
Flashing the SD card is trivial with a downloaded OS image (you can find both Ubuntu Linux and Android here) and Win32DiskImager on Windows (or dd on UNIX/Linux/Mac). For example, here's Win32DiskImager writing out the RC1 version of Volumio 2 for the ODROID-C2.
About 25MB/s write speed with the Samsung EVO+ SD card.
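For the dd route, it's a one-liner. A sketch with placeholder names - the image filename is whatever you downloaded, and /dev/sdX must be replaced with your card's actual device node (check with lsblk or diskutil list first, since dd will happily overwrite the wrong disk):

    # Placeholders: use your downloaded image name and verify the device path!
    sudo dd if=volumio2-rc1-odroidc2.img of=/dev/sdX bs=1M
    sync    # flush write buffers before pulling the card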
The little machine in the clear enclosure:
Note the 32GB micro-SD card is inserted on the underside, right below the 40-pin GPIO header - reasonably easy to access for switching out memory cards with different OS's.
We'll talk more about my experience with using Volumio later... I did explore Ubuntu a bit also; here's how it looks on an old 1080P TV (I just plugged in my Logitech wireless USB keyboard/trackpad, a standard HDMI cable to the TV, a 5V micro-USB wallwart for power, and an ethernet cable):
Little ODROID-C2 running 64-bit Ubuntu 16.04 LTS with the MATE GUI.
Yeah, it works :-). There are a couple of status LEDs on the board: the red one stays on when power is available, and a blue one pulsates like a heartbeat. Running Ubuntu with the MATE desktop, it "feels" reasonably responsive with good GUI performance, but as I mentioned above, it's a new 64-bit system and there are a number of bugs to work out. For example, streaming YouTube videos caused Firefox to crash on occasion, and I had some issues even just getting video to run in the Chromium browser. Remember that even though we're looking at a 2GHz 64-bit quad-core processor, this is a low-power ARM RISC processor comparable to a modest smartphone (the Samsung Galaxy Note 5 already has an octa-core 64-bit ARM processor, 4 cores of which are the more powerful Cortex-A57 units)... It can certainly do a lot, but by no means should we be making comparisons with the speed of a modern desktop Intel/AMD processor! Who knows, maybe in time we'll see ARM processors make headway in the desktop space that x86/x64 machines currently dominate, but that day is not here yet without noticeable compromises (maybe in 2017 with the AMD K12 processors?).
I'll keep an eye on developments in the months ahead in terms of Ubuntu stability. I think it'll be really cool if I can hook this device up to a 4K TV in the media room, maybe later this year, if I decide to finally move forward with the TV upgrade now that 4K/UHD Blu-ray players (eg. the Samsung UBD-K8500) and movies have been released.
You might be wondering - how much power does this little machine use? Plugged into a Kill-A-Watt power monitor, the ODROID-C2 system sucks up between 4 and 7W depending on what it's doing while connected to my TV through the HDMI output. The heatsink feels just a tad warmer after 10 minutes of surfing and fooling with settings. As I mentioned above, I just used a 5V/1A power supply I had lying around and this was just fine - obviously 1.5 or 2A is recommended if you have a bunch of USB peripherals hanging off of it like a WiFi adapter, Bluetooth, or storage. Very good power efficiency with good performance thus far. I'd certainly be very interested to see if at some point I could use this machine to run Kodi with hardware video decoding for HTPC purposes (or maybe Plex if they start to support HEVC). Based on some specs here, it looks like the Amlogic S905 SoC is capable of 4K60 10-bit HEVC/H.265 decoding (and AVC/H.264 of course) but not Google's VP9. Other than YouTube 4K, it certainly looks like the future will be HEVC/H.265 for movies and downloads, so I think this is probably just fine without dedicated VP9 decode in the hardware. From what I have seen, HEVC is the only video codec for the new 4K/UHD Blu-rays anyway. [As an aside, in the year since getting my 4K monitor, I have done a bit of HEVC/H.265 encoding using HandBrake with 1080P and 4K material. Very impressive video quality at low bitrates!]
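For those curious about trying this themselves, a basic command-line HEVC encode looks something like the following (filenames are placeholders, and flag details can vary between HandBrake versions - run HandBrakeCLI --help to confirm on your build):

    # Constant-quality x265 encode (RF 22), passing the audio through untouched
    HandBrakeCLI --input movie-1080p.mkv --output movie-1080p-hevc.mkv \
        --encoder x265 --quality 22 --aencoder copy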
As displayed above, I flashed the beta/RC version of Volumio 2 and will talk about this next time - essentially a sub-$75 streaming device with quite a bit of processing horsepower under the hood. It'll be used primarily as a UPnP renderer playing off my AMD A10-5800K-based Windows Server 2012 R2 machine, with the music library managed by JRiver 21 (alongside my Logitech Media Server for all the Squeezeboxes, of course). I intend to couple it with my TEAC UD-501 in the sound room.
Here's how the ODROID-C2 looks sitting on top of the TEAC DAC - notice how small this thing is... The little computer can easily be moved to the space behind the DAC and out of view (I won't be using the IR remote control for volume, so line-of-sight visibility is unnecessary).
Okay, have a great week ahead everyone! Gonna have a listen for a few weeks and will report back with some thoughts and measurements as appropriate.
-------------------------------
Some obligatory links first:
MUSINGS: Miscellanies on audio encoding (Dolby Atmos & Meridian MQA Concerns)
Written back in January 2015 - shortly after the grand "unveiling" party back in December 2014 based on various early reports.
MEASUREMENTS: MQA (Master Quality Authenticated) Observations and The Big Picture...
Written in January 2016 - observations and other thoughts about MQA without decoder (after CES2016 and 2L releasing demo tracks).
MEASUREMENTS / IMPRESSIONS: Meridian Explorer2 Analogue Output - 24/192 PCM vs. Decoded MQA.
Written in February 2016 - observations based on actual Meridian Explorer 2 hardware with MQA decoding firmware upgrade.
Now, some obligatory comments based on MQA developments this week...
I see that there are finally comments from Bob Stuart / MQA posted on Computer Audiophile. Interesting... I'm sure his comments will generate just as many follow-up questions, and I'll leave it to the readers to decipher what is reality and what is likely "spin". For example, I find it interesting what they are choosing to present in those graphs. As for the answer around DRM: "In fact, MQA is the antithesis of a DRM system: everyone can hear the music without a decoder!" Yeah... Apparently they're really doing us all a favour. :-)
From my perspective, there is no magic here. They want to define MQA as a technique borne of a "philosophy" and "a different conceptual frame of reference". That's nice, but it doesn't change the facts IMO, other than to provide some kind of license to pick and choose what they feel "human neuroscience" can and cannot perceive, mixed with their form of "origami" / "hierarchical" compression. They have made MQA "compatible" with regular hardware playback; as a result, there are only so many bits that can be used for specialized encoding before causing obvious audible effects when played on a regular non-MQA-decoding DAC. It is PCM, which means there are well-understood concepts of dynamic range and frequency bandwidth to contend with (irrespective of their philosophical outlook). There is a DSP function to apply their "de-blur" and encoding on the producer side, while on the consumer side some software processing is done to decode, upsample, and present the DAC with the final PCM "rendering" for playback (be it at 96/192/384kHz sample rates...). And this whole process is currently wrapped up and "offered under NDA and implementation license to bona fide developers" only.
There are unavoidable implications when, for example, an original 24/96 audio file goes through a process like this to create a 24/48 MQA product: both in how it compares to the original file once decoded (ie. not truly "lossless" as we currently define a CODEC like FLAC, but rather "perceptually lossless", with claims that it might even sound "better!"), and in the loss of freedom of universal playback at full quality due to intellectual property entitlements. Remember, this CODEC is being introduced at a point in history where, for years now, we as consumers have had the opportunity to buy our music in a freely open format like FLAC.
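Since the actual folding scheme is proprietary, here's only a toy sketch of the general idea of burying a second, coarser data layer in the least significant bits of a PCM stream - emphatically not MQA's algorithm, just an illustration of why the bit budget available to the "legacy" playback layer shrinks:

    import numpy as np

    # Toy illustration only - NOT MQA's actual algorithm. Bury a coarse 8-bit
    # "ultrasonic" layer in the bottom 8 bits of a 24-bit PCM stream.
    rng = np.random.default_rng(0)
    n = 48000
    baseband = rng.integers(-2**15, 2**15, n)   # audible band: 16 bits' worth
    ultrasonic = rng.integers(-128, 128, n)     # coarse stand-in hi-res layer

    # Encode: baseband takes the top 16 bits, the buried layer the bottom 8.
    encoded = (baseband << 8) | (ultrasonic & 0xFF)

    # A legacy (non-decoding) DAC plays 'encoded' as-is; the buried layer just
    # looks like low-level noise. A decoder recovers both layers exactly:
    decoded_base = encoded >> 8
    decoded_ultra = ((encoded & 0xFF) ^ 0x80) - 0x80   # restore the sign

    assert (decoded_base == baseband).all()
    assert (decoded_ultra == ultrasonic).all()

To the non-decoding DAC, the buried layer simply raises the noise floor; the more layers you fold in, the fewer clean bits remain for straight playback.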
If there is to be anything "new" here regarding sound quality, it's in the claim of improved time domain performance using some kind of proprietary DSP process (the "de-blur" process). This claim was anticipated (as I suggested late last year), and I think it's fair to remain skeptical since much if not all of the technical information is a variation on previously released material (such as this) - there's nothing new in those Q&A responses. I still find it interesting that there are claims that "neuroscience" contributes to the technique and that the brain stem is highly "responsive to fine time structure". Whatever the importance of brain stem response (compared to changes in the human primary auditory cortex or higher cortical centers), it's hard to see how this is relevant in the MQA context given the sources cited: Lewicki's "Efficient Coding of Natural Sounds" (2002), commonly referenced in the MQA material, used standard 44.1kHz sample rates (further downsampled to 14.7kHz mono!). Strange... Remember folks, below the Nyquist frequency, time domain performance is a function of bit depth - and 16 bits of PCM quantization can already achieve picosecond-level accuracy (this is all a rehash of the Kunchur papers from 2007/2008, with discussions here, and commented on in my previous blog post).
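To put a rough number on that, here's my own back-of-envelope sketch: a time shift dt of a full-scale sine at frequency f changes a sample value by at most A·2π·f·dt, so the smallest shift that moves a sample by a full 16-bit quantization step (A/2^15) is about:

    import math

    bits = 16
    f = 20_000   # worst case: top of the audible band (Hz)
    # Smallest time shift of a full-scale sine that changes a sample by 1 LSB:
    dt = 1 / (2 * math.pi * f * 2**(bits - 1))
    print(f"{dt * 1e12:.0f} ps")   # ~243 ps

Hundreds of picoseconds even before dither, which statistically preserves sub-LSB detail finer still.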
Combined with the article in TAS from late March, I suspect there's going to be some build-up to drum up interest - I would not be surprised to see more "listening impressions" articles join the fold from the usual sources. But truly the next milestone is how this plays out once a service like TIDAL actually "flips the switch" to activate the streaming system - whenever that happens. Remember, MQA was supposed to be available in early 2015 but has obviously been pushed back over time.
Note that even if TIDAL turns the encoding on, how many people can actually benefit right now given the paucity of MQA DACs out there? And let's not forget that MQA encoding isn't completely "free". Ignoring any special identification of the ADC and whatnot to optimize the "de-blur" process, and dispensing with getting the engineers and artists to "authenticate" the sound, even if it's just a black box taking in 24/96+ files and spitting out 24/44 or 24/48 MQA, the resulting files require more data bandwidth than the current lossless 16/44 streaming for TIDAL and its subscribers. Is there some kind of "Catch-22" happening here? TIDAL is waiting for more hardware out there, but who buys the hardware without some assurance of content?!
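Some rough numbers on that bandwidth point (raw PCM rates before FLAC, whose compression ratio varies with the material):

    def raw_kbps(bit_depth, sample_rate_hz, channels=2):
        """Uncompressed PCM bitrate in kbps."""
        return bit_depth * sample_rate_hz * channels / 1000

    cd  = raw_kbps(16, 44_100)   # ~1411 kbps: today's lossless 16/44.1 tier
    mqa = raw_kbps(24, 48_000)   # ~2304 kbps: a 24/48 MQA container
    print(f"{mqa / cd:.2f}x the raw data rate")   # ~1.63x before compression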
The multi-million dollar question: What is the actual public interest and adoption of this CODEC as a "solution"? The answer to that question will decide whether we'll still be talking about MQA in 2020. And only then shall we truly see just how "revolutionary" this ends up being.
Whatever the outcome of all this, there's no reason we can't all still be enjoying perfectly beautiful digital playback...
Addendum (April 9, 2016):
I was thinking this AM, while waiting for my daughter at her figure skating class, about this "Catch-22" situation for MQA. :-)
There is of course a way to potentially accelerate adoption: introduce an incentive in the form of a software decoder. Provide a TIDAL-specific player on Windows and Mac that takes the MQA stream and provides an upsampled 88/96/176/192+kHz signal on the other end to feed one's hi-res DAC attached to the computer. Of course this compromises the "authenticated" component somewhat, because there's no control over what hardware DAC is being used.
It could be good for TIDAL to push the idea of "hi-res streaming" beyond lossless 16/44 - further distancing themselves from competitors as the choice for "best quality" streaming, and possibly increasing their subscription base. Hey, it looks great when you see "192kHz" displayed on the DAC while streaming off your Mac, right!?
For Meridian/MQA, it would put MQA "out there" quicker and capitalize on whatever level of interest currently exists before the perception of this being "hot" technology cools. Although it might compromise some of their "authenticated" philosophy, it might help sell MQA hardware, since they can claim that only with actual MQA-enabled DACs (not the software decoder) can the full extent of the improved quality be assured. I'm sure some will buy a new DAC wanting the best playback from their TIDAL subscription (especially if they stream from something like the original Auralic Aries without the software decoder), and others who sign up for TIDAL "Master" (or whatever they call it) while looking for a new DAC will check out gear like the Explorer2.
Of course, like any strategic marketing move, there are risks and capital investments for both TIDAL and MQA (who knows what the financial situation is for either company...). MQA could, for example, waive the licensing fee for the CODEC for 2 years while TIDAL provides an "approved" secure software player and runs the streaming infrastructure. Meanwhile, MQA focuses on providing their technology to DAC manufacturers, firmware upgrades, production houses, and music label partners. The thing is, if they truly have strong faith that the CODEC will be a "hit", this could be a wise course of action, yielding dividends down the road through positive word-of-mouth, adoption of MQA as a marketable format, and royalties from both software and hardware sales.
From a business perspective, we have to remember that true success for the company (as it pertains to the "bottom line") comes from broad market appeal and penetration. Real money requires volume - NOT catering to the small (though loyal) audiophile subsegment. And to this end, it might be beneficial not to hang on to a strict, philosophically idealistic stance.
-------------------------------
Are you using this as a small HTPC solution to stream music to your DAC, delivering the music to the DAC over USB? If yes, did you measure the jitter and/or 8kHz noise from the device?
Hi Svein. Yup. You got it!
So far it sounds good and I've been playing with the DSD DoP abilities, upsampling and using JRiver for room correction. Works well and stable...
Haven't started the measurements yet but will do so soon.
Thanks, just as I thought.
I have the Hegel HD25 DAC, and it has issues with the USB somehow. Let me explain. When I turn off the HD25 and disconnect the USB cable from my computer, the driver does not reinitiate on my laptop, so I have to restart the laptop. Very annoying. Also, if the cable moves just slightly on the back of the HD25, I lose transmission packets, so I cannot use the laptop on my lap with the USB cable; it has to be dead steady. Wonder what Hegel thinks about this in larger setups with 12-inch subwoofers punching out music and moving cables like crazy.
So I am starting to look for a solution to stream music without having to use a cable to my laptop. I thought about Chromecast and TOSLINK, but after I read about jitter in your earlier blog post, I don't want to do that! Also, it doesn't support 24/192.
It will be interesting to see your measurements.
Keep up the good work.
Hi Svein,
Sorry to hear about the Hegel's USB cable being so finicky. Sounds like you do need a "stationary" solution :-).
I'll probably post the measurement results in a couple weeks - need some time to finalize my setup and got something else for this week to post...
Suffice to say, with the measurements I have done, jitter with the little ODROID using my TEAC DAC looks beautiful :-).
Great to see you've jumped into the SBC Linux audio distro world Archimago!
I'm running Volumio 2 RC1 on my new Raspberry Pi 3 with an IQAudio Pi-DAC+, but I'd call this an alpha release versus a beta or RC. That said, with a bit of tweaking it's working great for me and should stabilize and be feature-complete in a few weeks.
Since you already run the Logitech Media Server, you should play around with Squeezelite - basically a Squeezebox Touch client without the Touch - it runs on just about all Linux distros. And with JiveLite you basically have the UI of a Squeezebox Touch. Great that Slim Devices open sourced all of their software years ago!
Thanks for the reminder Jim! Yup, that would be another great way to stream to the device vs. the current DLNA route with Volumio.
I agree - despite the early state of Volumio 2 "RC1", it has been stable with no serious issues so far (just a few little bugs and cosmetic fixes needed).
Just say no!
https://xkcd.com/456/
LOL re: Linux cartoon :-).
Whadda ya gonna do? It does work - you just need a little good ol' tech know-how!
On that note, consider all these music streamers out there (Auralic, Aurender, Lumin, etc...) that are based on Linux of some form. As a computer audiophile willing to get a little deeper into the hardware and software, I think it's certainly good to have basic Linux skillz. :-)
Oh yes, big fan of the xkcd.
But I have to add, I've been running a Squeezebox Touch & Duet via my old, stripped-out gaming rig (i3 - W7) for years, completely trouble-free. Perhaps I've just been lucky? But my golden rule here is: if it works, leave it alone!
Remember: https://xkcd.com/54/
:-)
You should put some ads up from Google AdSense; it might get you a new DAC down the road.
Yeah Svein... I'll look into this. Over the years I've never bothered much with monetization of the site since I do this for fun! I just want to make sure that whatever I do here, it won't be an annoying hindrance.
Far and away my primary interest is getting some good discussion going and hopefully providing some education and perspective as I learn myself. I do get a few dollars from Amazon for referrals (which I also hope is convenient for readers who want to get something I mention); probably enough in Amazon gift certs over the years for a decent DAC.
Thankfully the day job is good enough that I can have whatever gear I really want... And obviously I have little desire for fancy stuff. :-)
I read the Q&A about MQA. There was a question about compressive sensing; Bob Stuart answered a different question. To me the 'origami' process looks related to compressive sensing (could be wrong though). I find it concerning that Bob Stuart is not aware of what compressive sensing is. Any thoughts?
ReplyDeleteHi Brad, thanks for the note!
For those wondering what this is about, here's a handy chapter:
http://statweb.stanford.edu/~markad/publications/ddek-chapter1-2011.pdf
Who knows :-). I don't know if it has been revealed what kind of compression algorithm is being used in the "origami" process itself. Presumably the deeper one goes in the folding process, the less data is used to represent the information in those frequencies, hence an implied sparser / grosser approximation of those signals (ie. the 48-96kHz octave is more lossily compressed than 24-48kHz due to the deeper folding).
Remember that the word "lossy" is "anathema" according to Stuart (more evidence that they see this technique as more of a philosophy, or perhaps religion, than a scientific endeavor?).
Of course, it is possible that whatever compression technique is used, and however accurately it represents the data, it doesn't really matter because it's all ultrasonic anyway and only the dogs and cats would know whether it "sounded like" the original hi-res source!
Hey Arch, you got mail :-) Cheers, Mitch
Hey Mitch! Thanks for the note... will get back to ya this weekend!
Deletere MQA 'deblurring'DSP ...is it conceivable that what it does is akin to Plangent 'deflutter' DSP? http://www.plangentprocesses.com/faq.htm
From what I can tell from the Plangent process description, it looks like they're using tones like the tape bias found in analogue recordings to track and correct wow & flutter time distortions.
Considering that MQA is meant to correct time domain differences in high-res digital audio between various pieces of equipment in the studio, I assume it's functioning at an even more precise level, aiming to time-align down to the level of ringing and impulse response resolution... At least that's the sense I get reading these descriptions. To do this would presumably require knowing the parameters of each ADC/DAC/plugin/DSP unit in order to be completely precise.
I don't remember if MQA/Bob Stuart discussed the DSP process in any detail over the last couple years...