Over the years, I have upgraded many parts of my computer system. Motherboards have come and gone, CPUs have been swapped for newer ones, GPUs have become more advanced, gigabytes of RAM have been added, and hard drives and SSDs have grown faster and larger...
A few months back, I mentioned that one of the more "boring" parts of the computer system, with little actual need for updating, has been the wired network. I have not significantly upgraded anything there in many years. While I suspect many of us upgraded our home ethernet to 1 gigabit/s (1GbE) over the last decade, 10 gigabit/s speed has been waiting in the wings for larger-scale adoption since the mid-2000's. In business and enterprise settings, one may have already seen fibre-optic networks (for example, those using enhanced small form-factor pluggable [SFP+] connectors), but 10GbE over standard copper modular RJ-45 8P8C connectors (also known as 10GBASE-T) has been talked about since 2006, and I suspect relatively few of us have incorporated the technology into the home yet (as of early 2018). The price point is only now starting to dip into consumer territory.
Twinax, shown here, is like a coaxial cable with two inner conductors and is meant for very short runs. Notice that fiber allows much longer lengths, uses less power per port, and has significantly better latency (we're talking in the low microseconds here, of course; actual latency in a working system between hosts and switches will depend on the hardware and the complexity of the topology, and is usually measured in ms). 10GBASE-T, which we'll be looking at today, is meant for lower-cost implementations like in one's home.
Ideally, 10GBASE-T is spec'ed to use Cat-6a cables, as per the chart above, to achieve transmission over 100m. However, I suspect that for most of us with ethernet cabling through the house, Cat-5e remains the standard, with perhaps Cat-6 in newer homes. Indeed, my house, built in 2007, was wired with Cat-5e except for a more recently renovated portion of the basement, including my sound room, which got Cat-6. But if we look at the 10GBASE-T specifications, we see that 10GbE should still be achievable over Cat-5e so long as distances between devices are shorter, something like up to 45m (~147 feet). Looking at my home and where devices are located between floors, other than the most extreme corners of the top level, my suspicion is that 130' is probably enough to reach the vast majority of the house from the basement electrical panel where the main switch is located.
And so I thought... Let's give 10GbE a try even with Cat-5e in those walls :-).
Part I: The plan!

First, I'm not doing this just for fun of course! I do quite a bit of photo and video editing/transcoding of data stored on my server machine (think of it as my NAS), run a web server for work with some streaming video, and access the network drives quite a bit among various devices, so some extra speed could be beneficial. What I wanted to do, therefore, was widen the "main highway" on my network between the Server and the rest of the home, and especially between the Workstation and the Server computers. Let me show you what my network topology currently looks like:
As you can see, I currently have a good number of devices all over the home; some, like my HTPC and TV box in the sound room, aren't even shown. Of course, not every connection there is 1GbE. Some devices like the Raspberry Pi 3, Yamaha RX-V781 receiver, and Squeezebox Transporter are only 100Mbps. When I run my 1GbE tests, it'll be with the current TP-Link TL-SG1008D switch in the Home Office and D-Link DGS-1016A in the Electrical Panel.
The plan then is to upgrade the network to this:
Going 10GbE will increase the end-to-end bandwidth from the Workstation to the Server. It will also widen the "pipe" from the Server to the "Electrical Panel Switch", which is the central hub for much of the streaming that happens around the house, including the WiFi traffic and the connection through my cable modem to the Internet (currently ~150Mbps download, ~20Mbps upload).
Part II: The Hardware

Obviously, first we need to upgrade the computers with 10GbE network interface cards (NIC). Here are the ASUS XG-C100C (~US$100) 10GbE cards to be installed in each computer's PCI-E x4 slot:
IMO these are good-looking, compact cards, with an extra half-height bracket in the box. The heatsink is pretty good and only gets slightly warm in prolonged use (not hot enough to feel uncomfortable to the touch). As usual, a driver CD is included, but just go to the Aquantia download site and grab the latest driver for the AQC107 28nm chipset (currently 2.1.001.01, Windows and Linux; Macs already have native drivers in High Sierra, but check whether there are still driver issues).
And in order to get the 10GbE network "axis" in my home working, I need to connect the devices through 10GbE-capable switches as diagrammed above. Here are the two Netgear GS110MX boxes (currently about US$200 each), capable of 56Gbps of internal non-blocking switching. Notice that each box has 8 ports of 1GbE and 2 ports of 10GbE (the two 10GbE ports are to the right in the picture). These are very sturdy, "business"-oriented switches in a metal enclosure: fanless, plug-and-play, unmanaged devices. If you want a managed switch, check out the Netgear GS110EMX:
Everything hooked up without any unexpected issues - and I see that the network card is connecting at 10.0Gbps:
Great. Let's do some testing before and after to see if 10GbE is actually working stably in my home with standard Cat-5e wiring!
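By the way, if you want to confirm from within Windows what speed each adapter actually negotiated, rather than relying on the connection status dialog, a quick PowerShell one-liner will do; just a minimal sketch (adapter names will of course differ from machine to machine):

# List each network adapter with the link speed it negotiated (e.g. "10 Gbps" on the ASUS card)
Get-NetAdapter | Sort-Object Name | Format-Table Name, InterfaceDescription, LinkSpeed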
Part III: Performance

Before installing all the 10GbE hardware, I of course needed to measure the speed of the existing setup to make sure all this stuff is not for nought, ensuring that I'm actually getting the expected speed bump!
For raw transfer speeds, I used Microsoft's NTttcp (64-bit command line) with the following pair of commands on the sender and receiver sides, first on the 1GbE network diagrammed above and then with the 10GbE network installed:
Windows Server 2016 (sender):
NTttcp.exe -s -m 8,*,192.168.1.10 -l 2M -a 2 -t 15

Windows 10 Workstation (receiver):
NTttcp.exe -r -m 8,*,192.168.1.10 -l 2M -a 2 -t 15
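As an aside, if you want to repeat the sender run a few times and keep the console output for comparison, a small PowerShell wrapper like this works (the log filename is just a placeholder; start the matching -r receiver command on the Workstation first):

# Repeat the NTttcp sender run 3 times, logging the output; the -m mapping is quoted so
# PowerShell passes "8 threads, any CPU, receiver 192.168.1.10" as a single argument.
1..3 | ForEach-Object {
    .\NTttcp.exe -s -m "8,*,192.168.1.10" -l 2M -a 2 -t 15 | Out-File -Append ntttcp_results.txt
    Start-Sleep -Seconds 5
}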
192.168.1.10 is simply the "receiver node" for the program, in this case the Windows 10 Workstation machine. This task will saturate the network with 8 threads asynchronously over 15 seconds. Check out the results between 1GbE and 10GbE as reported on the receiver and sender ends:
As you can see, the two computers have different CPUs; my Windows 10 Workstation with the faster AMD Ryzen 1700, and the Server computer in the basement with the Intel i5-6500. Overall, as expected the CPU% is lower with the Ryzen processor. Going full tilt with 8 threads on a quad core i5 pushed CPU utilization to 27%.
What is clear is that even over my Cat-5e network, throughput has very obviously increased going to 10GbE! In fact, the increase is almost exactly 10X as predicted (from 112MB/s to 1130MB/s). Very nice! Notice also that there was no increase in the number of retransmits or errors comparing 1GbE vs. 10GbE.
Now, let's try a "real world" test: a very large (>30GB) file sitting on a Samsung 850 Pro 256GB SSD, copied from my Windows 10 machine to an OCZ Vertex 3 on the Server 2016 machine, for a quick peek at how the hardware changes affect things.
First, this is just the difference in copying speed between the gigabit network ports on the Workstation's MSI X370 SLI Plus motherboard and the Server's Gigabyte GA-Z170X-Gaming 7 motherboard, compared to using the ASUS XG-C100C cards plugged into the motherboards on each end:
Interestingly, it looks like even without upgrading to the 10GbE pathway, the ASUS ethernet cards are a bit faster than the motherboards' native ethernet, which according to Device Manager is a "Realtek PCIe GBE Family Controller" on the AMD motherboard and a "Killer E2400 Gigabit Ethernet Controller" on the Intel. The gain isn't big, about 15%, but I think it's a nice demonstration that ethernet cards can make a difference, even though a <20% change probably isn't noticeable in daily use.
So, let's upgrade the connection from the Windows 10 machine to the Server 2016 with the 10GbE Netgear switches in between:
Now, clearly there has been an increase in transfer speed going from 1GbE to 10GbE. But what we see is instructive. With a standard 1GbE network, the SSD transfer proceeded consistently at just below 100MB/s, but with a pipe capable of 10X the data rate, we see that the SATA-III SSDs on both ends cannot use the full bandwidth. Initially, the buffer fills at around 400MB/s, then the rate dips down to ~100MB/s for a while as the buffered data is written to the SSD, and then we reach a ~133MB/s "steady state" for the remainder of the large file, which is still >30% faster than at 1GbE with the faster ASUS cards. I could increase the size of the transfer buffers to lengthen that initial high-speed "bump"; but ultimately the transfer still stalls while the buffers flush to disk. Since most files are not huge like this 30+GB test file, transfers are typically very fast in day-to-day work.
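If you're curious how much of that initial burst is just the Windows write cache, one way to (mostly) take the cache out of the picture is robocopy's unbuffered I/O switch; a rough sketch with placeholder paths and filename:

# Copy one large test file with unbuffered I/O (/J) so the reported rate reflects the
# disks and network more than RAM caching; robocopy prints the average speed in its summary.
robocopy "D:\TestData" "\\SERVER\Share\TestData" bigfile.mkv /J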
My suspicion is that the speed limitation comes from the Vertex 3 drive on the Server side plus OS overhead for sharing the directory over the network. Benchmarks have shown sequential write speeds for the Vertex 3 of only up to ~148MB/s. Looks like I "need" a faster SSD in that Server :-). Another way to increase HD and SSD throughput would be to RAID 0 a few SSDs together.
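If I ever go that route, Windows can do the striping itself via Storage Spaces rather than a hardware RAID card. A rough PowerShell sketch, with made-up pool/volume names; note that this wipes the member disks:

# WARNING: pooling erases the member disks. Names below are placeholders.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "FastPool" -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName -PhysicalDisks $disks
# "Simple" resiliency stripes data across the members (i.e. RAID 0: speed, no redundancy)
New-VirtualDisk -StoragePoolFriendlyName "FastPool" -FriendlyName "FastStripe" -ResiliencySettingName Simple -UseMaximumSize
Get-VirtualDisk -FriendlyName "FastStripe" | Get-Disk | Initialize-Disk -PassThru | New-Partition -AssignDriveLetter -UseMaximumSize | Format-Volume -FileSystem NTFS -NewFileSystemLabel "FastStripe"

Sequential throughput should scale roughly with the number of SSDs striped, at least until some other bottleneck (SATA controller, network, CPU) takes over.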
Something I was curious about was robustness to errors. We know that TCP is an error-correcting protocol; as discussed a while back, UDP has error detection but no error recovery, and is prone to dropped datagrams when the data rate is pushed. There are many reasons for this, including buffering errors as discussed here. Again, between the Workstation and Server machines, I used this pair of commands with the 64-bit version of iPerf3 (3.1.3) to have a look (192.168.1.10 being the address of the Windows 10 machine, acting as the "server" this time).
Windows 10 Workstation:
iperf3.exe -s -i 60
Windows Server 2016:
iperf3.exe -c 192.168.1.10 -u -t 21600 -b 150M -i 30 -l 32k
Using UDP gives me another indicator of how reliable the network is over those 6 hours: whether packets are lost or arrive out of order.
As you can see, the results are really good. I asked the system to stream data continuously at 150Mbps, which is faster than the data rate of the highest-quality A/V stream we can get these days, the 100GB UHD Blu-Ray (128Mbps). Over 6 hours, without any error correction, the 1GbE network lost 14 packets (0.00011% loss), and the 10GbE lost only 7 packets with 1 arriving out-of-order (let's call this 8 error "datagrams"), or 0.000065%! I wonder if the out-of-order packet may have been related to the fact that I was running some batch home video transcoding across the network drive during the testing, increasing the network load during the 10GbE test compared to running the 1GbE test overnight in quiet conditions.
What that result means is that if I were to stream glorious 4K video with lossless multichannel sound at quality beyond today's UHD Blu-Ray standard from my basement to the main floor through Cat-5e cables, only around 14ms' worth of data (those 8 error datagrams of 32kB each at the 150Mbps rate) would have been either lost in transmission or sent/received out of order through the 10GbE network! For reference, remember that typical current UHD Blu-Ray HEVC HDR video averages around 50Mbps. 7.1-channel 24/48 TrueHD + Atmos takes up only another 6.5Mbps. For 2-channel audiophiles, a standard 24/192 uncompressed stream (not even using FLAC lossless compression) takes up 9.2Mbps. On the extreme side, uncompressed 24/768 PCM would be ~37Mbps and DSD512 ~44.8Mbps. I'm obviously asking the network to transmit more than any of that over the 6 hours.
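For those who like to double-check such numbers, here's a quick back-of-the-envelope confirmation of those percentages and that "14ms" figure, done in PowerShell just for convenience (assuming 32kB datagrams at the 150Mbps test rate over the 6 hours):

# Approximate number of datagrams sent: 150 Mbps for 21600 s, in 32 KiB datagrams
$totalDatagrams = (150e6 * 21600) / (32KB * 8)     # ~12.4 million
"{0:P5}" -f (14 / $totalDatagrams)                 # ~0.00011 % lost (1GbE)
"{0:P5}" -f (8  / $totalDatagrams)                 # ~0.00006 % (10GbE: 7 lost + 1 out of order)
(8 * 32KB * 8) / 150e6 * 1000                      # ~14 ms worth of the 150Mbps stream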
One more thing: notice that the packet "Jitter" (circled in blue) dropped from 0.126ms with 1GbE to 0.016ms with 10GbE on average. That's nice, and not unexpected I suppose. OK audiophiles, before any enterprising corporation/individual comes along selling you US$3000 "audiophile" 10GbE network switches, US$1000 Cat-7 ethernet cables, and US$6000 "high end" audio streamers (as in this discussion) claiming better jitter, just remember that the jitter we measure at the analogue output of your DAC is not tied to this value, and you had better demand some proof before dropping cash if anyone makes such claims. :-)
For fun, I also ran iPerf3 in TCP mode, which of course includes error correction, for an hour with the following command line:
Windows Server 2016 (no change to Windows 10 command line):
iperf3.exe -c 192.168.1.10 -t 3600 -P4 -i 30 -l 128k

Notice the absence of the -b "bitrate" switch so it runs at full speed, read/write buffers set to 128k, and -P4 for 4 parallel streams. And we'll do this for 3600 seconds: a straight hour of data transfer.
I didn't bother re-testing the 1GbE network since I had already installed the new hardware, but with the 10GbE set-up we see this:
Looks like there's a printout bug in the software.
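Incidentally, if you want to check the opposite direction without swapping the client and server roles, iPerf3's -R switch reverses the data flow; for example, something like this for a quick 60-second sanity check that the link is symmetric:

iperf3.exe -c 192.168.1.10 -t 60 -P4 -i 30 -l 128k -R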
Part IV: Summary

Well, there you go. The price point, while still higher than 1GbE of course, is within reach of the home "power user" who wants to strategically speed up their network. Furthermore, this is a demonstration that, at least for me in my 10-year-old house, without changing the ethernet cables at all and staying with Cat-5e wiring, I can easily achieve 10GbE speed. While Cat-6a and Cat-7 would obviously be better for longer lengths and might allow slightly higher throughput, my suspicion is that unless one has a very large home, Cat-5e will be just fine and the difference in speed would be negligible in daily use. If you're wondering, the test results above were all done without Jumbo Frames. The ASUS network card is capable of up to 16KB packet size and the Netgear GS110MX switches support up to 9KB. Since I have quite a mixed network, it's better to just stay with the standard 1500 bytes while still achieving excellent speeds as shown.
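If you do want to experiment with jumbo frames on a more homogeneous network, it's easy to check what's actually in effect; a couple of quick sanity checks (the IP address and the exact "Jumbo Packet" wording in the driver will vary):

# Show the current MTU per interface (1500 = standard frames)
netsh interface ipv4 show subinterfaces
# A "do not fragment" ping with a large payload only succeeds if jumbo frames work end-to-end
# (8972 bytes of payload + 28 bytes of headers = a 9000-byte IP packet); with standard frames it fails.
ping 192.168.1.10 -f -l 8972
# Jumbo frames themselves are a per-adapter driver setting, e.g. (DisplayValue string varies by driver):
# Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Jumbo Packet" -DisplayValue "9014 Bytes"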
Also, remember that both the ASUS XG-C100C card and the Netgear GS110MX switch are capable of autonegotiating down to 5GbE and 2.5GbE, so even if a computer in the most distal corner of your home at the end of a long stretch of Cat-5e cable can't handle the full 10GbE, even 2.5GbE is more than 2X the speed of a current Gigabit network. (Remember that a local USB 3.0 connection is up to 5Gbps, and we're talking here about comparable if not higher speeds over a cable that runs the length of a home - cool, right?)
On a practical note, notice that the 10GbE ports do use more power and the network cards do have quite large heatsinks. After 2 hours of sustained high-speed operation, the cards and switches were warm to the touch, though certainly not what I would call hot. Nonetheless, make sure there's adequate ventilation around them (at full speed with all ports active, the Netgear GS110MX switch is rated at 13.2W power consumption compared to something like 4-5W for most 8-port 1GbE switches).
In the future, I can imagine replacing my central "Electrical Panel Switch" with something that provides a few more 10GbE ports. I suppose an 8-port 10GbE switch like this Buffalo BS-MP2008 would be fine. A few more 10GbE ports to reach out to the living room entertainment system, maybe the home theater / sound room, and to the top floor "Upstairs Computer" would cover any "hub" of computing activity in my house... At this point though, there's just no need. For consumers, I suspect 10GbE is just getting started and likely will be the high-speed standard for the foreseeable future as prices continue to drop.
For the audiophiles who are still debating whether Cat-7 cables "sound better" than Cat-6 or Cat-5e... Seriously, folks. As you can see, data integrity can be maintained with good old generic Cat-5e, the kind used in home construction and probably bought from the local Home Depot, at 10X the speed of current 1 Gigabit ethernet. Between the Server and my Workstation computers there are a total of 5 unshielded twisted pair (UTP) patch cables plus lengths of ethernet behind the walls snaking across my house and up a storey, obviously without destructive interference ruining the data. Even without error correction, the dropped UDP packets are minimal. What "sound difference" are some people talking about when using stuff like 3-foot lengths of Cat-7+ cable (like these AudioQuest 2.5' beauties at around US$300 currently)?!
Remember, the data from the ethernet cable typically will need to go into a streamer board/device with buffers and its own microprocessor before the PCM/DSD data is passed on to the DAC; there is no direct connection between the ethernet input to the audio conversion without the intermediate stage even in the same box. And as I showed a year back, I'm simply not hearing/finding audible noise through my ethernet system.
I was going to speak a bit about what I saw on the Audiophile Internets this week but I think I've written enough! Let's chat again next weekend as usual...
Hope you're all having a wonderful time as we cruise into spring. Enjoy the music!
When I first bought the ASUS NIC, I noticed some issues with warm reboots not finding the network. Often I had to power off the machine for a cold restart before the NIC would attach. I updated the firmware from 1.5.44 to 1.5.58 using THIS PACKAGE, programming it with the "asr_d107_1.5.58.clx" firmware file in the "firmware" directory.
I used the "diag -k" command line from an administrator command prompt to install the driver and update the firmware (in the AQUANTIA\AMD64 directory in Windows 10 and Server 2016).
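For reference, the procedure amounted to something like this from an elevated prompt (the download path is a placeholder; the package's own notes are the authority here):

# Change into the 64-bit tool directory inside the extracted firmware package
cd 'C:\Downloads\AQC107_Firmware\AQUANTIA\AMD64'
# Installs the driver and updates the firmware, as described above
.\diag -k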
Since the firmware update, I have not had problems with warm restarts so far (note that I tend to leave my machines on 24/7 so haven't tried too many times!).