Sunday, 22 April 2018

MEASUREMENTS: Going 10 Gigabits/s home ethernet (10GbE) on Cat-5e cables. (ASUS XG-C100C cards & Netgear GS110MX switches)


Over the years, I have upgraded many parts of my computer system. Motherboards have come and gone, CPUs have been updated, GPUs have become more advanced, gigs of RAM added, and hard drives and SSDs replaced with faster, higher-capacity models...

A few months back, I mentioned that one of the more "boring" parts of the computer system, with little actual need for updating, has been the wired network; I have not significantly upgraded anything there in many years. While I suspect many of us upgraded our home ethernet to 1 gigabit/s (1GbE) over the last decade, 10 gigabit/s has been waiting in the wings for larger-scale adoption since the mid-2000's. In business and enterprise settings, one may have already seen fibre-optic networks (for example, those using enhanced Small Form-factor Pluggable [SFP+] connectors), but 10GbE over standard copper RJ-45 8P8C connectors (also known as 10GBASE-T) has been talked about since 2006, and I suspect relatively few of us have incorporated the technology into the home yet (as of early 2018). The price point is only starting to dip into consumer territory.

For those interested, there are big differences between SFP+ fiber and copper cabling as shown above. We can ignore Twinax here, which is like a coaxial cable with two inner leads and is meant for very short runs. Notice that fiber allows for much longer lengths, uses less power per port, and has significantly better latency (we're talking low microseconds here of course; actual latency in a working system between hosts and switches depends on the hardware and the complexity of the topology, and is usually measured in ms). 10GBASE-T, which we'll be looking at today, is meant for lower-cost implementations like in one's home.

Ideally, 10GBASE-T was spec'ed to use Cat-6a cables, as per the chart above, to achieve transmission over 100m. However, I suspect for most of us with ethernet cabling through the house, Cat-5e remains the standard, with maybe newer homes wired with Cat-6. Indeed, my house, built in 2007, was wired with Cat-5e except for a more recently renovated portion of the basement, including my sound room, which got Cat-6. But if we look at the 10GBASE-T specifications, we see that 10GbE could still be achievable over lesser cable so long as distances between devices are shorter - something like up to 45m (~147 feet). Looking at my home and where devices are located between floors, other than the most extreme corners of the top level, my suspicion is that ~130 feet is enough to reach the vast majority of the house from the basement electrical panel where the main switch is located - comfortably within that limit.

And so I thought... Let's give 10GbE a try even with Cat-5e in those walls :-).

Part I: The plan!

First, I'm not doing this just for fun of course! I do quite a bit of photo and video editing/transcoding of data stored on my server machine (think of it as my NAS), run a web server for work with some streaming video, and access the network drives quite a bit from various devices, so some extra speed could be beneficial. Therefore, what I wanted to do was widen the "main highway" on my network between the Server and the rest of the home, and especially between the Workstation and the Server computers. Let me show you what my network topology currently looks like:


As you can see, I currently have a good number of devices all over the home; some, like my HTPC and the TV box in the sound room, aren't even shown. Of course, not every connection there is 1GbE. Some devices like the Raspberry Pi 3, Yamaha RX-V781 receiver, and Squeezebox Transporter are only 100Mbps. When I run my 1GbE tests, it'll be with the current TP-Link TL-SG1008D switch in the Home Office and the D-Link DGS-1016A in the Electrical Panel.

The plan then is to upgrade the network to this:


Going 10GbE will increase the end-to-end throughput from the Workstation to the Server. It will also widen the "pipe" from the Server to the "Electrical Panel Switch", which is the central hub for much of the streaming that happens around the house, including the WiFi traffic and the connection through my cable modem to the Internet (currently ~150Mbps download, ~20Mbps upload).

Part II: The Hardware

Obviously, first we need to upgrade the computers with 10GbE network interface cards (NICs). Here are the ASUS XG-C100C (~US$100) 10GbE cards to be installed in each computer's PCI-E x4 slot:



IMO these are good-looking, compact cards, and an extra half-height bracket is included in the box. The heatsink is pretty substantial and does get slightly warm in prolonged use (not hot enough to feel uncomfortable to the touch). As usual, a driver CD is included, but just go to the Aquantia download site and grab the latest driver for the AQC107 28nm chipset (currently 2.1.001.01 for Windows and Linux - Macs already have native drivers in High Sierra, but check whether there are still driver issues).

And in order to get the 10GbE network "axis" in my home working, I need to connect the devices through 10GbE-capable switches as diagrammed above - here are the 2 Netgear GS110MX boxes (currently about US$200 each), capable of 56Gbps of internal non-blocking switching. Notice that each box has 8 ports of 1GbE and 2 ports of 10GbE (the two 10GbE ports are on the right in the picture). These are very sturdy, "business"-oriented switches in a metal enclosure - fanless, plug-and-play, unmanaged devices; if you want a managed switch, check out the Netgear GS110EMX:


Everything hooked up without any unexpected issues - and I see that the network card is connecting at 10.0Gbps:


Great. Let's do some testing before and after to see if 10GbE is actually working stably in my home with standard Cat-5e wiring!

Part III: Performance

Before installing all the 10GbE gear, I of course needed to measure the existing hardware's speed to make sure all this was not for naught and that I'd actually be getting the expected speed bump!

For raw transfer speeds, I used Microsoft's NTttcp (64-bit command line) with the following pair of commands on the sender and receiver sides, first on the 1GbE network diagrammed above and then with the 10GbE network installed:

   Windows Server 2016:
NTttcp.exe -s -m 8,*,192.168.1.10 -l 2M -a 2 -t 15

   Windows 10:
NTttcp.exe -r -m 8,*,192.168.1.10 -l 2M -a 2 -t 15

192.168.1.10 is simply the "receiver node" for the program, in this case the Windows 10 Workstation machine. This test saturates the network with 8 asynchronous threads over 15 seconds.
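
Before looking at the results, a quick aside: if you ever want to double-check a tool like NTttcp with something independent, even a few lines of Python sockets will do as a rough cross-check. This is only a sketch under assumed settings (the port number and buffer size are arbitrary, and a single Python stream may not fully saturate a 10GbE link) - not what I actually used for the measurements below:

# Crude TCP throughput cross-check with plain sockets.
# Run "python net_check.py recv" on one machine and
# "python net_check.py send 192.168.1.10" on the other.
import socket, sys, time

PORT = 5201                  # arbitrary free port
BUF = 2 * 1024 * 1024        # 2MB chunks, similar in spirit to NTttcp's "-l 2M"

def receiver():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("", PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        total, start = 0, time.time()
        while True:
            data = conn.recv(BUF)
            if not data:                 # sender closed the connection
                break
            total += len(data)
        secs = time.time() - start
        print(f"Received {total / 1e6:.0f} MB in {secs:.1f}s "
              f"= {total * 8 / secs / 1e9:.2f} Gbps")

def sender(host, seconds=15):
    payload = b"\x00" * BUF
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.connect((host, PORT))
        end = time.time() + seconds
        while time.time() < end:
            s.sendall(payload)

if __name__ == "__main__":
    receiver() if sys.argv[1] == "recv" else sender(sys.argv[2])

Anyhow, check out the results between 1GbE and 10GbE as reported on the receiver and sender ends: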



As you can see, the two computers have different CPUs: my Windows 10 Workstation with the faster AMD Ryzen 1700, and the Server computer in the basement with the Intel i5-6500. Overall, as expected, CPU utilization is lower with the Ryzen processor. Going full tilt with 8 threads on the quad-core i5 pushed CPU utilization to 27%.

What is obvious is that even over my Cat-5e wiring, throughput has clearly increased going to 10GbE! In fact, the increase is almost exactly 10X as predicted (from 112MB/s to 1130MB/s). Very nice! Notice also that there was no increase in the number of retransmits or errors comparing 1GbE vs. 10GbE.
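
For perspective, those numbers are very close to the practical ceiling once protocol overhead is accounted for. A quick back-of-the-envelope calculation (assuming a standard 1500-byte MTU, 40 bytes of IP+TCP headers and 38 bytes of Ethernet framing per frame, no TCP options):

# Approximate TCP "goodput" on Ethernet at the standard 1500-byte MTU.
# Per frame: 38 bytes of framing (preamble 8 + MAC 14 + FCS 4 + interframe gap 12)
# plus 20 bytes IP and 20 bytes TCP headers inside the MTU.
MTU = 1500
payload = MTU - 20 - 20         # 1460 bytes of data per frame
on_wire = MTU + 38              # 1538 bytes consumed on the wire
efficiency = payload / on_wire  # ~0.949

for name, line_rate in [("1GbE", 1e9), ("10GbE", 10e9)]:
    print(f"{name}: ~{line_rate * efficiency / 8 / 1e6:.0f} MB/s theoretical maximum")
# -> ~119 MB/s and ~1187 MB/s; the measured 112 MB/s and 1130 MB/s
#    are right up against these ceilings.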

Now, let's try a "real world" test: a very large >30GB file sitting on a Samsung 850 Pro 256GB SSD in my Windows 10 machine, copied to an OCZ Vertex 3 on the Server 2016 box, for a quick peek at how the hardware changes affect things.

First, still at gigabit speeds, here is the difference in copy speed between the onboard gigabit ports (the Workstation's MSI X370 SLI Plus motherboard and the Server's Gigabyte GA-Z170X-Gaming 7 motherboard) and the ASUS XG-C100C cards plugged into those same motherboards on each end:


Interesting - it looks like even without the 10GbE pathway, the ASUS ethernet cards are a bit faster than the motherboards' native ethernet, which according to Device Manager is a "Realtek PCIe GBE Family Controller" on the AMD motherboard and a "Killer E2400 Gigabit Ethernet Controller" on the Intel. The gain isn't big, about 15%, but I think it's a nice demonstration that ethernet cards can make a difference, even if a <20% change is probably not noticeable in daily use.

So, let's upgrade the connection from the Windows 10 machine to the Server 2016 with the 10GbE Netgear switches in between:


Now, clearly there has been an increase in transfer speed by going from 1GbE to 10GbE. But what we see is instructive. With a standard 1GbE network, the transfer proceeded consistently just below 100MB/s, but with a pipe capable of 10X the data rate, we see that the SATA-III SSDs on both ends cannot use the full bandwidth. Initially, the buffer fills at around 400MB/s, then the rate dips down to ~100MB/s for a while as the buffered data is flushed to the SSD, and then we reach a ~133MB/s "steady state" for the remainder of the large file - still >30% faster than at 1GbE with the faster ASUS cards. I can increase the size of the transfer buffers to lengthen that initial high-speed "bump", but ultimately the transfer still stalls while the buffers flush to disk. Since most files are not huge like this 30+GB test file, transfers are typically very fast in day-to-day work.

My suspicion is that the speed limitation comes from the Vertex 3 drive on the Server side plus OS overhead for sharing the directory over the network; benchmarks have shown sequential write speeds for the Vertex 3 of only up to ~148MB/s. Looks like I "need" a faster SSD in that Server :-). Another way to increase HD and SSD throughput would be to RAID 0 a few drives together.
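
If you want to confirm where a bottleneck like this lives, a crude check is to time a big sequential write directly on the suspect drive, taking the network out of the equation entirely. Here's a minimal sketch of that idea (the path and sizes are placeholders, and the final fsync matters so you're not just measuring the OS write cache):

# Rough sequential-write test for a suspect drive (e.g. the server's Vertex 3).
# TEST_FILE is a placeholder path -- point it at the drive you want to test.
import os, time

TEST_FILE = r"D:\write_test.bin"
BLOCK = 4 * 1024 * 1024              # 4MB writes
TOTAL = 8 * 1024 * 1024 * 1024       # 8GB total, enough to get past small drive caches
buf = os.urandom(BLOCK)              # incompressible data

start = time.time()
with open(TEST_FILE, "wb") as f:
    written = 0
    while written < TOTAL:
        f.write(buf)
        written += BLOCK
    f.flush()
    os.fsync(f.fileno())             # force the data to the drive, not just the OS cache
elapsed = time.time() - start
print(f"Sequential write: {written / elapsed / 1e6:.0f} MB/s")
os.remove(TEST_FILE)

If that comes out around the ~148MB/s mark, it confirms the drive (not the network or the NICs) as the limiting factor.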

Something I was curious about was robustness to errors. We know that TCP recovers from errors through retransmission, while UDP, as discussed a while back, has error detection but no recovery, so it is prone to dropped datagrams when the data rate is pushed. There are many reasons for this, including buffering errors as discussed here. Again, between the Workstation and Server machines, I used this pair of commands with the 64-bit version of iPerf3 (3.1.3) to have a look (192.168.1.10 being the address of the Windows 10 machine acting as the "server" this time).

   Windows 10 Workstation:
iperf3.exe -s -i 60

   Windows Server 2016:
iperf3.exe -c 192.168.1.10 -u -t 21600 -b 150M -i 30 -l 32k

Using UDP gives me another indicator of how reliable the network is over those 6 hours: whether datagrams are lost or arrive out of order.

As you can see, the results are really good. I asked the system to stream data continuously at 150Mbps, which is faster than the data rate of the highest-quality A/V stream we can get these days - 100GB UHD Blu-ray (128Mbps). Over 6 hours, without any error correction, the 1GbE network lost 14 packets (0.00011% loss), and the 10GbE lost only 7 packets with 1 arriving out of order (let's call this 8 error "datagrams"), or 0.000065%! I wonder if the out-of-order packet was related to the batch home video transcoding I was running across the network drive during the 10GbE test, which added network load compared to the 1GbE test run overnight in quiet conditions.
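
Those tiny percentages are easy to sanity-check: at 150Mbps with 32KB datagrams over 6 hours we're pushing roughly 12 million datagrams, so a handful of errors really is a one-in-a-million event. A quick check using the test parameters above:

# Sanity check of the UDP loss percentages from the 6-hour iperf3 runs.
bitrate = 150e6                  # -b 150M
duration = 21600                 # -t 21600 (6 hours)
datagram_bits = 32 * 1024 * 8    # -l 32k

total_datagrams = bitrate * duration / datagram_bits        # ~12.4 million
for label, errors in [("1GbE", 14), ("10GbE", 7 + 1)]:      # lost (+ out-of-order)
    print(f"{label}: {errors} of {total_datagrams:,.0f} datagrams "
          f"= {100 * errors / total_datagrams:.6f}%")
# -> ~0.000113% and ~0.000065%, matching the reported results.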

What that result means is that if I were to stream glorious 4K video with lossless multichannel sound at quality beyond today's UHD Blu-ray standard from my basement to the main floor through Cat-5e cables, only around 14ms' worth of data over those 6 hours would be either lost in transmission or delivered out of order through the 10GbE network! For reference, remember that typical current UHD Blu-ray HEVC HDR video averages around 50Mbps. 7.1-channel 24/48 TrueHD + Atmos takes up only another 6.5Mbps. For 2-channel audiophiles, a standard 24/192 uncompressed stream (not even FLAC lossless compression) takes up 9.2Mbps. On the extreme side, uncompressed 24/768 stereo PCM would be ~37Mbps and DSD512 ~45Mbps. I'm obviously asking the network to transmit more than any of that over the 6 hours.
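
Those audio bitrates, by the way, are just bit depth x sample rate x channels for uncompressed PCM (and 1 bit at a very high rate for DSD). If you want to see where the numbers come from:

# Uncompressed audio bitrate = bits per sample x sample rate x channels.
def mbps(bits, rate, channels=2):
    return bits * rate * channels / 1e6

print(f"24/192 stereo PCM: {mbps(24, 192000):.1f} Mbps")       # ~9.2
print(f"24/768 stereo PCM: {mbps(24, 768000):.1f} Mbps")       # ~36.9
print(f"DSD512 stereo:     {mbps(1, 512 * 44100):.1f} Mbps")   # ~45.2
# All a small fraction of the 150Mbps UDP stream used in the 6-hour test.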

One more thing: notice that the packet "Jitter" (circled in blue) dropped from 0.126ms with 1GbE to 0.016ms with 10GbE on average. That's nice and not unexpected, I suppose. OK audiophiles, before any enterprising corporation/individual comes along selling you US$3000 "audiophile" 10GbE network switches, US$1000 Cat-7 ethernet cables, and US$6000 "high end" audio streamers (as in this discussion) claiming better jitter: remember that the jitter we measure from the analogue output of your DAC is not tied to this value, and you had better demand some proof before dropping cash if anyone makes such claims. :-)
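
As an aside for the curious, the "Jitter" iPerf reports for UDP has nothing to do with DAC clock jitter; it's a smoothed estimate of how much datagram interarrival times vary, along the lines of the RFC 3550 (RTP) jitter formula. Roughly, it looks like this (a simplified illustration, not iPerf's actual source code):

# RFC 3550-style interarrival jitter: a running average of how much each
# datagram's transit time differs from the previous one (1/16 smoothing).
def interarrival_jitter(transit_times_ms):
    jitter, prev = 0.0, None
    for transit in transit_times_ms:
        if prev is not None:
            jitter += (abs(transit - prev) - jitter) / 16.0
        prev = transit
    return jitter

# Example: transit times wobbling by a fraction of a millisecond.
print(f"{interarrival_jitter([5.00, 5.12, 4.98, 5.05, 5.01, 5.20]):.3f} ms")

In other words, it characterizes packet arrival timing on the network, which any streamer's buffer absorbs long before the data reaches the DAC.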

For fun, I also ran iPerf3 in TCP mode (which of course provides error recovery) for 1 hour with the following command line:

   Windows Server 2016 (no change to Windows 10 command line):
iperf3.exe -c 192.168.1.10 -t 3600 -P4 -i 30 -l 128k

Notice the absence of the -b "bitrate" switch so it runs at full speed, the read/write buffers set to 128k, and -P4 for 4 parallel streams. And we'll do this for 3600 seconds - a straight hour of data transfer.

I didn't bother re-running this on the 1GbE network since I had already installed the new hardware, but with the 10GbE set-up we see this:

Looks like there's a printout bug in the software.
That's good: sustained 10GbE throughput across the hour at 9.33-9.45Gbps with a final average of 9.43Gbps as circled, which of course correlates with the ~10Gbps NTttcp results shown earlier. The link is indeed sustaining the high speed.
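
Incidentally, if you would rather not squint at console output (especially with printout glitches like the one above), iPerf3 can also emit machine-readable results via its --json flag, which a few lines of Python can summarize. A sketch along those lines (the exact JSON field names may differ a bit between iPerf3 versions, so treat this as illustrative):

# Summarize an iperf3 run captured with:
#   iperf3.exe -c 192.168.1.10 -t 3600 -P 4 --json > run.json
import json

with open("run.json") as f:
    run = json.load(f)

# "sum_received" aggregates all parallel streams (field names as of iperf3 3.1.x).
totals = run["end"]["sum_received"]
print(f"Average throughput: {totals['bits_per_second'] / 1e9:.2f} Gbps, "
      f"{totals['bytes'] / 1e9:.0f} GB moved in {totals['seconds']:.0f} s")

At 9.43Gbps sustained for an hour, that works out to roughly 4TB moved over cables that have been sitting in the walls since 2007.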

Part IV: Summary

Well, there you go. The price point, while still higher than 1GbE of course, is within reach of the home "power user" who wants to strategically speed up their network. Furthermore, this is a demonstration that, at least for me, in my 10-year-old house, without changing the ethernet cables at all and staying with Cat-5e wiring, I can easily achieve 10GbE speed. While Cat-6a and Cat-7 would obviously be better for longer lengths and might allow slightly higher throughput, my suspicion is that unless one has a very large home, Cat-5e is just fine and the difference in speed would be negligible in daily use. If you're wondering, the test results above were all done without Jumbo Frames. The ASUS network card is capable of up to a 16KB packet size and the Netgear GS110MX switches support up to 9KB. Since I have quite a mixed network, it's better to just stay with the standard 1500 bytes while still achieving excellent speeds as shown.
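
For what it's worth, extending the earlier back-of-the-envelope overhead calculation shows that the theoretical gain from jumbo frames is modest anyway (same assumptions: 40 bytes of IP+TCP headers and 38 bytes of Ethernet framing per frame):

# TCP goodput efficiency at standard vs. jumbo MTU.
def efficiency(mtu):
    return (mtu - 40) / (mtu + 38)   # payload over total bytes on the wire

for mtu in (1500, 9000):
    print(f"MTU {mtu}: {100 * efficiency(mtu):.1f}% of line rate "
          f"(~{10e9 * efficiency(mtu) / 8 / 1e6:.0f} MB/s on 10GbE)")
# -> ~94.9% vs ~99.1%: a few percent, not worth complicating a mixed 1500-byte network for.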

Also, remember that both the ASUS XG-C100C card and the Netgear GS110MX switch can autonegotiate the speed down to 5GbE and 2.5GbE, so even if computers in the most distal corners of your home at the end of a long stretch of Cat-5e cable can't handle the full 10GbE, even 2.5GbE is more than 2X the speed of a current gigabit network. (Remember that a local USB 3.0 connection is up to 5Gbps, and we're talking here about comparable if not higher speeds over a cable that runs the length of a home - cool, right?)

On a practical note, notice that the 10GbE ports do use more power and the network cards do have quite large heatsinks. After 2 hours of sustained high-speed operation the cards and switches were warm to the touch, though certainly not what I would call hot. Nonetheless, make sure there's adequate ventilation around them (at full speed with all ports active, the Netgear GS110MX switch is rated at 13.2W power consumption compared to something like 4-5W for most 8-port 1GbE switches).

In the future, I can imagine replacing my central "Electrical Panel Switch" with something that provides a few more 10GbE ports. I suppose an 8-port 10GbE switch like this Buffalo BS-MP2008 would be fine. A few more 10GbE ports reaching out to the living room entertainment system, maybe the home theater / sound room, and the top-floor "Upstairs Computer" would cover every "hub" of computing activity in my house... At this point though, there's just no need. For consumers, I suspect 10GbE is just getting started and will likely remain the high-speed standard for the foreseeable future as prices continue to drop.

For the audiophiles who are still debating whether Cat-7 cables "sound better" than Cat-6 or Cat-5e... Seriously, folks. As you can see, data integrity can be maintained with good old generic Cat-5e, the kind used in home construction and probably from the local Home Depot, at 10X the speed of current gigabit ethernet. Between the Server and my Workstation computers there are a total of 5 unshielded twisted pair (UTP) patch cables plus the lengths of ethernet behind the walls, snaking across my house and up a storey, obviously without interference ruining the data. Even without error correction, the dropped UDP packets are minimal. What "sound difference" are some people talking about when using stuff like 3-foot lengths of Cat-7+ cable (like these AudioQuest 2.5' beauties at around US$300 currently)?!

Remember, the data from the ethernet cable typically needs to go into a streamer board/device with buffers and its own microprocessor before the PCM/DSD data is passed on to the DAC; there is no direct connection from the ethernet input to the audio conversion without that intermediate stage, even in the same box. And as I showed a year back, I'm simply not hearing/finding audible noise coming through my ethernet system.

-----------------------------

There ya go. I hope this article made home networking somewhat interesting for those wanting to achieve faster transfer speeds these days.

I was going to speak a bit about what I saw on the Audiophile Internets this week but I think I've written enough! Let's chat again next weekend as usual...

Hope you're all having a wonderful time as we cruise into spring. Enjoy the music!

ADDENDUM:
When I first bought the ASUS NIC, I noticed some issues with warm reboots not finding the network; often I had to power the machine off for a cold restart before the NIC would attach. I updated the firmware from 1.5.44 to 1.5.58 using THIS PACKAGE, programming it with the "asr_d107_1.5.58.clx" firmware file in the "firmware" directory.

I used the "diag -k" command line from an administrator command prompt to install the driver and update the firmware (in the AQUANTIA\AMD64 directory in Windows 10 and Server 2016).

Since the firmware update, I have not had problems with warm restarts so far (note that I tend to leave my machines on 24/7, so I haven't tried too many times!).

20 comments:

  1. Nice writeup.... I've been looking at doing something similar at home, i.e. a 10G trunk between switches. Am currently using 2 x Netgear GS110TP with a 2 x 1G fiber (LACP bonded) trunk between them, but would quite like to try 10G. I'm more tempted by SFP+ fiber though. Have been looking at the Ubiquiti gear but their only switch with SFP+ is the 48-port and that's "a bit" overkill for me.

    Replies
    1. Hey Occam,
      You've got a nice bonded network already! Yeah, SFP+ fiber would IMO be superior to RJ-45 in many ways - latency, lower power, greater distance, and some might want to bring up galvanic isolation...

      Of course the big factor for home installations would be the $cost$.

  2. Just a comment about ethernet cables. Not only are "audiophile" ethernet cables a rip-off (obviously), but many Cat-6 and above cables are not what they claim to be. Have you seen the Blue Jeans Cable article https://www.bluejeanscable.com/articles/is-your-cat6-a-dog.htm ? It seems that standards are very lax! Staying with Cat-5e was a good move, especially as you got the expected speed increase anyway.

    Replies
    1. Wow Otto!

      Those are shocking results. It's a wonder this isn't talked about more. Well, as a pragmatist, I guess I'm just very thankful that whatever brand of Cat-5e the builder used when the house was built in 2007 was good enough quality to get the job done...

    2. Will def give that a read. HOWEVER, newer cables like Cat6a S/FTP aren't a sham; they serve a purpose, specifically regarding crosstalk and external interference, which is why S/FTP must be properly grounded. Yeah, the standards are lax; that's why one needs to make sure they're getting good copper cable and not some copper-clad aluminum or steel (coax for the latter), plus properly shielded cables, keystones and ends! I've even seen NICs and switches that didn't have properly grounded ports either!!!

      Cat5e can get the job done! The bonus with Cat6a is that the fun and games don't have to end when the HVAC guy or the electrical contractors run or cross a 110V or 220V line next to your cables; that wouldn't be a nightmare with shielded cable.

      So when building a house or running cables for the first time these days, a 1000ft spool of S/FTP Cat6a can be had for under $200 (at least the last time I looked on fs.com).

      That extra shielding can be helpful; having an older home with power all over the place, the shielding was nice extra insurance. However, if you're gonna go that far, I found out that the code for electrical grounding is a far cry from the telecom grounding spec (8 ohm vs. 0.5-1.5 ohm).

      Definitely not a reason to trash already-working Cat5e, but something worth noting.

  3. Yes, the GS110TP are very solid switches with PoE (though I don't use it) and fanless too. Can't say I have much to complain about with them. And fiber is very nice to work with around the house.
    I find the Ubiquiti UniFi system appealing though and would like to move over to it. Cost for 10G has come down A LOT... but it's still not super cheap for the home user. TP-Link make a nice switch with 24 x 1G and 4 x SFP+ in a fanless rack case - check out the T1700G-28TQ... but I think their firmware isn't as polished as some others.

  4. oh boy now you can download even more high quality ads!

    Replies
    1. LOL :-)

      Part of my day is already tagging the spam messages on the blog here so they don't litter up the comments! So maybe I can get that done microseconds faster.

  5. Hi Archimago

    I LOVE your blog. I was surprised that you bought the Oppo UDP-205 (I doubt that it is worth the price tag). Could you compare the quality of HDR-to-SDR tone mapping between the Oppo and madVR?

    As you proved that 16-bit DACs can be cheap and transparent (when properly designed), maybe you could test whether there is a noticeable video quality difference between the Oppo UDP-205, madVR and external video processors?

    Don Munsil, the creator of the Spears & Munsil HD Benchmark, mentions in this podcast at the 36-minute mark https://twit.tv/shows/home-theater-geeks/episodes/8 that he has an Oppo Blu-ray player and a DVDO external video processor. Maybe he knows something that we don't? ; )

    Replies
    1. Hey Unknown,
      I'll do what I can. To really do it right, one would like to grab a representative image straight from the Oppo and compare with madVR at the same target brightness. I'll at least try to "eye-ball" it and let you know what I think subjectively looks good (I've got my HTPC with nVidia GTX1080 connected to the same screen so I can at least flip between the two to compare).

      The price tag of the UDP-205 has gone insane since stock was depleted; I see >$2500 new on Amazon currently. No, I would not pay that kind of price, but at the regular MSRP, the build quality is excellent and top-of-the-class as a 4K UHD player.

      I can tell you though that the sound quality is "reference" if one is aiming for "high fidelity" and "accuracy" to the digital source. Measurements and thoughts to come in the next while...

  6. 2500 USD was the *regular* price for the UDP-205 in the EU...

    What's your opinion about external video processors? Are they obsolete or still relevant? One would think that when the sole purpose of a company's existence is to make video accurate, they should deliver that. But Panasonic 4K Blu-ray players are very good already; I doubt that even video engineers would distinguish Panasonic, Oppo, madVR and DVDO from one another. But I haven't had a chance to experience it myself.

  7. I have been researching the very same problem … and all but decided on a different architecture, described in detail at https://homeservershow.com/forums/topic/16182-converged-10g-home-storage-network/. Two key points:
    1. There is no ‘home’ application which requires 10G networking bandwidth …
    2. … except storage backup.
    4K video playback only requires 15Mbps, which is easily handled by the cheapest/smallest of devices.
    However, anyone with a large media collection faces the problem of how to create another copy of data lost from a failed hard drive. If one's collection were (simplistically) 60GB copied to 2 different disks (and ideally a third offsite) then the copy on the failed disk can be recovered in a few minutes from the backup copy. The difficulty arises over the loss of a 6TB volume. Putting aside for the time being whatever complexity the regime handling the backup copy requires, there is still the matter of reproducing 6TB. At 100MB/s that is still 60,000 seconds or 1,000 minutes or about 16 hours. Here is an application where I think an order of magnitude improvement is warranted.
    As AI is finding, there is no point in having a 10G network … if you haven't got a 10G disk subsystem! And buying a faster SSD is not going to go far for collectors unless one has the funds to replace multi-TB hard disks with SSDs. I think, then, that switching to 10G (switching to 10G – get it?) requires a high-speed storage array: a dedicated RAID 5 card, Intel RST support, a multi-column Storage Space or some such.
    Well perhaps there is one home application requiring 10G: video editing.
    Since 10GBASE-T equipment is still relatively expensive, I believe 2 options are worth considering:
    1. Direct wire from server to workstation i.e. no 10G switching at all.
    2. Thunderbolt holds some promise and it too can avoid switching by using daisy-chaining (with some distance constraints it must be admitted).

  8. Hello, and thank you for sharing your experiences.
    I am also using the ASUS XG-C100C with a direct server connection in mind, from Windows to a Linux server with Samba shares.

    My setup is simple:
    RAID6 HDDs under LVM
    RAID10 SSDs under LVM "Writeback" cache.

    I see the same performance drops when using SSDs under LVM writeback cache.
    With 6x SSDs in RAID10 I can transfer around 20-40 GB before a sudden drop-off from 1 GB/s to between 1-300 MB/s, fluctuating here and there.
    There aren't many times I need to transfer over 40GB, but it bothers me.
    At first I thought it was the drivers on my Windows PC or the kernel driver on Linux.
    But it seems that the TLC-based SSDs can't handle the sustained write performance once the fast cache on the drives fills up.
    I will pull the SSDs and run sustained writes to see which drive is bad and what limits they have.
    For SSDs on 10GbE I think pure MLC or Optane is a better choice than TLC.

  9. I'm only able to achieve about 6 Gbps with iperf3 (3.1.3) with a pair of these cards, connected via a 14ft CAT6 cable. The Windows task manager indicates the connection rate is 10 Gbps on both sides.

    The machines are Windows 7 and Windows 10, both x64.
    Win7 box is running an Asus Z170-AR.
    Win10 box is running an MSI X99 Raider.

    Both machines have all of their PCI-E slots in use, so I guess this may have something to do with it. Actually had to remove one SATA controller from the Z170-AR to plug in the Asus XG-C100C.

    Guess I will try to remove other cards one by one, but this is not a fun process.

    Replies
    1. Don't think the bottleneck I'm running into is hardware-related, actually. I achieved 9.9 Gbps with jumbo packets enabled in the driver settings.
      I just can't get past 6 Gbps without that.

      Maybe something to do with different versions of the drivers, or different OS ?

      FYI, I'm using drivers 2.1.12.0 dated 10/19/2018 from Aquantia - not the Asus drivers which are 2.1.8 and a few months older. The limitation was the same with Asus drivers, though.

      It might come down to the OS as well, or the version of iperf. I'm using 3.1.3. I used the exact same command lines for server and client sides, except for the IP address (chose some in the 10.10.x.x space).

      For now I'm running with dual NICs also. My 1Gbps mobo NICs are connected to my 1Gbps switch which goes to the router.

      The 10 gig NICs are only connected to each other, for now. I only have one more machine which could benefit from 10gig at this time, and it's not really crucial for that one.

      I hope we see some 5-10 gig NICs in USB 3.1 gen 1 & gen 2 formats in the future. My Odroid XU4 certainly could use one of those. Currently peaks at gigabit speed but that's too slow these days for a NAS.

  10. OS made the difference. With Win10 to Win10, I am able to achieve 9.9 Gbps even without jumbo frames. Win7 seems to be a bottleneck with this card.
