LR-Link, a maker of networking solutions from China, has announced its first 10 GbE NIC, the wordily-named LREC6860BT. The new NIC is the first retail product we've seen based on a design from Tehuti Networks, an Israel-based developer, bringing some welcome competition to the 10GigE NIC market. LR-Link is aiming the card at the (relative) mass market for standalone NICs, and it is now selling in Japan as well as online for less than $100.

Under the hood, the LR-Link LREC6860BT is based on Tehuti Networks' TN4010 MAC, paired with Marvell's Alaska X 88X3310P 10 GbE transceiver. The card features a PCIe Gen 2 x4 interface as well as an RJ45 connector that supports 100M, 1G, 2.5G, 5G, and 10G speeds using Cat5e/Cat6/Cat6A cabling. The card supports contemporary operating systems from Apple, Microsoft, and VMware as well as various Linux distributions, making the NIC drop-in compatible with most computers in use today.
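For context on the interface choice: a PCIe Gen 2 x4 link leaves comfortable headroom above a single 10 GbE port. A quick back-of-the-envelope check (our own arithmetic, not a vendor figure):

```python
# Does a PCIe Gen 2 x4 link have enough bandwidth for a line-rate 10 GbE port?
PCIE_GEN2_GT_PER_LANE = 5.0      # GT/s per lane for PCIe Gen 2
ENCODING_EFFICIENCY = 8 / 10     # Gen 1/2 use 8b/10b encoding: 8 data bits per 10 on the wire
LANES = 4

usable_gbps = PCIE_GEN2_GT_PER_LANE * ENCODING_EFFICIENCY * LANES
print(f"Usable PCIe bandwidth: {usable_gbps:.0f} Gbit/s per direction")
```

Gen 2's 8b/10b encoding is why the usable figure is 16 Gbit/s rather than the raw 20 GT/s the four lanes signal, still well above what the single 10 GbE port can demand.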

LR-Link's 10 GbE NIC
                         LREC6860BT
Silicon (MAC)            Tehuti Networks TN4010
Transceiver              Marvell Alaska X 88X3310P
100BASE-T                Yes
1000BASE-T               Yes
2.5GBASE-T               Yes
5GBASE-T                 Yes
10GBASE-T                Yes (over Cat6A cables)
Ports                    1
OS Compatibility         Apple macOS 10.10.3 or later
                         Microsoft Windows 7 / 8 / 8.1 / 10 or later
                         Windows Server 2008 R2 / 2012 / 2012 R2 / 2016 or later
                         VMware ESX / ESXi 5.x / 6.x or later
                         Linux stable kernel 2.6.x / 3.x or later
Price                    $83 - $91
Release Date             Q3 2018
Additional Information   Link

The LREC6860BT is currently available from at least one retailer in Japan for ¥10,164 ($91) including VAT, which is not unreasonable given that PC components tend to cost more in Japan than in the rest of the world. Unfortunately, products from LR-Link aren't readily available from retailers outside China and Japan, but the company's devices (including the 10 GbE NIC) can still be purchased from official stores on AliExpress, eBay, and JD.com.

10 GbE networks are not yet widespread in SOHO environments, primarily because there are not many reasonably-priced 10 GbE switches. Meanwhile, a number of companies have released relatively affordable 10 GbE NICs based on chips from Aquantia over the past few quarters, anticipating demand for such cards from enthusiasts. Aquantia is not the only provider of silicon for inexpensive 10 GbE cards, however. Tehuti Networks is considerably less well known because it has focused on enterprise OEMs rather than AIBs and retail. Nonetheless, having a second player in the space for cheap 10GigE/NBASE-T silicon is an important part of driving down the cost of the technology, and boosting adoption, even further.

Source: PC Watch

Comments

  • AdrianB1 - Friday, August 10, 2018 - link

    You are absolutely right, but I was thinking about a different target for this: not enterprises or SME, not office users, but home users that want a faster way to transfer their (porn/pirated) movies from their main computer to the NAS in the same room or to the player next to the big TV screen. In that case they can buy pre-built cables in either fiber or Twinax and link the stuff via small port count SFP+ switches. For example my NAS is next to my workstation, the stuff I record with the camera is transferred on a 1Gbps connection and I have a few TB of that. In this case I would go for Twinax. I am not considering laying out fiber across my home, I don't need it, but where I need more speed I would go for Twinax, fiber and copper in this particular order.
  • nathanddrews - Friday, August 10, 2018 - link

    Twinax is badass.
  • genzai - Thursday, August 9, 2018 - link

    Cards with the same hardware have been available at retail and online for over a year from companies like Akitio and Startech. This lower price point is new, but Akitio has been in the ~$120 range when on sale for a long time already.
  • abufrejoval - Thursday, August 9, 2018 - link

    I can well understand the pain and frustration of waiting for affordable 10Gbit.

    But I've played with 10GBase-T for many years now, thanks to some employer sponsoring and some other affordable 10GBase-T NICs, which can, after all, always be *directly connected*, if no arms and legs are left to pay for the switch.

    In the office lab, I have had 48-port 10GBase-T switches to play with for some years. But even in the home lab, an Asus XG-U2008 switch will connect two 10Gbit ports with eight 1Gbit ports *and* be completely silent. If you consider that 10 Watts/port is where 10GBase-T started, that's quite a feat. And considering that the 48-port HP switch has enough fan capacity to push out 500 Watts of heat, I can assure you it is very noisy starting up.

    You and I want 16 ports at $160, completely silent, and that won't happen at 10 Watts/port. Green, power-reduced, and NBase-T Ethernet have a good chance of changing that: 3 Watts may be good enough for 10Gbit, and 2.5Gbit can be had in near silence.

    Netgear does sell some switches offering NBase-T at 20 Watts or so for the whole switch, not quite passive but perhaps at acceptable noise levels during quiet hours, which give you something like 1x 10Gbit, 2x 5Gbit, 2x 2.5Gbit, and the remaining ports at 1Gbit.

    Turns out, 10Gbit on the wire won't get you 10x performance in most workloads I have measured anyway, so this may be a better deal than you think. At least with Windows, I haven't been able to get much better than 250Mbyte/s across a 10Gbit link.

    I operate one 14TB primary server in my home lab, running Windows server. And a backup, same OS, same capacity. So every now and then, I switch on the backup, and have it synchronize the files, some of which are quite small.

    And because they are so small, what really determines the speed of synchronization is latency, not bandwidth. Sometimes the effective data transfer rates drops to kilobytes/s, sometimes on bigger files it will go to 250Mbyte/s, but I’ve never seen 1GByte/s or even half.
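    That latency-bound behavior is easy to model. A minimal sketch (the per-file overhead here is an illustrative assumption for protocol metadata chatter, not a measured SMB figure):

```python
# Effective throughput when each file costs a fixed per-file overhead.
LINK_BW = 1_000_000_000        # bytes/s, roughly a 10GbE link
PER_FILE_OVERHEAD_S = 0.005    # assumed fixed cost per file (open/close, metadata round-trips)

def effective_rate(file_size_bytes: float) -> float:
    """Bytes/s actually achieved for a stream of equally-sized files."""
    transfer_time = file_size_bytes / LINK_BW + PER_FILE_OVERHEAD_S
    return file_size_bytes / transfer_time

for size in (4_000, 4_000_000, 4_000_000_000):   # 4 KB, 4 MB, 4 GB
    print(f"{size:>13,} B -> {effective_rate(size) / 1e6:8.1f} MB/s")
```

    With those assumed numbers, 4 KB files crawl at under 1 MB/s while 4 GB files run near line rate, which matches the kilobytes/s-to-250Mbyte/s spread described above.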

    There are quality LSI/Avago/Broadcom hardware RAID arrays on both sides; they certainly do >500MB/s sequential: I've measured that, copying data to RAM disks. I have also copied RAM disk to RAM disk and been quite disappointed, because it's nowhere near the 70GB/s my 4-channel Haswell Xeon is supposed to be capable of, nor even the 25GB/s it should manage per channel.

    Linux is quite a different story: at least with iperf3 there is absolutely no problem pushing 960MByte/s at idle CPU clocks, even with a single thread. I've done some iSCSI testing some years back and, while I don't have the details in my head any more, I know that even with Linux at both ends the difference between the theoretical max and what got delivered was quite big.
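    For anyone who wants to reproduce that kind of single-stream test without iperf3, the idea can be sketched in plain Python (a toy over loopback, not iperf3 itself; point the address at a real host to test the wire):

```python
import socket
import threading
import time

PAYLOAD = b"x" * 65536          # 64 KiB chunks per send
DURATION = 1.0                  # seconds to run the test

def sink(server: socket.socket, total: list):
    # Accept one connection and count every byte received until the sender closes.
    conn, _ = server.accept()
    while True:
        data = conn.recv(1 << 20)
        if not data:
            break
        total[0] += len(data)
    conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))   # loopback; use the NIC's address for a real wire test
server.listen(1)
received = [0]
t = threading.Thread(target=sink, args=(server, received))
t.start()

client = socket.socket()
client.connect(server.getsockname())
start = time.monotonic()
deadline = start + DURATION
while time.monotonic() < deadline:
    client.sendall(PAYLOAD)
client.close()                  # sender closes; sink drains and exits
t.join()
server.close()
elapsed = time.monotonic() - start

print(f"{received[0] / elapsed / 1e6:.0f} MB/s single-stream")
```

    Loopback numbers mainly show CPU and stack overhead; on a real link the same single-stream pattern is what exposes the Windows-vs-Linux gap described above.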

    So if you really want to get better than 1Gbit, get a cheapo NBase-T like this or the Aquantia 107 and do either direct connects, use an ordinary Linux PC as switch or get one of those entry level NetGear boxes which support 2.5, 5 and 10Gbit and see if you really are actually able to take advantage of 10Gbit speeds.

    No need to complain any more, time to tune!
  • oRAirwolf - Friday, August 10, 2018 - link

    I have a direct connection between my Windows 10 desktop and Freenas server with an Aquantia AQC107 in the desktop and an Intel X540 T1 in the Freenas server, which is a Dell T320 with 8 x WD Red 8TB drives. I get about 800 megabytes per second sequential both ways and about 1.1 gigabytes per second if I transfer something from the Freenas ARC cache to my desktop SSD array. Not sure how you aren't getting speeds like that from Windows. Are you not using jumbo frames?
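    On the jumbo-frames point: the gain is straightforward to quantify from per-frame overhead alone (header sizes are the standard Ethernet/IPv4/TCP figures):

```python
# Theoretical TCP payload efficiency on Ethernet for a given MTU.
ETH_OVERHEAD = 38        # preamble + Ethernet header + FCS + inter-frame gap, bytes per frame
IP_TCP_HEADERS = 40      # IPv4 (20) + TCP (20), no options

def efficiency(mtu: int) -> float:
    payload = mtu - IP_TCP_HEADERS
    return payload / (mtu + ETH_OVERHEAD)

for mtu in (1500, 9000):
    eff = efficiency(mtu)
    print(f"MTU {mtu}: {eff:.1%} -> {10_000 * eff / 8:.0f} MB/s max on 10GbE")
```

    Standard 1500-byte frames cap out around 95% efficiency before any protocol chatter; a 9000-byte MTU pushes that past 99%, which is part of why jumbo frames help sustained sequential transfers.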
  • AdrianB1 - Friday, August 10, 2018 - link

    I've seen several tests on Internet where 600-800 MB/sec file copy was achieved on Windows. Aquantia was in the 600 MB/sec range, Intel was in the 800 MB/sec.
  • azazel1024 - Friday, August 10, 2018 - link

    Yeah, I wonder if you've got network configuration issues going on or something. Or maybe running a stale version of SMB or something. I've got dual 1GbE links between my desktop and server running through my switch using SMB Multichannel and I have no trouble pushing 235MiB/sec, which is the max link speed with overhead. Granted, that is slower than what you are talking...but only barely.

    Small files it'll slow down a fair amount, but that is as much my RAID0 array, which is a pair of 3TB Seagate Barracudas in both machines. Smallish files like pictures and MP3s will run at more like 80-120MiB/sec. However, if I put the SSD in my desktop and in my server on the network and do a file copy of even small files like that I can transfer a 2GiB folder of 2-4MiB images at about 200-230MiB/sec (server has a not super fast rather old first generation SATAIII 60GB SSD as the boot drive). Large files tick along at the link limit of 235MiB/sec.

    SMB has a fair amount of overhead per file, which is usually a hit you see with small files because of how the network file system handles communication and stuff. But with a 10GbE link, if drive speed were taken out, I'd think you'd still see at least 800-900MiB/sec RAM disk file transfer between reasonably fast machines with small files.
  • abufrejoval - Friday, August 10, 2018 - link

    I've carefully rechecked everything and I can confirm that I indeed *can* get to 400MB/s, which is what both RAIDs are capable of sustaining for *huge* files like clonezilla images.

    So I'll have to partially retract the Windows 'dissing' :-)

    One issue I have with Windows is that iperf3 results are terribly inconsistent: I may get 6Gbit/s on a first run, and it will then drop to 2Gbit/s ever after.

    Perhaps Windows is all *too smart* and notices that nothing useful is happening...

    And it can't be renegotiating line speeds as the ASUS switch actually doesn't support the NBASE-T intermediate line rates and the LEDs stay blue for 10Gbit all through.

    None of that with Linux on both ends: 960MB/s and never lower.

    In any case...

    Since the vast majority of files are relatively small, actual copies tend to be vastly slower, as 512MB of write-back cache cannot quite compensate for the fact that the OS serializes writes to protect metadata from corruption, and it's still mechanical disks underneath.

    So let me just say, that you have to manage your expectations and that a faster network is more likely to expose other bottlenecks.
  • DigitalFreak - Thursday, August 9, 2018 - link

    You can already buy the Aquantia based card for $84. https://amzn.to/2KGVqSO It's QNAP branded, but works just fine in a Windows PC with the drivers from Aquantia's site.
  • Beaver M. - Friday, August 10, 2018 - link

    Uh... so that means 10 Gbit won't work with Cat7 cables on this card??
