@kralo
Created March 18, 2025 20:17
Latency and Speed Comparisons; Local Loop Networking in Linux

In 2025, can you max out a 10 Gbps link on the local network? What do you need?
-> Fast PCIe slots with >= x4 lanes.

I set out to test this.

TL;DR: 1000Base-T is astonishingly well optimized and already has very low latency; 10G is not going to change much.

The 100M and 1G tests were performed on the same machine via a loopback cable/fiber; all tests were done without a router.

It is not normally possible to assign the same IP subnet to different interfaces on the same machine and have the traffic actually cross the cable, but network namespaces make it work; go see this excellent guide.
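The problem namespaces solve: if both addresses live in the same (default) namespace, the kernel treats the peer address as local and answers via the loopback path, so nothing ever leaves the NIC. A quick way to check which path would be used (my addition, address taken from the setup below):

ip route get 192.168.100.1
# a "dev lo" in the answer means the kernel would short-circuit the traffic locally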

100 Mbps over Fiber (and USB 2.0)

The devices used are two LyconSys FiberGecko 100 USB 2.0 to Ethernet adapters, each with a 100Base-SX SFP, connected over OM4 multimode fiber.

Complete Setup:

sudo ip netns add client100M
sudo ip netns add server100M

sudo ip link set dev enx005xxxxcc013 netns server100M
sudo ip link set dev enx005xxxxcc05c netns client100M
sudo ip netns exec server100M ip link set dev enx005xxxxcc013 up
sudo ip netns exec client100M ip link set dev enx005xxxxcc05c up
# asix 2-4:1.0 register 'asix' at usb-0000:00:14.0-4, LyconSys FiberGecko 100 USB 2.0 to Ethernet Adapter, 00:50:c2:8c:c0:5c
# asix 2-4:1.0 enx005xxxxcc05c: link up, 100Mbps, full-duplex, lpa 0x0101

sudo ip netns exec server100M ip addr add dev enx005xxxxcc013 192.168.100.1/24
sudo ip netns exec client100M ip addr add dev enx005xxxxcc05c 192.168.100.2/24

cd /a0e2e5/@/opt/progs/iperf
sudo ip netns exec server100M src/iperf3 -s --verbose

# sudo ip netns exec client100M src/iperf3 -c 192.168.100.1 --bidir -t 60 --verbose

Test Complete. Summary Results:
[ ID][Role] Interval           Transfer     Bitrate         Retr
[  5][TX-C]   0.00-60.00  sec   632 MBytes  88.4 Mbits/sec    1            sender
[  7][RX-C]   0.00-60.01  sec   639 MBytes  89.3 Mbits/sec                  receiver
snd_tcp_congestion bbr
rcv_tcp_congestion bbr

iperf Done.
iperf 3.18+
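For context, a rough ceiling for TCP over 100 Mbit/s Ethernet at MTU 1500 (my arithmetic, not from the logs):

1460 bytes of TCP payload per 1538 bytes on the wire (preamble + Ethernet header + FCS + inter-frame gap, ignoring TCP options) ≈ 94.9 Mbit/s of goodput

The ~88-89 Mbit/s seen above presumably also pays for the USB 2.0 hop and the simultaneous reverse-direction traffic.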


# sudo ip netns exec client100M ping -i 0.01 -c 500 192.168.100.1
# Did you know you can do ping -A (adaptive)?
--- 192.168.100.1 ping statistics ---
500 packets transmitted, 500 received, 0% packet loss, time 8022ms
rtt min/avg/max/mdev = 0.289/0.419/0.556/0.049 ms
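Regarding ping -A: adaptive mode adjusts the inter-packet interval to the measured round-trip time so that at most one unanswered probe is in flight. A sketch of the equivalent run (not part of the original transcript):

# sudo ip netns exec client100M ping -A -c 500 192.168.100.1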

1G over Copper (1000Base-T)

One NIC is on the mainboard; the second is a PCIe expansion card.

sudo ip link set dev enp2s0 netns server1G
# r8169 0000:02:00.0 enp2s0: Link is Up - 1Gbps/Full - flow control rx/tx
# Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168/8211/8411 PCI Express Gigabit Ethernet Controller [10ec:8168] (rev 15)
#  Subsystem: Gigabyte Technology Co., Ltd Onboard Ethernet [1458:e000]

sudo ip link set dev enp6s0 netns client1G
# r8169 0000:06:00.0 enp6s0: Link is Up - 1Gbps/Full - flow control rx/tx
# Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller [10ec:8161] (rev 15)
#  Subsystem: Realtek Semiconductor Co., Ltd. TP-Link TG-3468 v4.0 Gigabit PCI Express Network Adapter [10ec:8168]

[...]
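Before running iperf3, the negotiated speed and duplex can also be double-checked with ethtool (my addition; namespace and interface names as above):

sudo ip netns exec server1G ethtool enp2s0 | grep -E 'Speed|Duplex'
# expected: Speed: 1000Mb/s, Duplex: Full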

# sudo ip netns exec client1G src/iperf3 -c 192.161.100.1 --bidir -t 60 --verbose
Test Complete. Summary Results:
[ ID][Role] Interval           Transfer     Bitrate         Retr
[  5][TX-C]   0.00-60.00  sec  6.55 GBytes   937 Mbits/sec    0            sender
[  7][RX-C]   0.00-60.00  sec  6.50 GBytes   931 Mbits/sec                  receiver
snd_tcp_congestion cubic
rcv_tcp_congestion cubic

iperf Done.

# sudo ip netns exec client1G ping -i 0.01 -c 500 192.161.100.1
--- 192.161.100.1 ping statistics ---
500 packets transmitted, 500 received, 0% packet loss, time 5533ms
rtt min/avg/max/mdev = 0.084/0.198/0.576/0.063 ms

10G (10GBase-SR over Multimode-Fiber, OM4)

NB: The 10GBASE-SR Standard is from 2002.

This was tricky.

My first card was a tn40xx-based card, for which you need to build the kernel yourself (Linux 6.13.7).
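A minimal sketch of what that looks like, assuming the driver is already in the 6.13.7 source tree and merely missing from the distro config (config symbol as I read the mainline Kconfig; verify against your tree):

cd linux-6.13.7
cp /boot/config-"$(uname -r)" .config     # start from the running kernel's config
scripts/config --module TEHUTI_TN40       # assumed Kconfig symbol for the tn40xx driver
make olddefconfig
make -j"$(nproc)" && sudo make modules_install install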

Then, in a local loopback test, I was only able to get about a third of the desired speed:

[  5][TX-C]   0.00-60.00  sec  21.9 GBytes  3.13 Gbits/sec    0            sender
[  7][RX-C]   0.00-60.00  sec  21.7 GBytes  3.11 Gbits/sec                  receiver
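Why only a third? A back-of-the-envelope, assuming a PCIe 2.0 x1 slot (numbers mine, not from the logs): 5 GT/s with 8b/10b encoding gives 4 Gbit/s of raw bandwidth per lane; after PCIe protocol overhead roughly 3.2 Gbit/s of payload remain on a x1 link, which lines up with the ~3.1 Gbit/s above, while x4 would give ~16 Gbit/s raw.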

Indeed, I had hit the speed limit of the PCIe slot: 5 GT/s at x1 width. So I put one card in a second system, and voilà:

sudo ip netns add client10G
sudo ip netns add server10G

sudo ip link set dev enp3s0 netns server10G
# tn40xx 0000:03:00.0 enp3s0: PHY [tn40xx-0-300:01] driver [QT2025 10Gpbs SFP+] (irq=POLL)
# Ethernet controller [0200]: Tehuti Networks Ltd. TN9310 10GbE SFP+ Ethernet Adapter [1fc9:4022]
#  Subsystem: Edimax Computer Co. 10 Gigabit Ethernet SFP+ PCI Express Adapter [1432:8103]


sudo ip link set dev enp5s0 netns client10G
# ixgbe 0000:02:00.0: Multiqueue Enabled: Rx Queue count = 12, Tx Queue count = 12 XDP Queue count = 0
# ixgbe 0000:02:00.0: 16.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x4 link at 0000:00:1b.4 (capable of 32.000 Gb/s with 5.0 GT/s PCIe x8 link)
# ixgbe 0000:02:00.0 enp2s0: NIC Link is Up 10 Gbps, Flow Control: RX/TX
# Ethernet controller: Intel Corporation 82599 10 Gigabit Network Connection (rev 01)
#	 Subsystem: Beijing Sinead Technology Co., Ltd. 82599 10 Gigabit Network Connection


iperf 3.18+
Test Complete. Summary Results:
[ ID][Role] Interval           Transfer     Bitrate         Retr
[  5][TX-C]   0.00-60.00  sec  65.7 GBytes  9.40 Gbits/sec   35            sender
[  7][RX-C]   0.00-60.00  sec  59.7 GBytes  8.54 Gbits/sec                  receiver
snd_tcp_congestion cubic
rcv_tcp_congestion cubic
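The ~8.5 Gbit/s receive figure under bidirectional load still leaves a little on the table; common knobs to try (not tested here; namespace and interface names as above, server address is a placeholder) are parallel streams and jumbo frames:

# sudo ip netns exec client10G src/iperf3 -c <server address> -P 4 --bidir -t 60
# sudo ip netns exec client10G ip link set dev enp5s0 mtu 9000   # and the same on the server side, if NICs and SFP+ modules support it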



Same system, limited speed:
--- 192.168.10.1 ping statistics ---
500 packets transmitted, 500 received, 0% packet loss, time 5514ms
rtt min/avg/max/mdev = 0.109/0.238/0.539/0.039 ms


Different systems, full speed:

--- 10.16.1.11 ping statistics ---
30000 packets transmitted, 30000 received, 0% packet loss, time 331720ms
rtt min/avg/max/mdev = 0.044/0.359/0.861/0.130 ms
