OPNsense - Intel i226 NIC tunables
<!-- Ignore ICMP redirect messages (hardening). -->
<item>
<tunable>net.inet.icmp.drop_redirect</tunable>
<value>1</value>
<descr/>
</item>
<!-- Bind the netisr worker threads to CPU cores. -->
<item>
<tunable>net.isr.bindthreads</tunable>
<value>1</value>
<descr/>
</item>
<!-- -1 = one netisr thread per CPU core. -->
<item>
<tunable>net.isr.maxthreads</tunable>
<value>-1</value>
<descr/>
</item>
<!-- log2 of the number of RSS buckets; 2 = 4 buckets, matching a quad-core CPU. -->
<item>
<tunable>net.inet.rss.bits</tunable>
<value>2</value>
<descr/>
</item>
<!-- Enable receive-side scaling (spreads NIC load across cores). -->
<item>
<tunable>net.inet.rss.enabled</tunable>
<value>1</value>
<descr/>
</item>
<!-- Maximum number of mbuf clusters (network buffers). -->
<item>
<tunable>kern.ipc.nmbclusters</tunable>
<value>1000000</value>
<descr/>
</item>
<!-- Maximum number of page-size jumbo mbuf clusters. -->
<item>
<tunable>kern.ipc.nmbjumbop</tunable>
<value>524288</value>
<descr/>
</item>
<!-- Raise the interrupt-storm detection threshold. -->
<item>
<tunable>hw.intr_storm_threshold</tunable>
<value>10000</value>
<descr/>
</item>
<!-- IPv4 input (netisr) queue length. -->
<item>
<tunable>net.inet.ip.intr_queue_maxlen</tunable>
<value>3000</value>
<descr/>
</item>
<!-- IPv6 input (netisr) queue length. -->
<item>
<tunable>net.inet6.ip6.intr_queue_maxlen</tunable>
<value>3000</value>
<descr/>
</item>
<!-- Disable flow control for the Intel ix (10 GbE) driver; a no-op on igc-only hardware. -->
<item>
<tunable>hw.ix.flow_control</tunable>
<value>0</value>
<descr/>
</item>
<!-- Disable flow control on each igc (i226) port. -->
<item>
<tunable>dev.igc.0.fc</tunable>
<value>0</value>
<descr/>
</item>
<item>
<tunable>dev.igc.1.fc</tunable>
<value>0</value>
<descr/>
</item>
<item>
<tunable>dev.igc.2.fc</tunable>
<value>0</value>
<descr/>
</item>
<item>
<tunable>dev.igc.3.fc</tunable>
<value>0</value>
<descr/>
</item>
<!-- deferred = queue inbound packets for the netisr threads instead of direct dispatch. -->
<item>
<tunable>net.isr.dispatch</tunable>
<value>deferred</value>
<descr/>
</item>

jorisvervuurt commented Jul 18, 2023

Above are the custom tunables I set for an Intel N6005 mini PC that has four Intel i226 NICs and is running OPNsense 23.1.11. I've copied them from a configuration export (not all of them were items inside the <sysctl> block), but you can set them all manually via the System -> Settings -> Tunables section.
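
A quick way to confirm the values took effect is to query them from the OPNsense shell (option 8 in the console menu) with the standard sysctl command; note that some of them, like the net.isr.* and net.inet.rss.* ones, are boot-time tunables that only apply after a reboot:

sysctl net.isr.dispatch net.isr.maxthreads net.inet.rss.enabled net.inet.rss.bits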

Some more settings:

  • In the Interfaces -> Settings section, I have disabled Hardware CRC, Hardware TSO and Hardware LRO. VLAN Hardware Filtering is disabled too. Enabling these options resulted in weird issues, so I'd advise against enabling them (the OPNsense and pfSense docs also advise disabling them). You can verify the resulting interface flags from the shell, as shown after this list.
  • In the Firewall -> Settings -> Advanced section, I have set Firewall Optimization to conservative.
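
To double-check the offloading state per NIC, inspect the interface flags from the shell; offloads such as TXCSUM, RXCSUM, TSO4 and LRO show up in the options list when enabled (igc0 here is just an example port name):

ifconfig igc0 | grep options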

Interface configuration (depends on your ISP; I'm using KPN fiber in The Netherlands):

  • I have created a WAN_RAW interface with an MTU of 1512.
  • I have created a WAN_INTERNET PPPoE interface with an MTU of 1508 (this results in a PPP MTU of 1500; see the arithmetic after this list).
  • I have created a LAN interface with the default MTU (1500).
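
The MTU values follow from standard per-layer overhead, not anything specific to this setup: PPPoE adds 8 bytes (a 6-byte header plus a 2-byte protocol ID), and an 802.1Q VLAN tag adds 4 bytes.

1508 (PPPoE interface MTU) - 8 (PPPoE overhead)  = 1500 (PPP MTU)
1508 (VLAN interface MTU)  + 4 (802.1Q VLAN tag) = 1512 (WAN_RAW parent MTU)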

I have absolutely no issues getting around 940/940 Mbps through the PPPoE interface (to the outside world), which is the limit because the fiber NTU only has a Gigabit Ethernet port.

@jorisvervuurt

IMPORTANT NOTICE

This gist has been moved to GitLab and it will no longer be updated here on GitHub.


dezza commented Jul 22, 2024

Thanks for these. Commenting here since your GitLab is private.

I have a HUNSN N100 OPNsense box with 4x i226-V.

Disabling hardware CRC actually caused iperf speeds to drop to 300 Mbit/s vs. 2.1 Gbit/s, ouch...

I'm going to experiment with the rest of the settings, though. Did you perform any tests, or can you share how you dialed in each parameter?


jorisvervuurt commented Jul 23, 2024

Hi @dezza. That's weird. Do you perhaps run IDS/IPS? I don't, but even with hardware offloading disabled I get full line rate (I use these same tunables on a 6x i226 N100 unit from Topton). I determined these tunables using the documentation of both OPNsense and pfSense, as well as some other sources. Unfortunately, I didn't write down those sources.

https://docs.opnsense.org/troubleshooting/performance.html

https://docs.netgate.com/pfsense/en/latest/hardware/tune.html


vhuy036 commented Aug 18, 2024

Hi @jorisvervuurt. Thank you for the tips.
Could you please give more detailed instructions on the interface configuration part?
Sorry for the newbie question, but can you share how to create a WAN_RAW and a WAN_INTERNET at the same time?


jorisvervuurt commented Aug 19, 2024

Hey @vhuy036. Do you also use KPN as your ISP? KPN supports an MTU of 1512 on WAN and delivers internet on a separate VLAN. So basically:

  • WAN_RAW is the physical interface with an MTU of 1512 (both the IPv4 and IPv6 configuration types set to None).
  • Create a new VLAN (tag 6) on that physical interface; this will be WAN_INTERNET.
  • Open the edit page of that newly created interface, set the IPv4 Configuration Type to PPPoE and the MTU to 1508. Username and password are both 'internet'.
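
Once the PPPoE session is up, you can confirm the negotiated MTU from the OPNsense shell (assuming the PPPoE device is named pppoe0; the actual device name is shown in Interfaces -> Overview):

ifconfig pppoe0 | grep mtu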


vhuy036 commented Aug 19, 2024

@jorisvervuurt
Thank you for the detailed instructions.
Unfortunately, I just found out that the problem is that my ISP uses PPPoE for IPv4 and IPoE for IPv6, which doesn't seem to be supported in OPNsense.


mbrouwer commented Jan 27, 2025

@jorisvervuurt Thank you! Finally some tunables that work, or maybe it's the MTUs. I moved from pfSense to OPNsense on a Protectli FW4C and had horrible download speeds. Virtualized under Proxmox on an MS-01 it did work, but that was less than ideal: rebooting my Proxmox took down my whole network... Now I get 1019 Mbps down and 1120 Mbps up, which is weird on a 1 Gbps connection, but I'll take it...

I restored from my Proxmox install; hopefully the IPTV works so I won't have to go through that hell again :)


dezza commented Apr 20, 2025

My current settings (i226-V of course, N100):

# Too high (3500+) would cause jitter or long connection-establishment delays
net.inet6.ip6.intr_queue_maxlen=3000
net.inet.ip.intr_queue_maxlen=3000
# Too high (16000+) would cause jitter or long connection-establishment delays
hw.igc.max_interrupt_rate=12000 # boot-time, needs reboot
net.inet.tcp.soreceive_stream=1 # boot-time
net.isr.maxthreads=-1 # boot-time
net.inet.rss.enabled=1 # boot-time
net.inet.rss.bits=2 # boot-time
net.isr.bindthreads=1 # boot-time
hw.igc.rx_process_limit=-1
net.isr.dispatch=direct 
hw.ix.flow_control=0
dev.igc.0.fc=0
dev.igc.1.fc=0
dev.igc.2.fc=0
dev.igc.3.fc=0
dev.igc.4.fc=0
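
For quick experiments, the runtime-adjustable ones can be changed on the fly from the shell with standard sysctl assignment syntax (the ones marked boot-time above still need to be set as tunables and require a reboot), for example:

sysctl net.isr.dispatch=direct
sysctl dev.igc.0.fc=0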

My line is sold as 300 Mbps; both upload and download peak stably at 311 Mbps. It's a GPON/ONT setup.

I use an fqcodel pipe on WAN-Download with bandwidth=295 (Mbps); this is accompanied by a WAN-Download-Queue with weight=100 and a WAN-Download-Any-Rule attached to WAN-Download-Queue.

Using fqcodel on the upload pipe only resulted in worse throughput and latency, so I assume the ISP is already doing some sort of shaping on the upload. I also believe this potential shaping from the ISP might be the reason I had to set the bandwidths to the theoretical speedtest max (311 Mbps) and NOT to 80-85% of it. It was unnaturally rock-stable at 311 Mbps, almost as if it targets that deliberately; reducing the bandwidth in any way just made latency worse and over-compensated on the bandwidth limits (a 280 Mbps limit would hit 180-200, for instance).

Settings for my upload pipe:

bandwidth=311 # mbps
scheduler=qfq # should be more lightweight than wfq, didn't spot a difference.
enable_codel=y
codel_target=14
codel_interval=140

Read the target+interval section at https://docs.opnsense.org/manual/how-tos/shaper_bufferbloat.html#target-interval; don't just use my settings. I also found that these settings (target and interval) did very little when used with only two fqcodel pipes without queues and rules. Only after I finally attempted another scheduler on the upload pipe did I make progress, and then these kicked into effect.

For all my queues below I have:

  • source=$MY_SUBNETS_CIDR_AND_WAN_IP + direction=out for upload pipe rules.
  • destination=$MY_SUBNETS_CIDR_AND_WAN_IP + direction=in for download pipe rules.

My upload pipe with the quick fair queueing (qfq) scheduler is combined with

  • WAN-Upload-ICMP-queue weight=100, WAN-Upload-ICMP-Rule
  • WAN-Download-ICMP,DNS,NTP,DHCP-queue weight=100, WAN-Download-ICMP,DNS,NTP,DHCP-Rule

Next, a catch-all to down-prioritize everything else (a shell check for the whole shaper setup follows this list):

  • WAN-Download-Rule weight=1
  • WAN-Upload-Any-queue weight=1, WAN-Upload-Any-Rule
  • WAN-Download-Any-queue weight=1, WAN-Download-Any-Rule
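
If you want to verify the resulting shaper configuration, OPNsense's traffic shaper is built on ipfw/dummynet, so the pipes, queues and schedulers can be inspected from the shell with the standard ipfw subcommands (the exact output layout varies by version):

ipfw pipe show    # pipes with their bandwidth caps
ipfw queue show   # queues with their weights
ipfw sched show   # scheduler instances (fq_codel, qfq, ...)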

I have never had such fast internet, such snappy browsing and such stable loaded latency.

I recommend and use https://speed.cloudflare.com, as it is the most advanced in terms of both data and function. It downloads and uploads simultaneously and tests with pauses in between, which makes for a really good real-world speed test that highlights all the issues for fine-tuning.

My results under load:
loaded latency DL 9 ms / UL 9 ms, jitter DL 0.684 ms / UL 2.94 ms

Flood-pinging under load (while qBittorrent is pulling 30 MB/s across multiple torrents):

ping -i0.002 -c1000 1.1.1.1

1000 packets transmitted, 1000 received, 0% packet loss, time 9069ms
rtt min/avg/max/mdev = 8.531/9.070/13.014/0.381 ms, pipe 2

9 ms avg and 13 ms max when your idle RTT is 14 ms and qBittorrent is maxing out is pretty darn good! Anyway, these are my findings after speedtesting and tuning for 3 days straight, lol. Enjoy!
