Very scary host reboot issue
-
It is the Dell X540 daughter board (2 x 10 GbE + 2 x 1 GbE) in a Dell R720.
-
It could be a statistical bias, but so far absolutely ALL the reports we've had came from Dell PowerEdge servers (x20 and x30 series). Most of the time it was with Intel cards, but I'm not 100% sure that's the cause, since the crash logs indicate that OVS crashed before the packet even reached the NIC. It could also be that an "answer" packet to a specifically crafted incoming packet causes this.
-
@olivierlambert I can confirm 100% that a Dell workstation I use at one of my clients did the same thing.
-
@darabontors Same network interface type? I just set up an HP with the same Ethernet interface (igb driver) for testing.
-
@darabontors said in Very scary host reboot issue:
Some other detail that might be unrelated: my PPPoE connection to my ISP has MTU 1492. WireGuard connection also has MTU 1492. Is this relevant in any way?
I'm not into firewall/tunneling stuff, but shouldn't the WireGuard MTU be lower than the PPPoE one in order to fit the WG protocol overhead? I read that the default is 1420 and the minimum is 1280. I'd first reset the WG MTU to the default and also test lower values within that range if the crash still persists.
Regardless of the tests, there's indeed a bug somewhere, because a malformed packet/frame should be handled, not trigger a crash.
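For reference, a back-of-the-envelope sketch of that overhead reasoning. The 60/80-byte figures below are the commonly cited WireGuard encapsulation overheads over IPv4/IPv6, not something measured on this setup:

```python
# Rough WireGuard MTU arithmetic (illustrative only).
# Assumed overhead per encapsulated packet:
#   outer IPv4 (20) + UDP (8) + WireGuard header/tag (32) = 60 bytes
#   outer IPv6 (40) + UDP (8) + WireGuard header/tag (32) = 80 bytes

PPPOE_MTU = 1492               # the PPPoE link MTU mentioned above
WG_OVERHEAD_V4 = 20 + 8 + 32
WG_OVERHEAD_V6 = 40 + 8 + 32

def safe_wg_mtu(path_mtu: int, ipv6_outer: bool = True) -> int:
    """Largest tunnel MTU that avoids fragmenting the outer UDP packet."""
    return path_mtu - (WG_OVERHEAD_V6 if ipv6_outer else WG_OVERHEAD_V4)

print(safe_wg_mtu(1500))                         # 1420 -> the WireGuard default
print(safe_wg_mtu(PPPOE_MTU))                    # 1412 -> conservative for a 1492 PPPoE link
print(safe_wg_mtu(PPPOE_MTU, ipv6_outer=False))  # 1432 -> if the outer path is IPv4-only
```

So on a 1492 PPPoE link, something like 1412 would be the conservative choice if the outer connection may run over IPv6.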
-
@tuxen said in Very scary host reboot issue:
Regardless of the tests, there's indeed a bug somewhere, because a malformed packet/frame should be handled, not trigger a crash.
Obviously, but it might be a clue on how to trigger the bug
-
@tuxen You might be on to something. I need to clarify something: I am positive this issue is related to the Windows WireGuard client. On the same host and the same OPNsense VM, I have 10+ site-to-site WireGuard connections configured, moving 100+ GB daily, and the host never reboots. I can only trigger it from a Windows WG connection.
How do I verify the MTU size for the WG connection in Windows 11? I cannot find it for the life of me...
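The closest I can get is dumping what the OS reports per interface, e.g. with Python and the third-party psutil package (the WG tunnel shows up under whatever name the client gave it), but that still doesn't tell me where to change it:

```python
# List every interface's MTU as reported by the OS (works on Windows 11 too).
# Requires the third-party psutil package: pip install psutil
import psutil

for name, stats in psutil.net_if_stats().items():
    print(f"{name:30s} MTU={stats.mtu} up={stats.isup}")
```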
-
I found the MTU parameter. This time it was 1420 on both the OPNsense WG interface and in Windows (client side). I was happy for about 5 minutes, as I wasn't able to reproduce the crash, but then it happened again. My "favorite" way to trigger it is to pause the file transfer, wait a couple of minutes and then resume it. The transfer's MB/s jumps up like crazy in Windows, then freezes until it gets back in sync with the real progress of the transfer. After two tries of pausing and resuming, the crash happened.
@olivierlambert I have been using this setup on my own infrastructure and my clients' for at least 4 years. I never experienced this issue until as recently as September this year. You guys saw this issue ~6 months ago. Isn't there a way to backtrack any recent updates to Open vSwitch? I know it might be some update on the FreeBSD side that made this Open vSwitch bug surface only recently... I know there was little to no development on the WireGuard side of things this year.
-
The first report I heard of was in April of this year, on an Intel ixgbe driver on an R630, from someone using OPNsense in a VM + WireGuard. I don't remember any OVS change that could explain this.
We had the crash around May/June IIRC, on a Dell 430 (or 420), but on an e1000e Intel driver.
After inspection, though, we saw that the issue was happening inside OVS, before the packet even entered the NIC, so it might not be related to the hardware at all. Maybe the hardware "helps" produce the packet or instruction that crashes OVS. But to me, there's a better chance it's related to an update or something in the FreeBSD PV driver inside OPNsense or pfSense. It would be interesting to see if something moved in that area in both projects in 2023.
-
@olivierlambert said in Very scary host reboot issue:
the FreeBSD PV driver inside OPNsense or pfSense.
Who is maintaining the FreeBSD PV drivers?
-
Maybe it's time to ask OPNsense devs
-
@olivierlambert Last time I asked about a Xen driver issue, they deferred to the FreeBSD maintainers.
-
Maybe it's time to ask FreeBSD devs, then
-
@olivierlambert I'm thinking of a quick workaround. What if I use PCI passthrough for the LAN and WAN interfaces, physically connect the LAN port to another non-passed-through port of the server, and use that port to interface with my other VMs via OVS? Does that make any sense? Does it seem viable as a mitigation for this issue?
-
@darabontors Yes, if you pass the PCIe LAN/WAN hardware through to the VM, it will bypass the FreeBSD Xen network drivers and the Dom0 Xen OVS drivers.
FreeBSD will use its own hardware drivers for the network interfaces.
You won't be able to share those interfaces with any other VM, and you won't be able to live-migrate the VM to another host.
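For reference, a rough sketch of the usual XCP-ng passthrough steps, run from dom0. The PCI address and VM UUID are placeholders (find yours with lspci -D and xe vm-list), and this is from memory, so check it against the current XCP-ng documentation before running anything:

```python
# Sketch of XCP-ng PCI passthrough for the firewall's WAN/LAN NIC.
# This only prints the commands so they can be reviewed before execution.
import shlex

PCI_ADDR = "0000:04:00.0"        # placeholder: the NIC's PCI address (lspci -D)
VM_UUID = "<opnsense-vm-uuid>"   # placeholder: the firewall VM's UUID (xe vm-list)

# 1) Hide the NIC from dom0 (takes effect after a host reboot).
hide_cmd = ["/opt/xensource/libexec/xen-cmdline", "--set-dom0",
            f"xen-pciback.hide=({PCI_ADDR})"]

# 2) Assign the hidden device to the VM (applies at the VM's next boot).
attach_cmd = ["xe", "vm-param-set", f"other-config:pci=0/{PCI_ADDR}",
              f"uuid={VM_UUID}"]

for cmd in (hide_cmd, attach_cmd):
    print(shlex.join(cmd))
```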
-
@Andrew That makes sense. I think I'll do just this. In the meantime, I'll try to replicate the phenomenon on test hardware. I really need a permanent fix for this...
-
We'll all be happy when we find that bug.
-
@darabontors some additional tests that I could think of:
- Minimum WG MTU on the client side (MTU=1280);
- OPNsense with emulated e1000 interfaces (bypasses the PV driver but not OVS). It keeps the VM 'agile' (hot-migratable), but with a big cost in performance;
- The latest OPNsense version, 23.7.5.
As for the latest version, I found this important info posted by the devs about a change in the MTU code [1]:
Today introduces a change in MTU handling for parent interfaces mostly
noticed by PPPoE use where the respective MTU values need to fit the
parent plus the additional header of the VLAN or PPPoE. Should the
MTU already be misconfigured to a smaller value it will be used as
configured so check your configuration and clear the MTU value if you
want the system to decide about the effective parent MTU size.
(...)
Hope it helps.
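To make that note concrete, a tiny worked example of the parent-plus-header arithmetic, assuming the standard 8-byte PPPoE and 4-byte 802.1Q VLAN headers:

```python
# The parent interface must fit the child MTU plus the encapsulation header.
PPPOE_HEADER = 8   # bytes
VLAN_HEADER = 4    # bytes

def required_parent_mtu(child_mtu: int, header: int) -> int:
    return child_mtu + header

print(required_parent_mtu(1492, PPPOE_HEADER))  # 1500 -> fits a default parent MTU
print(required_parent_mtu(1500, PPPOE_HEADER))  # 1508 -> parent needs oversized frames
print(required_parent_mtu(1500, VLAN_HEADER))   # 1504 -> same idea for a VLAN child
```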
-
- Absolutely no idea how to do this in Windows. I looked for an MTU setting but couldn't find any.
- This is not a viable workaround for me, but maybe it would be useful for pinning the issue to the Xen PV driver; maybe I'll do some more testing on spare hardware.
- I read this, but I don't know how to test it. I didn't have any manual MTUs set, so I don't know what the values were before the update.
What most definitely fixed the issue for me was using PCIe passthrough for the WAN interface. I used a 10 GbE NIC. It uses the ix driver (ix0), so IDK if this is related. Somehow PPPoE + WG + Windows client on the virtual interface (Xen PV driver) in OPNsense produces this issue.
At the moment I am happy with this mitigation. I'm a little spread thin on free time at the moment. Anyone care to test this further?
-
Hi there,
is there any more news on this topic? 'Cause I've got a similar one.
I'll try to explain the setup:
"Model A":
-
one XCP-ng Host v8.0 (no typo, old version)
-
PX82 (machine rented from Hetzner datacenter in Falkenstein, Germany
-
Dom0 acting as an IPv4 router (maps assigned IPv4 subnet to one of the xapi Interfaces)
-
an OPNsense Firewall "24.1.6-amd64 FreeBSD 13.2-RELEASE-p11 OpenSSL 3.0.13" as a VM
-
OPNsense does all the external-to-internal stuff, OpenVPN and IPsec
-
OPNsense "sees" 4 virtual NICs (xn0-3) and has Guest Tools installed for reporting back to XCP-ng
-
XCP-ng handles VLAN tagging/untagging
-
IPv4 routing only
-
network driver is e1000e (Intel Corporation Ethernet Connection (7) I219-LM (rev 10))
-
very, very stable (currently at 673 days uptime)
-
LAST CHANGE: added IPv4/IPv6 dual stack operation, enabling IPv6 in Dom0, IPv6 Forwarding, IPv6 Static Network in OPNsense
-
works flawlessly, still no crash
"Model B":
-
one XCP-ng Host v8.2.1
-
AX101 machine rented from Hetzner datacenter in Falkenstein, Germany
-
Dom0 acting as an IPv4 router (maps assigned IPv4 subnet to one of the xapi Interfaces)
-
an OPNsense Firewall "24.1.6-amd64 FreeBSD 13.2-RELEASE-p11 OpenSSL 3.0.13" as a VM
-
OPNsense does all the external-to-internal stuff and OpenVPN (no IPsec)
-
OPNsense "sees" 4 virtual NICs (xn0-3) and has Guest Tools installed for reporting back to XCP-ng
-
XCP-ng handles VLAN tagging/untagging
-
IPv4 routing only
-
network driver igb (Intel Corporation I210 Gigabit Network Connection (rev 03))
-
WAS very stable (hundreds of days uptime)
-
LAST CHANGE: added IPv4/IPv6 dual stack operation, enabling IPv6 in Dom0, IPv6 Forwarding, IPv6 Static Network in OPNsense
-
now HOST crashes (reboots) about twice a day
I'm not using WireGuard, but I DO use identical OPNsense versions on two different hosts with different XCP-ng versions. And problems started to occur only after activating IPv6 on the "Model B" machine.
I noticed that the two Intel NICs use different drivers. Is the igb driver a possible source of the problem (here: in conjunction with IPv6), and could/should I switch the "Model B" system to e1000e (as long as there is no risk of the host staying offline after the reboot)?
I do have a server status report exported from XCP-ng Center, but I'm not very experienced in reading it - there is so much stuff inside and I'm sure most of it is irrelevant.
-