Weird kern.log errors
-
After some more debugging, we found these issues reported by dmesg:
dmesg | grep WARN -A 5
--
[126340.468112] WARNING: CPU: 11 PID: 1311 at arch/x86/xen/multicalls.c:130 xen_mc_flush+0x1aa/0x1c0
[126340.468113] Modules linked in: ebtable_nat arptable_filter arp_tables xt_set ip_set_hash_net ip_set nfnetlink tun rpcsec_gss_krb5 auth_rpcgss oid_registry nfsv4 nfs lockd grace fscache ebt_arp ebt_ip ebtable_filter ebtables xt_physdev br_netfilter bnx2fc(O) cnic(O) uio fcoe libfcoe libfc scsi_transport_fc bonding bridge 8021q garp mrp stp llc ipt_REJECT nf_reject_ipv4 xt_tcpudp xt_multiport xt_conntrack nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 libcrc32c dm_multipath iptable_filter sr_mod cdrom sunrpc nls_iso8859_1 nls_cp437 vfat fat dm_mod uas usb_storage dcdbas crct10dif_pclmul crc32_pclmul ghash_clmulni_intel pcbc aesni_intel aes_x86_64 crypto_simd cryptd glue_helper sg i2c_piix4 ipmi_si ipmi_devintf ipmi_msghandler acpi_power_meter ip_tables x_tables hid_generic usbhid hid sd_mod ahci libahci
[126340.468140] i40e(O) xhci_pci libata megaraid_sas(O) xhci_hcd scsi_dh_rdac scsi_dh_hp_sw scsi_dh_emc scsi_dh_alua scsi_mod efivarfs ipv6 crc_ccitt
[126340.468145] CPU: 11 PID: 1311 Comm: forkexecd Tainted: G W O 4.19.0+1 #1
[126340.468146] Hardware name: Dell Inc. PowerEdge C6525/04DK47, BIOS 2.10.2 10/25/2022
[126340.468147] RIP: e030:xen_mc_flush+0x1aa/0x1c0
--
[126340.468351] WARNING: CPU: 12 PID: 0 at arch/x86/xen/multicalls.c:130 xen_mc_flush+0x1aa/0x1c0
[126340.468358] Modules linked in: ebtable_nat arptable_filter arp_tables xt_set ip_set_hash_net ip_set nfnetlink tun rpcsec_gss_krb5 auth_rpcgss oid_registry nfsv4 nfs lockd grace fscache ebt_arp ebt_ip ebtable_filter ebtables xt_physdev br_netfilter bnx2fc(O) cnic(O) uio fcoe libfcoe libfc scsi_transport_fc bonding bridge 8021q garp mrp stp llc ipt_REJECT nf_reject_ipv4 xt_tcpudp xt_multiport xt_conntrack nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 libcrc32c dm_multipath iptable_filter sr_mod cdrom sunrpc nls_iso8859_1 nls_cp437 vfat fat dm_mod uas usb_storage dcdbas crct10dif_pclmul crc32_pclmul ghash_clmulni_intel pcbc aesni_intel aes_x86_64 crypto_simd cryptd glue_helper sg i2c_piix4 ipmi_si ipmi_devintf ipmi_msghandler acpi_power_meter ip_tables x_tables hid_generic usbhid hid sd_mod ahci libahci
[126340.468423] i40e(O) xhci_pci libata megaraid_sas(O) xhci_hcd scsi_dh_rdac scsi_dh_hp_sw scsi_dh_emc scsi_dh_alua scsi_mod efivarfs ipv6 crc_ccitt
[126340.468438] CPU: 12 PID: 0 Comm: swapper/12 Tainted: G W O 4.19.0+1 #1
[126340.468439] Hardware name: Dell Inc. PowerEdge C6525/04DK47, BIOS 2.10.2 10/25/2022
[126340.468443] RIP: e030:xen_mc_flush+0x1aa/0x1c0
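In case it helps anyone searching for the same trace, the warnings can be pulled out with something like this (a minimal sketch; /var/log/kern.log is assumed to be the default dom0 log location, adjust if yours differs):

```sh
# Xen multicall warnings from the persisted log, with a few lines of context
grep -E -A 5 "multicalls\.c|xen_mc_flush" /var/log/kern.log | tail -n 40
```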
-
Interesting. Can you check that all your BIOS/firmware are at their latest versions?
-
@olivierlambert here are the versions we have:
- BIOS - 2.10.2
- iDRAC - 6.10.30.0
- Ethernet - 30.6.24
- RAID backplane - 4.36
- RAID Slot - 51.16.0-4076
All devices have the latest firmware, updated 2 weeks ago as recommended by Dell.
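In case it is useful for cross-checking, the same values can also be read from dom0 itself (a quick sketch, assuming dmidecode is installed there):

```sh
# BIOS version and release date as reported by the firmware tables
dmidecode -s bios-version
dmidecode -s bios-release-date

# Wider inventory (BIOS, system, baseboard) for comparing hosts
dmidecode -t bios -t system -t baseboard
```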
-
And your other servers are running these same BIOS versions?
-
@Danp yes, all servers came in one batch and had the same hardware and software installed at the time.
By all, I mean 36 in total, with the same hardware and the same workload. So it is just a 1/36 error rate, which pretty much means a hardware issue to me, yet there is no diagnostic report of it, which is quite peculiar.
-
And you can reproduce the issue only on this one?
-
Yes, it only happens on this one, within a span of 2-5 days after a complete reset.
From the user perspective we know the following: the server has 2 VMs running at 80-100% CPU utilization (each VM has 64 cores assigned; the server has 2 EPYC CPUs for a total of 128 cores).
After the issue occurs, one of the VMs becomes unresponsive and the other is kind of fine - you can log in, but no commands can be executed on it. For example, you type "top" or "df -h", press enter, and it stays like that indefinitely with no output.
One tip I got, though a bit far-fetched, is that it could be "cosmic ray" behaviour. I don't know about that, but so far nothing else can be tracked.
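For completeness, when a guest gets into that state it can be worth checking whether its kernel is reporting blocked tasks; a rough sketch (run inside the affected VM, needs root, and assumes SysRq is enabled):

```sh
# Hung-task warnings, if the kernel logged any
dmesg | grep -i "blocked for more than"

# Dump all blocked (D-state) tasks to the kernel log via SysRq, then read them back
echo w > /proc/sysrq-trigger
dmesg | tail -n 60
```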
-
When you say "host", are you talking about the physical host or a VM?
-
@olivierlambert host = physical server. I fixed it in the last post.
-
That's indeed a clue if it happens ONLY on this machine, if you run the same VMs on others without triggering any problem, and if you have the exact same versions of XCP-ng/BIOS/firmware and the same hardware.
-
@olivierlambert said in Weird kern.log errors:
That's indeed a clue if it happens ONLY on this machine, if you run the same VMs on others without triggering any problem, and if you have the exact same versions of XCP-ng/BIOS/firmware and the same hardware.
All VMs are spun from the same image (using CloudStack on top), and all workload pushed onto them is the same.
If only one VM had the issue, it would make sense that the VM is at fault, but in the current case both VMs on that server get broken at the same time.
All servers have 2 SSDs in RAID 1. In the last iteration we did not use RAID and placed 1 VM on a different SSD, just in case the issue could come from there. The problem still appeared in the same way as before.
We are thinking of other ways to run tests at the moment. Will keep you updated :).
-
Please do, I'm eager to know the root cause
-
It doesn't ring a bell for me as it is.
What I see from the first log is the segfault on blktap and in xcp-rrdd-xenpm, likely while writing to a disk. In all cases, it is a xen_mc_flush() call.
Given it happens on a single machine, I would venture it could be related to the disk controller, or the disk itself. You could try to have a look at the dmidecode output to see if the controllers are the same as on the other machines (sometimes there are small discrepancies between supposedly identical hardware), and check the drives with smartctl for any health issues. But especially as you were on RAID 1 originally, I doubt an issue with the drives themselves would lead to such an issue...
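Something along these lines, just as a sketch (/dev/sda is a placeholder; behind a RAID controller smartctl may need a -d option to reach the physical disks):

```sh
# Controller/board inventory to diff between the suspect host and a healthy one
dmidecode -t baseboard -t slot > /tmp/dmi-$(hostname).txt

# Basic drive health checks; adjust the device name to your setup
smartctl -H /dev/sda
smartctl -a /dev/sda | grep -i -E "reallocated|pending|crc|error"
```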
-
Yeah, the HW problem seems to be a good guess.
The track we can follow here is the xen_mc_flush kernel function, which raises a warning when a multicall (a hypercall wrapper) fails. The interesting thing here would be to take a look at the Xen traces. You can type xl dmesg in dom0 to see if Xen tells you something more (if it isn't happy for some reason).
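For instance, run in dom0 (just a sketch):

```sh
# Xen's own log buffer, filtered for anything suspicious
xl dmesg | grep -i -E "error|warn|mce|fault"

# Keep a full copy around to compare against after the next occurrence
xl dmesg > /root/xl-dmesg-$(date +%F).log
```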
-
FYI - it was a CPU issue.
We changed the CPUs between servers and the problem moved with them. Thanks for the tips everyone!
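In case anyone repeats this kind of swap test, the CPUs fitted in each host can be recorded from dom0 before and after the move (assuming dmidecode is available):

```sh
# Note which physical CPUs sit in which host, to confirm the fault follows them
dmidecode -t processor | grep -E "Socket Designation|Version|Serial Number"
```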
-
Ahh great news! CPU issues are really tricky
-