Alert: Control Domain Memory Usage
-
Current Top:
top - 15:38:00 up 62 days, 4:22, 2 users, load average: 0.06, 0.08, 0.08
Tasks: 295 total, 1 running, 188 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.6 us, 0.0 sy, 0.0 ni, 99.4 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 12210160 total, 3596020 free, 7564312 used, 1049828 buff/cache
KiB Swap: 1048572 total, 1048572 free, 0 used. 4420052 avail Mem

  PID USER   PR  NI    VIRT    RES   SHR S %CPU %MEM     TIME+ COMMAND
 2516 root   20   0  888308 123224 25172 S  0.0  1.0 230:49.43 xapi
 1947 root   10 -10  712372  89348  9756 S  0.0  0.7 616:24.82 ovs-vswitc+
 1054 root   20   0  102204  30600 15516 S  0.0  0.3  23:00.23 message-sw+
 2515 root   20   0  493252  25388 12884 S  0.0  0.2 124:03.44 xenopsd-xc
 2527 root   20   0  244124  25128  8952 S  0.0  0.2   0:24.59 python
 1533 root   20   0  277472  23956  7928 S  0.0  0.2 161:16.62 xcp-rrdd
 2514 root   20   0   95448  19204 11588 S  0.0  0.2 104:18.98 xapi-stora+
 1069 root   20   0   69952  17980  9676 S  0.0  0.1   0:23.74 varstored-+
 2042 root   20   0  138300  17524  9116 S  0.0  0.1  71:06.89 xcp-networ+
 2524 root   20   0  211832  17248  7728 S  0.0  0.1   8:15.16 python
 2041 root   20   0  223856  16836  7840 S  0.0  0.1   0:00.28 python
26502 65539  20   0  334356  16236  9340 S  0.0  0.1 603:42.74 qemu-syste+
 5724 65540  20   0  208404  15400  9240 S  0.0  0.1 469:19.79 qemu-syste+
 2528 root   20   0  108192  14760 10284 S  0.0  0.1   0:00.01 xapi-nbd
 9482 65537  20   0  316948  14204  9316 S  0.0  0.1 560:47.71 qemu-syste+
24445 65541  20   0  248332  13704  9124 S  0.0  0.1  90:45.58 qemu-syste+
 1649 root   20   0   62552  13340  6172 S  0.0  0.1  60:28.97 xcp-rrdd-x+
Requested Files:
-
@stormi Usually I migrate all VMs off the affected hosts to other hosts when memory is nearly full, but that does not free any memory. Could LVM operations still be happening, even with no VMs running?
-
@dave I'm not able to tell.
However, this all looks like a memory leak in a kernel driver or module. Maybe we should try to find a common pattern between the affected hosts by looking at the output of lsmod, to know which modules are loaded.
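To see whether it really is kernel memory (and not a userspace process) that grows, a quick check like this should tell; this is just a sketch using standard tools, nothing XCP-ng specific:
grep -E 'MemTotal|MemFree|Slab|SReclaimable|SUnreclaim' /proc/meminfo   # kernel slab usage
slabtop -o -s c | head -n 20                                            # largest slab caches, sorted by cache size
ps -eo rss= | awk '{s+=$1} END {print s " KiB total userspace RSS"}'    # what all processes account for
A slab cache or an SUnreclaim counter that keeps growing over days would point at a kernel/driver leak rather than at a daemon.
-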
Recompiling the kernel with kmemleak might be the fastest route to finding a solution here. Unfortunately, this obviously requires a reboot and will incur some performance hit while kmemleak is enabled (likely not an option for some systems).
https://www.kernel.org/doc/html/latest/dev-tools/kmemleak.html
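For the record, "recompiling with kmemleak" boils down to enabling one config option in the kernel build; a rough sketch only (option name taken from the documentation above, exact build steps depend on how the XCP-ng kernel package is built):
# in the kernel source tree
./scripts/config --enable CONFIG_DEBUG_KMEMLEAK
make olddefconfig
make -j"$(nproc)" && make modules_install install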
-
@dave Thanks for this hint. Yes, we have iSCSI SRs attached.
-
So, the most probable cause of the growing memory usage people on this thread see is a memory leak in the Linux kernel, or more probably in a driver module.
Could you all share the output of lsmod so that we can try to identify a common factor between all affected hosts?
-
@stormi current loaded modules:
Module  Size  Used by
bridge  196608  0
tun  49152  0
nfsv3  49152  1
nfs_acl  16384  1  nfsv3
nfs  307200  5  nfsv3
lockd  110592  2  nfsv3,nfs
grace  16384  1  lockd
fscache  380928  1  nfs
bnx2fc  159744  0
cnic  81920  1  bnx2fc
uio  20480  1  cnic
fcoe  32768  0
libfcoe  77824  2  fcoe,bnx2fc
libfc  147456  3  fcoe,bnx2fc,libfcoe
scsi_transport_fc  69632  3  fcoe,libfc,bnx2fc
openvswitch  147456  53
nsh  16384  1  openvswitch
nf_nat_ipv6  16384  1  openvswitch
nf_nat_ipv4  16384  1  openvswitch
nf_conncount  16384  1  openvswitch
nf_nat  36864  3  nf_nat_ipv6,nf_nat_ipv4,openvswitch
8021q  40960  0
garp  16384  1  8021q
mrp  20480  1  8021q
stp  16384  2  bridge,garp
llc  16384  3  bridge,stp,garp
ipt_REJECT  16384  3
nf_reject_ipv4  16384  1  ipt_REJECT
xt_tcpudp  16384  9
xt_multiport  16384  1
xt_conntrack  16384  6
nf_conntrack  163840  6  xt_conntrack,nf_nat,nf_nat_ipv6,nf_nat_ipv4,openvswitch,nf_conncount
nf_defrag_ipv6  20480  2  nf_conntrack,openvswitch
nf_defrag_ipv4  16384  1  nf_conntrack
libcrc32c  16384  3  nf_conntrack,nf_nat,openvswitch
iptable_filter  16384  1
dm_multipath  32768  0
sunrpc  413696  20  lockd,nfsv3,nfs_acl,nfs
sb_edac  24576  0
intel_powerclamp  16384  0
crct10dif_pclmul  16384  0
crc32_pclmul  16384  0
ghash_clmulni_intel  16384  0
pcbc  16384  0
aesni_intel  200704  0
aes_x86_64  20480  1  aesni_intel
cdc_ether  16384  0
crypto_simd  16384  1  aesni_intel
usbnet  49152  1  cdc_ether
cryptd  28672  3  crypto_simd,ghash_clmulni_intel,aesni_intel
glue_helper  16384  1  aesni_intel
hid_generic  16384  0
mii  16384  1  usbnet
dm_mod  151552  1  dm_multipath
usbhid  57344  0
hid  122880  2  usbhid,hid_generic
sg  40960  0
intel_rapl_perf  16384  0
mei_me  45056  0
mei  114688  1  mei_me
lpc_ich  28672  0
i2c_i801  28672  0
ipmi_si  65536  0
acpi_power_meter  20480  0
ipmi_devintf  20480  0
ipmi_msghandler  61440  2  ipmi_devintf,ipmi_si
ip_tables  28672  2  iptable_filter
x_tables  45056  6  xt_conntrack,iptable_filter,xt_multiport,xt_tcpudp,ipt_REJECT,ip_tables
sd_mod  53248  4
xhci_pci  16384  0
ehci_pci  16384  0
tg3  192512  0
xhci_hcd  258048  1  xhci_pci
ehci_hcd  90112  1  ehci_pci
ixgbe  380928  0
megaraid_sas  167936  3
scsi_dh_rdac  16384  0
scsi_dh_hp_sw  16384  0
scsi_dh_emc  16384  0
scsi_dh_alua  20480  0
scsi_mod  253952  13  fcoe,scsi_dh_emc,sd_mod,dm_multipath,scsi_dh_alua,scsi_transport_fc,libfc,bnx2fc,megaraid_sas,sg,scsi_dh_rdac,scsi_dh_hp_sw
ipv6  548864  926  bridge,nf_nat_ipv6
crc_ccitt  16384  1  ipv6
-
Our environment is also seeing the leak on XCP-ng 8.1 hosts. At first I doubled the amount of memory allocated to the control domain from 4 GB to 8 GB, and now, 30-something days later, it ran out of RAM again... This is the third time the hosts (4 of them) have run out of memory. This is really causing us issues.
# lsmod
Module  Size  Used by
tun  49152  0
nfsv3  49152  1
nfs_acl  16384  1  nfsv3
nfs  307200  2  nfsv3
lockd  110592  2  nfsv3,nfs
grace  16384  1  lockd
fscache  380928  1  nfs
bnx2fc  159744  0
cnic  81920  1  bnx2fc
uio  20480  1  cnic
fcoe  32768  0
libfcoe  77824  2  fcoe,bnx2fc
libfc  147456  3  fcoe,bnx2fc,libfcoe
scsi_transport_fc  69632  3  fcoe,libfc,bnx2fc
openvswitch  147456  11
nsh  16384  1  openvswitch
nf_nat_ipv6  16384  1  openvswitch
nf_nat_ipv4  16384  1  openvswitch
nf_conncount  16384  1  openvswitch
nf_nat  36864  3  nf_nat_ipv6,nf_nat_ipv4,openvswitch
8021q  40960  0
garp  16384  1  8021q
mrp  20480  1  8021q
stp  16384  1  garp
llc  16384  2  stp,garp
ipt_REJECT  16384  3
nf_reject_ipv4  16384  1  ipt_REJECT
xt_tcpudp  16384  8
xt_multiport  16384  1
xt_conntrack  16384  5
nf_conntrack  163840  6  xt_conntrack,nf_nat,nf_nat_ipv6,nf_nat_ipv4,openvswitch,nf_conncount
nf_defrag_ipv6  20480  2  nf_conntrack,openvswitch
nf_defrag_ipv4  16384  1  nf_conntrack
libcrc32c  16384  3  nf_conntrack,nf_nat,openvswitch
dm_multipath  32768  0
iptable_filter  16384  1
sunrpc  413696  18  lockd,nfsv3,nfs_acl,nfs
sb_edac  24576  0
intel_powerclamp  16384  0
crct10dif_pclmul  16384  0
crc32_pclmul  16384  0
ghash_clmulni_intel  16384  0
pcbc  16384  0
aesni_intel  200704  0
aes_x86_64  20480  1  aesni_intel
crypto_simd  16384  1  aesni_intel
cryptd  28672  3  crypto_simd,ghash_clmulni_intel,aesni_intel
glue_helper  16384  1  aesni_intel
dm_mod  151552  5  dm_multipath
intel_rapl_perf  16384  0
sg  40960  0
i2c_i801  28672  0
mei_me  45056  0
mei  114688  1  mei_me
lpc_ich  28672  0
ipmi_si  65536  0
ipmi_devintf  20480  0
ipmi_msghandler  61440  2  ipmi_devintf,ipmi_si
acpi_power_meter  20480  0
ip_tables  28672  2  iptable_filter
x_tables  45056  6  xt_conntrack,iptable_filter,xt_multiport,xt_tcpudp,ipt_REJECT,ip_tables
hid_generic  16384  0
usbhid  57344  0
hid  122880  2  usbhid,hid_generic
sd_mod  53248  5
ahci  40960  3
libahci  40960  1  ahci
xhci_pci  16384  0
ehci_pci  16384  0
libata  274432  2  libahci,ahci
ehci_hcd  90112  1  ehci_pci
xhci_hcd  258048  1  xhci_pci
ixgbe  380928  0
megaraid_sas  167936  1
scsi_dh_rdac  16384  0
scsi_dh_hp_sw  16384  0
scsi_dh_emc  16384  0
scsi_dh_alua  20480  0
scsi_mod  253952  14  fcoe,scsi_dh_emc,sd_mod,dm_multipath,scsi_dh_alua,scsi_transport_fc,libfc,bnx2fc,megaraid_sas,libata,sg,scsi_dh_rdac,scsi_dh_hp_sw
ipv6  548864  193  nf_nat_ipv6
crc_ccitt  16384  1  ipv6
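For anyone needing to do the same, the dom0 memory increase mentioned above is usually done like this on XCP-ng (a sketch only; double-check the value and syntax against the XCP-ng documentation for your release, and note it requires a host reboot):
/opt/xensource/libexec/xen-cmdline --set-xen dom0_mem=8192M,max:8192M
reboot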
-
@MrMike do you have a ticket open on our side?
We are continuing to gather clues while also working on the 8.2 LTS release. Investigating should be more straightforward once that release is done (more time to work on this). Obviously, support tickets are handled with the highest priority we can give.
-
@olivierlambert I don't have a ticket because we don't have paid support for xcp-ng
-
Understood. We'll keep everyone posted here on our investigation.
-
Today another customer called:
He had a host (pool master) with 16 GB of Dom0 memory and an uptime of 119 days.
Currently, all my affected systems are using megaraid_sas, iSCSI and 10G Intel NICs.
megaraid_sas is found in @MrMike's and @inaki-martinez's module lists too.
This is the customer's lsmod:
Module  Size  Used by
tun  49152  0
ebtable_filter  16384  0
ebtables  36864  1  ebtable_filter
nls_utf8  16384  0
cifs  929792  0
ccm  20480  0
fscache  380928  1  cifs
iscsi_tcp  20480  16
libiscsi_tcp  28672  1  iscsi_tcp
libiscsi  61440  2  libiscsi_tcp,iscsi_tcp
scsi_transport_iscsi  110592  3  iscsi_tcp,libiscsi
bonding  176128  0
bridge  196608  1  bonding
8021q  40960  0
garp  16384  1  8021q
mrp  20480  1  8021q
stp  16384  2  bridge,garp
llc  16384  3  bridge,stp,garp
ipt_REJECT  16384  3
nf_reject_ipv4  16384  1  ipt_REJECT
xt_tcpudp  16384  8
xt_multiport  16384  1
xt_conntrack  16384  5
nf_conntrack  163840  1  xt_conntrack
nf_defrag_ipv6  20480  1  nf_conntrack
nf_defrag_ipv4  16384  1  nf_conntrack
libcrc32c  16384  1  nf_conntrack
iptable_filter  16384  1
dm_multipath  32768  0
sunrpc  413696  1
sb_edac  24576  0
intel_powerclamp  16384  0
crct10dif_pclmul  16384  0
crc32_pclmul  16384  0
ghash_clmulni_intel  16384  0
pcbc  16384  0
aesni_intel  200704  0
aes_x86_64  20480  1  aesni_intel
crypto_simd  16384  1  aesni_intel
cryptd  28672  3  crypto_simd,ghash_clmulni_intel,aesni_intel
glue_helper  16384  1  aesni_intel
dm_mod  151552  285  dm_multipath
ipmi_si  65536  0
ipmi_devintf  20480  0
intel_rapl_perf  16384  0
ipmi_msghandler  61440  2  ipmi_devintf,ipmi_si
i2c_i801  28672  0
sg  40960  0
lpc_ich  28672  0
acpi_power_meter  20480  0
ip_tables  28672  2  iptable_filter
x_tables  45056  7  ebtables,xt_conntrack,iptable_filter,xt_multiport,xt_tcpudp,ipt_REJECT,ip_tables
hid_generic  16384  0
usbhid  57344  0
hid  122880  2  usbhid,hid_generic
sd_mod  53248  9
isci  163840  0
ahci  40960  0
libsas  86016  1  isci
libahci  40960  1  ahci
scsi_transport_sas  45056  2  isci,libsas
xhci_pci  16384  0
ehci_pci  16384  0
igb  233472  0
libata  274432  3  libahci,ahci,libsas
ehci_hcd  90112  1  ehci_pci
xhci_hcd  258048  1  xhci_pci
e1000e  286720  0
megaraid_sas  167936  12
scsi_dh_rdac  16384  0
scsi_dh_hp_sw  16384  0
scsi_dh_emc  16384  0
scsi_dh_alua  20480  1
scsi_mod  253952  15  isci,scsi_dh_emc,scsi_transport_sas,sd_mod,dm_multipath,scsi_transport_iscsi,scsi_dh_alua,iscsi_tcp,libsas,libiscsi,megaraid_sas,libata,sg,scsi_dh_rdac,scsi_dh_hp_sw
ipv6  548864  545  bridge
crc_ccitt  16384  1  ipv6
-
@dave Can you open a ticket please, so we can also take a look remotely via a support tunnel?
-
We also have 10GbE Intel interfaces on the affected servers, but we are not using iSCSI on these servers yet.
So I think the common factors right now would be megaraid_sas and the 10GbE Intel NICs?
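To narrow this down, one option is for everyone to dump just the module names and intersect the lists; a sketch with plain coreutils (the file names are placeholders):
lsmod | awk 'NR>1 {print $1}' | sort > modules-$(hostname).txt   # run on each affected host
# then, with all the files copied to one machine (example for three hosts):
comm -12 modules-host1.txt modules-host2.txt | comm -12 - modules-host3.txt
The result is the set of modules loaded on every affected host, which is easier to compare against the unaffected ones.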
-
We also have the same setup running on the affected hosts, with iSCSI pool devices connected:
Module  Size  Used by
tun  49152  0
iscsi_tcp  20480  5
libiscsi_tcp  28672  1  iscsi_tcp
libiscsi  61440  2  libiscsi_tcp,iscsi_tcp
scsi_transport_iscsi  110592  3  iscsi_tcp,libiscsi
dm_service_time  16384  4
arc4  16384  0
md4  16384  0
nls_utf8  16384  1
cifs  929792  2
ccm  20480  0
fscache  380928  1  cifs
bnx2fc  159744  0
cnic  81920  1  bnx2fc
uio  20480  1  cnic
fcoe  32768  0
libfcoe  77824  2  fcoe,bnx2fc
libfc  147456  3  fcoe,bnx2fc,libfcoe
openvswitch  147456  12
nsh  16384  1  openvswitch
nf_nat_ipv6  16384  1  openvswitch
nf_nat_ipv4  16384  1  openvswitch
nf_conncount  16384  1  openvswitch
nf_nat  36864  3  nf_nat_ipv6,nf_nat_ipv4,openvswitch
8021q  40960  0
garp  16384  1  8021q
mrp  20480  1  8021q
stp  16384  1  garp
llc  16384  2  stp,garp
ipt_REJECT  16384  3
nf_reject_ipv4  16384  1  ipt_REJECT
xt_tcpudp  16384  9
xt_multiport  16384  1
xt_conntrack  16384  5
nf_conntrack  163840  6  xt_conntrack,nf_nat,nf_nat_ipv6,nf_nat_ipv4,openvswitch,nf_conncount
nf_defrag_ipv6  20480  2  nf_conntrack,openvswitch
nf_defrag_ipv4  16384  1  nf_conntrack
libcrc32c  16384  3  nf_conntrack,nf_nat,openvswitch
iptable_filter  16384  1
dm_multipath  32768  5  dm_service_time
intel_powerclamp  16384  0
crct10dif_pclmul  16384  0
crc32_pclmul  16384  0
ghash_clmulni_intel  16384  0
pcbc  16384  0
aesni_intel  200704  0
aes_x86_64  20480  1  aesni_intel
crypto_simd  16384  1  aesni_intel
cryptd  28672  3  crypto_simd,ghash_clmulni_intel,aesni_intel
glue_helper  16384  1  aesni_intel
dm_mod  151552  22  dm_multipath
ipmi_si  65536  0
i2c_i801  28672  0
sg  40960  0
ipmi_devintf  20480  0
i7core_edac  28672  0
lpc_ich  28672  0
ipmi_msghandler  61440  2  ipmi_devintf,ipmi_si
i5500_temp  16384  0
acpi_power_meter  20480  0
sunrpc  413696  1
ip_tables  28672  2  iptable_filter
x_tables  45056  6  xt_conntrack,iptable_filter,xt_multiport,xt_tcpudp,ipt_REJECT,ip_tables
sr_mod  28672  0
cdrom  69632  1  sr_mod
sd_mod  53248  7
ata_generic  16384  0
pata_acpi  16384  0
uhci_hcd  49152  0
lpfc  958464  4
ata_piix  36864  0
nvmet_fc  32768  1  lpfc
nvmet  69632  1  nvmet_fc
libata  274432  3  ata_piix,pata_acpi,ata_generic
nvme_fc  45056  1  lpfc
nvme_fabrics  24576  1  nvme_fc
ehci_pci  16384  0
igb  233472  0
ehci_hcd  90112  1  ehci_pci
nvme_core  81920  2  nvme_fc,nvme_fabrics
ixgbe  380928  0
megaraid_sas  167936  2
scsi_transport_fc  69632  4  fcoe,lpfc,libfc,bnx2fc
scsi_dh_rdac  16384  0
scsi_dh_hp_sw  16384  0
scsi_dh_emc  16384  0
scsi_dh_alua  20480  5
scsi_mod  253952  19  fcoe,lpfc,scsi_dh_emc,sd_mod,dm_multipath,scsi_transport_iscsi,scsi_dh_alua,scsi_transport_fc,libfc,iscsi_tcp,bnx2fc,libiscsi,megaraid_sas,libata,sg,scsi_dh_rdac,scsi_dh_hp_sw,sr_mod
ipv6  548864  173  nf_nat_ipv6
crc_ccitt  16384  1  ipv6
-
We have a pro-support user who is also affected. ixgbe is present, but no megaraid_sas.
If (and only if) the cause of the leak is common to everyone, then ixgbe would be the main suspect.
-
@olivierlambert The servers (2) on which I am seeing the memory leaks are used exclusively for network-intensive applications; they route and tunnel a lot of traffic (100+ tunnels).
Other systems I have with a similar host configuration are not seeing any increased control domain memory usage.
-
So, I checked the other hosts in my environment that run the same types of VMs and have the same version of XCP-ng.
Hosts that are not seeing this memory leak have BCM5720 1GbE interfaces. They are not as heavily used, so I'm not sure whether the leak only occurs when usage is very high or when a specific feature/function of that driver is used.
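In case it helps others compare, the driver behind each physical interface can be listed with standard tools (sysfs and ethtool, so this should work on any of the affected hosts):
for nic in /sys/class/net/*; do
    [ -e "$nic/device/driver" ] || continue   # skip lo, bridges and vifs
    echo "$(basename "$nic"): $(basename "$(readlink "$nic/device/driver")")"
done
ethtool -i eth0   # driver, version and firmware for a single interface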
-
So, @r1 has prepared a kernel RPM for XCP-ng 8.1 that enables kmemleak. If anyone wants to give it a try (on XCP-ng 8.1 only), you can install it with:
yum install http://koji.xcp-ng.org/kojifiles/work/tasks/7624/17624/kernel-4.19.19-6.0.12.1.1.kmemleak.xcpng8.1.x86_64.rpm
reboot
You can revert to the main kernel with:
# yum downgrade won't work for the kernel because it's a protected package, so let's use rpm
yumdownloader kernel
rpm -Uv --oldpackage name-of-file.rpm
reboot
There will be some performance impact that I'm not able to quantify and I'm not yet able to tell you how to use it to debug memory leaks, but there's plenty of documentation on the internet about kmemleak.
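For those who try this kernel, the basic kmemleak workflow is roughly the following (taken from the documentation linked earlier; the paths are the standard debugfs interface):
mount -t debugfs nodev /sys/kernel/debug 2>/dev/null   # usually already mounted
echo scan > /sys/kernel/debug/kmemleak                 # trigger an immediate scan
cat /sys/kernel/debug/kmemleak                         # suspected leaks, with allocation backtraces
echo clear > /sys/kernel/debug/kmemleak                # reset the list before the next scan
Letting the host run its normal workload for a while between scans, and looking for entries that keep reappearing, is what should eventually point at the leaking driver.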