Alert: Control Domain Memory Usage
-
@stormi here is the current `ps aux` output: ps-aux.txt
@r1 the sar file is too big to attach here, so here is a link: sar.txt (valid for one day), along with the kernel OOM message: messages.txt. From what I can see, only around 3GB were accounted for when the OOM killer was triggered (dom0 has 8GB of memory available).
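For reference, here is roughly how I estimated that: the kernel's OOM report includes a task table whose rss column is in 4 KiB pages, so summing it gives the resident memory the kernel could account for. A rough sketch; the field position varies between kernel versions, so check it against the dump's header line:

```
# Sum the "rss" column of the OOM task dump in messages.txt (4 KiB pages).
# rss is assumed to be the 5th field from the end of each task line.
grep -E '\[ *[0-9]+\] +[0-9]+ ' messages.txt \
  | awk '{ pages += $(NF-4) } END { printf "%.2f GiB\n", pages * 4096 / 1024^3 }'
```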
In this case `rsyslog` was killed, but I have seen `xapi` killed on other occasions. I can dig up those logs if they would help.
-
- grub.cfg grub.txt
- xl top for Dom0
Domain-0 -----r 5461432 0.0 8388608 1.6 8388608 1.6 16 0 0 0 0 0 0 0 0 0 0
- xe param list for Dom0 (memory)
memory-target ( RO): <unknown>
memory-overhead ( RO): 118489088
memory-static-max ( RW): 8589934592
memory-dynamic-max ( RW): 8589934592
memory-dynamic-min ( RW): 8589934592
memory-static-min ( RW): 4294967296
last-boot-record ( RO): '('struct' ('uuid' '5e1386d5-e2c9-47eb-8445-77674d76c803') ('allowed_operations' ('array')) ('current_operations' ('struct')) ('power_state' 'Running') ('name_label' 'Control domain on host: bc2-vi-srv03') ('name_description' 'The domain which manages physical devices and manages other domains') ('user_version' '1') ('is_a_template' ('boolean' '0')) ('is_default_template' ('boolean' '0')) ('suspend_VDI' 'OpaqueRef:NULL') ('resident_on' 'OpaqueRef:946c6678-044a-62ab-2a98-f8c93e34ade9') ('affinity' 'OpaqueRef:946c6678-044a-62ab-2a98-f8c93e34ade9') ('memory_overhead' '84934656') ('memory_target' '4294967296') ('memory_static_max' '4294967296') ('memory_dynamic_max' '4294967296') ('memory_dynamic_min' '4294967296') ('memory_static_min' '4294967296') ('VCPUs_params' ('struct')) ('VCPUs_max' '48') ('VCPUs_at_startup' '48') ('actions_after_shutdown' 'destroy') ('actions_after_reboot' 'destroy') ('actions_after_crash' 'destroy') ('consoles' ('array' 'OpaqueRef:aa16584e-48c6-70a3-98c0-a2ee63b3cfa4' 'OpaqueRef:01efe105-d6fe-de5e-e214-9c6e2b5be498')) ('VIFs' ('array')) ('VBDs' ('array')) ('crash_dumps' ('array')) ('VTPMs' ('array')) ('PV_bootloader' '') ('PV_kernel' '') ('PV_ramdisk' '') ('PV_args' '') ('PV_bootloader_args' '') ('PV_legacy_args' '') ('HVM_boot_policy' '') ('HVM_boot_params' ('struct')) ('HVM_shadow_multiplier' ('double' '1')) ('platform' ('struct')) ('PCI_bus' '') ('other_config' ('struct' ('storage_driver_domain' 'OpaqueRef:166e5128-4906-05cc-bb8d-ec99a3c13dc0') ('is_system_domain' 'true'))) ('domid' '0') ('domarch' 'x64') ('last_boot_CPU_flags' ('struct')) ('is_control_domain' ('boolean' '1')) ('metrics' 'OpaqueRef:2207dad4-d07f-d7f9-9ebb-796072aa37e1') ('guest_metrics' 'OpaqueRef:NULL') ('last_booted_record' '') ('recommendations' '') ('xenstore_data' ('struct')) ('ha_always_run' ('boolean' '0')) ('ha_restart_priority' '') ('is_a_snapshot' ('boolean' '0')) ('snapshot_of' 'OpaqueRef:NULL') ('snapshots' ('array')) ('snapshot_time' ('dateTime.iso8601' '19700101T00:00:00Z')) ('transportable_snapshot_id' '') ('blobs' ('struct')) ('tags' ('array')) ('blocked_operations' ('struct')) ('snapshot_info' ('struct')) ('snapshot_metadata' '') ('parent' 'OpaqueRef:NULL') ('children' ('array')) ('bios_strings' ('struct')) ('protection_policy' 'OpaqueRef:NULL') ('is_snapshot_from_vmpp' ('boolean' '0')) ('snapshot_schedule' 'OpaqueRef:NULL') ('is_vmss_snapshot' ('boolean' '0')) ('appliance' 'OpaqueRef:NULL') ('start_delay' '0') ('shutdown_delay' '0') ('order' '0') ('VGPUs' ('array')) ('attached_PCIs' ('array')) ('suspend_SR' 'OpaqueRef:NULL') ('version' '0') ('generation_id' '') ('hardware_platform_version' '0') ('has_vendor_device' ('boolean' '0')) ('requires_reboot' ('boolean' '0')) ('reference_label' ''))'
memory (MRO): <not in database>
-
@inaki-martinez According to this log, 2GB of resident set size (RSS) was freed by killing `rsyslog`. This is a lot for such a system service.
-
@stormi I seem to remember running across a similar problem on a RHEL system. Since XCP-ng is based on CentOS, which is essentially the same thing, could it be related to this: https://bugzilla.redhat.com/show_bug.cgi?id=1663267
-
@JeffBerntsen it could indeed be that. The advisory for the fix is https://access.redhat.com/errata/RHSA-2020:1000. I'll consider a backport.
@inaki-martinez I think dom0 memory ballooning (if that is even a thing... I need to confirm) is ruled out in your case. The sum of the RSS values for all processes, which is a simplistic and overestimating way of measuring total RAM usage because shared memory is counted several times, is around 1.5GB, which leaves more than 4.5GB unexplained.
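For anyone who wants to reproduce that estimate, a minimal sketch (the same works against the attached ps-aux.txt if the header line is skipped):

```
# Sum the RSS column of `ps aux` (6th field, in KiB). This overestimates
# real usage because shared pages are counted once per process.
ps aux | awk 'NR > 1 { kib += $6 } END { printf "%.2f GiB total RSS\n", kib / 1024^2 }'
```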
-
RHSA-2020:1000 is an interesting lead, indeed
-
@stormi
I have the problem on a pool master with two running VMs and memory alerts.
Here is some info; maybe you'll find something in it:
slabtop.txt
xehostparamlist.txt
xltop.txt
meminfo.txt
top.txt
grub.cfg.txt
Sorry, I can't add images; it seems something is broken with some node modules.
-
@daKju Thanks. What version of XCP-ng? Does restarting the `rsyslog` or `openvswitch` service release RAM?
-
@stormi
We have 8.1.
I haven't restarted the services yet. Can the `openvswitch` service be restarted safely, without any impact?
Nothing changed after restarting `rsyslog`.
-
@daKju I must admit I can't guarantee that it is perfectly safe. It will at least cause a brief network interruption.
-
Don't restart `openvswitch` if you have active iSCSI storage attached.
-
@dave since you're here, can you share the contents of your `grub.cfg`, the line starting with "Domain-0" in the output of `xl top`, and the output of `xe vm-param-list uuid={YOUR_DOM0_VM_UUID} | grep memory`?
And if your offer of remote access to a server, to try to find where the missing memory is going, still stands, I'm interested.
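To find the dom0 UUID for that last command, something like this should work:

```
# List the UUID of the control domain (dom0); on a pool this prints
# one UUID per host's dom0.
xe vm-list is-control-domain=true params=uuid --minimal
```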
-
Another lead, although quite old: https://serverfault.com/questions/520490/very-high-memory-usage-but-not-claimed-by-any-process
In that situation the memory was seemingly consumed by LVM-related operations, and stopping all LVM operations released it. Not easy to test in production, though. A non-disruptive way to check for kernel-side usage is sketched below.
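An LVM or device-mapper leak would live in kernel memory, so it would show up in /proc/meminfo and slabtop rather than in any process's RSS. A quick check, nothing XCP-ng specific:

```
# Kernel-side memory consumers, in KiB; unusually large Slab/SUnreclaim
# values point at a kernel or driver leak rather than a process.
grep -E '^(MemTotal|MemFree|Buffers|Cached|Slab|SReclaimable|SUnreclaim)' /proc/meminfo

# Top slab caches by cache size, one-shot output.
slabtop -o --sort=c | head -n 20
```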
-
Current Top:
top - 15:38:00 up 62 days,  4:22,  2 users,  load average: 0.06, 0.08, 0.08
Tasks: 295 total,   1 running, 188 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.6 us,  0.0 sy,  0.0 ni, 99.4 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem : 12210160 total,  3596020 free,  7564312 used,  1049828 buff/cache
KiB Swap:  1048572 total,  1048572 free,        0 used.  4420052 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 2516 root      20   0  888308 123224  25172 S   0.0  1.0 230:49.43 xapi
 1947 root      10 -10  712372  89348   9756 S   0.0  0.7 616:24.82 ovs-vswitc+
 1054 root      20   0  102204  30600  15516 S   0.0  0.3  23:00.23 message-sw+
 2515 root      20   0  493252  25388  12884 S   0.0  0.2 124:03.44 xenopsd-xc
 2527 root      20   0  244124  25128   8952 S   0.0  0.2   0:24.59 python
 1533 root      20   0  277472  23956   7928 S   0.0  0.2 161:16.62 xcp-rrdd
 2514 root      20   0   95448  19204  11588 S   0.0  0.2 104:18.98 xapi-stora+
 1069 root      20   0   69952  17980   9676 S   0.0  0.1   0:23.74 varstored-+
 2042 root      20   0  138300  17524   9116 S   0.0  0.1  71:06.89 xcp-networ+
 2524 root      20   0  211832  17248   7728 S   0.0  0.1   8:15.16 python
 2041 root      20   0  223856  16836   7840 S   0.0  0.1   0:00.28 python
26502 65539     20   0  334356  16236   9340 S   0.0  0.1 603:42.74 qemu-syste+
 5724 65540     20   0  208404  15400   9240 S   0.0  0.1 469:19.79 qemu-syste+
 2528 root      20   0  108192  14760  10284 S   0.0  0.1   0:00.01 xapi-nbd
 9482 65537     20   0  316948  14204   9316 S   0.0  0.1 560:47.71 qemu-syste+
24445 65541     20   0  248332  13704   9124 S   0.0  0.1  90:45.58 qemu-syste+
 1649 root      20   0   62552  13340   6172 S   0.0  0.1  60:28.97 xcp-rrdd-x+
Requested Files:
-
@stormi Usually I migrate all VMs off an affected host when its memory is nearly full, but that does not free any memory. Could LVM operations still be happening with no VMs running?
-
@dave I'm not able to tell.
However, this all looks like a memory leak in a kernel driver or module. Maybe we should try to find a common pattern between the affected hosts by looking at the output of `lsmod` to know which modules are loaded.
-
Recompiling the kernel with `kmemleak` might be the fastest route to finding a solution here. Unfortunately, this obviously requires a reboot and will incur some performance hit while `kmemleak` is enabled (likely not an option for some systems).
https://www.kernel.org/doc/html/latest/dev-tools/kmemleak.html
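For reference, once the kernel is rebuilt with CONFIG_DEBUG_KMEMLEAK, using it boils down to triggering a scan and reading the report (per the documentation linked above; this assumes debugfs is mounted):

```
# Mount debugfs if it is not already mounted.
mount -t debugfs nodev /sys/kernel/debug

# Trigger an immediate scan and read any reported leaks with stack traces.
echo scan > /sys/kernel/debug/kmemleak
cat /sys/kernel/debug/kmemleak

# Clear the current list to watch only leaks that appear from now on.
echo clear > /sys/kernel/debug/kmemleak
```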
-
@dave Thanks for the hint. Yes, we have iSCSI SRs attached.
-
So, the most probable cause of the growing memory usage reported in this thread is a memory leak in the Linux kernel, or more likely in a driver module.
Could you all share the output of `lsmod` so that we can try to identify a common factor between the affected hosts? A quick way to compare the lists is sketched just below.
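Something like this would do (the host names in the file names are placeholders):

```
# On each affected host, dump the sorted list of loaded module names.
lsmod | awk 'NR > 1 { print $1 }' | sort > "modules-$(hostname).txt"

# With the files gathered in one place, print the modules common to two
# hosts; repeat pairwise (or chain comm) for more hosts.
comm -12 modules-host1.txt modules-host2.txt
```
-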
@stormi currently loaded modules:
Module                  Size  Used by
bridge                196608  0
tun                    49152  0
nfsv3                  49152  1
nfs_acl                16384  1 nfsv3
nfs                   307200  5 nfsv3
lockd                 110592  2 nfsv3,nfs
grace                  16384  1 lockd
fscache               380928  1 nfs
bnx2fc                159744  0
cnic                   81920  1 bnx2fc
uio                    20480  1 cnic
fcoe                   32768  0
libfcoe                77824  2 fcoe,bnx2fc
libfc                 147456  3 fcoe,bnx2fc,libfcoe
scsi_transport_fc      69632  3 fcoe,libfc,bnx2fc
openvswitch           147456  53
nsh                    16384  1 openvswitch
nf_nat_ipv6            16384  1 openvswitch
nf_nat_ipv4            16384  1 openvswitch
nf_conncount           16384  1 openvswitch
nf_nat                 36864  3 nf_nat_ipv6,nf_nat_ipv4,openvswitch
8021q                  40960  0
garp                   16384  1 8021q
mrp                    20480  1 8021q
stp                    16384  2 bridge,garp
llc                    16384  3 bridge,stp,garp
ipt_REJECT             16384  3
nf_reject_ipv4         16384  1 ipt_REJECT
xt_tcpudp              16384  9
xt_multiport           16384  1
xt_conntrack           16384  6
nf_conntrack          163840  6 xt_conntrack,nf_nat,nf_nat_ipv6,nf_nat_ipv4,openvswitch,nf_conncount
nf_defrag_ipv6         20480  2 nf_conntrack,openvswitch
nf_defrag_ipv4         16384  1 nf_conntrack
libcrc32c              16384  3 nf_conntrack,nf_nat,openvswitch
iptable_filter         16384  1
dm_multipath           32768  0
sunrpc                413696  20 lockd,nfsv3,nfs_acl,nfs
sb_edac                24576  0
intel_powerclamp       16384  0
crct10dif_pclmul       16384  0
crc32_pclmul           16384  0
ghash_clmulni_intel    16384  0
pcbc                   16384  0
aesni_intel           200704  0
aes_x86_64             20480  1 aesni_intel
cdc_ether              16384  0
crypto_simd            16384  1 aesni_intel
usbnet                 49152  1 cdc_ether
cryptd                 28672  3 crypto_simd,ghash_clmulni_intel,aesni_intel
glue_helper            16384  1 aesni_intel
hid_generic            16384  0
mii                    16384  1 usbnet
dm_mod                151552  1 dm_multipath
usbhid                 57344  0
hid                   122880  2 usbhid,hid_generic
sg                     40960  0
intel_rapl_perf        16384  0
mei_me                 45056  0
mei                   114688  1 mei_me
lpc_ich                28672  0
i2c_i801               28672  0
ipmi_si                65536  0
acpi_power_meter       20480  0
ipmi_devintf           20480  0
ipmi_msghandler        61440  2 ipmi_devintf,ipmi_si
ip_tables              28672  2 iptable_filter
x_tables               45056  6 xt_conntrack,iptable_filter,xt_multiport,xt_tcpudp,ipt_REJECT,ip_tables
sd_mod                 53248  4
xhci_pci               16384  0
ehci_pci               16384  0
tg3                   192512  0
xhci_hcd              258048  1 xhci_pci
ehci_hcd               90112  1 ehci_pci
ixgbe                 380928  0
megaraid_sas          167936  3
scsi_dh_rdac           16384  0
scsi_dh_hp_sw          16384  0
scsi_dh_emc            16384  0
scsi_dh_alua           20480  0
scsi_mod              253952  13 fcoe,scsi_dh_emc,sd_mod,dm_multipath,scsi_dh_alua,scsi_transport_fc,libfc,bnx2fc,megaraid_sas,sg,scsi_dh_rdac,scsi_dh_hp_sw
ipv6                  548864  926 bridge,nf_nat_ipv6
crc_ccitt              16384  1 ipv6