XCP-ng

    Alert: Control Domain Memory Usage

      JeffBerntsen Top contributor @stormi

      @stormi I seem to remember running across a similar problem on a RHEL system. Since XCP-ng is based on CentOS, which is pretty much the same thing, could it be related to this: https://bugzilla.redhat.com/show_bug.cgi?id=1663267

        stormi Vates 🪐 XCP-ng Team @inaki.martinez

        @JeffBerntsen This could indeed be it. The advisory for the fix is https://access.redhat.com/errata/RHSA-2020:1000. I'll consider a backport.

        @inaki-martinez I think dom0 memory ballooning (if that is even a thing... I need to confirm) is ruled out in your case. The sum of the RSS values for all processes (a simplistic and overestimating way of measuring total process RAM usage, since shared memory is counted once per process) is around 1.5 GB, which leaves more than 4.5 GB unexplained.
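
        For reference, a minimal way to reproduce that RSS-based estimate on a host (a rough upper bound, for the reason given above):

          # Sum the RSS (in KiB) of every process and print the total in GiB.
          # This overestimates real usage because shared memory is counted per process.
          ps -eo rss= | awk '{ sum += $1 } END { printf "total RSS: %.2f GiB\n", sum / 1024 / 1024 }'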

          olivierlambert Vates 🪐 Co-Founder CEO

          RHSA-2020:1000 is an interesting lead, indeed 🙂

            daKju

            @stormi
            I have the problem on a pool master with two running VMs that is showing memory alerts.
            Here is some info; maybe you can find something.

            slabtop.txt
            xehostparamlist.txt
            xltop.txt
            meminfo.txt
            top.txt
            grub.cfg.txt

            Sorry, I can't add images; it seems something is broken with some node modules.

              stormi Vates 🪐 XCP-ng Team @daKju

              @daKju Thanks. What version of XCP-ng? Does restarting the rsyslog or openvswitch service release RAM?
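
              One simple way to check whether a service restart gives memory back to dom0 (rsyslog shown here; the same check works for other services):

                # Compare free memory in dom0 before and after restarting the service.
                free -m
                systemctl restart rsyslog
                sleep 5
                free -m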

                daKju @stormi

                @stormi
                We have 8.1.
                I haven't restarted the services yet. Can the openvswitch service be restarted safely, without any impact?
                Nothing changed after restarting rsyslog.

                  stormi Vates 🪐 XCP-ng Team @daKju

                  @daKju I must admit I can't guarantee that it is perfectly safe. It will at least cause a brief network interruption.

                    dave

                    Don't restart openvswitch if you have active iSCSI storage attached.

                      stormi Vates 🪐 XCP-ng Team @dave

                      @dave since you're here, can you share the contents of your grub.cfg, the line starting with "Domain-0" in the output of xl top, and the output of xe vm-param-list uuid={YOUR_DOM0_VM_UUID} | grep memory?

                      And if your offer for remote access to a server to try and find where the missing memory is being used still stands, I'm interested.
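
                      For anyone else following along, a sketch of how that information can be collected on a host (the dom0 UUID lookup via is-control-domain assumes a standard setup; on a pool it returns one UUID per host, so adjust as needed):

                        # Boot configuration, including any dom0_mem= setting (path may differ on UEFI installs).
                        cat /boot/grub/grub.cfg

                        # Memory line for the control domain as seen by Xen
                        # (xentop is the batch-mode equivalent of "xl top").
                        xentop -b -i 1 | grep Domain-0

                        # dom0 memory parameters from XAPI.
                        DOM0_UUID=$(xe vm-list is-control-domain=true --minimal)
                        xe vm-param-list uuid=$DOM0_UUID | grep memory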

                        stormi Vates 🪐 XCP-ng Team

                        Another lead, although quite old: https://serverfault.com/questions/520490/very-high-memory-usage-but-not-claimed-by-any-process

                        In that situation the memory was seemingly taken by operations related to LVM, and stopping all LVM operations released the memory. Not easy to test in production though.
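
                        As a sanity check, a rough way to see how much dom0 memory is not explained by free memory, buffers, page cache or slab (the remainder covers anonymous process memory plus other kernel allocations such as vmalloc and page tables):

                          # All /proc/meminfo values are in KiB; the result is printed in MiB.
                          awk '/^MemTotal:/ { t = $2 }
                               /^MemFree:/  { f = $2 }
                               /^Buffers:/  { b = $2 }
                               /^Cached:/   { c = $2 }
                               /^Slab:/     { s = $2 }
                               END { printf "not free/cache/slab: %.1f MiB\n", (t - f - b - c - s) / 1024 }' /proc/meminfo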

                          dave

                          Current Top:

                          top - 15:38:00 up 62 days,  4:22,  2 users,  load average: 0.06, 0.08, 0.08
                          Tasks: 295 total,   1 running, 188 sleeping,   0 stopped,   0 zombie
                          %Cpu(s):  0.6 us,  0.0 sy,  0.0 ni, 99.4 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
                          KiB Mem : 12210160 total,  3596020 free,  7564312 used,  1049828 buff/cache
                          KiB Swap:  1048572 total,  1048572 free,        0 used.  4420052 avail Mem
                          
                            PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
                           2516 root      20   0  888308 123224  25172 S   0.0  1.0 230:49.43 xapi
                           1947 root      10 -10  712372  89348   9756 S   0.0  0.7 616:24.82 ovs-vswitc+
                           1054 root      20   0  102204  30600  15516 S   0.0  0.3  23:00.23 message-sw+
                           2515 root      20   0  493252  25388  12884 S   0.0  0.2 124:03.44 xenopsd-xc
                           2527 root      20   0  244124  25128   8952 S   0.0  0.2   0:24.59 python
                           1533 root      20   0  277472  23956   7928 S   0.0  0.2 161:16.62 xcp-rrdd
                           2514 root      20   0   95448  19204  11588 S   0.0  0.2 104:18.98 xapi-stora+
                           1069 root      20   0   69952  17980   9676 S   0.0  0.1   0:23.74 varstored-+
                           2042 root      20   0  138300  17524   9116 S   0.0  0.1  71:06.89 xcp-networ+
                           2524 root      20   0  211832  17248   7728 S   0.0  0.1   8:15.16 python
                           2041 root      20   0  223856  16836   7840 S   0.0  0.1   0:00.28 python
                          26502 65539     20   0  334356  16236   9340 S   0.0  0.1 603:42.74 qemu-syste+
                           5724 65540     20   0  208404  15400   9240 S   0.0  0.1 469:19.79 qemu-syste+
                           2528 root      20   0  108192  14760  10284 S   0.0  0.1   0:00.01 xapi-nbd
                           9482 65537     20   0  316948  14204   9316 S   0.0  0.1 560:47.71 qemu-syste+
                          24445 65541     20   0  248332  13704   9124 S   0.0  0.1  90:45.58 qemu-syste+
                           1649 root      20   0   62552  13340   6172 S   0.0  0.1  60:28.97 xcp-rrdd-x+
                          

                          Requested Files:

                          xl top.txt
                          dom0 param list.txt
                          grub.cfg.txt

                            dave @stormi

                            @stormi Usually I migrate all VMs off affected hosts to others when memory is nearly full, but that does not free any memory. Could LVM operations still be happening with no VMs running?

                              stormi Vates 🪐 XCP-ng Team @dave

                              @dave I'm not able to tell.

                              However, this all looks like a memory leak in a kernel driver or module. Maybe we should try to find a common pattern between the affected hosts by looking at the output of lsmod to see which modules are loaded.
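
                              A simple way to do that comparison, assuming each affected host produces a file like this and the files are then compared pairwise (the host names in the file names are placeholders):

                                # Capture the sorted list of loaded module names on this host.
                                lsmod | awk 'NR > 1 { print $1 }' | sort > /tmp/modules-$(hostname).txt

                                # With files from two hosts, list the modules loaded on both.
                                comm -12 /tmp/modules-hostA.txt /tmp/modules-hostB.txt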

                                beshleman

                                Recompiling the kernel with kmemleak might be the fastest route to finding a solution here. Unfortunately, this obviously requires a reboot and will incur some performance hit while kmemleak is enabled (likely not an option for some systems).

                                https://www.kernel.org/doc/html/latest/dev-tools/kmemleak.html
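
                                For reference, once a kernel built with CONFIG_DEBUG_KMEMLEAK is running, the basic workflow from that documentation looks like this:

                                  # Mount debugfs if it is not already mounted.
                                  mount -t debugfs nodev /sys/kernel/debug/

                                  # Trigger an immediate scan for leaked objects.
                                  echo scan > /sys/kernel/debug/kmemleak

                                  # Show the suspected leaks with their allocation backtraces.
                                  cat /sys/kernel/debug/kmemleak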

                                  daKju @dave

                                  @dave Thanks for the hint. Yes, we have iSCSI SRs attached.

                                    stormi Vates 🪐 XCP-ng Team

                                    So, the most probable cause of the growing memory usage people in this thread are seeing is a memory leak in the Linux kernel, most likely in a driver module.

                                    Could you all share the output of lsmod so that we can try to identify a common factor between all affected hosts?

                                      inaki.martinez @stormi

                                      @stormi Currently loaded modules:

                                      Module                  Size  Used by
                                      bridge                196608  0 
                                      tun                    49152  0 
                                      nfsv3                  49152  1 
                                      nfs_acl                16384  1 nfsv3
                                      nfs                   307200  5 nfsv3
                                      lockd                 110592  2 nfsv3,nfs
                                      grace                  16384  1 lockd
                                      fscache               380928  1 nfs
                                      bnx2fc                159744  0 
                                      cnic                   81920  1 bnx2fc
                                      uio                    20480  1 cnic
                                      fcoe                   32768  0 
                                      libfcoe                77824  2 fcoe,bnx2fc
                                      libfc                 147456  3 fcoe,bnx2fc,libfcoe
                                      scsi_transport_fc      69632  3 fcoe,libfc,bnx2fc
                                      openvswitch           147456  53 
                                      nsh                    16384  1 openvswitch
                                      nf_nat_ipv6            16384  1 openvswitch
                                      nf_nat_ipv4            16384  1 openvswitch
                                      nf_conncount           16384  1 openvswitch
                                      nf_nat                 36864  3 nf_nat_ipv6,nf_nat_ipv4,openvswitch
                                      8021q                  40960  0 
                                      garp                   16384  1 8021q
                                      mrp                    20480  1 8021q
                                      stp                    16384  2 bridge,garp
                                      llc                    16384  3 bridge,stp,garp
                                      ipt_REJECT             16384  3 
                                      nf_reject_ipv4         16384  1 ipt_REJECT
                                      xt_tcpudp              16384  9 
                                      xt_multiport           16384  1 
                                      xt_conntrack           16384  6 
                                      nf_conntrack          163840  6 xt_conntrack,nf_nat,nf_nat_ipv6,nf_nat_ipv4,openvswitch,nf_conncount
                                      nf_defrag_ipv6         20480  2 nf_conntrack,openvswitch
                                      nf_defrag_ipv4         16384  1 nf_conntrack
                                      libcrc32c              16384  3 nf_conntrack,nf_nat,openvswitch
                                      iptable_filter         16384  1 
                                      dm_multipath           32768  0 
                                      sunrpc                413696  20 lockd,nfsv3,nfs_acl,nfs
                                      sb_edac                24576  0 
                                      intel_powerclamp       16384  0 
                                      crct10dif_pclmul       16384  0 
                                      crc32_pclmul           16384  0 
                                      ghash_clmulni_intel    16384  0 
                                      pcbc                   16384  0 
                                      aesni_intel           200704  0 
                                      aes_x86_64             20480  1 aesni_intel
                                      cdc_ether              16384  0 
                                      crypto_simd            16384  1 aesni_intel
                                      usbnet                 49152  1 cdc_ether
                                      cryptd                 28672  3 crypto_simd,ghash_clmulni_intel,aesni_intel
                                      glue_helper            16384  1 aesni_intel
                                      hid_generic            16384  0 
                                      mii                    16384  1 usbnet
                                      dm_mod                151552  1 dm_multipath
                                      usbhid                 57344  0 
                                      hid                   122880  2 usbhid,hid_generic
                                      sg                     40960  0 
                                      intel_rapl_perf        16384  0 
                                      mei_me                 45056  0 
                                      mei                   114688  1 mei_me
                                      lpc_ich                28672  0 
                                      i2c_i801               28672  0 
                                      ipmi_si                65536  0 
                                      acpi_power_meter       20480  0 
                                      ipmi_devintf           20480  0 
                                      ipmi_msghandler        61440  2 ipmi_devintf,ipmi_si
                                      ip_tables              28672  2 iptable_filter
                                      x_tables               45056  6 xt_conntrack,iptable_filter,xt_multiport,xt_tcpudp,ipt_REJECT,ip_tables
                                      sd_mod                 53248  4 
                                      xhci_pci               16384  0 
                                      ehci_pci               16384  0 
                                      tg3                   192512  0 
                                      xhci_hcd              258048  1 xhci_pci
                                      ehci_hcd               90112  1 ehci_pci
                                      ixgbe                 380928  0 
                                      megaraid_sas          167936  3 
                                      scsi_dh_rdac           16384  0 
                                      scsi_dh_hp_sw          16384  0 
                                      scsi_dh_emc            16384  0 
                                      scsi_dh_alua           20480  0 
                                      scsi_mod              253952  13 fcoe,scsi_dh_emc,sd_mod,dm_multipath,scsi_dh_alua,scsi_transport_fc,libfc,bnx2fc,megaraid_sas,sg,scsi_dh_rdac,scsi_dh_hp_sw
                                      ipv6                  548864  926 bridge,nf_nat_ipv6
                                      crc_ccitt              16384  1 ipv6
                                      
                                        MrMike @stormi

                                        @stormi

                                        Our environment is also seeing the leak on XCP-ng 8.1 hosts. At first I doubled the amount of memory allocated to the control domain from 4 GB to 8 GB, and now, thirty-something days later, it has run out of RAM again. This is the third time the hosts (4) have run out of memory. This is really causing us issues.

                                        # lsmod
                                        Module                  Size  Used by
                                        tun                    49152  0
                                        nfsv3                  49152  1
                                        nfs_acl                16384  1 nfsv3
                                        nfs                   307200  2 nfsv3
                                        lockd                 110592  2 nfsv3,nfs
                                        grace                  16384  1 lockd
                                        fscache               380928  1 nfs
                                        bnx2fc                159744  0
                                        cnic                   81920  1 bnx2fc
                                        uio                    20480  1 cnic
                                        fcoe                   32768  0
                                        libfcoe                77824  2 fcoe,bnx2fc
                                        libfc                 147456  3 fcoe,bnx2fc,libfcoe
                                        scsi_transport_fc      69632  3 fcoe,libfc,bnx2fc
                                        openvswitch           147456  11
                                        nsh                    16384  1 openvswitch
                                        nf_nat_ipv6            16384  1 openvswitch
                                        nf_nat_ipv4            16384  1 openvswitch
                                        nf_conncount           16384  1 openvswitch
                                        nf_nat                 36864  3 nf_nat_ipv6,nf_nat_ipv4,openvswitch
                                        8021q                  40960  0
                                        garp                   16384  1 8021q
                                        mrp                    20480  1 8021q
                                        stp                    16384  1 garp
                                        llc                    16384  2 stp,garp
                                        ipt_REJECT             16384  3
                                        nf_reject_ipv4         16384  1 ipt_REJECT
                                        xt_tcpudp              16384  8
                                        xt_multiport           16384  1
                                        xt_conntrack           16384  5
                                        nf_conntrack          163840  6 xt_conntrack,nf_nat,nf_nat_ipv6,nf_nat_ipv4,openvswitch,nf_conncount
                                        nf_defrag_ipv6         20480  2 nf_conntrack,openvswitch
                                        nf_defrag_ipv4         16384  1 nf_conntrack
                                        libcrc32c              16384  3 nf_conntrack,nf_nat,openvswitch
                                        dm_multipath           32768  0
                                        iptable_filter         16384  1
                                        sunrpc                413696  18 lockd,nfsv3,nfs_acl,nfs
                                        sb_edac                24576  0
                                        intel_powerclamp       16384  0
                                        crct10dif_pclmul       16384  0
                                        crc32_pclmul           16384  0
                                        ghash_clmulni_intel    16384  0
                                        pcbc                   16384  0
                                        aesni_intel           200704  0
                                        aes_x86_64             20480  1 aesni_intel
                                        crypto_simd            16384  1 aesni_intel
                                        cryptd                 28672  3 crypto_simd,ghash_clmulni_intel,aesni_intel
                                        glue_helper            16384  1 aesni_intel
                                        dm_mod                151552  5 dm_multipath
                                        intel_rapl_perf        16384  0
                                        sg                     40960  0
                                        i2c_i801               28672  0
                                        mei_me                 45056  0
                                        mei                   114688  1 mei_me
                                        lpc_ich                28672  0
                                        ipmi_si                65536  0
                                        ipmi_devintf           20480  0
                                        ipmi_msghandler        61440  2 ipmi_devintf,ipmi_si
                                        acpi_power_meter       20480  0
                                        ip_tables              28672  2 iptable_filter
                                        x_tables               45056  6 xt_conntrack,iptable_filter,xt_multiport,xt_tcpudp,ipt_REJECT,ip_tables
                                        hid_generic            16384  0
                                        usbhid                 57344  0
                                        hid                   122880  2 usbhid,hid_generic
                                        sd_mod                 53248  5
                                        ahci                   40960  3
                                        libahci                40960  1 ahci
                                        xhci_pci               16384  0
                                        ehci_pci               16384  0
                                        libata                274432  2 libahci,ahci
                                        ehci_hcd               90112  1 ehci_pci
                                        xhci_hcd              258048  1 xhci_pci
                                        ixgbe                 380928  0
                                        megaraid_sas          167936  1
                                        scsi_dh_rdac           16384  0
                                        scsi_dh_hp_sw          16384  0
                                        scsi_dh_emc            16384  0
                                        scsi_dh_alua           20480  0
                                        scsi_mod              253952  14 fcoe,scsi_dh_emc,sd_mod,dm_multipath,scsi_dh_alua,scsi_transport_fc,libfc,bnx2fc,megaraid_sas,libata,sg,scsi_dh_rdac,scsi_dh_hp_sw
                                        ipv6                  548864  193 nf_nat_ipv6
                                        crc_ccitt              16384  1 ipv6
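
                                        Regarding doubling the control domain memory mentioned above: for reference, on XCP-ng this is typically done with the xen-cmdline helper followed by a host reboot (a sketch; check the documentation for your version before applying):

                                          # Set dom0 memory to 8 GiB; takes effect after the host is rebooted.
                                          /opt/xensource/libexec/xen-cmdline --set-xen dom0_mem=8192M,max:8192M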
                                        
                                        
                                          olivierlambert Vates 🪐 Co-Founder CEO

                                          @MrMike do you have a ticket open on our side?

                                          We are continuing to gather clues while also releasing 8.2 LTS. It should be easier to make progress once that release is done (more time to work on this). Obviously, support tickets are handled with the highest priority we can give.

                                            MrMike @olivierlambert

                                            @olivierlambert I don't have a ticket because we don't have paid support for XCP-ng.
