XCP-ng

Alert: Control Domain Memory Usage

• umbradark

  I have a set of hosts on kernel-4.19.19-6.0.11.1.xcpng8.1 and I believe I'm hitting this as well. The OOM killer seems to kill openvswitch, which takes the host offline and, in most cases, the VMs as well.
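  In case others want to confirm the same failure mode, this is how I check that it's really the OOM killer taking openvswitch down (a small sketch; /var/log/kern.log is the default kernel log location on XCP-ng 8.x):

      # Look for OOM-killer activity in the ring buffer and the persisted log
      dmesg -T | grep -iE 'out of memory|oom-killer'
      grep -i oom /var/log/kern.log | tail -n 20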

• stormi (Vates 🪐 XCP-ng Team)

  So, the difference between 4.19.19-6.0.10.1.xcpng8.1 and 4.19.19-6.0.11.1.xcpng8.1 is two patches meant to reduce the performance overhead of the CROSSTalk vulnerability mitigations.

  So, assuming from delaf's test results that one of those patches introduced the memory leak, I have built two test kernels, each with one of those patches disabled.

  Here are the tests that you can do:

  • Reproduce delaf's findings by running kernel-4.19.19-6.0.10.1.xcpng8.1: no more memory leaks?
  • Test the kernel I built with patch 53 disabled: https://nextcloud.vates.fr/index.php/s/YXWCSEwo8SWkfAZ
  • Test the kernel I built with patch 62 disabled: https://nextcloud.vates.fr/index.php/s/arj5YfdrkjMKbBy

  If one of those patches is the cause of the memory leak, then one of the two test kernels should still leak and the other should not. Installing them is straightforward; see the sketch below.
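  Installing one of these test kernels boils down to something like this (a sketch; the exact RPM file name depends on which build you download, the name below is just an example):

      # Install the downloaded test kernel RPM, then reboot into it
      yum install ./kernel-4.19.19-6.0.11.1.0.1.patch53disabled.xcpng8.1.x86_64.rpm
      reboot

  After the reboot, uname -r should report the test kernel version.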

• delaf (@stormi)

  @stormi I have installed the two kernels:

      272 ~]# yum list installed kernel | grep kernel
      kernel.x86_64                   4.19.19-6.0.11.1.0.1.patch53disabled.xcpng8.1

      273 ~]# yum list installed kernel | grep kernel
      kernel.x86_64                   4.19.19-6.0.11.1.0.1.patch62disabled.xcpng8.1

  I have removed the modification in /etc/modprobe.d/dist.conf on server 273.

  We have to wait a little bit now 😉

• stormi (Vates 🪐 XCP-ng Team)

  FYI, the kernel with kmemleak support did detect something for a user who has a support ticket related to dom0 memory usage.
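  If you want to run the same check yourselves, kmemleak's debugfs interface is standard once you have booted the kmemleak-enabled kernel (a minimal sketch):

      # Trigger an immediate scan, then read back the suspected leaks
      echo scan > /sys/kernel/debug/kmemleak
      cat /sys/kernel/debug/kmemleak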

• delaf (@stormi)

  @stormi For the kernel-4.19.19-6.0.10.1.xcpng8.1 test, I'm not sure it solves the problem, because I still see a small memory increase. We have to wait a bit more 😕

• delaf (@delaf)

  @stormi

  • Server 266 with kernel-alt: still no problem.
    (screenshot: Screen Shot 2020-12-02 at 10.08.47.png)

  • Server 268 with 4.19.19-6.0.10.1.xcpng8.1: the problem began a few days ago, after some stable days.
    (screenshot: Screen Shot 2020-12-02 at 10.03.57.png)

  • Server 272 with 4.19.19-6.0.11.1.0.1.patch53disabled.xcpng8.1:
    (screenshot: Screen Shot 2020-12-02 at 10.05.47.png)

  • Server 273 with 4.19.19-6.0.11.1.0.1.patch62disabled.xcpng8.1:
    (screenshot: Screen Shot 2020-12-02 at 10.05.50.png)

  It seems that 4.19.19-6.0.11.1.0.1.patch62disabled.xcpng8.1 is more stable than 4.19.19-6.0.11.1.0.1.patch53disabled.xcpng8.1, but it is a bit early to be sure.

• delaf (@delaf)

  @stormi @r1 Server 273 with 4.19.19-6.0.11.1.0.1.patch62disabled.xcpng8.1 is still stable, and 272 has the memory problem.

  • 272
    (screenshot: Screen Shot 2020-12-15 at 14.50.31.png)

  • 273
    (screenshot: Screen Shot 2020-12-15 at 14.50.40.png)

• stormi (Vates 🪐 XCP-ng Team)

  Thanks. It looks like I'm doomed to see seemingly contradictory results for every kernel-related issue (this one, and another one regarding network performance): you don't have any leaks without patch 62, but you had leaks with kernel 4.19.19-6.0.10.1.xcpng8.1, which doesn't have that patch either. So it's hard to conclude anything 😕

• rblvlvl

  Hey guys,

  we are facing the same issue with XCP-ng 8.1.
  We can't figure out what uses all this memory (8 GB) or how to reduce it. Restarting the toolstack did nothing, and we can't afford downtime because everything runs in production. Similar systems with the same configuration don't show this behavior.

  I can provide some output from our system; maybe you can see something or help us find a solution.

  free -m:

                    total        used        free      shared  buff/cache   available
      Mem:           7912        7595          82          33         234          62
      Swap:          1023         216         807

  xl top:

            NAME  STATE   CPU(sec) CPU(%)     MEM(k) MEM(%)  MAXMEM(k) MAXMEM(%) VCPUS NETS NETTX(k) NETRX(k) VBDS   VBD_OO    VBD_RD     VBD_WR   VBD_RSECT   VBD_WSECT SSID
        Domain-0 -----r    7308446   52.1    8388608    3.1    8388608       3.1    16    0        0        0    0        0         0          0           0           0    0

  xe vm-param-list uuid | grep memory:

               memory-actual ( RO): 8589934592
               memory-target ( RO): <unknown>
             memory-overhead ( RO): 84934656
           memory-static-max ( RW): 8589934592
          memory-dynamic-max ( RW): 8589934592
          memory-dynamic-min ( RW): 8589934592
           memory-static-min ( RW): 8589934592
                      memory (MRO): <not in database>

  lsmod and grub.cfg: (attachments: lsmod.txt, grub-cgf.txt)

  top output: (screenshot: Bildschirmfoto 2020-12-30 um 08.55.32.png)

  Tell me if you need more information or if you have any idea. Thanks.
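  If it helps, the xl top line above shows Domain-0 at 8388608 KiB, i.e. the full 8 GiB dom0 allocation is in use. To check whether that memory is held by the kernel rather than by processes, I can run something like this (a sketch; standard procfs tooling, nothing XCP-ng specific):

      # Large and growing Slab/SUnreclaim values point at a kernel-side leak
      grep -E 'Slab|SReclaimable|SUnreclaim' /proc/meminfo
      # Top slab caches by cache size
      slabtop -o -s c | head -n 15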

• olivierlambert (Vates 🪐 Co-Founder CEO)

  We need more details on the host:

  1. Hardware details (NICs, server model)
  2. Whether all your hardware is fully BIOS/firmware up to date
  3. The kind of storage used (iSCSI, FCoE, NFS?)

  So far, we couldn't find a real common point between the affected setups, and that makes it hard to find the root cause.

• rblvlvl

  @olivierlambert It is a Dell PowerEdge R440, version 2.6.3, with an LACP bond, and we use NFS storage.

• olivierlambert (Vates 🪐 Co-Founder CEO)

  That doesn't answer all my questions 😉

• rblvlvl (@olivierlambert)

  @olivierlambert

  NIC: Intel(R) Ethernet 10G 2P X550-t Adapter

      driver: ixgbe
      version: 5.5.2
      firmware-version: 0x80000f32, 19.5.12

  RAID controller:

      Product Name    : PERC H740P Adapter
      Serial No       : 04B00V9
      FW Package Build: 50.9.4-3025

                  Mfg. Data
                ================
      Mfg. Date       : 04/18/20
      Rework Date     : 04/18/20
      Revision No     : A03
      Battery FRU     : N/A

                Image Versions in Flash:
                ================
      Boot Block Version : 7.02.00.00-0021
      BIOS Version       : 7.09.02.1_0x07090301
      FW Version         : 5.093.00-2856
      NVDATA Version     : 5.0900.06-0034

  I know our hardware is not fully up to date, but an update requires a maintenance window, which cannot be arranged that quickly.
  Maybe someone knows a temporary fix to reduce dom0 memory usage until the updates can be made.

• olivierlambert (Vates 🪐 Co-Founder CEO)

  Thanks.

  If it's a kernel leak, there's nothing to do in user space.

• stormi (Vates 🪐 XCP-ng Team)

  Hi everyone.

  So, let's not give up, and let's try to find that hidden kernel leak and fix it!

  Let me summarize what we currently know. Correct me if one of these statements is wrong for you:

  • It all started with XCP-ng 8.0 and still happens in XCP-ng 8.1.
  • The memory is not used by user space processes; it's a kernel leak.
  • We fixed an rsyslog memory leak through updates, but that was a different issue. By the way, if your memory is being eaten by a user space process, please open a new thread so that we stay focused on the kernel leak here.
  • Our alternate kernel, kernel-alt, is apparently not affected.
  • Most (all?) affected hosts have 10Gb interfaces.
  • Many affected hosts are using iSCSI, though the last report (from rblvlvl) is on a host with NFS storage.
  • Some reports suggest that the more network-intensive the load is, the quicker the memory usage grows.
  • Hosts with more VMs seem to see memory usage grow faster (this may be related to the previous points).
  • At some point we thought that reverting to a previous kernel (without some security patches) had solved the issue, but after some time memory usage started to grow again.
  • kmemleak did not detect obvious culprits, though r1 has a lead regarding iSCSI-related functions, and we should keep trying.
  • Disabling the vendor-provided device drivers in favour of the kernel's built-in drivers did not stop the leak.

  Things that we don't know (tests welcome):

  • Is it affecting XCP-ng 8.2 too?
  • Is it affecting Citrix Hypervisor? It should, since we use (mostly) the same kernel and drivers, but this doesn't seem to be a known issue for them.

  Now, how to move on:

  • Getting our hands on an affected test server, with authorization to reboot it, change the kernel, etc., would help a lot, since we can't reproduce the issue internally (@dave maybe? At some point you said you might provide one).
  • Reach out to kernel developers for advice?
  • If someone manages to reproduce on Citrix Hypervisor, raise the issue on their bug tracker too.
  • Check the kernel 4.19 history for memory leak fixes, especially those related to networking (see the sketch after this post).

  Any other idea to move on is welcome, of course.
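  For the last point, a starting query against the stable tree could look like this (a sketch; it assumes a checkout of the linux-stable repository, with the fix presumed to have landed after our 4.19.19 base):

      # List memory-leak fixes touching networking code since 4.19.19
      git log --oneline -i --grep='memleak' --grep='memory leak' v4.19.19..linux-4.19.y -- net/ drivers/net/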

• stormi (Vates 🪐 XCP-ng Team)

  Before I realized that not every affected host was using the ixgbe driver, contrary to what I initially thought, I built an alternate driver from the latest sources from Intel.

  So, even if there's little hope that it will fix anything, here's how to install it (on XCP-ng 8.1 or 8.2):

      yum install intel-ixgbe-alt --enablerepo=xcp-ng-testing
      reboot

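  After the reboot, you can confirm which driver build is actually loaded (a sketch; replace eth0 with one of your 10G interfaces):

      # Driver name, version and firmware as reported for the interface
      ethtool -i eth0
      modinfo ixgbe | grep -E '^(version|filename)'
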
• olivierlambert (Vates 🪐 Co-Founder CEO)

  Did we also ask people to provide lsmod? It might be interesting to overlap the different results and see which modules they have in common.

• stormi (Vates 🪐 XCP-ng Team)

  @olivierlambert Yes, various users have shared their lsmod.

• olivierlambert (Vates 🪐 Co-Founder CEO)

  The latest report was on NFS storage; however, lsmod shows various iSCSI modules loaded, so it doesn't mean it's not an iSCSI module issue:

      scsi_mod              253952  13 fcoe,scsi_dh_emc,sd_mod,dm_multipath,scsi_dh_alua,scsi_transport_fc,libfc,bnx2fc,megaraid_sas,sg,scsi_dh_rdac,scsi_dh_hp_sw

  edit: what about bnx2fc? Is it common to other reports?

  edit 2: nope, it might be megaraid_sas instead.
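  One way to check that systematically would be to diff the module lists from two reports (a sketch; host1.lsmod and host2.lsmod are hypothetical saved lsmod outputs from two affected hosts):

      # Keep only the module names present on both hosts
      comm -12 <(awk 'NR>1 {print $1}' host1.lsmod | sort) \
               <(awk 'NR>1 {print $1}' host2.lsmod | sort)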

• olivierlambert (Vates 🪐 Co-Founder CEO)

  Is there a way to provide alternate/up-to-date modules for the most suspicious ones? At some point, we'll find the culprit!
