    backup mail report says INTERRUPTED but it's not ?

    • MajorP93

      Okay, to update on my findings:

      According to the log lines:

      [40:0x2e27d000] 312864931 ms: Scavenge 2011.2 (2033.4) -> 2005.3 (2033.6) MB, pooled: 0 MB, 2.31 / 0.00 ms  (average mu = 0.257, current mu = 0.211) task;
      [40:0x2e27d000] 312867125 ms: Mark-Compact (reduce) 2023.6 (2044.9) -> 2000.5 (2015.5) MB, pooled: 0 MB, 83.33 / 0.62 ms  (+ 1867.4 ms in 298 steps since start of marking, biggest step 19.4 ms, walltime since start of marking 2194 ms) (average mu = 0.333,
      FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
      

      the default heap size seems to be 2GB. I read some Node documentation regarding heap size and understood that the configured heap size is honored on a per-process basis.
      XO backup seems to spawn multiple Node processes (workers), which is why I figured the value I had previously set as a fix attempt (max-old-space-size=6144) was too high: with multiple Node processes being spawned, 6GB each can cause an OOM quickly.
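
      For reference, the effective per-process ceiling can be read straight from V8. A quick sketch (run it with the same NODE_OPTIONS the xo-server processes get, since the flag applies per process):

          # prints roughly the configured old-space limit in MB
          NODE_OPTIONS=--max-old-space-size=2560 node -p "require('v8').getHeapStatistics().heap_size_limit / 1048576 + ' MB'"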

      For now I added 512MB to the default heap, bringing my heap to a total of 2.5GB.
      I hope this will be enough to keep my backup jobs from failing, as my log clearly indicated that the Node OOM was caused by hitting the heap ceiling.

      If it were caused by the Node 22+ RSS behaviour, there would be other log entries.
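
      For anyone wanting to apply the same bump: a minimal sketch, assuming a systemd-managed xo-server (the unit name and drop-in path are assumptions; adjust to your install). NODE_OPTIONS is inherited by spawned workers, so the ceiling is raised per process:

          # /etc/systemd/system/xo-server.service.d/heap.conf  (drop-in; path assumed)
          [Service]
          Environment=NODE_OPTIONS=--max-old-space-size=2560

          # then reload and restart
          systemctl daemon-reload && systemctl restart xo-server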

      Also, I was thinking a bit more about what @pilow said, and I think I observed something similar.
      Since the "interrupted" issue already occurred a few weeks back, I have been checking htop on my XO VM once in a while and noticed that after backup jobs complete, RAM usage does not really go back down to where it was before.
      After a fresh reboot of my XO VM, RAM usage sits at around 1GB.
      During backups it showed around 6GB of 10GB total in use.
      After backups finished, the XO VM was sitting at around 5GB of RAM.
      So yeah, maybe there is a memory leak somewhere after all.
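
      A simple way to put numbers on that observation is to watch per-process RSS over time instead of htop's aggregate view; a sketch using GNU ps (the RSS column is in KiB):

          # refresh the top node processes by resident memory every minute
          watch -n 60 'ps -C node -o pid,rss,etime,args --sort=-rss | head'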

      Anyhow, I will keep monitoring this and see whether the increased heap makes backup jobs more robust.

      It would still be interesting to hear something from the XO team on this.

      Best regards
      MajorP

      • Pilow @MajorP93

        @MajorP93 here are some screenshots of my XOA RAM:
        [screenshot: XOA RAM usage graph]
        (I lost the stats from before Sunday since I crashed my host during an RPU this weekend...)
        [screenshot: XOA RAM usage graph]
        You can clearly see RAM creeping up and being dumped at each reboot.

        Here is one of my XOA proxies (4 in total; they totally offload backups from my main XOA):
        [screenshot: XOA proxy RAM usage graph]

        There is also a slope of RAM creeping up... the little spikes are overhead while backups are ongoing.

        I have started rebooting the XOA and all 4 proxies every morning.

        • florent (Vates πŸͺ XO Team) @Pilow

          @Pilow We pushed a lot of memory fixes to master, would it be possible to test it?

          • Pilow @florent

            @florent said in backup mail report says INTERRUPTED but it's not ?:

            @Pilow We pushed a lot of memory fixes to master, would it be possible to test it?

            How so? Do I stop my daily reboot task and check whether the RAM still crawls up to 8GB?
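
            (For an XO CE install built from the sources, picking up the fixes would look roughly like the sketch below; paths and the service name depend on the install:)

                cd xen-orchestra
                git checkout master && git pull --ff-only
                yarn && yarn build            # rebuild XO from the updated sources
                systemctl restart xo-server   # service name assumed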

            • Pilow

              The memory problems arise on our XOA.

              We have a spare XO CE, deployed with the ronivay script on an Ubuntu VM, that we only use as a fallback while the main XOA is upgrading/rebooting.
              Same pools/hosts attached; it is pretty much a read-only XO.

              Totally different behaviour:
              [screenshots: spare XO CE RAM usage graphs]

              • MajorP93 @Pilow

                @Pilow Which Node.js version does your XO CE instance use?

                Could you also check which Node.js version your XOA uses?

                As discussed in this thread, there may be some differences in RAM management between XO running on different Node.js versions.

                I would also be a big fan of Vates recommending (in the documentation) that XO CE users run the exact same Node.js version as XOA... I feel like that would streamline things. Otherwise it feels like us XO CE users are "beta testers".

                //EDIT: @pilow maybe the totally different RAM usage in your screenshots is also related to the XO CE instance not running any backup jobs? You mentioned that you use XO CE purely as a read-only fallback instance. In my personal tests the RAM hogging looked tied to backup jobs, with RAM not being freed after they finished.
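
                (A quick way to answer that regardless of PATH quirks with nvm/n-style installs is to check which node binary the running xo-server process actually is; a sketch:)

                    pid=$(pgrep -of xo-server)          # oldest matching process
                    readlink "/proc/$pid/exe"           # the node binary actually in use
                    "$(readlink /proc/$pid/exe)" -v     # its version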

                • Pilow @john.c

                  @john.c said in backup mail report says INTERRUPTED but it's not ?:

                  Are you using NodeJS 22 or 24 for your instance of XO?

                  Here is the Node version on our problematic XOA:
                  [screenshot: node -v output on the XOA]
                  This XOA does NOT manage backup jobs; they are totally offloaded to the XO proxies.

                  XOA proxies:

                  [06:18 04] xoa@XOA-PROXY01:~$ node -v
                  v20.18.3
                  

                  and XO CE:

                  root@fallback-XOA:~# node -v
                  v24.13.0
                  
                  • MajorP93

                    [screenshot: XO CE RAM usage graph (openmetrics export)]

                    This is the RAM usage of my XO CE instance (Debian 13, Node 24, XO commit fa110ed9c92acf03447f5ee3f309ef6861a4a0d4 / "feat: release 6.1.0")

                    Metrics are exported via XO openmetrics plugin.

                    At the spots in the graph where my XO CE instance used around 2GB of RAM, it had just been restarted.
                    Between 31.01. and 03.02. you can see RAM usage climbing and climbing until my backup jobs went into "interrupted" status on 03.02. due to the Node.js heap issue described in my error report at https://xcp-ng.org/forum/post/102160.
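
                    (To reproduce this kind of graph, the exported metrics can be pulled and filtered for memory; a sketch, assuming the plugin exposes a standard Prometheus-style endpoint, the host and path here are hypothetical:)

                        curl -s http://xo.example.org/metrics | grep -Ei 'resident|heap'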

                    • MajorP93

                      I deployed XOA and used it to create a list of all XO dependencies and their respective versions, as this seems to be the baseline that Vates tests against.

                      I then went ahead and re-deployed my XO CE VM using the exact same package versions that XOA uses.

                      This resulted in me using Debian 12, kernel 6.1, Node 20, etc.
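
                      (A quick sanity check that the rebuilt VM matches that baseline:)

                          node -v                  # expect v20.x
                          uname -r                 # expect a 6.1 kernel
                          cat /etc/debian_version  # expect 12.x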

                      I hope that this gives my backup jobs more stability.

                      It would be convenient if we could get this information (validated, stable dependency versions) from the documentation instead of having to deploy XOA.

                      Best regards

                      • olivierlambert (Vates πŸͺ Co-Founder & CEO)

                        That's precisely the value of XOA and why we sell it. If you want the best-tested stability, XOA is the way to go πŸ™‚

                        • MajorP93 @olivierlambert

                          @olivierlambert Sure, I absolutely get that XO CE comes with no warranty and that XOA is the supported, enterprise-grade product.
                          If the budget were there and it were my decision, I would be happy to use it.

                          It might still be a good idea to update your documentation at https://docs.xen-orchestra.com/installation#packages-and-prerequisites to at least align it with the Node.js version that you actually use and test against internally.
                          (The linked part of the documentation advises Node 24, while you ship Node 20 in XOA.)

                          During testing it looked like XO on Node 20 behaves quite differently from XO on Node 24 when it comes to RAM management. This also seems to have been confirmed by other users in this thread.

                          XO CE users actually running and testing the versions that you ship might be of value for finding bugs.
                          I think the documentation should generally advise using the packages that you target during development, to make the experience as good as possible for everyone.

                          Just my two cents.
