Backup mail report says INTERRUPTED, but it's not?
-
Before the disruption, I prefer to patch/reboot my XOA.
We got to the limit.


Still some memory leak somewhere, guys!
-
@Pilow we are still working on it, but we haven't found a solution yet.
-
After implementing the --max-old-space-size Node parameter as recommended by @pilow, it took longer for the VM to hit the issue.
Still: backups went into interrupted status.
Memory leak seems to be still there.
With each subsequent backup run, the memory usage rises and rises. After a backup run, the memory usage does not fully go back to "normal".
After adding the Node parameter, there were no more heap size errors from Node, since the heap size was increased. However, the system ran into various OOM errors in the kernel log (dmesg), despite not all of the RAM (8 GB) being used.
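To check whether the OOM killer is really firing while memory still looks free, one can compare /proc/meminfo against the kernel log; a minimal sketch, assuming a standard Linux guest (dmesg may need root, and the process name `node` is an assumption):

```shell
# Report total vs. available memory, to compare against the RAM assigned to the VM
awk '/^MemTotal|^MemAvailable/ {printf "%s %.1f GiB\n", $1, $2/1048576}' /proc/meminfo

# Look for OOM-killer activity in the kernel log (may require root)
dmesg -T 2>/dev/null | grep -i "out of memory" || true

# Track the resident size of the Node process over time (process name assumed)
ps -o pid,rss,etime,args -C node || true
```

Running the `ps` line before and after a backup job makes the "RSS never fully goes back down" pattern easy to confirm.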
This is what htop looks like with 3 backup jobs running:

-
While working last night, I noticed one of my backups/pools did this. I got the email that it was interrupted, but when I looked, the tasks were still running and moving data until it processed all VMs in that backup job.
Edit: note that my backup job was run via a proxy on the specific pool/job.
2026-02-19T03_00_00.028Z - backup NG.txt
Edit 2: same on my homelab, the last backup was interrupted.
-
I wonder if this PR https://github.com/vatesfr/xen-orchestra/pull/9506 aims to solve the issue that was discussed in this thread.
To me it looks like that's the case, as the issue seems to be related to RAM used by backup jobs not being freed correctly, and the PR seems to add some garbage collection to backup jobs.
I hope it will fix the issue, and if needed I can test a branch.
-
Hi @MajorP93,
This PR is only about changing the way we delete old logs (linked to a bigger work of making backups use XO tasks instead of their own task system), it won't fix the issue discussed in this topic.
-
Hi @Bastien-Nollet,
oh okay, thanks for clarifying!
-
I have been having the same issue and have been watching it for the last couple of weeks. Initially my XOA only had 8 GB of RAM assigned; I have bumped it up to 16 GB to try to alleviate the issue. It seems to be some sort of memory leak. This is the official XO Appliance too, not XO CE.
I changed the systemd file to make use of the extra memory, as per the docs:
ExecStart=/usr/local/bin/node --max-old-space-size=12288 /usr/local/bin/xo-server
It seems that over time it will just consume all of its memory until it crashes and restarts, no matter how much I assign.
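For reference, this kind of change survives updates better as a systemd drop-in than as an edit to the unit file itself; a sketch, assuming the service is named xo-server.service (the drop-in path and the 12288 MiB value are assumptions, not the official procedure):

```ini
# /etc/systemd/system/xo-server.service.d/override.conf (hypothetical path)
# Can be created with: sudo systemctl edit xo-server.service
[Service]
# Clear the packaged ExecStart, then set it again with a larger V8 old-space heap (in MiB)
ExecStart=
ExecStart=/usr/local/bin/node --max-old-space-size=12288 /usr/local/bin/xo-server
```

After editing, a `systemctl daemon-reload` and a restart of the service are needed for the new heap limit to take effect.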

-
During the past month, my backups failed (status: interrupted) 1-2 times per week due to this memory leak.
When increasing the heap size (Node old space size), it takes longer, but the backup still fails when RAM usage eventually hits 100%.
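Such a preemptive reboot can be scheduled with cron; a sketch, where the schedule and the 03:00 backup window are assumptions (adjust to your own job times):

```
# Hypothetical root crontab entry: reboot the XO VM at 02:30,
# shortly before a 03:00 backup window, so backups start with a fresh heap
30 2 * * * /sbin/shutdown -r now
```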
I guess I'll go with @Pilow's workaround for now and create a cronjob to reboot the XO VM right before backups start.
-
Are all of you using a Node 24 LTS version? Do you still have the issue with Node 20 or Node 22?
-
@olivierlambert No.
I was using Node 24 before but reverted to Node 20, as previously mentioned, hoping it would "fix" the issue.
Using Node 20, it takes longer for these issues to arise, but in the end they do.
Three of the users in this thread who encounter the issue said that they are using XOA.
As XOA also uses Node 20, I think most of the people who reported this issue are actually on Node 20.
-
Thanks for the recap!