backup mail report says INTERRUPTED but it's not ?
-
I wonder if this PR https://github.com/vatesfr/xen-orchestra/pull/9506 aims to solve the issue that was discussed in this thread.
To me it looks like it's the case as the issue seems to be related to RAM used by backup jobs not being freed correctly and the PR seems to add some garbage collection to backup jobs.
I hope that it will fix the issue and if needed I can test a branch.
-
Hi @MajorP93,
This PR is only about changing the way we delete old logs (linked to a bigger work of making backups use XO tasks instead of their own task system), it won't fix the issue discussed in this topic.
-
Hi @Bastien-Nollet,
oh okay, thanks for clarifying!
-
P Pilow referenced this topic
-
I have been having the same issue and have been watching it for the last couple of weeks. Initially my XOA only had 8 GB of RAM assigned; I have bumped it up to 16 GB to try to alleviate the issue. Seems to be some sort of memory leak. This is the official XO Appliance too, not XO CE.
I changed the systemd unit file to make use of the extra memory as per the docs:
ExecStart=/usr/local/bin/node --max-old-space-size=12288 /usr/local/bin/xo-server
It seems that over time it will just consume all of its memory until it crashes and restarts, no matter how much I assign.
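For anyone following along, a safer way to apply that flag is a systemd drop-in override rather than editing the shipped unit file in place, so the change survives upgrades. A sketch, assuming the service is named xo-server.service and keeping the 12 GiB value from above (adjust both to your install):

```shell
# Create a drop-in override for the xo-server unit (service name assumed).
sudo systemctl edit xo-server.service
# In the editor that opens, add:
#   [Service]
#   ExecStart=
#   ExecStart=/usr/local/bin/node --max-old-space-size=12288 /usr/local/bin/xo-server
# The empty ExecStart= line clears the original command before redefining it.
sudo systemctl daemon-reload
sudo systemctl restart xo-server.service
```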

-
During the past month my backups failed (status interrupted) 1-2 times per week due to this memory leak.
When increasing the heap size (Node old-space size) it takes longer, but the backup still fails once RAM usage eventually hits 100%.
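A stopgap along those lines is a root crontab entry that reboots the XO VM shortly before the backup window opens. A sketch, assuming backups start at 02:00 (the time is a placeholder, set it to match your schedule):

```shell
# Edit root's crontab with: sudo crontab -e
# Reboot the XO VM at 01:45, 15 minutes before an assumed 02:00 backup window.
45 1 * * * /sbin/shutdown -r now "pre-backup reboot to reclaim xo-server memory"
```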
I guess I'll go with @Pilow's workaround for now and create a cronjob for rebooting the XO VM right before backups start.
-
All of you are using a Node 24 LTS version? Do you still have the issue with Node 20 or Node 22?
-
@olivierlambert No.
I reverted to Node 20 as previously mentioned.
I was using Node 24 before but reverted to Node 20 as I hoped it would "fix" the issue.
Using Node 20 it takes longer for these issues to arise, but in the end they still arise.
3 of the users in this thread who encounter the issue said that they are using XOA.
As XOA also uses Node 20, I think most people who reported this issue are actually on Node 20.
-
Thanks for the recap!
-
@olivierlambert Using the prebuilt XOA appliance, which reports:
[08:39 23] xoa@xoa:~$ node --version
v20.18.3
-
@flakpyro said in backup mail report says INTERRUPTED but it's not ?:
@olivierlambert Using the prebuilt XOA appliance, which reports:
[08:39 23] xoa@xoa:~$ node --version
v20.18.3

@majorp93 @pilow Can you please capture some heap snapshots from XOA during backup runs via Node.js?
Then compare them to each other; they need to be in the following order:
- Snapshot before backup
- Snapshot following first backup
- Snapshot following second backup
- Snapshot following third backup
- Snapshot following subsequent backups to get to Node.js OOM (or as close as you're willing to risk)
Taking these will require that XOA (or XO CE) is started with the Node.js inspector enabled. Then open the following URL in a Chromium-based browser:
chrome://inspect
The above URL relies on the browser's DevTools features!
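A minimal sketch of enabling the inspector on XOA, reusing the node and xo-server paths from the ExecStart line quoted earlier in the thread (the service name and bind address are assumptions; only expose port 9229 on a trusted network):

```shell
# Stop the service, then run xo-server once with the V8 inspector enabled.
# Port 9229 is the Node.js default; binding to 0.0.0.0 lets a workstation
# on the same (trusted!) network attach.
sudo systemctl stop xo-server
sudo /usr/local/bin/node --inspect=0.0.0.0:9229 /usr/local/bin/xo-server
# On the workstation: open chrome://inspect, click "Configure...",
# add <xoa-ip>:9229, open the target, and use the Memory tab to take
# a heap snapshot before the backup and after each run.
```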
Another option is to use Clinic.js (clinic heapprofiler), or to configure Node to write a dump via node-heapdump when heap usage reaches a threshold.
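A lighter built-in alternative to node-heapdump (available since Node.js 12) is the --heapsnapshot-signal flag, which makes the process write a .heapsnapshot file whenever it receives the chosen signal, with no code changes and no extra npm module. A self-contained demo using a dummy process in place of xo-server:

```shell
# Start a long-running Node process with signal-triggered heap snapshots.
node --heapsnapshot-signal=SIGUSR2 -e 'setInterval(() => {}, 1000)' &
NODE_PID=$!
sleep 1
kill -USR2 "$NODE_PID"   # writes Heap.<date>.<time>.<pid>.<tid>.heapsnapshot
sleep 2
ls Heap.*.heapsnapshot   # snapshot file(s) appear in the working directory
kill "$NODE_PID"
```

The same flag can be added to xo-server's ExecStart line, and each snapshot can then be loaded into the DevTools Memory tab for diffing.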
Once you've got those heap dumps, you're looking for the following:
- Object types that grow massively between the snapshots.
- Large arrays or maps of backup-related objects (VMs, snapshots, jobs, tasks, etc.).
- Retained objects whose "retainers" point to long-lived structures (globals, caches, singletons).
These will likely help pin down what is leaking and where in the backup code the leak is located.
Once you have a heap snapshot diff showing which object type (or types) grows by a stated size per backup, that will finally help the Vates developers fix this issue.
@florent I left the above for the original reporters of the memory leak issue, and/or yourselves.