No, by reducing the number of VMs being backed up by XOA.
I moved the Windows VMs to Veeam, as we need application-aware backups.
I may have fixed the issue myself. I have moved our Windows VM backups to Veeam, leaving XOA to back up the Linux VMs.

Updated to XOA 6.2.0. Will monitor memory usage and report back.
Current usage before backups, after the update. All hosts and pools have been connected since yesterday.

As I am completely new to this kind of scripting, I reached out to AI for assistance, and I believe the license check should be bypassed now.
Again, not suggested for use in production. Use at your own risk.
It still has its own branch for more testing before merging to the main branch.
From my XOA:

One of my pools has been offline for 24 hours due to a power outage.
The standard XOA can open a support tunnel if needed.
Yeah, I was looking over his script to see how he worked around it, but it just states that.
I knew there would be limitations with it, so I didn't add it to the main branch yet.
I had similar issues recently, though not exactly the same. I was able to add a second host to the pool, but it did not work because in my case eth0 was host management, eth4 storage, eth5 migration, and eth7 the VM network. My second host at the time did not have eth7.
Then I further messed myself up by using the second host as its own pool attached to the same storage, and it went downhill from there, but it was an easy fix.
Just added the --proxy option to deploy an XO proxy.
https://github.com/acebmxer/install_xen_orchestra/tree/xo-proxy
Deploying a Proxy VM
The script supports deploying a Xen Orchestra Proxy VM directly to your XenServer/XCP-ng pool using the `--proxy` option:

```bash
./install-xen-orchestra.sh --proxy
```
Important Limitations and Notes

**Network Configuration:**

- The `--proxy` option does not allow you to specify which network the VIF is attached to
- It will default to the "Pool-wide network associated with eth0"

**Production Use Warning:**

- Use at your own risk. Not advised for use in production environments.
- This feature is provided for testing and development purposes. For production deployments, it is recommended to manually configure proxy VMs with proper network planning and validation.
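Not part of the script, but as a possible manual workaround for the network limitation: after the proxy VM is deployed, its VIF can be recreated on a different network with the `xe` CLI on a pool host. The VM name-label and the `<...>` UUID placeholders below are assumptions; substitute your own values.

```bash
# Look up the proxy VM's UUID and the target network's UUID
xe vm-list name-label="XOA Proxy" params=uuid
xe network-list params=uuid,name-label

# List the VM's current VIFs, then recreate VIF 0 on the chosen network
# (shut the VM down, or unplug the VIF, before destroying it)
xe vif-list vm-uuid=<vm-uuid> params=uuid,device
xe vif-destroy uuid=<old-vif-uuid>
xe vif-create vm-uuid=<vm-uuid> network-uuid=<network-uuid> device=0 mac=random
```

Boot the VM afterwards and it should come up on the new network.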



While you can add hosts and remotes via the proxy, backups will fail with the following error.
```
backupNg.runJob
{
  "id": "95ac8089-69f3-404e-b902-21d0e878eec2",
  "schedule": "76989b41-8bcf-4438-833a-84ae80125367"
}
{
  "code": -32000,
  "data": {
    "stack": "TypeError: licenses.find is not a function
    at Function.<anonymous> (file:///usr/local/lib/node_modules/@xen-orchestra/proxy/app/mixins/appliance.mjs:168:23)
    at processTicksAndRejections (node:internal/process/task_queues:95:5)
    at file:///usr/local/lib/node_modules/@xen-orchestra/proxy/app/mixins/backups.mjs:110:25"
  },
  "message": "licenses.find is not a function"
}
```
I appreciate the comments, and they are all welcome. Do note that I did say this is not production-ready, so use at your own risk. If you see anything that is wrong, please provide feedback so I can correct the issue.
I just started running my home lab on this version yesterday. I imported my config from a previous XO. Today I saw 4 new commits, and the update function updated to the latest commit. Backups have run with no issues.
As I can only test in my home lab, I can only speak from my own experience.
Yes, I get what you say about the confusion between XO and XOA. I know there is another person who maintains an install script and calls it XO-CE, but I didn't want to call mine that specifically.
As for the feature set: you get the full feature set when you install from sources; you just don't get paid support.
I have just not implemented the process of setting up a proxy via the script.
Edit - Updated title
If you are referring to the login info admin@admin.net, that is per Vates' documentation. It does prompt you to change that password.
There is an xo-config.cfg file where you can change most of the defaults, but those apply to the install, not to XO itself.
https://docs.xen-orchestra.com/installation
First Login
Once you have started the VM, you can access the web UI by putting the IP you configured during deployment into your web browser. If you did not configure an IP or are unsure, try one of the following methods to find it:

- Run `xe vm-list params=name-label,networks | grep -A 1 XOA` on your host
- Check your router's DHCP leases for an `xoa` lease
Tip: Default Web UI credentials are `admin@admin.net` / `admin`. Default console/SSH credentials are not set; you need to set them as described here.
Maybe at some point, unless someone would like to contribute that part. I want to make sure the XOA install is good and stable first.
I just switched my homelab over to this install.
While this project is primarily for myself, it is open for others to use. Please use at your own risk, and double-check my script before using it in a production environment. I am open to suggestions; please report any issues here - https://github.com/acebmxer/install_xen_orchestra/issues
With that said, I wanted to create my own script to install XOA from sources using the information provided by https://docs.xen-orchestra.com/installation#from-the-sources. It took many tries to get it working just to see the login screen.
I have only tested on Ubuntu 24.04.4 so far.
https://github.com/acebmxer/install_xen_orchestra
# Xen Orchestra Installation Script
Automated installation script for [Xen Orchestra](https://xen-orchestra.com/) from source, based on the [official documentation](https://docs.xen-orchestra.com/installation#from-the-sources).
## Features
- Installs all required dependencies and prerequisites automatically
- Uses Node.js 20 LTS (with npm v10)
- Yarn package manager installed globally
- Self-signed SSL certificate generation for HTTPS
- Direct port binding (80 and 443) - no proxy required
- Systemd service for automatic startup
- Update functionality with commit comparison
- Automatic backups before updates (keeps last 5)
- Interactive restore from any available backup
- Rebuild functionality — fresh clone + clean build on the current branch, preserves settings
- Configurable via simple config file
- **Customizable service user** - run as any username or root, defaults to 'xo'
- **Automatic swap space management** - creates 2GB swap if needed for builds
- **NFS mount support** - automatically configures sudo permissions for remote storage
- **Memory-efficient builds** - prevents out-of-memory errors on low-RAM systems
## Quick Start
### 1. Clone this repository
```bash
git clone https://github.com/acebmxer/install_xen_orchestra.git
cd install_xen_orchestra
```

### 2. Configure

Copy the sample configuration file and customize it:

```bash
cp sample-xo-config.cfg xo-config.cfg
```

Edit `xo-config.cfg` with your preferred settings:

```bash
nano xo-config.cfg
```

Note: If `xo-config.cfg` is not found when running the script, it will automatically be created from `sample-xo-config.cfg` with default settings.

### 3. Run the installer

Important: Do NOT run this script with sudo. Run as a normal user with sudo privileges; the script will use sudo internally for commands that require elevated permissions.

```bash
./install-xen-orchestra.sh
```
The `xo-config.cfg` file supports the following options:

| Option | Default | Description |
|---|---|---|
| `HTTP_PORT` | `80` | HTTP port for web interface |
| `HTTPS_PORT` | `443` | HTTPS port for web interface |
| `INSTALL_DIR` | `/opt/xen-orchestra` | Installation directory |
| `SSL_CERT_DIR` | `/etc/ssl/xo` | SSL certificate directory |
| `SSL_CERT_FILE` | `xo-cert.pem` | SSL certificate filename |
| `SSL_KEY_FILE` | `xo-key.pem` | SSL private key filename |
| `GIT_BRANCH` | `master` | Git branch (master, stable, or tag) |
| `BACKUP_DIR` | `/opt/xo-backups` | Backup directory for updates |
| `BACKUP_KEEP` | `5` | Number of backups to retain |
| `NODE_VERSION` | `20` | Node.js major version |
| `SERVICE_USER` | `xo` | Service user (any username, leave empty for root) |
| `DEBUG_MODE` | `false` | Enable debug logging |
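Written out as a file, a minimal `xo-config.cfg` looks like this (every value shown is just the documented default from the table; adjust to taste):

```bash
# xo-config.cfg: all values shown are the documented defaults
HTTP_PORT=80
HTTPS_PORT=443
INSTALL_DIR=/opt/xen-orchestra
SSL_CERT_DIR=/etc/ssl/xo
SSL_CERT_FILE=xo-cert.pem
SSL_KEY_FILE=xo-key.pem
GIT_BRANCH=master
BACKUP_DIR=/opt/xo-backups
BACKUP_KEEP=5
NODE_VERSION=20
SERVICE_USER=xo
DEBUG_MODE=false
```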
To update an existing installation:

```bash
./install-xen-orchestra.sh --update
```

The update process will:

- Back up the current installation to `BACKUP_DIR` (default: `/opt/xo-backups`)
- Retain only the last `BACKUP_KEEP` backups (default: 5)

Backups are numbered so that `[1]` is the most recent and `[5]` is the oldest.

To restore a previous installation:

```bash
./install-xen-orchestra.sh --restore
```
The restore process will list the available backups and prompt you to pick one. Example output:

```
==============================================
Available Backups
==============================================
[1] xo-backup-20260221_233000 (2026-02-21 06:30:00 PM EST) commit: a1b2c3d4e5f6 (newest)
[2] xo-backup-20260221_141500 (2026-02-21 09:15:00 AM EST) commit: 9f8e7d6c5b4a
[3] xo-backup-20260220_162000 (2026-02-20 11:20:00 AM EST) commit: 1a2b3c4d5e6f
[4] xo-backup-20260219_225200 (2026-02-19 05:52:00 PM EST) commit: 3c4d5e6f7a8b
[5] xo-backup-20260219_133000 (2026-02-19 08:30:00 AM EST) commit: 7d8e9f0a1b2c (oldest)

Enter the number of the backup to restore [1-5], or 'q' to quit:
```

After a successful restore the confirmed commit is displayed:

```
[SUCCESS] Restore completed successfully!
[INFO] Restored commit: a1b2c3d4e5f6
```
If your installation becomes corrupted or broken, use --rebuild to do a fresh clone and clean build of your current branch without losing any settings:
```bash
./install-xen-orchestra.sh --rebuild
```

The rebuild process will:

- Back up the current installation first (same as `--update`, saved to `BACKUP_DIR`)
- Remove `INSTALL_DIR` and do a fresh git clone of the same branch

Note: Settings stored in `/etc/xo-server` (config.toml) and `/var/lib/xo-server` (databases and state) are not touched during a rebuild, so all your connections, users, and configuration are preserved.
After installation, Xen Orchestra runs as a systemd service:

```bash
# Start the service
sudo systemctl start xo-server

# Stop the service
sudo systemctl stop xo-server

# Check status
sudo systemctl status xo-server

# View logs
sudo journalctl -u xo-server -f
```
After installation, access the web interface:

- HTTP: `http://your-server-ip`
- HTTPS: `https://your-server-ip`

Note: If you changed `HTTP_PORT` or `HTTPS_PORT` in `xo-config.cfg` from the defaults (80/443), append the port to the URL, e.g. `http://your-server-ip:8080`

Default credentials are `admin@admin.net` / `admin`.

Warning: Change the default password immediately after first login!
To switch to a different branch (e.g., from master to stable):

1. Edit `xo-config.cfg` and change `GIT_BRANCH`
2. Run `./install-xen-orchestra.sh --update`

The script will automatically fetch and checkout the new branch during the update process.
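The two steps above can also be done non-interactively; a hedged sketch, assuming a `GIT_BRANCH=` line already exists in the file:

```shell
# Point the config at the stable branch, then run the update
sed -i 's/^GIT_BRANCH=.*/GIT_BRANCH=stable/' xo-config.cfg
./install-xen-orchestra.sh --update
```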
Note: The script automatically creates 2GB swap space if insufficient memory is detected during builds to prevent out-of-memory errors.
The script automatically installs all required dependencies on both Debian/Ubuntu and RHEL/CentOS/Fedora systems.
Check the service logs:

```bash
sudo journalctl -u xo-server -n 50
```
If running as non-root, the service uses CAP_NET_BIND_SERVICE to bind to privileged ports. Ensure systemd is configured correctly.
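A unit granting that capability typically carries directives like the following; this is a sketch of what the script is assumed to configure, not its exact unit file:

```ini
# /etc/systemd/system/xo-server.service (illustrative excerpt)
[Service]
User=xo
AmbientCapabilities=CAP_NET_BIND_SERVICE
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
```

After editing a unit, run `sudo systemctl daemon-reload` and restart the service for the change to take effect.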
The easiest fix is to use the built-in rebuild command, which takes a backup first:

```bash
./install-xen-orchestra.sh --rebuild
```

Or manually (if running as a non-root `SERVICE_USER`):

```bash
cd /opt/xen-orchestra
rm -rf node_modules
# Replace 'xo' with your SERVICE_USER if different
sudo -u xo yarn
sudo -u xo yarn build
```
If the build process fails with exit code 137 (killed), your system ran out of memory:
The script automatically handles this by creating swap space when needed. To manually check/add swap:

```bash
# Check current swap
free -h

# Create 2GB swap file if needed
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```
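The script's "if needed" check can be sketched roughly like this; the 2 GiB threshold matches the swap size it creates, but the exact `/proc/meminfo` logic is an assumption about the script's internals:

```shell
# Compare available memory plus free swap against a 2 GiB build threshold
avail_kb=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)
swap_kb=$(awk '/^SwapFree:/ {print $2}' /proc/meminfo)
total_kb=$((avail_kb + swap_kb))
need_kb=$((2 * 1024 * 1024))
if [ "$total_kb" -lt "$need_kb" ]; then
    echo "low memory: would create a 2G swap file"
else
    echo "enough memory for the build"
fi
```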
If you get an error when adding NFS remote storage:

```
mount.nfs: not installed setuid - "user" NFS mounts not supported
```
The script automatically handles this by configuring sudo permissions for your service user (default: xo) to run mount/umount commands including NFS-specific helpers.
If you encounter this issue on an existing installation:

```bash
# Update sudoers configuration (replace 'xo' with your SERVICE_USER if different)
sudo tee /etc/sudoers.d/xo-server-xo > /dev/null << 'EOF'
# Allow xo-server user to mount/unmount without password
Defaults:xo !requiretty
xo ALL=(ALL:ALL) NOPASSWD:SETENV: /bin/mount, /usr/bin/mount, /bin/umount, /usr/bin/umount, /bin/findmnt, /usr/bin/findmnt, /sbin/mount.nfs, /usr/sbin/mount.nfs, /sbin/mount.nfs4, /usr/sbin/mount.nfs4, /sbin/umount.nfs, /usr/sbin/umount.nfs, /sbin/umount.nfs4, /usr/sbin/umount.nfs4
EOF
sudo chmod 440 /etc/sudoers.d/xo-server-xo
sudo systemctl restart xo-server
```
If NFS mounts succeed but you get permission errors when writing:

```
EACCES: permission denied, open '/run/xo-server/mounts/.keeper_*'
```

This is a UID/GID mismatch between the xo-server user and your NFS export permissions:
Option 1: Run as root (recommended for simplicity)

```bash
# Edit config
nano xo-config.cfg
# Set: SERVICE_USER=
# (leave empty to run as root)

# Update service (replace 'xo' with your SERVICE_USER if different)
sudo sed -i 's/User=xo/User=root/' /etc/systemd/system/xo-server.service
sudo chown -R root:root /opt/xen-orchestra /var/lib/xo-server /etc/xo-server
sudo systemctl daemon-reload
sudo systemctl restart xo-server
```
Option 2: Configure NFS for your service user's UID
On your NFS server, adjust exports to allow your service user's UID (check with id <username>), or use appropriate squash settings in your NFS export configuration.
Ensure Redis is running:

```bash
redis-cli ping
# Should respond with: PONG
```
Security summary:

- Runs as the `xo` user by default (customizable to any username; leave empty for root)
- Mount commands allowed via sudo (`/bin/mount`, `/usr/bin/mount`, `/sbin/mount.nfs`, etc.)
- Unmount commands allowed via sudo (`/bin/umount`, `/usr/bin/umount`, `/sbin/umount.nfs`, etc.)
- `findmnt` allowed via sudo (`/bin/findmnt`, `/usr/bin/findmnt`)
- Sudoers configuration is written to `/etc/sudoers.d/xo-server-<username>` with NOPASSWD for specific commands only

This installation script is provided as-is. Xen Orchestra itself is licensed under AGPL-3.0.
@Pilow
Yeah, I was following your post. At first I thought I was having random backup issues with backups via proxies, with the occasional issue with backups via XOA. Then just yesterday I saw the error about memory.
For now I just rebooted XOA, and it is down under 2 GB currently. I will wait and see how long it takes to build back up again, or until the dev team finds a solution to the problem.
@olivierlambert said in XOA - Memory Usage:
Hi,
Depends on the number of pools, hosts, VMs, number of backup jobs, size etc.
Any issues with just jumping up to 8 GB or 16? It has been fine until today.
Just noticed the alert today. I am currently at the default settings for XOA RAM allocation: 4 GB, 4 vCPUs. Is there a recommendation for a set value based on pools, hosts, or VMs?
Same question for the proxies: what is the process for them, same as XOA?
```
1/16 - Node version
?? 2/16 - Memory: less than 10% free memory on the XOA, check https://xen-orchestra.com/docs/troubleshooting.html#memory
? 3/16 - xo-server config syntax
? 4/16 - Disk space for /
? 5/16 - Disk space for /var
? 6/16 - Native SMB support
? 7/16 - Fetching VM UUID
? 8/16 - XOA version
? 9/16 - /var is writable
? 10/16 - Appliance registration
? 11/16 - npm version
? 12/16 - NTP synchronization
? 13/16 - local SSH server
? 14/16 - xoa-support user
? 15/16 - Internet connectivity
? 16/16 - XOA status
```


Let someone else confirm, but I think you are way out of date on XO. It might be best to deploy a new one.
This is from my home lab. Should I be concerned? If so, what steps do I need to take?
I see no errors in the Health tab in the Dashboard.
"type": "VM",
"id": "29353f1e-866f-7dc7-ce26-ba439b7328ca",
"name_label": "Work PC"
},
"id": "1771610941317",
"message": "backup VM",
"start": 1771610941317,
"status": "success",
"tasks": [
{
"id": "1771610941321",
"message": "clean-vm",
"start": 1771610941321,
"status": "success",
"warnings": [
{
"data": {
"path": "/xo-vm-backups/29353f1e-866f-7dc7-ce26-ba439b7328ca/20260214T181024Z.json",
"actual": 27343314432,
"expected": 61755492352
},
"message": "cleanVm: incorrect backup size in metadata"
}
],
"end": 1771610941603,
"result": {
"merge": false
}
},
{
"id": "1771610941812",
"message": "snapshot",
"start": 1771610941812,
"status": "success",
"end": 1771610943963,
"result": "51156fde-cdc0-3657-93aa-d59671a5d28a"
},
{
"id": "1771611387226:0",
"message": "health check",
"start": 1771611387226,
"status": "success",
"infos": [
{
"message": "This VM doesn't match the health check's tags for this schedule"
}
],
"end": 1771611387226
},
{
"data": {
"id": "b86b4c88-79f9-4ce8-9fa4-dce78a32ea44",
"isFull": false,
"type": "remote"
},
"id": "1771610943964",
"message": "export",
"start": 1771610943964,
"status": "success",
"tasks": [
{
"id": "1771610946585",
"message": "transfer",
"start": 1771610946585,
"status": "success",
"end": 1771611385506,
"result": {
"size": 43509612544
}
},
{
"id": "1771611387245",
"message": "clean-vm",
"start": 1771611387245,
"status": "success",
"warnings": [
{
"data": {
"path": "/xo-vm-backups/29353f1e-866f-7dc7-ce26-ba439b7328ca/20260220T180906Z.json",
"actual": 43509612544,
"expected": 43520499200
},
"message": "cleanVm: incorrect backup size in metadata"
}
],
"end": 1771611388247,
"result": {
"merge": false
}
}
],
"end": 1771611388247
}
],
"infos": [
{
"message": "Transfer data using NBD"
},
{
"message": "will delete snapshot data"
},
{
"data": {
"vdiRef": "OpaqueRef:502e838d-b1a9-067f-e3cf-53d652bc432e"
},
"message": "Snapshot data has been deleted"
}
],
"end": 1771611388247
},
{
"data": {
"type": "VM",
"id": "c79ab43d-513e-ef73-f81f-a13257f311c8",
"name_label": "Docker of Things"
},
"id": "1771610853200",
"message": "backup VM",
"start": 1771610853200,
"status": "success",
"tasks": [
{
"id": "1771610853205",
"message": "clean-vm",
"start": 1771610853205,
"status": "success",
"warnings": [
{
"data": {
"path": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260217T143237Z.vhd",
"error": {
"generatedMessage": false,
"code": "ERR_ASSERTION",
"actual": false,
"expected": true,
"operator": "==",
"diff": "simple",
"size": 34517586944
}
},
"message": "VHD check error"
},
{
"data": {
"parent": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260217T143237Z.vhd",
"child": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260217T143324Z.vhd"
},
"message": "parent VHD is missing"
},
{
"data": {
"parent": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260217T143324Z.vhd",
"child": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260217T143408Z.vhd"
},
"message": "parent VHD is missing"
},
{
"data": {
"parent": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260217T143408Z.vhd",
"child": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260217T153235Z.vhd"
},
"message": "parent VHD is missing"
},
{
"data": {
"parent": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260217T153235Z.vhd",
"child": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260217T180448Z.vhd"
},
"message": "parent VHD is missing"
},
{
"data": {
"parent": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260217T180448Z.vhd",
"child": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260217T183622Z.vhd"
},
"message": "parent VHD is missing"
},
{
"data": {
"parent": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260217T183622Z.vhd",
"child": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260218T050033Z.vhd"
},
"message": "parent VHD is missing"
},
{
"data": {
"parent": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260218T050033Z.vhd",
"child": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260218T050607Z.vhd"
},
"message": "parent VHD is missing"
},
{
"data": {
"parent": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260218T050607Z.vhd",
"child": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260218T050634Z.vhd"
},
"message": "parent VHD is missing"
},
{
"data": {
"parent": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260218T050634Z.vhd",
"child": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260218T050700Z.vhd"
},
"message": "parent VHD is missing"
},
{
"data": {
"parent": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260218T050700Z.vhd",
"child": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260218T141857Z.vhd"
},
"message": "parent VHD is missing"
},
{
"data": {
"parent": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260218T141857Z.vhd",
"child": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260218T180551Z.vhd"
},
"message": "parent VHD is missing"
},
{
"data": {
"parent": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260218T180551Z.vhd",
"child": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260219T050544Z.vhd"
},
"message": "parent VHD is missing"
},
{
"data": {
"parent": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260219T050544Z.vhd",
"child": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260219T180800Z.vhd"
},
"message": "parent VHD is missing"
},
{
"data": {
"parent": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260219T180800Z.vhd",
"child": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260220T050624Z.vhd"
},
"message": "parent VHD is missing"
},
{
"data": {
"backup": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/20260217T153235Z.json",
"missingVhds": [
"/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260217T153235Z.vhd"
]
},
"message": "some VHDs linked to the backup are missing"
},
{
"data": {
"backup": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/20260217T143408Z.json",
"missingVhds": [
"/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260217T143408Z.vhd"
]
},
"message": "some VHDs linked to the backup are missing"
},
{
"data": {
"backup": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/20260217T143324Z.json",
"missingVhds": [
"/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260217T143324Z.vhd"
]
},
"message": "some VHDs linked to the backup are missing"
},
{
"data": {
"backup": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/20260217T183622Z.json",
"missingVhds": [
"/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260217T183622Z.vhd"
]
},
"message": "some VHDs linked to the backup are missing"
},
{
"data": {
"backup": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/20260217T180448Z.json",
"missingVhds": [
"/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260217T180448Z.vhd"
]
},
"message": "some VHDs linked to the backup are missing"
},
{
"data": {
"backup": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/20260217T143237Z.json",
"missingVhds": [
"/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260217T143237Z.vhd"
]
},
"message": "some VHDs linked to the backup are missing"
},
{
"data": {
"backup": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/20260218T050700Z.json",
"missingVhds": [
"/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260218T050700Z.vhd"
]
},
"message": "some VHDs linked to the backup are missing"
},
{
"data": {
"backup": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/20260218T050033Z.json",
"missingVhds": [
"/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260218T050033Z.vhd"
]
},
"message": "some VHDs linked to the backup are missing"
},
{
"data": {
"backup": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/20260218T050634Z.json",
"missingVhds": [
"/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260218T050634Z.vhd"
]
},
"message": "some VHDs linked to the backup are missing"
},
{
"data": {
"backup": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/20260218T180551Z.json",
"missingVhds": [
"/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260218T180551Z.vhd"
]
},
"message": "some VHDs linked to the backup are missing"
},
{
"data": {
"backup": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/20260218T050607Z.json",
"missingVhds": [
"/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260218T050607Z.vhd"
]
},
"message": "some VHDs linked to the backup are missing"
},
{
"data": {
"backup": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/20260218T141857Z.json",
"missingVhds": [
"/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260218T141857Z.vhd"
]
},
"message": "some VHDs linked to the backup are missing"
},
{
"data": {
"backup": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/20260219T050544Z.json",
"missingVhds": [
"/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260219T050544Z.vhd"
]
},
"message": "some VHDs linked to the backup are missing"
},
{
"data": {
"backup": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/20260219T180800Z.json",
"missingVhds": [
"/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260219T180800Z.vhd"
]
},
"message": "some VHDs linked to the backup are missing"
},
{
"data": {
"backup": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/20260220T050624Z.json",
"missingVhds": [
"/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260220T050624Z.vhd"
]
},
"message": "some VHDs linked to the backup are missing"
}
],
"end": 1771610854357,
"result": {
"merge": false
}
},
{
"id": "1771610854575",
"message": "snapshot",
"start": 1771610854575,
"status": "success",
"end": 1771610856789,
"result": "aa829f5d-0516-414f-d712-315a4f9304f8"
},
{
"data": {
"id": "b86b4c88-79f9-4ce8-9fa4-dce78a32ea44",
"isFull": true,
"type": "remote"
},
"id": "1771610856789:0",
"message": "export",
"start": 1771610856789,
"status": "success",
"tasks": [
{
"id": "1771610858203",
"message": "transfer",
"start": 1771610858203,
"status": "success",
"end": 1771611601006,
"result": {
"size": 128731578368
}
},
{
"id": "1771611602089:0",
"message": "health check",
"start": 1771611602089,
"status": "success",
"tasks": [
{
"id": "1771611602096",
"message": "transfer",
"start": 1771611602096,
"status": "success",
"end": 1771612169796,
"result": {
"size": 0,
"id": "a0cb1303-2431-ba68-2799-c2fb74ff0055"
}
},
{
"id": "1771612169796:0",
"message": "vmstart",
"start": 1771612169796,
"status": "success",
"end": 1771612178693
}
],
"end": 1771612181064
},
{
"id": "1771612181072",
"message": "clean-vm",
"start": 1771612181072,
"status": "success",
"warnings": [
{
"data": {
"path": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/20260220T180738Z.json",
"actual": 128731578368,
"expected": 128763533312
},
"message": "cleanVm: incorrect backup size in metadata"
}
],
"end": 1771612181155,
"result": {
"merge": false
}
}
],
"end": 1771612181155
}
],
"infos": [
{
"message": "Transfer data using NBD"
},
{
"message": "will delete snapshot data"
},
{
"data": {
"vdiRef": "OpaqueRef:ea179a76-8170-d5e6-c9d7-311bce869341"
},
"message": "Snapshot data has been deleted"
}
],
"end": 1771612181155
}
],
"end": 1771612181156
While working last night, I noticed one of my backups/pools did this. I got the email that it was interrupted, but when I looked, the tasks were still running and moving data until it processed all the VMs in that backup job.
Edit - note my backup job was run via a proxy on the specific pool/job.
2026-02-19T03_00_00.028Z - backup NG.txt
Edit 2 - homelab: the same last backup was interrupted.