XCP-ng
    Posts by acebmxer

    • RE: backup mail report says INTERRUPTED but it's not ?

      @MajorP93

      No, by reducing the number of VMs being backed up by XOA.

      I moved the Windows VMs to Veeam, as we need application-aware backups.

      posted in Backup
    • RE: backup mail report says INTERRUPTED but it's not ?

      I may have fixed the issue myself: I moved our Windows VM backups to Veeam, leaving XOA to back up the Linux VMs.

      Screenshot 2026-02-27 095031.png

      posted in Backup
    • RE: backup mail report says INTERRUPTED but it's not ?

      Updated to XOA 6.2.0. Will monitor memory usage and report back.

      Current usage before backups, after the update. All hosts and pools connected as of yesterday.

      Screenshot 2026-02-26 135750.png

      posted in Backup
    • RE: Install XO from sources.

      @pilow

      As I am completely new to scripting and such, I reached out to AI for assistance, and I believe the license check is bypassed now.

      Again, not suggested for use in production. Use at your own risk.

      It still has its own branch for more testing before merging into the main branch.

      posted in Xen Orchestra
    • RE: backup mail report says INTERRUPTED but it's not ?

      From my XOA

      Screenshot 2026-02-24 094020.png

      One of my pools has been offline for 24 hours due to a power outage.

      Standard XOA can open a support tunnel if needed.

      posted in Backup
    • RE: Install XO from sources.

      @Pilow

      Yeah, I was looking over his script to see how he worked around it, but it just states that.

      I knew there would be limitations with it, so I didn't add it to the main branch yet.

      posted in Xen Orchestra
    • RE: Issues joining pool with less pif on the newest host

      @afmart_dei

      I had similar issues recently, though not exactly the same. I was able to add a second host to the pool, but it wasn't usable because in my case eth0 was host management, eth4 storage, eth5 migration, and eth7 the VM network, and my second host at the time did not have eth7.

      Then I further messed myself up by using the second host as its own pool attached to the same storage, and it went downhill from there, but it was an easy fix.

      posted in XCP-ng
    • RE: Install XO from sources.

      Just added the --proxy option to deploy an XO proxy.

      https://github.com/acebmxer/install_xen_orchestra/tree/xo-proxy

      Deploying a Proxy VM
      The script supports deploying a Xen Orchestra Proxy VM directly to your XenServer/XCP-ng pool using the --proxy option:

      ./install-xen-orchestra.sh --proxy

      Important Limitations and Notes

      ⚠️ Network Configuration:
      The --proxy option does not allow you to specify which network the VIF is attached to; it will default to the "Pool wide network associated with eth0".
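
      If you need the proxy on a different network, one possible manual workaround (a sketch using standard xe commands; the name labels and placeholder UUIDs below are examples, not values from the script) is to recreate the VIF on the desired network:

      ```bash
      # Look up the proxy VM and the target network (name labels are examples)
      xe vm-list name-label="XO Proxy" params=uuid
      xe network-list name-label="VM Network" params=uuid

      # Find the VIF, remove it, and recreate it on the desired network
      # (unplug first if the VM is running)
      xe vif-list vm-uuid=<vm-uuid> params=uuid
      xe vif-unplug uuid=<vif-uuid>
      xe vif-destroy uuid=<vif-uuid>
      xe vif-create vm-uuid=<vm-uuid> network-uuid=<network-uuid> device=0
      ```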

      ⚠️ Production Use Warning:
      Use at your own risk. Not advised for use in production environments.

      This feature is provided for testing and development purposes. For production deployments, it is recommended to manually configure proxy VMs with proper network planning and validation.

      Screenshot_20260223_184415.png

      Screenshot_20260223_185435.png

      Screenshot_20260223_190135-1.png

      While you can add a host and a remote via the proxy, backups will fail with the following error.

      backupNg.runJob
      {
        "id": "95ac8089-69f3-404e-b902-21d0e878eec2",
        "schedule": "76989b41-8bcf-4438-833a-84ae80125367"
      }
      {
        "code": -32000,
        "data": {
          "stack": "TypeError: licenses.find is not a function
          at Function.<anonymous> (file:///usr/local/lib/node_modules/@xen-orchestra/proxy/app/mixins/appliance.mjs:168:23)
          at processTicksAndRejections (node:internal/process/task_queues:95:5)
          at file:///usr/local/lib/node_modules/@xen-orchestra/proxy/app/mixins/backups.mjs:110:25"
        },
        "message": "licenses.find is not a function"
      }
      
      posted in Xen Orchestra
    • RE: Install XO from sources.

      I appreciate the comments, and they are all welcome. Do keep in mind that I did say this is not production ready and to use it at your own risk. If you see anything that is wrong, please provide feedback to correct the issue.

      I just started running my home lab from this version yesterday. I imported my config from the previous XO. Today I saw 4 new commits, and the update function updated to the latest commit. Backups have run with no issue.

      As I can only test in my home lab, I can only speak from my own experience.

      posted in Xen Orchestra
    • RE: Install XO from sources.

      @dcskinner

      Yes, I get what you say about the confusion with XO vs XOA. I know there is another person who maintains an install script and calls it XO-CE, but I didn't want to call mine that specifically.

      As for the feature set: you get the full feature set when you install from sources. You just don't get paid support.

      I have just not implemented the process of setting up a proxy via the script.

      Edit - Updated title

      posted in Xen Orchestra
    • RE: Install XO from sources.

      @Greg_E

      If you are referring to the login info admin@admin.net, that is per Vates' documentation. It does prompt you to change that password.

      There is an xo-config.cfg file where you can change most of the defaults, but those apply to the install, not to XO itself.

      https://docs.xen-orchestra.com/installation

      First Login
      Once you have started the VM, you can access the web UI by putting the IP you configured during deployment into your web browser. If you did not configure an IP or are unsure, try one of the following methods to find it:
      
      Run xe vm-list params=name-label,networks | grep -A 1 XOA on your host
      Check your router's DHCP leases for an xoa lease
      tip
      Default Web UI credentials are admin@admin.net / admin
      Default console/SSH credentials are not set, you need to set them as described here.
      
      posted in Xen Orchestra
    • RE: Install XO from sources.

      @Pilow

      Maybe at some point, unless someone would like to contribute that part. I want to make sure the XOA install is good and stable first.

      I just switched my homelab over to this install.

      posted in Xen Orchestra
    • Install XO from sources.

      While this project is mainly for myself, it is open for others to use. Please use at your own risk, and double-check my script before using it in a production environment. I am open to suggestions; please report any issues here - https://github.com/acebmxer/install_xen_orchestra/issues

      With that said, I wanted to create my own script to install XO from sources using the information provided by https://docs.xen-orchestra.com/installation#from-the-sources. It took many tries to get it working just to see the login screen.

      I have only tested on Ubuntu 24.04.4 so far.

      https://github.com/acebmxer/install_xen_orchestra

      # Xen Orchestra Installation Script
      
      Automated installation script for [Xen Orchestra](https://xen-orchestra.com/) from source, based on the [official documentation](https://docs.xen-orchestra.com/installation#from-the-sources).
      
      ## Features
      
      - Installs all required dependencies and prerequisites automatically
      - Uses Node.js 20 LTS (with npm v10)
      - Yarn package manager installed globally
      - Self-signed SSL certificate generation for HTTPS
      - Direct port binding (80 and 443) - no proxy required
      - Systemd service for automatic startup
      - Update functionality with commit comparison
      - Automatic backups before updates (keeps last 5)
      - Interactive restore from any available backup
      - Rebuild functionality — fresh clone + clean build on the current branch, preserves settings
      - Configurable via simple config file
      - **Customizable service user** - run as any username or root, defaults to 'xo'
      - **Automatic swap space management** - creates 2GB swap if needed for builds
      - **NFS mount support** - automatically configures sudo permissions for remote storage
      - **Memory-efficient builds** - prevents out-of-memory errors on low-RAM systems
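
      The self-signed certificate generation listed above could be done with a standard openssl invocation along these lines (an illustrative sketch using the default SSL paths from the config; not necessarily the script's exact command):

      ```bash
      sudo mkdir -p /etc/ssl/xo
      # 10-year self-signed certificate; CN falls back to the host's FQDN
      sudo openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 -nodes \
        -keyout /etc/ssl/xo/xo-key.pem \
        -out /etc/ssl/xo/xo-cert.pem \
        -subj "/CN=$(hostname -f)"
      ```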
      
      ## Quick Start
      
      ### 1. Clone this repository
      
      ```bash
      git clone https://github.com/acebmxer/install_xen_orchestra.git
      cd install_xen_orchestra
      ```

      ### 2. Configure the installation

      Copy the sample configuration file and customize it:

      ```bash
      cp sample-xo-config.cfg xo-config.cfg
      ```

      Edit xo-config.cfg with your preferred settings:

      ```bash
      nano xo-config.cfg
      ```

      Note: If xo-config.cfg is not found when running the script, it will automatically be created from sample-xo-config.cfg with default settings.
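
      A minimal sketch of how that fallback might be implemented (an assumption about the script's internals, not its literal code):

      ```bash
      # Create the working config from the sample on first run
      if [[ ! -f xo-config.cfg ]]; then
        cp sample-xo-config.cfg xo-config.cfg
      fi

      # Load the settings into the current shell
      source ./xo-config.cfg
      ```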

      ### 3. Run the installation

      Important: Do NOT run this script with sudo. Run as a normal user with sudo privileges - the script will use sudo internally for commands that require elevated permissions.

      ```bash
      ./install-xen-orchestra.sh
      ```
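
      A common pattern for enforcing this, and likely close to what the script does (this exact snippet is an assumption, not copied from the script):

      ```bash
      # Refuse to run as root; the script elevates itself only where needed
      if [[ $EUID -eq 0 ]]; then
        echo "ERROR: run this script as a regular user, not with sudo." >&2
        exit 1
      fi

      # Confirm the user can actually sudo before starting
      sudo -v || { echo "ERROR: sudo privileges are required." >&2; exit 1; }
      ```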
      

      ## Configuration Options

      The xo-config.cfg file supports the following options:

      | Option | Default | Description |
      |---|---|---|
      | HTTP_PORT | 80 | HTTP port for the web interface |
      | HTTPS_PORT | 443 | HTTPS port for the web interface |
      | INSTALL_DIR | /opt/xen-orchestra | Installation directory |
      | SSL_CERT_DIR | /etc/ssl/xo | SSL certificate directory |
      | SSL_CERT_FILE | xo-cert.pem | SSL certificate filename |
      | SSL_KEY_FILE | xo-key.pem | SSL private key filename |
      | GIT_BRANCH | master | Git branch (master, stable, or a tag) |
      | BACKUP_DIR | /opt/xo-backups | Backup directory for updates |
      | BACKUP_KEEP | 5 | Number of backups to retain |
      | NODE_VERSION | 20 | Node.js major version |
      | SERVICE_USER | xo | Service user (any username; leave empty for root) |
      | DEBUG_MODE | false | Enable debug logging |

      ## Updating Xen Orchestra

      To update an existing installation:

      ```bash
      ./install-xen-orchestra.sh --update
      ```

      The update process will:

      1. Compare the installed commit with the latest from GitHub
      2. Skip if already up to date
      3. Create a backup of the current installation
      4. Pull the latest changes
      5. Rebuild Xen Orchestra
      6. Restart the service
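
      The commit comparison in step 1 might look roughly like this (a sketch assuming the install is a plain git clone of the upstream repository; the path and branch shown are the defaults, not read from your config):

      ```bash
      # Commit currently installed
      installed=$(git -C /opt/xen-orchestra rev-parse HEAD)

      # Latest commit on the configured branch upstream
      latest=$(git ls-remote https://github.com/vatesfr/xen-orchestra.git refs/heads/master | cut -f1)

      if [[ "$installed" == "$latest" ]]; then
        echo "Already up to date ($installed)"
        exit 0
      fi
      ```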

      ## Backup Management

      - Backups are stored in BACKUP_DIR (default: /opt/xo-backups)
      - Only the last BACKUP_KEEP backups are retained (default: 5)
      - Older backups are automatically purged before each new backup is created
      - Backup folder names are timestamped in UTC; dates and times are displayed in the local system timezone
      - When restoring, backups are listed newest first: [1] is the most recent, [5] is the oldest
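
      The timestamping and pruning could be implemented along these lines (a sketch assuming rsync is available and the default BACKUP_DIR and BACKUP_KEEP values; not the script's literal code):

      ```bash
      # Timestamped backup folder name in UTC
      ts=$(date -u +%Y%m%d_%H%M%S)

      # Copy the installation, excluding node_modules to save space
      rsync -a --exclude node_modules /opt/xen-orchestra/ "/opt/xo-backups/xo-backup-$ts/"

      # Keep only the 5 newest backups, delete the rest
      ls -1dt /opt/xo-backups/xo-backup-* | tail -n +6 | xargs -r rm -rf
      ```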

      ## Restoring from Backup

      To restore a previous installation:

      ```bash
      ./install-xen-orchestra.sh --restore
      ```

      The restore process will:

      1. List all available backups newest first (1 = newest, 5 = oldest) with their dates and commit hashes
      2. Prompt you to select which backup to restore
      3. Ask for confirmation before making any changes
      4. Stop the running service
      5. Replace the current installation with the selected backup
      6. Rebuild Xen Orchestra (node_modules are excluded from backups to save space)
      7. Restart the service and report the restored commit hash

      Example output:

      ```
      ==============================================
        Available Backups
      ==============================================

        [1] xo-backup-20260221_233000  (2026-02-21 06:30:00 PM EST)  commit: a1b2c3d4e5f6 (newest)
        [2] xo-backup-20260221_141500  (2026-02-21 09:15:00 AM EST)  commit: 9f8e7d6c5b4a
        [3] xo-backup-20260220_162000  (2026-02-20 11:20:00 AM EST)  commit: 1a2b3c4d5e6f
        [4] xo-backup-20260219_225200  (2026-02-19 05:52:00 PM EST)  commit: 3c4d5e6f7a8b
        [5] xo-backup-20260219_133000  (2026-02-19 08:30:00 AM EST)  commit: 7d8e9f0a1b2c (oldest)

      Enter the number of the backup to restore [1-5], or 'q' to quit:
      ```

      After a successful restore the confirmed commit is displayed:

      ```
      [SUCCESS] Restore completed successfully!
      [INFO]    Restored commit: a1b2c3d4e5f6
      ```

      ## Rebuilding Xen Orchestra

      If your installation becomes corrupted or broken, use --rebuild to do a fresh clone and clean build of your current branch without losing any settings:

      ```bash
      ./install-xen-orchestra.sh --rebuild
      ```

      The rebuild process will:

      1. Detect the currently installed branch
      2. Display a summary and ask for confirmation
      3. Stop the running service
      4. Create a backup of the current installation (same as --update — saved to BACKUP_DIR)
      5. Remove the current INSTALL_DIR and do a fresh git clone of the same branch
      6. Perform a clean build (turbo cache cleared)
      7. Restart the service and report the new commit hash

      Note: Settings stored in /etc/xo-server (config.toml) and /var/lib/xo-server (databases and state) are not touched during a rebuild, so all your connections, users, and configuration are preserved.
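
      Steps 1 and 5 might look roughly like this (a sketch assuming the upstream repository URL and the default INSTALL_DIR; not the script's literal code):

      ```bash
      # Detect the currently installed branch
      branch=$(git -C /opt/xen-orchestra rev-parse --abbrev-ref HEAD)

      # Stop the service, then fresh-clone the same branch
      sudo systemctl stop xo-server
      rm -rf /opt/xen-orchestra
      git clone --branch "$branch" https://github.com/vatesfr/xen-orchestra.git /opt/xen-orchestra
      ```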

      ## Service Management

      After installation, Xen Orchestra runs as a systemd service:

      ```bash
      # Start the service
      sudo systemctl start xo-server

      # Stop the service
      sudo systemctl stop xo-server

      # Check status
      sudo systemctl status xo-server

      # View logs
      sudo journalctl -u xo-server -f
      ```

      ## Accessing Xen Orchestra

      After installation, access the web interface:

      - HTTP: http://your-server-ip
      - HTTPS: https://your-server-ip

      Note: If you changed HTTP_PORT or HTTPS_PORT in xo-config.cfg from the defaults (80/443), append the port to the URL, e.g. http://your-server-ip:8080

      ## Default Credentials

      - Username: admin@admin.net
      - Password: admin

      Warning: Change the default password immediately after first login!

      ## Switching Branches

      To switch to a different branch (e.g., from master to stable):

      1. Edit xo-config.cfg and change GIT_BRANCH
      2. Run the update:

      ```bash
      ./install-xen-orchestra.sh --update
      ```

      The script will automatically fetch and checkout the new branch during the update process.
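
      That branch switch presumably amounts to something like the following (an illustrative sketch, not the script's literal code; INSTALL_DIR and GIT_BRANCH come from xo-config.cfg):

      ```bash
      # Fetch and check out the branch named in GIT_BRANCH, then fast-forward
      git -C "$INSTALL_DIR" fetch origin "$GIT_BRANCH"
      git -C "$INSTALL_DIR" checkout "$GIT_BRANCH"
      git -C "$INSTALL_DIR" pull --ff-only origin "$GIT_BRANCH"
      ```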

      ## System Requirements

      ### Minimum Hardware

      - RAM: 2GB minimum (4GB+ recommended for building)
      - Disk: 10GB free space
      - CPU: 1 core minimum (2+ recommended)

      Note: The script automatically creates 2GB swap space if insufficient memory is detected during builds to prevent out-of-memory errors.

      ### Dependencies

      The script automatically installs all required dependencies:

      Debian/Ubuntu:

      - apt-transport-https, ca-certificates, libcap2-bin, curl, gnupg
      - build-essential, git, patch, sudo
      - Node.js v20 (with npm v10), yarn
      - redis-server
      - python3-minimal, libpng-dev
      - lvm2, cifs-utils, nfs-common, ntfs-3g
      - libvhdi-utils, dmidecode
      - libfuse2t64 (or libfuse2 on older systems)
      - software-properties-common (Ubuntu only)

      RHEL/CentOS/Fedora:

      - redis or valkey (RHEL 10+)
      - Node.js v20 (with npm v10), yarn
      - ca-certificates, gnupg2, curl
      - make, automake, gcc, gcc-c++, patch, sudo
      - git, libpng-devel
      - lvm2, cifs-utils, nfs-utils, ntfs-3g
      - dmidecode, libcap, fuse-libs

      ### Supported Operating Systems

      - Debian 10/11/12/13 (apt-based)
      - Ubuntu (apt-based, all supported versions)
      - RHEL/CentOS/AlmaLinux/Rocky (dnf/yum-based)
      - Fedora (dnf-based)

      ## Troubleshooting

      ### Service fails to start

      Check the service logs:

      ```bash
      sudo journalctl -u xo-server -n 50
      ```

      ### Port binding issues

      If running as non-root, the service uses CAP_NET_BIND_SERVICE to bind to privileged ports. Ensure systemd is configured correctly.
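
      The relevant unit settings would look something like this excerpt (an assumption about the generated unit, shown for reference; adjust User to your SERVICE_USER):

      ```ini
      # /etc/systemd/system/xo-server.service (excerpt)
      [Service]
      User=xo
      # Grants the non-root service the right to bind ports 80/443
      AmbientCapabilities=CAP_NET_BIND_SERVICE
      ```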

      ### Build failures

      The easiest fix is to use the built-in rebuild command, which takes a backup first:

      ```bash
      ./install-xen-orchestra.sh --rebuild
      ```

      Or manually (if running as a non-root SERVICE_USER):

      ```bash
      cd /opt/xen-orchestra
      rm -rf node_modules
      # Replace 'xo' with your SERVICE_USER if different
      sudo -u xo yarn
      sudo -u xo yarn build
      ```

      ### Out of Memory (OOM) during build

      If the build process fails with exit code 137 (killed), your system ran out of memory.

      The script automatically handles this by:

      - Detecting available swap space before building
      - Creating a 2GB swap file if there is not enough
      - Setting Node.js memory limits (4GB max)

      To manually check/add swap:

      ```bash
      # Check current swap
      free -h

      # Create a 2GB swap file if needed
      sudo fallocate -l 2G /swapfile
      sudo chmod 600 /swapfile
      sudo mkswap /swapfile
      sudo swapon /swapfile
      echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
      ```
      

      ### NFS mount errors ("user" NFS mounts not supported)

      If you get an error when adding NFS remote storage:

      ```
      mount.nfs: not installed setuid - "user" NFS mounts not supported
      ```

      The script automatically handles this by configuring sudo permissions for your service user (default: xo) to run mount/umount commands, including the NFS-specific helpers.

      If you encounter this issue on an existing installation:

      ```bash
      # Update sudoers configuration (replace 'xo' with your SERVICE_USER if different)
      sudo tee /etc/sudoers.d/xo-server-xo > /dev/null << 'EOF'
      # Allow xo-server user to mount/unmount without password
      Defaults:xo !requiretty
      xo ALL=(ALL:ALL) NOPASSWD:SETENV: /bin/mount, /usr/bin/mount, /bin/umount, /usr/bin/umount, /bin/findmnt, /usr/bin/findmnt, /sbin/mount.nfs, /usr/sbin/mount.nfs, /sbin/mount.nfs4, /usr/sbin/mount.nfs4, /sbin/umount.nfs, /usr/sbin/umount.nfs, /sbin/umount.nfs4, /usr/sbin/umount.nfs4
      EOF

      sudo chmod 440 /etc/sudoers.d/xo-server-xo
      sudo systemctl restart xo-server
      ```
      

      ### NFS permission denied errors

      If NFS mounts succeed but you get permission errors when writing:

      ```
      EACCES: permission denied, open '/run/xo-server/mounts/.keeper_*'
      ```

      This is a UID/GID mismatch between the xo-server user and your NFS export permissions.

      Option 1: Run as root (recommended for simplicity)

      ```bash
      # Edit config
      nano xo-config.cfg
      # Set: SERVICE_USER=
      # (leave empty to run as root)

      # Update service (replace 'xo' with your SERVICE_USER if different)
      sudo sed -i 's/User=xo/User=root/' /etc/systemd/system/xo-server.service
      sudo chown -R root:root /opt/xen-orchestra /var/lib/xo-server /etc/xo-server
      sudo systemctl daemon-reload
      sudo systemctl restart xo-server
      ```
      

      Option 2: Configure NFS for your service user's UID
      On your NFS server, adjust exports to allow your service user's UID (check with id <username>), or use appropriate squash settings in your NFS export configuration.
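
      For example, an export line on the NFS server might look like this (illustrative values; the path, subnet, and UID/GID are placeholders, with 1001 assumed to be the service user's UID):

      ```
      # /etc/exports on the NFS server (example)
      /export/xo-backups 192.168.1.0/24(rw,sync,all_squash,anonuid=1001,anongid=1001)
      ```

      After editing, apply the change with sudo exportfs -ra.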

      ### Redis connection issues

      Ensure Redis is running:

      ```bash
      redis-cli ping
      # Should respond with: PONG
      ```
      

      ## Security Considerations

      - No Root: The script refuses to run as root/sudo and uses sudo internally
      - Service User: Runs as a dedicated xo user by default (customizable to any username; leave empty for root)
      - SSL: A self-signed certificate is generated automatically for HTTPS
      - Sudo Permissions: The service user is configured with minimal sudo access for:
        - NFS/CIFS mount operations (/bin/mount, /usr/bin/mount, /sbin/mount.nfs, etc.)
        - Unmount operations (/bin/umount, /usr/bin/umount, /sbin/umount.nfs, etc.)
        - Mount point discovery (/bin/findmnt, /usr/bin/findmnt)
        - All configured in /etc/sudoers.d/xo-server-<username> with NOPASSWD for specific commands only
      - Automatic Swap: Swap file created with secure permissions (600) if needed for builds

      ## License

      This installation script is provided as-is. Xen Orchestra itself is licensed under AGPL-3.0.

      ## Credits

      - Xen Orchestra by Vates
      - Installation Documentation
      posted in Xen Orchestra
    • RE: XOA - Memory Usage

      @Pilow
      Yeah, I was following your post. At first I thought I was having random backup issues with backups via proxies, with the occasional issue with backups via XOA. Then just yesterday I saw the error about memory.

      For now I just rebooted XOA, and it is down under 2 GB currently. Will wait and see how long it takes to build back up again, or until the dev team finds a solution to the problem.

      posted in Xen Orchestra
    • RE: XOA - Memory Usage

      @olivierlambert said in XOA - Memory Usage:

      Hi,

      Depends on the number of pools, hosts, VMs, number of backup jobs, size etc.

      Any issues with just jumping it up to 8 GB or 16? It has been fine until today.

      posted in Xen Orchestra
    • XOA - Memory Usage

      Just noticed the alert today. Currently at the default settings for XOA RAM allocation: 4 GB, 4 vCPUs. Is there a recommendation for a set value based on pools/hosts/VMs?

      Same question for the proxies. What is the process for them, same as XOA?

      ✅ 1/16 - Node version
      ⚠️ 2/16 - Memory: less than 10% free memory on the XOA, check https://xen-orchestra.com/docs/troubleshooting.html#memory
      ✅ 3/16 - xo-server config syntax
      ✅ 4/16 - Disk space for /
      ✅ 5/16 - Disk space for /var
      ✅ 6/16 - Native SMB support
      ✅ 7/16 - Fetching VM UUID
      ✅ 8/16 - XOA version
      ✅ 9/16 - /var is writable
      ✅ 10/16 - Appliance registration
      ✅ 11/16 - npm version
      ✅ 12/16 - NTP synchronization
      ✅ 13/16 - local SSH server
      ✅ 14/16 - xoa-support user
      ✅ 15/16 - Internet connectivity
      ✅ 16/16 - XOA status

      Screenshot_20260221_155047.png

      Screenshot_20260221_155019.png

      posted in Xen Orchestra
    • RE: VM Disk Not Migrating due to Backup Running

      @allySalami

      Let someone else confirm, but I think you are way out of date on XO. Might be best to deploy a new one.

      posted in Management
    • RE: VM Disk Not Migrating due to Backup Running

      @allySalami

      Click on About in XO...

      Screenshot_20260220_151640.png

      Screenshot_20260220_151658.png

      posted in Management
    • Parent VHD missing - VHD linked to backup are missing

      This is from my home lab. Should I be concerned? If so, what steps do I need to take?

      I see no errors in the Health tab in the Dashboard.

       "type": "VM",
              "id": "29353f1e-866f-7dc7-ce26-ba439b7328ca",
              "name_label": "Work PC"
            },
            "id": "1771610941317",
            "message": "backup VM",
            "start": 1771610941317,
            "status": "success",
            "tasks": [
              {
                "id": "1771610941321",
                "message": "clean-vm",
                "start": 1771610941321,
                "status": "success",
                "warnings": [
                  {
                    "data": {
                      "path": "/xo-vm-backups/29353f1e-866f-7dc7-ce26-ba439b7328ca/20260214T181024Z.json",
                      "actual": 27343314432,
                      "expected": 61755492352
                    },
                    "message": "cleanVm: incorrect backup size in metadata"
                  }
                ],
                "end": 1771610941603,
                "result": {
                  "merge": false
                }
              },
              {
                "id": "1771610941812",
                "message": "snapshot",
                "start": 1771610941812,
                "status": "success",
                "end": 1771610943963,
                "result": "51156fde-cdc0-3657-93aa-d59671a5d28a"
              },
              {
                "id": "1771611387226:0",
                "message": "health check",
                "start": 1771611387226,
                "status": "success",
                "infos": [
                  {
                    "message": "This VM doesn't match the health check's tags for this schedule"
                  }
                ],
                "end": 1771611387226
              },
              {
                "data": {
                  "id": "b86b4c88-79f9-4ce8-9fa4-dce78a32ea44",
                  "isFull": false,
                  "type": "remote"
                },
                "id": "1771610943964",
                "message": "export",
                "start": 1771610943964,
                "status": "success",
                "tasks": [
                  {
                    "id": "1771610946585",
                    "message": "transfer",
                    "start": 1771610946585,
                    "status": "success",
                    "end": 1771611385506,
                    "result": {
                      "size": 43509612544
                    }
                  },
                  {
                    "id": "1771611387245",
                    "message": "clean-vm",
                    "start": 1771611387245,
                    "status": "success",
                    "warnings": [
                      {
                        "data": {
                          "path": "/xo-vm-backups/29353f1e-866f-7dc7-ce26-ba439b7328ca/20260220T180906Z.json",
                          "actual": 43509612544,
                          "expected": 43520499200
                        },
                        "message": "cleanVm: incorrect backup size in metadata"
                      }
                    ],
                    "end": 1771611388247,
                    "result": {
                      "merge": false
                    }
                  }
                ],
                "end": 1771611388247
              }
            ],
            "infos": [
              {
                "message": "Transfer data using NBD"
              },
              {
                "message": "will delete snapshot data"
              },
              {
                "data": {
                  "vdiRef": "OpaqueRef:502e838d-b1a9-067f-e3cf-53d652bc432e"
                },
                "message": "Snapshot data has been deleted"
              }
            ],
            "end": 1771611388247
          },
          {
            "data": {
              "type": "VM",
              "id": "c79ab43d-513e-ef73-f81f-a13257f311c8",
              "name_label": "Docker of Things"
            },
            "id": "1771610853200",
            "message": "backup VM",
            "start": 1771610853200,
            "status": "success",
            "tasks": [
              {
                "id": "1771610853205",
                "message": "clean-vm",
                "start": 1771610853205,
                "status": "success",
                "warnings": [
                  {
                    "data": {
                      "path": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260217T143237Z.vhd",
                      "error": {
                        "generatedMessage": false,
                        "code": "ERR_ASSERTION",
                        "actual": false,
                        "expected": true,
                        "operator": "==",
                        "diff": "simple",
                        "size": 34517586944
                      }
                    },
                    "message": "VHD check error"
                  },
                  {
                    "data": {
                      "parent": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260217T143237Z.vhd",
                      "child": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260217T143324Z.vhd"
                    },
                    "message": "parent VHD is missing"
                  },
                  {
                    "data": {
                      "parent": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260217T143324Z.vhd",
                      "child": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260217T143408Z.vhd"
                    },
                    "message": "parent VHD is missing"
                  },
                  {
                    "data": {
                      "parent": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260217T143408Z.vhd",
                      "child": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260217T153235Z.vhd"
                    },
                    "message": "parent VHD is missing"
                  },
                  {
                    "data": {
                      "parent": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260217T153235Z.vhd",
                      "child": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260217T180448Z.vhd"
                    },
                    "message": "parent VHD is missing"
                  },
                  {
                    "data": {
                      "parent": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260217T180448Z.vhd",
                      "child": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260217T183622Z.vhd"
                    },
                    "message": "parent VHD is missing"
                  },
                  {
                    "data": {
                      "parent": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260217T183622Z.vhd",
                      "child": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260218T050033Z.vhd"
                    },
                    "message": "parent VHD is missing"
                  },
                  {
                    "data": {
                      "parent": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260218T050033Z.vhd",
                      "child": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260218T050607Z.vhd"
                    },
                    "message": "parent VHD is missing"
                  },
                  {
                    "data": {
                      "parent": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260218T050607Z.vhd",
                      "child": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260218T050634Z.vhd"
                    },
                    "message": "parent VHD is missing"
                  },
                  {
                    "data": {
                      "parent": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260218T050634Z.vhd",
                      "child": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260218T050700Z.vhd"
                    },
                    "message": "parent VHD is missing"
                  },
                  {
                    "data": {
                      "parent": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260218T050700Z.vhd",
                      "child": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260218T141857Z.vhd"
                    },
                    "message": "parent VHD is missing"
                  },
                  {
                    "data": {
                      "parent": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260218T141857Z.vhd",
                      "child": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260218T180551Z.vhd"
                    },
                    "message": "parent VHD is missing"
                  },
                  {
                    "data": {
                      "parent": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260218T180551Z.vhd",
                      "child": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260219T050544Z.vhd"
                    },
                    "message": "parent VHD is missing"
                  },
                  {
                    "data": {
                      "parent": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260219T050544Z.vhd",
                      "child": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260219T180800Z.vhd"
                    },
                    "message": "parent VHD is missing"
                  },
                  {
                    "data": {
                      "parent": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260219T180800Z.vhd",
                      "child": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260220T050624Z.vhd"
                    },
                    "message": "parent VHD is missing"
                  },
                  {
                    "data": {
                      "backup": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/20260217T153235Z.json",
                      "missingVhds": [
                        "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260217T153235Z.vhd"
                      ]
                    },
                    "message": "some VHDs linked to the backup are missing"
                  },
                  {
                    "data": {
                      "backup": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/20260217T143408Z.json",
                      "missingVhds": [
                        "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260217T143408Z.vhd"
                      ]
                    },
                    "message": "some VHDs linked to the backup are missing"
                  },
                  {
                    "data": {
                      "backup": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/20260217T143324Z.json",
                      "missingVhds": [
                        "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260217T143324Z.vhd"
                      ]
                    },
                    "message": "some VHDs linked to the backup are missing"
                  },
                  {
                    "data": {
                      "backup": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/20260217T183622Z.json",
                      "missingVhds": [
                        "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260217T183622Z.vhd"
                      ]
                    },
                    "message": "some VHDs linked to the backup are missing"
                  },
                  {
                    "data": {
                      "backup": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/20260217T180448Z.json",
                      "missingVhds": [
                        "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260217T180448Z.vhd"
                      ]
                    },
                    "message": "some VHDs linked to the backup are missing"
                  },
                  {
                    "data": {
                      "backup": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/20260217T143237Z.json",
                      "missingVhds": [
                        "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260217T143237Z.vhd"
                      ]
                    },
                    "message": "some VHDs linked to the backup are missing"
                  },
                  {
                    "data": {
                      "backup": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/20260218T050700Z.json",
                      "missingVhds": [
                        "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260218T050700Z.vhd"
                      ]
                    },
                    "message": "some VHDs linked to the backup are missing"
                  },
                  {
                    "data": {
                      "backup": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/20260218T050033Z.json",
                      "missingVhds": [
                        "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260218T050033Z.vhd"
                      ]
                    },
                    "message": "some VHDs linked to the backup are missing"
                  },
                  {
                    "data": {
                      "backup": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/20260218T050634Z.json",
                      "missingVhds": [
                        "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260218T050634Z.vhd"
                      ]
                    },
                    "message": "some VHDs linked to the backup are missing"
                  },
                  {
                    "data": {
                      "backup": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/20260218T180551Z.json",
                      "missingVhds": [
                        "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260218T180551Z.vhd"
                      ]
                    },
                    "message": "some VHDs linked to the backup are missing"
                  },
                  {
                    "data": {
                      "backup": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/20260218T050607Z.json",
                      "missingVhds": [
                        "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260218T050607Z.vhd"
                      ]
                    },
                    "message": "some VHDs linked to the backup are missing"
                  },
                  {
                    "data": {
                      "backup": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/20260218T141857Z.json",
                      "missingVhds": [
                        "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260218T141857Z.vhd"
                      ]
                    },
                    "message": "some VHDs linked to the backup are missing"
                  },
                  {
                    "data": {
                      "backup": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/20260219T050544Z.json",
                      "missingVhds": [
                        "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260219T050544Z.vhd"
                      ]
                    },
                    "message": "some VHDs linked to the backup are missing"
                  },
                  {
                    "data": {
                      "backup": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/20260219T180800Z.json",
                      "missingVhds": [
                        "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260219T180800Z.vhd"
                      ]
                    },
                    "message": "some VHDs linked to the backup are missing"
                  },
                  {
                    "data": {
                      "backup": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/20260220T050624Z.json",
                      "missingVhds": [
                        "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/vdis/95ac8089-69f3-404e-b902-21d0e878eec2/4854ddef-0b32-44ba-88b9-6f2cb8f050ae/20260220T050624Z.vhd"
                      ]
                    },
                    "message": "some VHDs linked to the backup are missing"
                  }
                ],
                "end": 1771610854357,
                "result": {
                  "merge": false
                }
              },
              {
                "id": "1771610854575",
                "message": "snapshot",
                "start": 1771610854575,
                "status": "success",
                "end": 1771610856789,
                "result": "aa829f5d-0516-414f-d712-315a4f9304f8"
              },
              {
                "data": {
                  "id": "b86b4c88-79f9-4ce8-9fa4-dce78a32ea44",
                  "isFull": true,
                  "type": "remote"
                },
                "id": "1771610856789:0",
                "message": "export",
                "start": 1771610856789,
                "status": "success",
                "tasks": [
                  {
                    "id": "1771610858203",
                    "message": "transfer",
                    "start": 1771610858203,
                    "status": "success",
                    "end": 1771611601006,
                    "result": {
                      "size": 128731578368
                    }
                  },
                  {
                    "id": "1771611602089:0",
                    "message": "health check",
                    "start": 1771611602089,
                    "status": "success",
                    "tasks": [
                      {
                        "id": "1771611602096",
                        "message": "transfer",
                        "start": 1771611602096,
                        "status": "success",
                        "end": 1771612169796,
                        "result": {
                          "size": 0,
                          "id": "a0cb1303-2431-ba68-2799-c2fb74ff0055"
                        }
                      },
                      {
                        "id": "1771612169796:0",
                        "message": "vmstart",
                        "start": 1771612169796,
                        "status": "success",
                        "end": 1771612178693
                      }
                    ],
                    "end": 1771612181064
                  },
                  {
                    "id": "1771612181072",
                    "message": "clean-vm",
                    "start": 1771612181072,
                    "status": "success",
                    "warnings": [
                      {
                        "data": {
                          "path": "/xo-vm-backups/c79ab43d-513e-ef73-f81f-a13257f311c8/20260220T180738Z.json",
                          "actual": 128731578368,
                          "expected": 128763533312
                        },
                        "message": "cleanVm: incorrect backup size in metadata"
                      }
                    ],
                    "end": 1771612181155,
                    "result": {
                      "merge": false
                    }
                  }
                ],
                "end": 1771612181155
              }
            ],
            "infos": [
              {
                "message": "Transfer data using NBD"
              },
              {
                "message": "will delete snapshot data"
              },
              {
                "data": {
                  "vdiRef": "OpaqueRef:ea179a76-8170-d5e6-c9d7-311bce869341"
                },
                "message": "Snapshot data has been deleted"
              }
            ],
            "end": 1771612181155
          }
        ],
        "end": 1771612181156
      
      
      posted in Backup
    • RE: backup mail report says INTERRUPTED but it's not ?

      While working last night I noticed one of my backups/pools did this. I got the email that it was interrupted, but when I looked, the tasks were still running and moving data until it processed all the VMs in that backup job.

      Edit - note my backup job was run via a proxy on the specific pool/job.

      2026-02-19T03_00_00.028Z - backup NG.txt

      Edit 2 - homelab same, the last backup was interrupted.

      2026-02-19T05_00_00.011Z - backup NG.txt

      posted in Backup