XCP-ng

    peo (@peo)


    Latest posts made by peo

    • RE: Backups started to fail again (overall status: failure, but both snapshot and transfer returns success)

      @olivierlambert no, and all VMs were working at the time before I rebooted the two hosts (not the third one, since that one didn't have problems accessing /run/sr-mount/)

      I understand that 'df' will lock up if an NFS or SMB share does not respond, but running 'ls' on /run/sr-mount/ itself (without trying to access a subfolder) should have no reason to lock up (unless /run/sr-mount is not an ordinary folder, which it appears to be)
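
      One way to repeat those probes without risking another hung shell is to wrap them in a timeout. A minimal sketch; with hard NFS mounts a stuck process sometimes only dies on SIGKILL, hence the -k:

      # re-run the two commands that hung, but wrapped in a timeout so a dead
      # NFS/SMB backend cannot hang the shell indefinitely (-k forces SIGKILL)
      timeout -k 2 5 df -h || echo "df did not return within 5s"
      timeout -k 2 5 ls /run/sr-mount/ || echo "ls /run/sr-mount/ did not return within 5s"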

      posted in Backup
    • RE: Backups started to fail again (overall status: failure, but both snapshot and transfer returns success)

      @olivierlambert I found a "solution" to the problem by just rebooting the two involved hosts, but this might still be an issue somewhere (XO or even XCP-ng):

      By the time I started the hosts up after the power failure, their dependencies (mainly my internet connectivity and the NAS which holds one of the SRs) had already been up for a long time. All three hosts also have a local 2 TB SSD, used for different purposes (faster disk access, temporary storage and replication from other hosts).

      I had actually forgotten to reconnect the network cable to the third host (not involved in these recent problems; I had unplugged it while reorganizing the cables to the switch). It seemed that host hadn't started up properly either (at least I got no video output from it when I went to check its status after reconnecting the cable), so I gave it a hard reboot and it came up fine.

      Machines with their disks on the local SSDs of the two other hosts have worked fine since I powered them up, so what follows (and the replication issue) was not expected at all:

      Lock up on 'df' and 'ls /run/sr-mount/':

      [11:21 xcp-ng-1 ~]# df -h
      ^C
      [11:21 xcp-ng-1 ~]# ^C
      
      [11:21 xcp-ng-1 ~]# ls /run/sr-mount/
      ^C
      [11:22 xcp-ng-1 ~]# ls /run/
      

      ('ls /run/' worked fine)

      According to XO the disks were accessible and their content showed up as usual.
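
      For reference (not something XO does itself): /proc/mounts can be read even when an NFS or SMB backend is dead, so the SR mountpoints can be enumerated from there and probed one by one with a timeout, roughly like this:

      # enumerate SR mountpoints from the mount table (reading /proc/mounts never
      # blocks on a dead backend), then probe each one with a timeout
      awk '$2 ~ "^/run/sr-mount/" {print $2}' /proc/mounts | while read -r mp; do
        timeout -k 2 5 stat "$mp" >/dev/null 2>&1 && echo "OK    $mp" || echo "STUCK $mp"
      done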

      posted in Backup
    • RE: Backups started to fail again (overall status: failure, but both snapshot and transfer returns success)

      Since yesterday, even the replication jobs have started to fail (I'm again 12 commits behind the current version, but the other scheduled jobs continued to fail even when I was up to date with XO).

      The replication is set to run from one host and store on the SSD of another. I had a power failure yesterday, but both hosts needed for this job (xcp-ng-1 and xcp-ng-2) were back up and running by the time the job started.

      {
        "data": {
          "mode": "delta",
          "reportWhen": "failure"
        },
        "id": "1753705802804",
        "jobId": "0bb53ced-4d52-40a9-8b14-7cd1fa2b30fe",
        "jobName": "Admin Ubuntu 24",
        "message": "backup",
        "scheduleId": "69a05a67-c43b-4d23-b1e8-ada77c70ccc4",
        "start": 1753705802804,
        "status": "failure",
        "infos": [
          {
            "data": {
              "vms": [
                "1728e876-5644-2169-6c62-c764bd8b6bdf"
              ]
            },
            "message": "vms"
          }
        ],
        "tasks": [
          {
            "data": {
              "type": "VM",
              "id": "1728e876-5644-2169-6c62-c764bd8b6bdf",
              "name_label": "Admin Ubuntu 24"
            },
            "id": "1753705804503",
            "message": "backup VM",
            "start": 1753705804503,
            "status": "failure",
            "tasks": [
              {
                "id": "1753705804984",
                "message": "snapshot",
                "start": 1753705804984,
                "status": "success",
                "end": 1753712867640,
                "result": "4afbdcd9-818f-9e3d-555a-ad0943081c3f"
              },
              {
                "data": {
                  "id": "46f9b5ee-c937-ff71-29b1-520ba0546675",
                  "isFull": false,
                  "name_label": "Local h2 SSD",
                  "type": "SR"
                },
                "id": "1753712867640:0",
                "message": "export",
                "start": 1753712867640,
                "status": "interrupted"
              }
            ],
            "infos": [
              {
                "message": "will delete snapshot data"
              },
              {
                "data": {
                  "vdiRef": "OpaqueRef:c2504c79-d422-3f0a-d292-169d431e5aee"
                },
                "message": "Snapshot data has been deleted"
              }
            ],
            "end": 1753717484618,
            "result": {
              "name": "BodyTimeoutError",
              "code": "UND_ERR_BODY_TIMEOUT",
              "message": "Body Timeout Error",
              "stack": "BodyTimeoutError: Body Timeout Error\n    at FastTimer.onParserTimeout [as _onTimeout] (/opt/xo/xo-builds/xen-orchestra-202507262229/node_modules/undici/lib/dispatcher/client-h1.js:646:28)\n    at Timeout.onTick [as _onTimeout] (/opt/xo/xo-builds/xen-orchestra-202507262229/node_modules/undici/lib/util/timers.js:162:13)\n    at listOnTimeout (node:internal/timers:588:17)\n    at process.processTimers (node:internal/timers:523:7)"
            }
          }
        ],
        "end": 1753717484619
      }
      

      Also, the replication job for my Debian XO machine fails with the same 'timeout' problem.
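
      To see at a glance which step stalls in logs like the one above, a jq one-liner over the saved job log (written to job.json here, just as an example) can flatten the nested tasks and their statuses:

      # print every (sub)task message with its status from a saved XO job log
      jq -r '.. | objects | select(has("message") and has("status"))
             | "\(.message): \(.status)"' job.json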

      posted in Backup
    • RE: Backups started to fail again (overall status: failure, but both snapshot and transfer returns success)

      Even though I updated 'everything' involved yesterday, the problems remain (last night's backups failed with a similar problem). As I'm again 6 commits behind the current version, I cannot create a useful bug report, so I'll just update and wait for the next scheduled backups to run (nothing runs during the night towards Thursday; the next sequence will run during the night towards Friday)

      posted in Backup
    • RE: Backups started to fail again (overall status: failure, but both snapshot and transfer returns success)

      @DustinB said in Backups started to fail again (overall status: failure, but both snapshot and transfer returns success):

      @peo said in Backups started to fail again (overall status: failure, but both snapshot and transfer returns success):

      @olivierlambert Thanks, I will update every machine and the XO instance involved in the backup process, and possibly also the individual machines that fail. The first failure on vm-cleanup was on 15 July, which is a few days before I patched the hosts (as part of troubleshooting and to prevent further failures). Still, these backups will (probably) be fully restorable (as I have verified with the always-failing Docker VM)

      So you patch your host, but not the administrative tools for the hosts?

      Seems a little cart before the horse there, no?

      That's part of a fault-finding procedure: don't patch everything at once (but now I have, after finding out that patching the hosts did not solve the problem)

      posted in Backup
    • RE: Backups started to fail again (overall status: failure, but both snapshot and transfer returns success)

      @olivierlambert Thanks, I will update every machine and the XO instance involved in the backup process, and possibly also the individual machines that fail. The first failure on vm-cleanup was on 15 July, which is a few days before I patched the hosts (as part of troubleshooting and to prevent further failures). Still, these backups will (probably) be fully restorable (as I have verified with the always-failing Docker VM)

      posted in Backup
    • Backups started to fail again (overall status: failure, but both snapshot and transfer returns success)

      Got these backup failures again. Usually it's only the "Docker" VM, but now all backups give the status mentioned in the topic title. Below is one example.
      I have not updated Xen Orchestra in a "long" time; I'm on c8f9d81, which was current as of 3 July.
      My hosts are fully updated, as is the VM running XO.
      The first non-Docker-VM failure appeared before I updated the hosts.
      Is there anything you want to investigate, or should I just update XO and hope for these errors to stop?

      {
        "data": {
          "mode": "delta",
          "reportWhen": "failure"
        },
        "id": "1753140173983",
        "jobId": "38f0068f-c124-4876-85d3-83f1003db60c",
        "jobName": "HomeAssistant",
        "message": "backup",
        "scheduleId": "dcb1c759-76b8-441b-9dc0-595914e60608",
        "start": 1753140173983,
        "status": "failure",
        "infos": [
          {
            "data": {
              "vms": [
                "ed4758f3-de34-7a7e-a46b-dc007d52f5c3"
              ]
            },
            "message": "vms"
          }
        ],
        "tasks": [
          {
            "data": {
              "type": "VM",
              "id": "ed4758f3-de34-7a7e-a46b-dc007d52f5c3",
              "name_label": "HomeAssistant"
            },
            "id": "1753140251984",
            "message": "backup VM",
            "start": 1753140251984,
            "status": "failure",
            "tasks": [
              {
                "id": "1753140251993",
                "message": "clean-vm",
                "start": 1753140251993,
                "status": "success",
                "end": 1753140258038,
                "result": {
                  "merge": false
                }
              },
              {
                "id": "1753140354122",
                "message": "snapshot",
                "start": 1753140354122,
                "status": "success",
                "end": 1753140356461,
                "result": "fc6d5d87-a2b5-cae9-8c2a-377ffff5febc"
              },
              {
                "data": {
                  "id": "2b919467-704c-4e35-bac9-2d6a43118bda",
                  "isFull": false,
                  "type": "remote"
                },
                "id": "1753140356462",
                "message": "export",
                "start": 1753140356462,
                "status": "failure",
                "tasks": [
                  {
                    "id": "1753140359386",
                    "message": "transfer",
                    "start": 1753140359386,
                    "status": "success",
                    "end": 1753140753378,
                    "result": {
                      "size": 5630853120
                    }
                  },
                  {
                    "id": "1753140761602",
                    "message": "clean-vm",
                    "start": 1753140761602,
                    "status": "failure",
                    "end": 1753140775782,
                    "result": {
                      "name": "InternalError",
                      "$fault": "client",
                      "$metadata": {
                        "httpStatusCode": 500,
                        "requestId": "D98294C01B729C95",
                        "extendedRequestId": "RDk4Mjk0QzAxQjcyOUM5NUQ5ODI5NEMwMUI3MjlDOTVEOTgyOTRDMDFCNzI5Qzk1RDk4Mjk0QzAxQjcyOUM5NQ==",
                        "attempts": 3,
                        "totalRetryDelay": 112
                      },
                      "Code": "InternalError",
                      "message": "Internal Error",
                      "stack": "InternalError: Internal Error\n    at throwDefaultError (/opt/xo/xo-builds/xen-orchestra-202507041243/node_modules/@smithy/smithy-client/dist-cjs/index.js:867:20)\n    at /opt/xo/xo-builds/xen-orchestra-202507041243/node_modules/@smithy/smithy-client/dist-cjs/index.js:876:5\n    at de_CommandError (/opt/xo/xo-builds/xen-orchestra-202507041243/node_modules/@aws-sdk/client-s3/dist-cjs/index.js:4952:14)\n    at process.processTicksAndRejections (node:internal/process/task_queues:105:5)\n    at async /opt/xo/xo-builds/xen-orchestra-202507041243/node_modules/@smithy/middleware-serde/dist-cjs/index.js:35:20\n    at async /opt/xo/xo-builds/xen-orchestra-202507041243/node_modules/@aws-sdk/middleware-sdk-s3/dist-cjs/index.js:484:18\n    at async /opt/xo/xo-builds/xen-orchestra-202507041243/node_modules/@smithy/middleware-retry/dist-cjs/index.js:320:38\n    at async /opt/xo/xo-builds/xen-orchestra-202507041243/node_modules/@aws-sdk/middleware-sdk-s3/dist-cjs/index.js:110:22\n    at async /opt/xo/xo-builds/xen-orchestra-202507041243/node_modules/@aws-sdk/middleware-sdk-s3/dist-cjs/index.js:137:14\n    at async /opt/xo/xo-builds/xen-orchestra-202507041243/node_modules/@aws-sdk/middleware-logger/dist-cjs/index.js:33:22"
                    }
                  }
                ],
                "end": 1753140775783
              }
            ],
            "end": 1753140775783
          }
        ],
        "end": 1753140775784
      }
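
      Since the step that actually fails here is the clean-vm pass against the S3 remote (HTTP 500 from the provider, even after 3 retries), it can help to query the remote outside of XO to separate provider-side errors from XO issues. A minimal sketch with the AWS CLI; the endpoint and bucket names are placeholders, and the xo-vm-backups/<vm uuid>/ prefix follows the layout visible in the logs:

      # list this VM's backup objects directly on the S3 remote
      # (endpoint URL and bucket name are placeholders)
      aws s3api list-objects-v2 \
        --endpoint-url https://s3.example.com \
        --bucket my-xo-backups \
        --prefix xo-vm-backups/ed4758f3-de34-7a7e-a46b-dc007d52f5c3/ \
        --query 'Contents[].{Key:Key,Size:Size}'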
      
      posted in Backup
    • RE: Error: invalid HTTP header in response body

      After updating to yesterday's master ("a348c", just before an update that is not relevant for this case), the "continuous replication" jobs now also fail (they get stuck):

      (screenshot: stuck continuous replication jobs)

      The jobs here have been stuck at the transfer stage (both delta, both targeting a local SSD on the destination host) since they started at 14:30 and 21:00 yesterday.

      Update:
      I have restarted the VM to be backed up (replicated), updated XO (the source) and tried again. Running time so far (after some retries that were cancelled because I updated and restarted things) is more than 90 minutes, although the disk on this VM is 27 GB and should take at most 10 minutes to transfer even at only 50% efficiency over the gigabit connection to the other host (see the rough estimate below).
      The two other stuck jobs were also cancelled during the updates/reboots; one (21:00) had been started by the XO scheduler, the other was triggered through crontab (an xo-cli script I wrote) on the host with the XO installation. While troubleshooting I now start the job manually from the overview, which, as can be seen, makes no difference.
      Next steps:
      I will change the destination to another host, on the assumption that the SSD in the host it currently replicates to is broken (although I can list the files on it, and the timestamp of the VHD for this job keeps being updated).
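
      For reference, the rough transfer-time estimate mentioned above, assuming ~50% of gigabit, i.e. roughly 62 MB/s:

      # 27 GB at roughly 62 MB/s (about half of a gigabit link)
      echo $(( 27 * 1024 / 62 ))   # ~445 seconds, i.e. about 7.5 minutes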

      Another thought:
      Is this maybe a new feature? A true "continuous replication" as a service which never stops?

      Imaginary problem solved:
      The replication job still locked up when I changed the destination (I let it run for more than 5 hours), so I reverted to the replicated version of my XO VM (Deb12-XO) from before the update to 'a348c' (now running '1a7b5' again). Replication now works, and I accidentally made a new (first) copy of the broken one (I forgot to change the VM in the backup job).

      posted in Backup
    • RE: Error: invalid HTTP header in response body

      @florent said in Error: invalid HTTP header in response body:

      @peo said in Error: invalid HTTP header in response body:

      can't connect through NBD, fallback to stream export

      "maybe it's the can't connect through NBD"

      Do you have a VM with a lot of disks? If yes, can you reduce the concurrency, or the number of NBD connections?

      It's independent of the number of disks attached to the VM and of the NBD concurrency.
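
      For completeness, this is roughly how NBD availability can be checked (and enabled) on the pool networks from a host CLI; the UUID below is a placeholder, and the per-disk NBD connection count itself is configured in the XO backup settings:

      # show which networks advertise NBD as a purpose
      xe network-list params=uuid,name-label,purpose
      # enable NBD on a specific network if none does (UUID is a placeholder)
      xe network-param-add param-name=purpose param-key=nbd uuid=<network-uuid>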

      posted in Backup
    • RE: Error: invalid HTTP header in response body

      @florent as described by myself and others in this thread, the error occurs only when "Purge snapshot data when using CBT" is enabled.
      As expected, it runs fine (every time) when "Use NBD+CBT if available" is disabled, which also disables "Purge snapshot data when using CBT".

      Jun 25 06:58:25 xoa xo-server[2661108]: 2025-06-25T10:58:25.998Z xo:backups:worker INFO starting backup
      Jun 25 06:58:26 xoa nfsrahead[2661133]: setting /run/xo-server/mounts/2ad70aa9-8f27-4353-8dde-5623f31cd49f readahead to 128
      Jun 25 06:59:03 xoa xo-server[2661108]: 2025-06-25T10:59:03.721Z @xen-orchestra/xapi/disks/Xapi WARN can't connect through NBD, fallback to stream export
      Jun 25 06:59:03 xoa xo-server[2661108]: 2025-06-25T10:59:03.798Z @xen-orchestra/xapi/disks/Xapi WARN can't connect through NBD, fallback to stream export
      Jun 25 06:59:03 xoa xo-server[2661108]: 2025-06-25T10:59:03.822Z @xen-orchestra/xapi/disks/Xapi WARN can't connect through NBD, fallback to stream export
      Jun 25 06:59:03 xoa xo-server[2661108]: 2025-06-25T10:59:03.952Z @xen-orchestra/xapi/disks/Xapi WARN can't connect through NBD, fallback to stream export
      Jun 25 06:59:04 xoa xo-server[2661108]: 2025-06-25T10:59:04.031Z @xen-orchestra/xapi/disks/Xapi WARN can't connect through NBD, fallback to stream export
      Jun 25 07:01:37 xoa xo-server[2661108]: 2025-06-25T11:01:37.465Z xo:backups:MixinBackupWriter WARN cleanVm: incorrect backup size in metadata {
      Jun 25 07:01:37 xoa xo-server[2661108]:   path: '/xo-vm-backups/30db3746-fecc-4b49-e7af-8f15d13d573c/20250625T105907Z.json',
      Jun 25 07:01:37 xoa xo-server[2661108]:   actual: 10166992896,
      Jun 25 07:01:37 xoa xo-server[2661108]:   expected: 10169530368
      Jun 25 07:01:37 xoa xo-server[2661108]: }
      Jun 25 07:01:37 xoa xo-server[2661108]: 2025-06-25T11:01:37.555Z xo:backups:worker INFO backup has ended
      Jun 25 07:01:37 xoa xo-server[2661108]: 2025-06-25T11:01:37.607Z xo:backups:worker INFO process will exit {
      Jun 25 07:01:37 xoa xo-server[2661108]:   duration: 191607947,
      Jun 25 07:01:37 xoa xo-server[2661108]:   exitCode: 0,
      Jun 25 07:01:37 xoa xo-server[2661108]:   resourceUsage: {
      Jun 25 07:01:37 xoa xo-server[2661108]:     userCPUTime: 122499805,
      Jun 25 07:01:37 xoa xo-server[2661108]:     systemCPUTime: 32534032,
      Jun 25 07:01:37 xoa xo-server[2661108]:     maxRSS: 125060,
      Jun 25 07:01:37 xoa xo-server[2661108]:     sharedMemorySize: 0,
      Jun 25 07:01:37 xoa xo-server[2661108]:     unsharedDataSize: 0,
      Jun 25 07:01:37 xoa xo-server[2661108]:     unsharedStackSize: 0,
      Jun 25 07:01:37 xoa xo-server[2661108]:     minorPageFault: 585389,
      Jun 25 07:01:37 xoa xo-server[2661108]:     majorPageFault: 0,
      Jun 25 07:01:37 xoa xo-server[2661108]:     swappedOut: 0,
      Jun 25 07:01:37 xoa xo-server[2661108]:     fsRead: 2056,
      Jun 25 07:01:37 xoa xo-server[2661108]:     fsWrite: 19863128,
      Jun 25 07:01:37 xoa xo-server[2661108]:     ipcSent: 0,
      Jun 25 07:01:37 xoa xo-server[2661108]:     ipcReceived: 0,
      Jun 25 07:01:37 xoa xo-server[2661108]:     signalsCount: 0,
      Jun 25 07:01:37 xoa xo-server[2661108]:     voluntaryContextSwitches: 112269,
      Jun 25 07:01:37 xoa xo-server[2661108]:     involuntaryContextSwitches: 90074
      Jun 25 07:01:37 xoa xo-server[2661108]:   },
      Jun 25 07:01:37 xoa xo-server[2661108]:   summary: { duration: '3m', cpuUsage: '81%', memoryUsage: '122.13 MiB' }
      Jun 25 07:01:37 xoa xo-server[2661108]: }
      
      Jun 25 07:01:58 xoa xo-server[2661382]: 2025-06-25T11:01:58.035Z xo:backups:worker INFO starting backup
      Jun 25 07:02:23 xoa xo-server[2661382]: 2025-06-25T11:02:23.856Z @xen-orchestra/xapi/disks/Xapi WARN openNbdCBT Error: can't connect to any nbd client
      Jun 25 07:02:23 xoa xo-server[2661382]:     at connectNbdClientIfPossible (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/xapi/disks/utils.mjs:23:19)
      Jun 25 07:02:23 xoa xo-server[2661382]:     at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
      Jun 25 07:02:23 xoa xo-server[2661382]:     at async XapiVhdCbtSource.init (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/xapi/disks/XapiVhdCbt.mjs:75:20)
      Jun 25 07:02:23 xoa xo-server[2661382]:     at async #openNbdCbt (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/xapi/disks/Xapi.mjs:129:7)
      Jun 25 07:02:23 xoa xo-server[2661382]:     at async XapiDiskSource.init (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/disk-transform/dist/DiskPassthrough.mjs:28:41)
      Jun 25 07:02:23 xoa xo-server[2661382]:     at async file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/backups/_incrementalVm.mjs:65:5
      Jun 25 07:02:23 xoa xo-server[2661382]:     at async Promise.all (index 0)
      Jun 25 07:02:23 xoa xo-server[2661382]:     at async cancelableMap (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/backups/_cancelableMap.mjs:11:12)
      Jun 25 07:02:23 xoa xo-server[2661382]:     at async exportIncrementalVm (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/backups/_incrementalVm.mjs:28:3)
      Jun 25 07:02:23 xoa xo-server[2661382]:     at async IncrementalXapiVmBackupRunner._copy (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/backups/_runners/_vmRunners/IncrementalXapi.mjs:38:25) {
      Jun 25 07:02:23 xoa xo-server[2661382]:   code: 'NO_NBD_AVAILABLE'
      Jun 25 07:02:23 xoa xo-server[2661382]: }
      Jun 25 07:02:27 xoa xo-server[2661382]: 2025-06-25T11:02:27.312Z xo:xapi:vdi WARN invalid HTTP header in response body {
      Jun 25 07:02:27 xoa xo-server[2661382]:   body: 'HTTP/1.1 500 Internal Error\r\n' +
      Jun 25 07:02:27 xoa xo-server[2661382]:     'content-length: 318\r\n' +
      Jun 25 07:02:27 xoa xo-server[2661382]:     'content-type: text/html\r\n' +
      Jun 25 07:02:27 xoa xo-server[2661382]:     'connection: close\r\n' +
      Jun 25 07:02:27 xoa xo-server[2661382]:     'cache-control: no-cache, no-store\r\n' +
      Jun 25 07:02:27 xoa xo-server[2661382]:     '\r\n' +
      Jun 25 07:02:27 xoa xo-server[2661382]:     '<html><body><h1>HTTP 500 internal server error</h1>An unexpected error occurred; please wait a while and try again. If the problem persists, please contact your support representative.<h1> Additional information </h1>VDI_INCOMPATIBLE_TYPE: [ OpaqueRef:31a2142e-c677-6c86-e916-0ac19ffbe40f; CBT metadata ]</body></html>'
      Jun 25 07:02:27 xoa xo-server[2661382]: }
      Jun 25 07:02:39 xoa xo-server[2661382]: 2025-06-25T11:02:39.117Z @xen-orchestra/xapi/disks/Xapi WARN openNbdCBT Error: can't connect to any nbd client
      Jun 25 07:02:39 xoa xo-server[2661382]:     at connectNbdClientIfPossible (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/xapi/disks/utils.mjs:23:19)
      Jun 25 07:02:39 xoa xo-server[2661382]:     at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
      Jun 25 07:02:39 xoa xo-server[2661382]:     at async XapiVhdCbtSource.init (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/xapi/disks/XapiVhdCbt.mjs:75:20)
      Jun 25 07:02:39 xoa xo-server[2661382]:     at async #openNbdCbt (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/xapi/disks/Xapi.mjs:129:7)
      Jun 25 07:02:39 xoa xo-server[2661382]:     at async XapiDiskSource.init (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/disk-transform/dist/DiskPassthrough.mjs:28:41)
      Jun 25 07:02:39 xoa xo-server[2661382]:     at async file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/backups/_incrementalVm.mjs:65:5
      Jun 25 07:02:39 xoa xo-server[2661382]:     at async Promise.all (index 3) {
      Jun 25 07:02:39 xoa xo-server[2661382]:   code: 'NO_NBD_AVAILABLE'
      Jun 25 07:02:39 xoa xo-server[2661382]: }
      Jun 25 07:02:39 xoa xo-server[2661382]: 2025-06-25T11:02:39.539Z @xen-orchestra/xapi/disks/Xapi WARN openNbdCBT Error: can't connect to any nbd client
      Jun 25 07:02:39 xoa xo-server[2661382]:     at connectNbdClientIfPossible (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/xapi/disks/utils.mjs:23:19)
      Jun 25 07:02:39 xoa xo-server[2661382]:     at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
      Jun 25 07:02:39 xoa xo-server[2661382]:     at async XapiVhdCbtSource.init (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/xapi/disks/XapiVhdCbt.mjs:75:20)
      Jun 25 07:02:39 xoa xo-server[2661382]:     at async #openNbdCbt (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/xapi/disks/Xapi.mjs:129:7)
      Jun 25 07:02:39 xoa xo-server[2661382]:     at async XapiDiskSource.init (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/disk-transform/dist/DiskPassthrough.mjs:28:41)
      Jun 25 07:02:39 xoa xo-server[2661382]:     at async file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/backups/_incrementalVm.mjs:65:5
      Jun 25 07:02:39 xoa xo-server[2661382]:     at async Promise.all (index 2) {
      Jun 25 07:02:39 xoa xo-server[2661382]:   code: 'NO_NBD_AVAILABLE'
      Jun 25 07:02:39 xoa xo-server[2661382]: }
      Jun 25 07:02:42 xoa xo-server[2661382]: 2025-06-25T11:02:42.588Z xo:xapi:vdi WARN invalid HTTP header in response body {
      Jun 25 07:02:42 xoa xo-server[2661382]:   body: 'HTTP/1.1 500 Internal Error\r\n' +
      Jun 25 07:02:42 xoa xo-server[2661382]:     'content-length: 318\r\n' +
      Jun 25 07:02:42 xoa xo-server[2661382]:     'content-type: text/html\r\n' +
      Jun 25 07:02:42 xoa xo-server[2661382]:     'connection: close\r\n' +
      Jun 25 07:02:42 xoa xo-server[2661382]:     'cache-control: no-cache, no-store\r\n' +
      Jun 25 07:02:42 xoa xo-server[2661382]:     '\r\n' +
      Jun 25 07:02:42 xoa xo-server[2661382]:     '<html><body><h1>HTTP 500 internal server error</h1>An unexpected error occurred; please wait a while and try again. If the problem persists, please contact your support representative.<h1> Additional information </h1>VDI_INCOMPATIBLE_TYPE: [ OpaqueRef:f0379a82-6fce-c6fa-a4c7-b7b6dcc5df26; CBT metadata ]</body></html>'
      Jun 25 07:02:42 xoa xo-server[2661382]: }
      Jun 25 07:02:42 xoa xo-server[2661382]: 2025-06-25T11:02:42.950Z xo:xapi:vdi WARN invalid HTTP header in response body {
      Jun 25 07:02:42 xoa xo-server[2661382]:   body: 'HTTP/1.1 500 Internal Error\r\n' +
      Jun 25 07:02:42 xoa xo-server[2661382]:     'content-length: 318\r\n' +
      Jun 25 07:02:42 xoa xo-server[2661382]:     'content-type: text/html\r\n' +
      Jun 25 07:02:42 xoa xo-server[2661382]:     'connection: close\r\n' +
      Jun 25 07:02:42 xoa xo-server[2661382]:     'cache-control: no-cache, no-store\r\n' +
      Jun 25 07:02:42 xoa xo-server[2661382]:     '\r\n' +
      Jun 25 07:02:42 xoa xo-server[2661382]:     '<html><body><h1>HTTP 500 internal server error</h1>An unexpected error occurred; please wait a while and try again. If the problem persists, please contact your support representative.<h1> Additional information </h1>VDI_INCOMPATIBLE_TYPE: [ OpaqueRef:85222592-ba8f-e189-8389-6cb4d8dd038b; CBT metadata ]</body></html>'
      Jun 25 07:02:42 xoa xo-server[2661382]: }
      Jun 25 07:02:43 xoa xo-server[2661382]: 2025-06-25T11:02:43.467Z @xen-orchestra/xapi/disks/Xapi WARN openNbdCBT XapiError: HANDLE_INVALID(VDI, OpaqueRef:0671251f-d1f0-2a16-53c8-125f2b357e0d)
      Jun 25 07:02:43 xoa xo-server[2661382]:     at XapiError.wrap (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/_XapiError.mjs:16:12)
      Jun 25 07:02:43 xoa xo-server[2661382]:     at default (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/_getTaskResult.mjs:13:29)
      Jun 25 07:02:43 xoa xo-server[2661382]:     at Xapi._addRecordToCache (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1072:24)
      Jun 25 07:02:43 xoa xo-server[2661382]:     at file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1106:14
      Jun 25 07:02:43 xoa xo-server[2661382]:     at Array.forEach (<anonymous>)
      Jun 25 07:02:43 xoa xo-server[2661382]:     at Xapi._processEvents (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1096:12)
      Jun 25 07:02:43 xoa xo-server[2661382]:     at Xapi._watchEvents (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1269:14)
      Jun 25 07:02:43 xoa xo-server[2661382]:     at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
      Jun 25 07:02:43 xoa xo-server[2661382]:   code: 'HANDLE_INVALID',
      Jun 25 07:02:43 xoa xo-server[2661382]:   params: [ 'VDI', 'OpaqueRef:0671251f-d1f0-2a16-53c8-125f2b357e0d' ],
      Jun 25 07:02:43 xoa xo-server[2661382]:   call: undefined,
      Jun 25 07:02:43 xoa xo-server[2661382]:   url: undefined,
      Jun 25 07:02:43 xoa xo-server[2661382]:   task: task {
      Jun 25 07:02:43 xoa xo-server[2661382]:     uuid: '19892e76-0681-defa-7d86-8adff4c519df',
      Jun 25 07:02:43 xoa xo-server[2661382]:     name_label: 'Async.VDI.list_changed_blocks',
      Jun 25 07:02:43 xoa xo-server[2661382]:     name_description: '',
      Jun 25 07:02:43 xoa xo-server[2661382]:     allowed_operations: [],
      Jun 25 07:02:43 xoa xo-server[2661382]:     current_operations: {},
      Jun 25 07:02:43 xoa xo-server[2661382]:     created: '20250625T11:02:23Z',
      Jun 25 07:02:43 xoa xo-server[2661382]:     finished: '20250625T11:02:43Z',
      Jun 25 07:02:43 xoa xo-server[2661382]:     status: 'failure',
      Jun 25 07:02:43 xoa xo-server[2661382]:     resident_on: 'OpaqueRef:38c38c49-d15f-e42a-7aca-ae093fca92c6',
      Jun 25 07:02:43 xoa xo-server[2661382]:     progress: 1,
      Jun 25 07:02:43 xoa xo-server[2661382]:     type: '<none/>',
      Jun 25 07:02:43 xoa xo-server[2661382]:     result: '',
      Jun 25 07:02:43 xoa xo-server[2661382]:     error_info: [
      Jun 25 07:02:43 xoa xo-server[2661382]:       'HANDLE_INVALID',
      Jun 25 07:02:43 xoa xo-server[2661382]:       'VDI',
      Jun 25 07:02:43 xoa xo-server[2661382]:       'OpaqueRef:0671251f-d1f0-2a16-53c8-125f2b357e0d'
      Jun 25 07:02:43 xoa xo-server[2661382]:     ],
      Jun 25 07:02:43 xoa xo-server[2661382]:     other_config: {},
      Jun 25 07:02:43 xoa xo-server[2661382]:     subtask_of: 'OpaqueRef:NULL',
      Jun 25 07:02:43 xoa xo-server[2661382]:     subtasks: [],
      Jun 25 07:02:43 xoa xo-server[2661382]:     backtrace: '(((process xapi)(filename ocaml/xapi-client/client.ml)(line 7))((process xapi)(filename ocaml/xapi-client/client.ml)(line 19))((process xapi)(filename ocaml/xapi-client/client.ml)(line 11643))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 144))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/rbac.ml)(line 188))((process xapi)(filename ocaml/xapi/rbac.ml)(line 197))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 77)))'
      Jun 25 07:02:43 xoa xo-server[2661382]:   }
      Jun 25 07:02:43 xoa xo-server[2661382]: }
      Jun 25 07:02:46 xoa xo-server[2661382]: 2025-06-25T11:02:46.867Z xo:xapi:vdi WARN invalid HTTP header in response body {
      Jun 25 07:02:46 xoa xo-server[2661382]:   body: 'HTTP/1.1 500 Internal Error\r\n' +
      Jun 25 07:02:46 xoa xo-server[2661382]:     'content-length: 346\r\n' +
      Jun 25 07:02:46 xoa xo-server[2661382]:     'content-type: text/html\r\n' +
      Jun 25 07:02:46 xoa xo-server[2661382]:     'connection: close\r\n' +
      Jun 25 07:02:46 xoa xo-server[2661382]:     'cache-control: no-cache, no-store\r\n' +
      Jun 25 07:02:46 xoa xo-server[2661382]:     '\r\n' +
      Jun 25 07:02:46 xoa xo-server[2661382]:     '<html><body><h1>HTTP 500 internal server error</h1>An unexpected error occurred; please wait a while and try again. If the problem persists, please contact your support representative.<h1> Additional information </h1>Db_exn.Read_missing_uuid(&quot;VDI&quot;, &quot;&quot;, &quot;OpaqueRef:0671251f-d1f0-2a16-53c8-125f2b357e0d&quot;)</body></html>'
      Jun 25 07:02:46 xoa xo-server[2661382]: }
      Jun 25 07:02:52 xoa xo-server[2661382]: 2025-06-25T11:02:52.120Z xo:backups:worker INFO backup has ended
      Jun 25 07:02:52 xoa xo-server[2661382]: 2025-06-25T11:02:52.133Z xo:backups:worker INFO process will exit {
      Jun 25 07:02:52 xoa xo-server[2661382]:   duration: 54097776,
      Jun 25 07:02:52 xoa xo-server[2661382]:   exitCode: 0,
      Jun 25 07:02:52 xoa xo-server[2661382]:   resourceUsage: {
      Jun 25 07:02:52 xoa xo-server[2661382]:     userCPUTime: 2370678,
      Jun 25 07:02:52 xoa xo-server[2661382]:     systemCPUTime: 266735,
      Jun 25 07:02:52 xoa xo-server[2661382]:     maxRSS: 37208,
      Jun 25 07:02:52 xoa xo-server[2661382]:     sharedMemorySize: 0,
      Jun 25 07:02:52 xoa xo-server[2661382]:     unsharedDataSize: 0,
      Jun 25 07:02:52 xoa xo-server[2661382]:     unsharedStackSize: 0,
      Jun 25 07:02:52 xoa xo-server[2661382]:     minorPageFault: 22126,
      Jun 25 07:02:52 xoa xo-server[2661382]:     majorPageFault: 0,
      Jun 25 07:02:52 xoa xo-server[2661382]:     swappedOut: 0,
      Jun 25 07:02:52 xoa xo-server[2661382]:     fsRead: 0,
      Jun 25 07:02:52 xoa xo-server[2661382]:     fsWrite: 0,
      Jun 25 07:02:52 xoa xo-server[2661382]:     ipcSent: 0,
      Jun 25 07:02:52 xoa xo-server[2661382]:     ipcReceived: 0,
      Jun 25 07:02:52 xoa xo-server[2661382]:     signalsCount: 0,
      Jun 25 07:02:52 xoa xo-server[2661382]:     voluntaryContextSwitches: 2163,
      Jun 25 07:02:52 xoa xo-server[2661382]:     involuntaryContextSwitches: 658
      Jun 25 07:02:52 xoa xo-server[2661382]:   },
      Jun 25 07:02:52 xoa xo-server[2661382]:   summary: { duration: '54s', cpuUsage: '5%', memoryUsage: '36.34 MiB' }
      Jun 25 07:02:52 xoa xo-server[2661382]: }
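
      In case it helps anyone hitting the same VDI_INCOMPATIBLE_TYPE / "CBT metadata" errors: the CBT state can be inspected and reset per VDI from a host CLI, roughly like this (UUIDs are placeholders; disabling CBT makes the next run fall back to a full export):

      # list VDIs that currently have CBT enabled
      xe vdi-list cbt-enabled=true params=uuid,name-label,sr-name-label
      # drop the (possibly stale) CBT metadata for one VDI
      xe vdi-disable-cbt uuid=<vdi-uuid>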
      
      
      posted in Backup