XCP-ng

    Posts by peo

    • RE: Error: invalid HTTP header in response body

      After updating to yesterday's master ("a348c", just before an update that is not relevant to this case), "continuous replication" jobs now also fail (they get stuck):

      e7786ff5-5b61-4e31-a813-f231289d0d26-image.png

      The jobs here are stuck at the transfer stage (both as delta, both to a local SSD on the destination host) since they started at 14:30 and 21:00 yesterday.

      Update:
      I have restarted the VM to be backed up (replicated), updated XO (source) and tried again. The running time so far, after some retries that were cancelled because I updated and restarted things, is more than 90 minutes. The disk on this VM is 27 GB, which should take at most 10 minutes to transfer even at only 50% efficiency over the gigabit connection to the other host.
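
      (A rough sanity check of that "at most 10 minutes" figure, assuming a raw gigabit link of about 125 MB/s:)

        27 GB                  ≈ 27 000 MB
        125 MB/s × 50%         ≈ 62.5 MB/s effective
        27 000 MB / 62.5 MB/s  ≈ 430 s ≈ just over 7 minutes
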
      The two other stuck jobs were also cancelled by the updates/reboots: one (21:00) was started by the scheduler in XO, the other was triggered through crontab (an xo-cli script I wrote) on the host with my XO installation. Now, while troubleshooting, I start the job manually from the overview, which, as can be seen, makes no difference.
      Next steps:
      I will change the destination to another host, in case the SSD in the host it currently tries to replicate to is broken (although I can list the files on it, and the timestamp of the vhd for this job is being updated all the time).

      Another thought:
      Is this maybe a new feature? A true "continuous replication" as a service which never stops?

      Imaginary problem solved:
      The replication job was still locked up after I changed the destination (I let it run for more than 5 hours), so I reverted to the replicated version of my XO VM (Deb12-XO) from before the update of XO to 'a348c' (now running '1a7b5' again). Replication now works, and I accidentally made a new (first) copy of the broken VM (I forgot to change the VM in the backup job).

      posted in Backup by peo
    • RE: Error: invalid HTTP header in response body

      @florent said in Error: invalid HTTP header in response body:

      @peo said in Error: invalid HTTP header in response body:

      can't connect through NBD, fallback to stream export

      "maybe it's the can't connect through NBD"

      Do you have a VM with a lot of disks? If yes, can you reduce the concurrency, or the number of NBD connections?

      It's independent of the number of disks attached to the VM and of the NBD concurrency.

      posted in Backup by peo
    • RE: Error: invalid HTTP header in response body

      @florent As described by myself and others in this thread, the error occurs only when "Purge snapshot data when using CBT" is enabled.
      As expected, it runs fine (every time) when "Use NBD+CBT if available" is disabled, which also disables "Purge snapshot data when using CBT".

      Jun 25 06:58:25 xoa xo-server[2661108]: 2025-06-25T10:58:25.998Z xo:backups:worker INFO starting backup
      Jun 25 06:58:26 xoa nfsrahead[2661133]: setting /run/xo-server/mounts/2ad70aa9-8f27-4353-8dde-5623f31cd49f readahead to 128
      Jun 25 06:59:03 xoa xo-server[2661108]: 2025-06-25T10:59:03.721Z @xen-orchestra/xapi/disks/Xapi WARN can't connect through NBD, fallback to stream export
      Jun 25 06:59:03 xoa xo-server[2661108]: 2025-06-25T10:59:03.798Z @xen-orchestra/xapi/disks/Xapi WARN can't connect through NBD, fallback to stream export
      Jun 25 06:59:03 xoa xo-server[2661108]: 2025-06-25T10:59:03.822Z @xen-orchestra/xapi/disks/Xapi WARN can't connect through NBD, fallback to stream export
      Jun 25 06:59:03 xoa xo-server[2661108]: 2025-06-25T10:59:03.952Z @xen-orchestra/xapi/disks/Xapi WARN can't connect through NBD, fallback to stream export
      Jun 25 06:59:04 xoa xo-server[2661108]: 2025-06-25T10:59:04.031Z @xen-orchestra/xapi/disks/Xapi WARN can't connect through NBD, fallback to stream export
      Jun 25 07:01:37 xoa xo-server[2661108]: 2025-06-25T11:01:37.465Z xo:backups:MixinBackupWriter WARN cleanVm: incorrect backup size in metadata {
      Jun 25 07:01:37 xoa xo-server[2661108]:   path: '/xo-vm-backups/30db3746-fecc-4b49-e7af-8f15d13d573c/20250625T105907Z.json',
      Jun 25 07:01:37 xoa xo-server[2661108]:   actual: 10166992896,
      Jun 25 07:01:37 xoa xo-server[2661108]:   expected: 10169530368
      Jun 25 07:01:37 xoa xo-server[2661108]: }
      Jun 25 07:01:37 xoa xo-server[2661108]: 2025-06-25T11:01:37.555Z xo:backups:worker INFO backup has ended
      Jun 25 07:01:37 xoa xo-server[2661108]: 2025-06-25T11:01:37.607Z xo:backups:worker INFO process will exit {
      Jun 25 07:01:37 xoa xo-server[2661108]:   duration: 191607947,
      Jun 25 07:01:37 xoa xo-server[2661108]:   exitCode: 0,
      Jun 25 07:01:37 xoa xo-server[2661108]:   resourceUsage: {
      Jun 25 07:01:37 xoa xo-server[2661108]:     userCPUTime: 122499805,
      Jun 25 07:01:37 xoa xo-server[2661108]:     systemCPUTime: 32534032,
      Jun 25 07:01:37 xoa xo-server[2661108]:     maxRSS: 125060,
      Jun 25 07:01:37 xoa xo-server[2661108]:     sharedMemorySize: 0,
      Jun 25 07:01:37 xoa xo-server[2661108]:     unsharedDataSize: 0,
      Jun 25 07:01:37 xoa xo-server[2661108]:     unsharedStackSize: 0,
      Jun 25 07:01:37 xoa xo-server[2661108]:     minorPageFault: 585389,
      Jun 25 07:01:37 xoa xo-server[2661108]:     majorPageFault: 0,
      Jun 25 07:01:37 xoa xo-server[2661108]:     swappedOut: 0,
      Jun 25 07:01:37 xoa xo-server[2661108]:     fsRead: 2056,
      Jun 25 07:01:37 xoa xo-server[2661108]:     fsWrite: 19863128,
      Jun 25 07:01:37 xoa xo-server[2661108]:     ipcSent: 0,
      Jun 25 07:01:37 xoa xo-server[2661108]:     ipcReceived: 0,
      Jun 25 07:01:37 xoa xo-server[2661108]:     signalsCount: 0,
      Jun 25 07:01:37 xoa xo-server[2661108]:     voluntaryContextSwitches: 112269,
      Jun 25 07:01:37 xoa xo-server[2661108]:     involuntaryContextSwitches: 90074
      Jun 25 07:01:37 xoa xo-server[2661108]:   },
      Jun 25 07:01:37 xoa xo-server[2661108]:   summary: { duration: '3m', cpuUsage: '81%', memoryUsage: '122.13 MiB' }
      Jun 25 07:01:37 xoa xo-server[2661108]: }
      
      Jun 25 07:01:58 xoa xo-server[2661382]: 2025-06-25T11:01:58.035Z xo:backups:worker INFO starting backup
      Jun 25 07:02:23 xoa xo-server[2661382]: 2025-06-25T11:02:23.856Z @xen-orchestra/xapi/disks/Xapi WARN openNbdCBT Error: can't connect to any nbd client
      Jun 25 07:02:23 xoa xo-server[2661382]:     at connectNbdClientIfPossible (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/xapi/disks/utils.mjs:23:19)
      Jun 25 07:02:23 xoa xo-server[2661382]:     at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
      Jun 25 07:02:23 xoa xo-server[2661382]:     at async XapiVhdCbtSource.init (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/xapi/disks/XapiVhdCbt.mjs:75:20)
      Jun 25 07:02:23 xoa xo-server[2661382]:     at async #openNbdCbt (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/xapi/disks/Xapi.mjs:129:7)
      Jun 25 07:02:23 xoa xo-server[2661382]:     at async XapiDiskSource.init (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/disk-transform/dist/DiskPassthrough.mjs:28:41)
      Jun 25 07:02:23 xoa xo-server[2661382]:     at async file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/backups/_incrementalVm.mjs:65:5
      Jun 25 07:02:23 xoa xo-server[2661382]:     at async Promise.all (index 0)
      Jun 25 07:02:23 xoa xo-server[2661382]:     at async cancelableMap (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/backups/_cancelableMap.mjs:11:12)
      Jun 25 07:02:23 xoa xo-server[2661382]:     at async exportIncrementalVm (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/backups/_incrementalVm.mjs:28:3)
      Jun 25 07:02:23 xoa xo-server[2661382]:     at async IncrementalXapiVmBackupRunner._copy (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/backups/_runners/_vmRunners/IncrementalXapi.mjs:38:25) {
      Jun 25 07:02:23 xoa xo-server[2661382]:   code: 'NO_NBD_AVAILABLE'
      Jun 25 07:02:23 xoa xo-server[2661382]: }
      Jun 25 07:02:27 xoa xo-server[2661382]: 2025-06-25T11:02:27.312Z xo:xapi:vdi WARN invalid HTTP header in response body {
      Jun 25 07:02:27 xoa xo-server[2661382]:   body: 'HTTP/1.1 500 Internal Error\r\n' +
      Jun 25 07:02:27 xoa xo-server[2661382]:     'content-length: 318\r\n' +
      Jun 25 07:02:27 xoa xo-server[2661382]:     'content-type: text/html\r\n' +
      Jun 25 07:02:27 xoa xo-server[2661382]:     'connection: close\r\n' +
      Jun 25 07:02:27 xoa xo-server[2661382]:     'cache-control: no-cache, no-store\r\n' +
      Jun 25 07:02:27 xoa xo-server[2661382]:     '\r\n' +
      Jun 25 07:02:27 xoa xo-server[2661382]:     '<html><body><h1>HTTP 500 internal server error</h1>An unexpected error occurred; please wait a while and try again. If the problem persists, please contact your support representative.<h1> Additional information </h1>VDI_INCOMPATIBLE_TYPE: [ OpaqueRef:31a2142e-c677-6c86-e916-0ac19ffbe40f; CBT metadata ]</body></html>'
      Jun 25 07:02:27 xoa xo-server[2661382]: }
      Jun 25 07:02:39 xoa xo-server[2661382]: 2025-06-25T11:02:39.117Z @xen-orchestra/xapi/disks/Xapi WARN openNbdCBT Error: can't connect to any nbd client
      Jun 25 07:02:39 xoa xo-server[2661382]:     at connectNbdClientIfPossible (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/xapi/disks/utils.mjs:23:19)
      Jun 25 07:02:39 xoa xo-server[2661382]:     at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
      Jun 25 07:02:39 xoa xo-server[2661382]:     at async XapiVhdCbtSource.init (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/xapi/disks/XapiVhdCbt.mjs:75:20)
      Jun 25 07:02:39 xoa xo-server[2661382]:     at async #openNbdCbt (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/xapi/disks/Xapi.mjs:129:7)
      Jun 25 07:02:39 xoa xo-server[2661382]:     at async XapiDiskSource.init (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/disk-transform/dist/DiskPassthrough.mjs:28:41)
      Jun 25 07:02:39 xoa xo-server[2661382]:     at async file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/backups/_incrementalVm.mjs:65:5
      Jun 25 07:02:39 xoa xo-server[2661382]:     at async Promise.all (index 3) {
      Jun 25 07:02:39 xoa xo-server[2661382]:   code: 'NO_NBD_AVAILABLE'
      Jun 25 07:02:39 xoa xo-server[2661382]: }
      Jun 25 07:02:39 xoa xo-server[2661382]: 2025-06-25T11:02:39.539Z @xen-orchestra/xapi/disks/Xapi WARN openNbdCBT Error: can't connect to any nbd client
      Jun 25 07:02:39 xoa xo-server[2661382]:     at connectNbdClientIfPossible (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/xapi/disks/utils.mjs:23:19)
      Jun 25 07:02:39 xoa xo-server[2661382]:     at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
      Jun 25 07:02:39 xoa xo-server[2661382]:     at async XapiVhdCbtSource.init (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/xapi/disks/XapiVhdCbt.mjs:75:20)
      Jun 25 07:02:39 xoa xo-server[2661382]:     at async #openNbdCbt (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/xapi/disks/Xapi.mjs:129:7)
      Jun 25 07:02:39 xoa xo-server[2661382]:     at async XapiDiskSource.init (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/disk-transform/dist/DiskPassthrough.mjs:28:41)
      Jun 25 07:02:39 xoa xo-server[2661382]:     at async file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/backups/_incrementalVm.mjs:65:5
      Jun 25 07:02:39 xoa xo-server[2661382]:     at async Promise.all (index 2) {
      Jun 25 07:02:39 xoa xo-server[2661382]:   code: 'NO_NBD_AVAILABLE'
      Jun 25 07:02:39 xoa xo-server[2661382]: }
      Jun 25 07:02:42 xoa xo-server[2661382]: 2025-06-25T11:02:42.588Z xo:xapi:vdi WARN invalid HTTP header in response body {
      Jun 25 07:02:42 xoa xo-server[2661382]:   body: 'HTTP/1.1 500 Internal Error\r\n' +
      Jun 25 07:02:42 xoa xo-server[2661382]:     'content-length: 318\r\n' +
      Jun 25 07:02:42 xoa xo-server[2661382]:     'content-type: text/html\r\n' +
      Jun 25 07:02:42 xoa xo-server[2661382]:     'connection: close\r\n' +
      Jun 25 07:02:42 xoa xo-server[2661382]:     'cache-control: no-cache, no-store\r\n' +
      Jun 25 07:02:42 xoa xo-server[2661382]:     '\r\n' +
      Jun 25 07:02:42 xoa xo-server[2661382]:     '<html><body><h1>HTTP 500 internal server error</h1>An unexpected error occurred; please wait a while and try again. If the problem persists, please contact your support representative.<h1> Additional information </h1>VDI_INCOMPATIBLE_TYPE: [ OpaqueRef:f0379a82-6fce-c6fa-a4c7-b7b6dcc5df26; CBT metadata ]</body></html>'
      Jun 25 07:02:42 xoa xo-server[2661382]: }
      Jun 25 07:02:42 xoa xo-server[2661382]: 2025-06-25T11:02:42.950Z xo:xapi:vdi WARN invalid HTTP header in response body {
      Jun 25 07:02:42 xoa xo-server[2661382]:   body: 'HTTP/1.1 500 Internal Error\r\n' +
      Jun 25 07:02:42 xoa xo-server[2661382]:     'content-length: 318\r\n' +
      Jun 25 07:02:42 xoa xo-server[2661382]:     'content-type: text/html\r\n' +
      Jun 25 07:02:42 xoa xo-server[2661382]:     'connection: close\r\n' +
      Jun 25 07:02:42 xoa xo-server[2661382]:     'cache-control: no-cache, no-store\r\n' +
      Jun 25 07:02:42 xoa xo-server[2661382]:     '\r\n' +
      Jun 25 07:02:42 xoa xo-server[2661382]:     '<html><body><h1>HTTP 500 internal server error</h1>An unexpected error occurred; please wait a while and try again. If the problem persists, please contact your support representative.<h1> Additional information </h1>VDI_INCOMPATIBLE_TYPE: [ OpaqueRef:85222592-ba8f-e189-8389-6cb4d8dd038b; CBT metadata ]</body></html>'
      Jun 25 07:02:42 xoa xo-server[2661382]: }
      Jun 25 07:02:43 xoa xo-server[2661382]: 2025-06-25T11:02:43.467Z @xen-orchestra/xapi/disks/Xapi WARN openNbdCBT XapiError: HANDLE_INVALID(VDI, OpaqueRef:0671251f-d1f0-2a16-53c8-125f2b357e0d)
      Jun 25 07:02:43 xoa xo-server[2661382]:     at XapiError.wrap (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/_XapiError.mjs:16:12)
      Jun 25 07:02:43 xoa xo-server[2661382]:     at default (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/_getTaskResult.mjs:13:29)
      Jun 25 07:02:43 xoa xo-server[2661382]:     at Xapi._addRecordToCache (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1072:24)
      Jun 25 07:02:43 xoa xo-server[2661382]:     at file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1106:14
      Jun 25 07:02:43 xoa xo-server[2661382]:     at Array.forEach (<anonymous>)
      Jun 25 07:02:43 xoa xo-server[2661382]:     at Xapi._processEvents (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1096:12)
      Jun 25 07:02:43 xoa xo-server[2661382]:     at Xapi._watchEvents (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1269:14)
      Jun 25 07:02:43 xoa xo-server[2661382]:     at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
      Jun 25 07:02:43 xoa xo-server[2661382]:   code: 'HANDLE_INVALID',
      Jun 25 07:02:43 xoa xo-server[2661382]:   params: [ 'VDI', 'OpaqueRef:0671251f-d1f0-2a16-53c8-125f2b357e0d' ],
      Jun 25 07:02:43 xoa xo-server[2661382]:   call: undefined,
      Jun 25 07:02:43 xoa xo-server[2661382]:   url: undefined,
      Jun 25 07:02:43 xoa xo-server[2661382]:   task: task {
      Jun 25 07:02:43 xoa xo-server[2661382]:     uuid: '19892e76-0681-defa-7d86-8adff4c519df',
      Jun 25 07:02:43 xoa xo-server[2661382]:     name_label: 'Async.VDI.list_changed_blocks',
      Jun 25 07:02:43 xoa xo-server[2661382]:     name_description: '',
      Jun 25 07:02:43 xoa xo-server[2661382]:     allowed_operations: [],
      Jun 25 07:02:43 xoa xo-server[2661382]:     current_operations: {},
      Jun 25 07:02:43 xoa xo-server[2661382]:     created: '20250625T11:02:23Z',
      Jun 25 07:02:43 xoa xo-server[2661382]:     finished: '20250625T11:02:43Z',
      Jun 25 07:02:43 xoa xo-server[2661382]:     status: 'failure',
      Jun 25 07:02:43 xoa xo-server[2661382]:     resident_on: 'OpaqueRef:38c38c49-d15f-e42a-7aca-ae093fca92c6',
      Jun 25 07:02:43 xoa xo-server[2661382]:     progress: 1,
      Jun 25 07:02:43 xoa xo-server[2661382]:     type: '<none/>',
      Jun 25 07:02:43 xoa xo-server[2661382]:     result: '',
      Jun 25 07:02:43 xoa xo-server[2661382]:     error_info: [
      Jun 25 07:02:43 xoa xo-server[2661382]:       'HANDLE_INVALID',
      Jun 25 07:02:43 xoa xo-server[2661382]:       'VDI',
      Jun 25 07:02:43 xoa xo-server[2661382]:       'OpaqueRef:0671251f-d1f0-2a16-53c8-125f2b357e0d'
      Jun 25 07:02:43 xoa xo-server[2661382]:     ],
      Jun 25 07:02:43 xoa xo-server[2661382]:     other_config: {},
      Jun 25 07:02:43 xoa xo-server[2661382]:     subtask_of: 'OpaqueRef:NULL',
      Jun 25 07:02:43 xoa xo-server[2661382]:     subtasks: [],
      Jun 25 07:02:43 xoa xo-server[2661382]:     backtrace: '(((process xapi)(filename ocaml/xapi-client/client.ml)(line 7))((process xapi)(filename ocaml/xapi-client/client.ml)(line 19))((process xapi)(filename ocaml/xapi-client/client.ml)(line 11643))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 144))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/rbac.ml)(line 188))((process xapi)(filename ocaml/xapi/rbac.ml)(line 197))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 77)))'
      Jun 25 07:02:43 xoa xo-server[2661382]:   }
      Jun 25 07:02:43 xoa xo-server[2661382]: }
      Jun 25 07:02:46 xoa xo-server[2661382]: 2025-06-25T11:02:46.867Z xo:xapi:vdi WARN invalid HTTP header in response body {
      Jun 25 07:02:46 xoa xo-server[2661382]:   body: 'HTTP/1.1 500 Internal Error\r\n' +
      Jun 25 07:02:46 xoa xo-server[2661382]:     'content-length: 346\r\n' +
      Jun 25 07:02:46 xoa xo-server[2661382]:     'content-type: text/html\r\n' +
      Jun 25 07:02:46 xoa xo-server[2661382]:     'connection: close\r\n' +
      Jun 25 07:02:46 xoa xo-server[2661382]:     'cache-control: no-cache, no-store\r\n' +
      Jun 25 07:02:46 xoa xo-server[2661382]:     '\r\n' +
      Jun 25 07:02:46 xoa xo-server[2661382]:     '<html><body><h1>HTTP 500 internal server error</h1>An unexpected error occurred; please wait a while and try again. If the problem persists, please contact your support representative.<h1> Additional information </h1>Db_exn.Read_missing_uuid(&quot;VDI&quot;, &quot;&quot;, &quot;OpaqueRef:0671251f-d1f0-2a16-53c8-125f2b357e0d&quot;)</body></html>'
      Jun 25 07:02:46 xoa xo-server[2661382]: }
      Jun 25 07:02:52 xoa xo-server[2661382]: 2025-06-25T11:02:52.120Z xo:backups:worker INFO backup has ended
      Jun 25 07:02:52 xoa xo-server[2661382]: 2025-06-25T11:02:52.133Z xo:backups:worker INFO process will exit {
      Jun 25 07:02:52 xoa xo-server[2661382]:   duration: 54097776,
      Jun 25 07:02:52 xoa xo-server[2661382]:   exitCode: 0,
      Jun 25 07:02:52 xoa xo-server[2661382]:   resourceUsage: {
      Jun 25 07:02:52 xoa xo-server[2661382]:     userCPUTime: 2370678,
      Jun 25 07:02:52 xoa xo-server[2661382]:     systemCPUTime: 266735,
      Jun 25 07:02:52 xoa xo-server[2661382]:     maxRSS: 37208,
      Jun 25 07:02:52 xoa xo-server[2661382]:     sharedMemorySize: 0,
      Jun 25 07:02:52 xoa xo-server[2661382]:     unsharedDataSize: 0,
      Jun 25 07:02:52 xoa xo-server[2661382]:     unsharedStackSize: 0,
      Jun 25 07:02:52 xoa xo-server[2661382]:     minorPageFault: 22126,
      Jun 25 07:02:52 xoa xo-server[2661382]:     majorPageFault: 0,
      Jun 25 07:02:52 xoa xo-server[2661382]:     swappedOut: 0,
      Jun 25 07:02:52 xoa xo-server[2661382]:     fsRead: 0,
      Jun 25 07:02:52 xoa xo-server[2661382]:     fsWrite: 0,
      Jun 25 07:02:52 xoa xo-server[2661382]:     ipcSent: 0,
      Jun 25 07:02:52 xoa xo-server[2661382]:     ipcReceived: 0,
      Jun 25 07:02:52 xoa xo-server[2661382]:     signalsCount: 0,
      Jun 25 07:02:52 xoa xo-server[2661382]:     voluntaryContextSwitches: 2163,
      Jun 25 07:02:52 xoa xo-server[2661382]:     involuntaryContextSwitches: 658
      Jun 25 07:02:52 xoa xo-server[2661382]:   },
      Jun 25 07:02:52 xoa xo-server[2661382]:   summary: { duration: '54s', cpuUsage: '5%', memoryUsage: '36.34 MiB' }
      Jun 25 07:02:52 xoa xo-server[2661382]: }
      
      
      posted in Backup by peo
    • RE: Error: invalid HTTP header in response body

      @olivierlambert It is also the same in XOA; I switched the "release channel" to "latest" before updating:

      The initial backup succeeds, the second one fails, and the third one succeeds (but is transferred as "full" again, like the first one).

      2232da57-c151-443e-8033-6aa0dbb43f25-image.png

      posted in Backup by peo
    • RE: Error: invalid HTTP header in response body

      @olivierlambert I just updated to ('a348ce07d') and the problem remains:
      (this backup job was set up just as a test yesterday: only a local VDI in the VM, and backups stored to the NAS over NFS)
      30edb031-84c9-4d20-808c-2ffaeda91861-image.png

      As before, it happens on every second manual run of the job (when the last line of the error says "Type: Delta")

      Also, there is a slight UI problem, but that is not present (or at least was not yesterday) in the real version.

      posted in Backup by peo
    • RE: Error: invalid HTTP header in response body

      @FritzGerald As I replied, most of my VMs have at least two virtual disks (also mixed locations: some on local NVMe, some on local SSD and some on the NAS over NFS).

      I would say this is not a reportable issue until you have the backups running again (without deletion of the snapshots). When you activate the snapshot deletion again, the problem will appear on every second run of a backup job (the first run is "full" and succeeds, the second will be "delta" and fails, the third will again be "full" and succeed). This is (or at least was for me) independent of the number of disks attached per VM.

      posted in Backup by peo
    • RE: Error: invalid HTTP header in response body

      @FritzGerald I have not had any problems like this since I disabled the deletion of the in-between-backup snapshots. I have, for example, a couple of machines with 50 GB+ disks (one with a 100+ GB disk, mostly unused now, so the snapshot between backups takes less than a MB).
      All backups were failing (most of my VMs have more than one disk, a trick I use to lock them to a specific host) until I disabled the deletion of the snapshot. Not all at once, but more and more of them until all of them failed.

      I still have the other "imaginary problem" with my Docker VM (but that is a completely different problem which has not yet been acknowledged: backups "fail" but I am able to restore them to a fully working new VM).

      posted in Backup by peo
    • RE: Error: invalid HTTP header in response body

      @FritzGerald The snapshots (left on the disk until the next backup) will only consist of the differences between the previous backup and the current one.
      BUT: when you do the first backup of a machine, the snapshot will use the full (used) size of each disk attached to the machine (this might be what happened on your first attempt).

      If you have the space for it, just do one backup at a time with snapshot deletion disabled, then do another one when it is finished. The snapshots will then be reduced to only the difference between the first and second backup.

      posted in Backup by peo
    • RE: Error: invalid HTTP header in response body

      @FritzGerald It has nothing to do with the number of disks attached to the VM. It just fails every second time:
      https://xcp-ng.org/forum/post/93508

      The "solution" (until there is a real solution) is in the reply below the linked one: turn off "Purge snapshot data when using CBT" under advanced settings for all backup jobs.

      posted in Backup by peo
    • RE: Error: invalid HTTP header in response body

      @manilx said in Error: invalid HTTP header in response body:

      @peo I don't have this setting set. The errors appear inconsistently.

      So you confirmed my "solution" to the problem, even though you did not have that setting enabled when I suggested turning it off.

      It's good to see that more people have this problem and that it gets resolved with the same "solution". I was starting to think I had imagined the problems, week after week, before I first reported them.

      posted in Backup by peo
    • RE: Error: invalid HTTP header in response body

      @olivierlambert Have you heard of any progress in this case? I found that I got rid of the error when I disabled "Purge snapshot data when using CBT" on all my backups, so the backups are no longer failing, except for the Docker VM I have mentioned in another thread ("Backup success or failure ?"), which reports failure but restores successfully (which is weird).

      posted in Backup by peo
    • RE: XO - Files Restore

      @lsouai-vates Is this a description of the cause of the problem? From what you describe, if one selects a backup containing an LVM partition scheme, it should be possible to mount it at least the first time?
      As seen here: I first verified that this machine is using LVM, and when selecting it, I immediately selected the large LVM partition to try to restore a file from it (which failed).
      e7148e43-aa01-433d-8af4-a936e30550e6-image.png
      The machine (Debian 12) running XO does not itself use LVM, so "ubuntu-vg" should be free to mount this first time.

      posted in Backup by peo
    • RE: Error: invalid HTTP header in response body

      @manilx As you describe it, it might be a problem with the remote NFS storage, but still, it would be nice if that error message could be a bit clearer.

      posted in Backup by peo
    • RE: Error: invalid HTTP header in response body

      @manilx Does your backup job succeed if you run it manually after a failure (as I have figured out is happening for me)?
      I did some more tests and found that the setting "Purge snapshot data when using CBT" under the advanced backup settings might be the cause of the problem. The downside is that with this setting disabled, every machine being backed up will keep a space-wasting snapshot in the same storage repository as the snapshotted disk.

      posted in Backup by peo
    • RE: Error: invalid HTTP header in response body

      I can also add that my specific backup ("Admin Ubuntu 24") is not part of a sequence; it is a separate "Continuous replication" job (it failed again today).

      I will try this now: disable the schedule in XO, and schedule it using cron (and my script calling xo-cli) on the machine running XO.
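
      For reference, a minimal sketch of what that script boils down to (assuming the backupNg.runJob API method and these parameter names; verify with "xo-cli list-commands" on your build, and xo-cli must already be registered against the XO instance). The IDs are the ones from the "Admin Ubuntu 24" backup log further down:

        #!/usr/bin/env node
        // run-backup.mjs -- hypothetical wrapper: trigger one XO backup job via xo-cli
        import { execFile } from 'node:child_process'
        import { promisify } from 'node:util'

        const run = promisify(execFile)

        // job and schedule IDs as they appear in the "Admin Ubuntu 24" backup log
        const jobId = '0bb53ced-4d52-40a9-8b14-7cd1fa2b30fe'
        const scheduleId = '69a05a67-c43b-4d23-b1e8-ada77c70ccc4'

        // assumption: backupNg.runJob takes id= and schedule=, like the XO API method
        const { stdout, stderr } = await run('xo-cli', [
          'backupNg.runJob',
          `id=${jobId}`,
          `schedule=${scheduleId}`,
        ])
        console.log(stdout || stderr)

      A crontab entry in the usual cron syntax (for example "15 3 * * 3 node /root/run-backup.mjs", the path being just an example) then fires it at the time the XO schedule used to.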

      Having now tested by initiating the backup job through crontab, I have found that it fails every second time. The difference in the backup logs between success and failure is the "Type" at the end of the result display visible in XO: "Delta" fails and "Full" succeeds.
      I will not clutter up the thread with more details unless asked for.

      posted in Backup by peo
    • RE: Error: invalid HTTP header in response body

      One of the latest failures of this kind ruled out the remote as a possible source for the failure:
      The virtual machine is being replicated every week to another host, so the storage is local storage on that host.
      3d076dc3-9de4-4bb6-a3ac-5fa586b7f271-image.png

      As can (almost) be seen in the log, the XO version used was 'c5a268277' (before the update I did today).

      Backup log (ask for other details if interested in investigating this further):

      {
        "data": {
          "mode": "delta",
          "reportWhen": "failure"
        },
        "id": "1748477700007",
        "jobId": "0bb53ced-4d52-40a9-8b14-7cd1fa2b30fe",
        "jobName": "Admin Ubuntu 24",
        "message": "backup",
        "scheduleId": "69a05a67-c43b-4d23-b1e8-ada77c70ccc4",
        "start": 1748477700007,
        "status": "failure",
        "infos": [
          {
            "data": {
              "vms": [
                "1728e876-5644-2169-6c62-c764bd8b6bdf"
              ]
            },
            "message": "vms"
          }
        ],
        "tasks": [
          {
            "data": {
              "type": "VM",
              "id": "1728e876-5644-2169-6c62-c764bd8b6bdf",
              "name_label": "Admin Ubuntu 24"
            },
            "id": "1748477701623",
            "message": "backup VM",
            "start": 1748477701623,
            "status": "failure",
            "tasks": [
              {
                "id": "1748477702092",
                "message": "snapshot",
                "start": 1748477702092,
                "status": "success",
                "end": 1748477703786,
                "result": "729ae454-1570-2838-5449-0271113ee53e"
              },
              {
                "data": {
                  "id": "46f9b5ee-c937-ff71-29b1-520ba0546675",
                  "isFull": false,
                  "name_label": "Local h2 SSD",
                  "type": "SR"
                },
                "id": "1748477703786:0",
                "message": "export",
                "start": 1748477703786,
                "status": "interrupted"
              }
            ],
            "infos": [
              {
                "message": "will delete snapshot data"
              },
              {
                "data": {
                  "vdiRef": "OpaqueRef:299ed677-2b2e-4e28-4300-3f7d053be0ac"
                },
                "message": "Snapshot data has been deleted"
              }
            ],
            "end": 1748477707592,
            "result": {
              "message": "invalid HTTP header in response body",
              "name": "Error",
              "stack": "Error: invalid HTTP header in response body\n    at checkVdiExport (file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/xapi/vdi.mjs:37:19)\n    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n    at async Xapi.exportContent (file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/xapi/vdi.mjs:244:5)\n    at async #getExportStream (file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/xapi/disks/XapiVhdStreamSource.mjs:123:20)\n    at async XapiVhdStreamNbdSource.init (file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/xapi/disks/XapiVhdStreamSource.mjs:135:23)\n    at async XapiVhdStreamNbdSource.init (file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/xapi/disks/XapiVhdStreamNbd.mjs:31:5)\n    at async #openNbdStream (file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/xapi/disks/Xapi.mjs:69:7)\n    at async XapiDiskSource.init (file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/disk-transform/dist/DiskPassthrough.mjs:28:41)\n    at async file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/backups/_incrementalVm.mjs:65:5\n    at async Promise.all (index 1)"
            }
          }
        ],
        "end": 1748477707593
      }
      
      posted in Backup by peo
    • Backup success or failure ?

      I'm continuing to investigate a persistent (but random) backup issue I've encountered during my use of xcp-ng. I've previously raised this as part of other backup-related threads. The backup reports for some of my jobs continue to indicate a failure (specifically for "overall status" and "remote status"), even though the backup transfer itself appears to complete successfully.

      74b028a4-d5ed-49be-8cad-93d83e1c8886-image.png

      The email backup report:
      20e4158c-01a8-4a94-bad3-2c99663f7be9-image.png
      bcb18554-3539-4086-a0ad-610760ea0012-image.png

      When testing the remote from the settings, it is working without any problem.
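
      Since the clean-vm failure in the log below comes from @aws-sdk/client-s3, another way to poke the same remote from outside XO is a few lines against that SDK directly (a sketch; the endpoint, bucket and credentials are placeholders for whatever the remote is configured with):

        // check-remote.mjs -- list a few objects on the S3 remote XO writes to
        import { S3Client, ListObjectsV2Command } from '@aws-sdk/client-s3'

        const client = new S3Client({
          endpoint: 'https://s3.example.lan',  // placeholder: the remote's S3 endpoint
          region: 'us-east-1',                 // most S3-compatible servers accept any region
          forcePathStyle: true,
          credentials: {
            accessKeyId: 'xxxx',               // placeholders
            secretAccessKey: 'xxxx',
          },
        })

        const res = await client.send(new ListObjectsV2Command({
          Bucket: 'backups',                   // placeholder bucket name
          Prefix: 'xo-vm-backups/',            // XO's backup directory layout
          MaxKeys: 10,
        }))
        console.log((res.Contents ?? []).map(o => `${o.Key} (${o.Size} bytes)`))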

      To provide more context, I recently performed a restore health check and a full restore to a new VM after the original machine received an OS and Docker update. This involved reverting the machine back to its pre-update state. This restore process was successful. Notably, when I used xo-cli to re-run the backup on this machine (from a sequence of 6 machines, all of which failed with "Error: invalid HTTP header in response body"), it was the only one reporting this kind of error.

      I’ve attached the full JSON log for this job.

      {
        "data": {
          "mode": "delta",
          "reportWhen": "failure"
        },
        "id": "1748358247453",
        "jobId": "b9142eba-4954-414a-bbc0-180b071a6490",
        "jobName": "Docker",
        "message": "backup",
        "scheduleId": "148eb0e1-1a22-4966-9143-ce53849875de",
        "start": 1748358247453,
        "status": "failure",
        "infos": [
          {
            "data": {
              "vms": [
                "2af0192f-3cb3-daa2-33e6-ec8f18b0cfb1"
              ]
            },
            "message": "vms"
          }
        ],
        "tasks": [
          {
            "data": {
              "type": "VM",
              "id": "2af0192f-3cb3-daa2-33e6-ec8f18b0cfb1",
              "name_label": "Docker"
            },
            "id": "1748358250243",
            "message": "backup VM",
            "start": 1748358250243,
            "status": "failure",
            "tasks": [
              {
                "id": "1748358250247",
                "message": "clean-vm",
                "start": 1748358250247,
                "status": "success",
                "end": 1748358253304,
                "result": {
                  "merge": false
                }
              },
              {
                "id": "1748358253543",
                "message": "snapshot",
                "start": 1748358253543,
                "status": "success",
                "end": 1748358263363,
                "result": "f938e63e-3d5c-517c-f487-a28e2d93d44f"
              },
              {
                "data": {
                  "id": "2b919467-704c-4e35-bac9-2d6a43118bda",
                  "isFull": true,
                  "type": "remote"
                },
                "id": "1748358263364",
                "message": "export",
                "start": 1748358263364,
                "status": "failure",
                "tasks": [
                  {
                    "id": "1748358268282",
                    "message": "transfer",
                    "start": 1748358268282,
                    "status": "success",
                    "end": 1748358860438,
                    "result": {
                      "size": 38249955328
                    }
                  },
                  {
                    "id": "1748358884098",
                    "message": "clean-vm",
                    "start": 1748358884098,
                    "status": "failure",
                    "end": 1748358895353,
                    "result": {
                      "name": "InternalError",
                      "$fault": "client",
                      "$metadata": {
                        "httpStatusCode": 500,
                        "requestId": "074282D2C6F6946B",
                        "extendedRequestId": "MDc0MjgyRDJDNkY2OTQ2QjA3NDI4MkQyQzZGNjk0NkIwNzQyODJEMkM2RjY5NDZCMDc0MjgyRDJDNkY2OTQ2Qg==",
                        "attempts": 3,
                        "totalRetryDelay": 89
                      },
                      "Code": "InternalError",
                      "message": "Internal Error",
                      "stack": "InternalError: Internal Error\n    at throwDefaultError (/opt/xo/xo-builds/xen-orchestra-202505261454/node_modules/@smithy/smithy-client/dist-cjs/index.js:867:20)\n    at /opt/xo/xo-builds/xen-orchestra-202505261454/node_modules/@smithy/smithy-client/dist-cjs/index.js:876:5\n    at de_CommandError (/opt/xo/xo-builds/xen-orchestra-202505261454/node_modules/@aws-sdk/client-s3/dist-cjs/index.js:4952:14)\n    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n    at async /opt/xo/xo-builds/xen-orchestra-202505261454/node_modules/@smithy/middleware-serde/dist-cjs/index.js:35:20\n    at async /opt/xo/xo-builds/xen-orchestra-202505261454/node_modules/@aws-sdk/middleware-sdk-s3/dist-cjs/index.js:484:18\n    at async /opt/xo/xo-builds/xen-orchestra-202505261454/node_modules/@smithy/middleware-retry/dist-cjs/index.js:320:38\n    at async /opt/xo/xo-builds/xen-orchestra-202505261454/node_modules/@aws-sdk/middleware-sdk-s3/dist-cjs/index.js:110:22\n    at async /opt/xo/xo-builds/xen-orchestra-202505261454/node_modules/@aws-sdk/middleware-sdk-s3/dist-cjs/index.js:137:14\n    at async /opt/xo/xo-builds/xen-orchestra-202505261454/node_modules/@aws-sdk/middleware-logger/dist-cjs/index.js:33:22"
                    }
                  }
                ],
                "end": 1748358895354
              }
            ],
            "infos": [
              {
                "message": "will delete snapshot data"
              },
              {
                "data": {
                  "vdiRef": "OpaqueRef:f5151a82-9e27-e821-fb17-067c7c1de856"
                },
                "message": "Snapshot data has been deleted"
              },
              {
                "data": {
                  "vdiRef": "OpaqueRef:d8613b53-fe2b-014a-da5b-db943ec55b67"
                },
                "message": "Snapshot data has been deleted"
              }
            ],
            "end": 1748358895355
          }
        ],
        "end": 1748358895355
      }
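
      (To see at a glance where a log like the one above actually broke, here is a small sketch that walks the per-task statuses of a saved JSON log:)

        // tasks.mjs -- print message/status for every (sub)task of a backup log
        // usage: node tasks.mjs backup-log.json
        import { readFileSync } from 'node:fs'

        const log = JSON.parse(readFileSync(process.argv[2], 'utf8'))

        const walk = (task, depth = 0) => {
          console.log(`${'  '.repeat(depth)}${task.message ?? 'job'}: ${task.status}`)
          for (const sub of task.tasks ?? []) walk(sub, depth + 1)
        }
        walk(log)

      For the log above it shows the transfer as "success" and only the final clean-vm pass as "failure", which matches why the VM still restores fine even though the report says failure.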
      
      posted in Backup by peo
    • RE: XO - Files Restore

      It is great that this bug/problem is being confirmed by others. Reproducing it is as simple as creating a new Linux VM (using only the defaults when installing Linux), backing it up, then trying to restore files.
      Restoring single files is a feature that is needed at least in production environments (anything outside the "home lab"). Personally, I have no problem with waiting an hour or two for a full restore to a temporary VM to be able to access a file deleted or modified by mistake.
      There are people willing to help find and pinpoint problems like this, but having to use XOA to get attention to a problem, given that it requires a license beyond the free trial month, makes it less appealing for us to spend our free time helping with this.

      posted in Backup by peo
    • RE: Error: invalid HTTP header in response body

      @olivierlambert Yes, I understand that you need a bit more details about the configuration to be able to find out what's causing the problems.

      I'm using XO for now, the latest version as of yesterday (I try updating on each of the days I get these recurring failures), so I'm just 3 commits behind (Xen Orchestra, commit c5a26).

      Tonight (as expected) 6 machines got that same backup failure. I'm now running one of them manually with a script I wrote, and it looks good so far.
      I asked for more debug options to enable, to be able to catch these errors, but did not get any reply in time for tonight's failure. I don't know how much the short backup logs might help (there is actually nothing useful in them other than the mentioned error).

      The next failure is scheduled for 03:15 tonight (15 3 * * 3)

      This is the configuration you might find useful:
      Sequence (3c05) "Special-task machines" (15 1 * * 2)
      Unnamed schedule (Docker)
      Unnamed schedule (HomeAssistant)
      weekly tuesday (TrueNAS)
      Unnamed schedule (Win10 22H2 (XCP-VM-BU1))
      Unnamed schedule (pfSense)
      Unnamed schedule (unifi)

      The schedules included in the sequence are disabled in the backup job setup.

      Backup log for job "Docker"

      {
        "data": {
          "mode": "delta",
          "reportWhen": "failure"
        },
        "id": "1748301300016",
        "jobId": "b9142eba-4954-414a-bbc0-180b071a6490",
        "jobName": "Docker",
        "message": "backup",
        "scheduleId": "148eb0e1-1a22-4966-9143-ce53849875de",
        "start": 1748301300016,
        "status": "failure",
        "infos": [
          {
            "data": {
              "vms": [
                "2af0192f-3cb3-daa2-33e6-ec8f18b0cfb1"
              ]
            },
            "message": "vms"
          }
        ],
        "tasks": [
          {
            "data": {
              "type": "VM",
              "id": "2af0192f-3cb3-daa2-33e6-ec8f18b0cfb1",
              "name_label": "Docker"
            },
            "id": "1748301324573",
            "message": "backup VM",
            "start": 1748301324573,
            "status": "failure",
            "tasks": [
              {
                "id": "1748301324578",
                "message": "clean-vm",
                "start": 1748301324578,
                "status": "success",
                "end": 1748301326826,
                "result": {
                  "merge": false
                }
              },
              {
                "id": "1748301327442",
                "message": "snapshot",
                "start": 1748301327442,
                "status": "success",
                "end": 1748301336945,
                "result": "2bfe93f6-4c49-2756-2e27-f620946392ab"
              },
              {
                "data": {
                  "id": "2b919467-704c-4e35-bac9-2d6a43118bda",
                  "isFull": false,
                  "type": "remote"
                },
                "id": "1748301336945:0",
                "message": "export",
                "start": 1748301336945,
                "status": "success",
                "tasks": [
                  {
                    "id": "1748301360976",
                    "message": "clean-vm",
                    "start": 1748301360976,
                    "status": "success",
                    "end": 1748301362952,
                    "result": {
                      "merge": false
                    }
                  }
                ],
                "end": 1748301362953
              }
            ],
            "infos": [
              {
                "message": "will delete snapshot data"
              },
              {
                "data": {
                  "vdiRef": "OpaqueRef:ccff14c1-2a28-38fb-57e2-4040e4081ea3"
                },
                "message": "Snapshot data has been deleted"
              },
              {
                "data": {
                  "vdiRef": "OpaqueRef:72d435e0-19f9-7b84-2f57-950c66d0c5a7"
                },
                "message": "Snapshot data has been deleted"
              }
            ],
            "end": 1748301362953,
            "result": {
              "message": "invalid HTTP header in response body",
              "name": "Error",
              "stack": "Error: invalid HTTP header in response body\n    at checkVdiExport (file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/xapi/vdi.mjs:37:19)\n    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n    at async Xapi.exportContent (file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/xapi/vdi.mjs:244:5)\n    at async #getExportStream (file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/xapi/disks/XapiVhdStreamSource.mjs:123:20)\n    at async XapiVhdStreamNbdSource.init (file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/xapi/disks/XapiVhdStreamSource.mjs:135:23)\n    at async XapiVhdStreamNbdSource.init (file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/xapi/disks/XapiVhdStreamNbd.mjs:31:5)\n    at async #openNbdStream (file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/xapi/disks/Xapi.mjs:69:7)\n    at async XapiDiskSource.init (file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/disk-transform/dist/DiskPassthrough.mjs:28:41)\n    at async file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/backups/_incrementalVm.mjs:65:5\n    at async Promise.all (index 0)"
            }
          }
        ],
        "end": 1748301362954
      }
      

      Schedule for backup job "Docker"

        {
          id: 'b9142eba-4954-414a-bbc0-180b071a6490',
          mode: 'delta',
          name: 'Docker',
          remotes: { id: '2b919467-704c-4e35-bac9-2d6a43118bda' },
          settings: {
            '': {
              cbtDestroySnapshotData: true,
              fullInterval: 28,
              longTermRetention: {
                daily: { retention: 32, settings: {} },
                monthly: { retention: 13, settings: {} },
                weekly: { retention: 5, settings: {} }
              },
              nbdConcurrency: 3,
              preferNbd: true,
              timezone: 'Europe/Stockholm'
            },
            '148eb0e1-1a22-4966-9143-ce53849875de': { exportRetention: 1 }
          },
          srs: { id: { __or: [] } },
          type: 'backup',
          vms: { id: '2af0192f-3cb3-daa2-33e6-ec8f18b0cfb1' }
        },
      

      Backup log for job "HomeAssistant"

      {
        "data": {
          "mode": "delta",
          "reportWhen": "failure"
        },
        "id": "1748301362981",
        "jobId": "38f0068f-c124-4876-85d3-83f1003db60c",
        "jobName": "HomeAssistant",
        "message": "backup",
        "scheduleId": "dcb1c759-76b8-441b-9dc0-595914e60608",
        "start": 1748301362981,
        "status": "failure",
        "infos": [
          {
            "data": {
              "vms": [
                "ed4758f3-de34-7a7e-a46b-dc007d52f5c3"
              ]
            },
            "message": "vms"
          }
        ],
        "tasks": [
          {
            "data": {
              "type": "VM",
              "id": "ed4758f3-de34-7a7e-a46b-dc007d52f5c3",
              "name_label": "HomeAssistant"
            },
            "id": "1748301365217",
            "message": "backup VM",
            "start": 1748301365217,
            "status": "failure",
            "tasks": [
              {
                "id": "1748301365221",
                "message": "clean-vm",
                "start": 1748301365221,
                "status": "success",
                "end": 1748301367237,
                "result": {
                  "merge": false
                }
              },
              {
                "id": "1748301411356",
                "message": "snapshot",
                "start": 1748301411356,
                "status": "success",
                "end": 1748301414333,
                "result": "a3284471-7138-9fed-b6c2-c1c4314a6569"
              },
              {
                "data": {
                  "id": "2b919467-704c-4e35-bac9-2d6a43118bda",
                  "isFull": false,
                  "type": "remote"
                },
                "id": "1748301414334",
                "message": "export",
                "start": 1748301414334,
                "status": "success",
                "tasks": [
                  {
                    "id": "1748301433923",
                    "message": "clean-vm",
                    "start": 1748301433923,
                    "status": "success",
                    "end": 1748301436079,
                    "result": {
                      "merge": false
                    }
                  }
                ],
                "end": 1748301436080
              }
            ],
            "infos": [
              {
                "message": "will delete snapshot data"
              },
              {
                "data": {
                  "vdiRef": "OpaqueRef:ee706ccd-faeb-3c3e-3f17-153b9767a44e"
                },
                "message": "Snapshot data has been deleted"
              },
              {
                "data": {
                  "vdiRef": "OpaqueRef:6be5cd98-da3e-9c9b-e193-aa80b78608d5"
                },
                "message": "Snapshot data has been deleted"
              }
            ],
            "end": 1748301436080,
            "result": {
              "message": "invalid HTTP header in response body",
              "name": "Error",
              "stack": "Error: invalid HTTP header in response body\n    at checkVdiExport (file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/xapi/vdi.mjs:37:19)\n    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n    at async Xapi.exportContent (file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/xapi/vdi.mjs:244:5)\n    at async #getExportStream (file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/xapi/disks/XapiVhdStreamSource.mjs:123:20)\n    at async XapiVhdStreamNbdSource.init (file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/xapi/disks/XapiVhdStreamSource.mjs:135:23)\n    at async XapiVhdStreamNbdSource.init (file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/xapi/disks/XapiVhdStreamNbd.mjs:31:5)\n    at async #openNbdStream (file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/xapi/disks/Xapi.mjs:69:7)\n    at async XapiDiskSource.init (file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/disk-transform/dist/DiskPassthrough.mjs:28:41)\n    at async file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/backups/_incrementalVm.mjs:65:5\n    at async Promise.all (index 0)"
            }
          }
        ],
        "end": 1748301436081
      }
      

      Schedule for backup job "HomeAssistant"

        {
          id: '38f0068f-c124-4876-85d3-83f1003db60c',
          mode: 'delta',
          name: 'HomeAssistant',
          remotes: { id: '2b919467-704c-4e35-bac9-2d6a43118bda' },
          settings: {
            '': {
              cbtDestroySnapshotData: true,
              checkpointSnapshot: false,
              fullInterval: 28,
              longTermRetention: {
                daily: { retention: 8, settings: {} },
                monthly: { retention: 13, settings: {} },
                weekly: { retention: 5, settings: {} }
              },
              nbdConcurrency: 3,
              offlineSnapshot: true,
              preferNbd: true,
              timezone: 'Europe/Stockholm'
            },
            'dcb1c759-76b8-441b-9dc0-595914e60608': { exportRetention: 1 }
          },
          srs: { id: { __or: [] } },
          type: 'backup',
          vms: { id: 'ed4758f3-de34-7a7e-a46b-dc007d52f5c3' }
        },
      

      Backup log for job "TrueNAS"

      {
        "data": {
          "mode": "delta",
          "reportWhen": "failure"
        },
        "id": "1748301436105",
        "jobId": "d40f8444-0364-4945-acd9-01f85ef4a53d",
        "jobName": "TrueNAS",
        "message": "backup",
        "scheduleId": "ffd808a2-563f-4f3c-84b3-445e724ce78c",
        "start": 1748301436105,
        "status": "failure",
        "infos": [
          {
            "data": {
              "vms": [
                "086b19ec-2162-6b28-c9d2-0e2214ab85ad"
              ]
            },
            "message": "vms"
          }
        ],
        "tasks": [
          {
            "data": {
              "type": "VM",
              "id": "086b19ec-2162-6b28-c9d2-0e2214ab85ad",
              "name_label": "TrueNAS"
            },
            "id": "1748301438810",
            "message": "backup VM",
            "start": 1748301438810,
            "status": "failure",
            "tasks": [
              {
                "id": "1748301438815",
                "message": "clean-vm",
                "start": 1748301438815,
                "status": "success",
                "end": 1748301441801,
                "result": {
                  "merge": false
                }
              },
              {
                "id": "1748301442293",
                "message": "snapshot",
                "start": 1748301442293,
                "status": "success",
                "end": 1748301447502,
                "result": "e6988956-774f-7421-d14d-5585bc9f06a8"
              },
              {
                "data": {
                  "id": "2b919467-704c-4e35-bac9-2d6a43118bda",
                  "isFull": false,
                  "type": "remote"
                },
                "id": "1748301447503",
                "message": "export",
                "start": 1748301447503,
                "status": "success",
                "tasks": [
                  {
                    "id": "1748301452311",
                    "message": "clean-vm",
                    "start": 1748301452311,
                    "status": "success",
                    "end": 1748301454142,
                    "result": {
                      "merge": false
                    }
                  }
                ],
                "end": 1748301454143
              }
            ],
            "infos": [
              {
                "message": "will delete snapshot data"
              },
              {
                "data": {
                  "vdiRef": "OpaqueRef:a6285710-3e8d-a7cc-1f87-284146c0e5be"
                },
                "message": "Snapshot data has been deleted"
              },
              {
                "data": {
                  "vdiRef": "OpaqueRef:02477e9a-146e-34f8-0729-a04deb9ce7a2"
                },
                "message": "Snapshot data has been deleted"
              }
            ],
            "end": 1748301454143,
            "result": {
              "message": "invalid HTTP header in response body",
              "name": "Error",
              "stack": "Error: invalid HTTP header in response body\n    at checkVdiExport (file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/xapi/vdi.mjs:37:19)\n    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n    at async Xapi.exportContent (file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/xapi/vdi.mjs:244:5)\n    at async #getExportStream (file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/xapi/disks/XapiVhdStreamSource.mjs:123:20)\n    at async XapiVhdStreamNbdSource.init (file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/xapi/disks/XapiVhdStreamSource.mjs:135:23)\n    at async XapiVhdStreamNbdSource.init (file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/xapi/disks/XapiVhdStreamNbd.mjs:31:5)\n    at async #openNbdStream (file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/xapi/disks/Xapi.mjs:69:7)\n    at async XapiDiskSource.init (file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/disk-transform/dist/DiskPassthrough.mjs:28:41)\n    at async file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/backups/_incrementalVm.mjs:65:5\n    at async Promise.all (index 1)"
            }
          }
        ],
        "end": 1748301454143
      }
      

      Schedule for backup job "TrueNAS"

        {
          id: 'd40f8444-0364-4945-acd9-01f85ef4a53d',
          mode: 'delta',
          name: 'TrueNAS',
          remotes: { id: '2b919467-704c-4e35-bac9-2d6a43118bda' },
          settings: {
            '': {
              cbtDestroySnapshotData: true,
              fullInterval: 28,
              longTermRetention: {
                daily: { retention: 8, settings: {} },
                monthly: { retention: 13, settings: {} },
                weekly: { retention: 5, settings: {} }
              },
              nbdConcurrency: 3,
              preferNbd: true,
              timezone: 'Europe/Stockholm'
            },
            'ffd808a2-563f-4f3c-84b3-445e724ce78c': { exportRetention: 1 }
          },
          srs: { id: { __or: [] } },
          type: 'backup',
          vms: { id: '086b19ec-2162-6b28-c9d2-0e2214ab85ad' }
        },
      

      Backup log for job "Win10 22H2 (XCP-VM-BU1)"

      {
        "data": {
          "mode": "delta",
          "reportWhen": "failure"
        },
        "id": "1748301454157",
        "jobId": "006447e3-d632-4088-871d-d31925146165",
        "jobName": "Win10 22H2 (XCP-VM-BU1)",
        "message": "backup",
        "scheduleId": "49db3524-1fea-4964-9eb9-1ee78c7f1533",
        "start": 1748301454157,
        "status": "failure",
        "infos": [
          {
            "data": {
              "vms": [
                "8a2b28e8-17fe-1a22-6b69-dc9e70b54440"
              ]
            },
            "message": "vms"
          }
        ],
        "tasks": [
          {
            "data": {
              "type": "VM",
              "id": "8a2b28e8-17fe-1a22-6b69-dc9e70b54440",
              "name_label": "Win10 22H2 (XCP-VM-BU1)"
            },
            "id": "1748301457105",
            "message": "backup VM",
            "start": 1748301457105,
            "status": "failure",
            "tasks": [
              {
                "id": "1748301457116",
                "message": "clean-vm",
                "start": 1748301457116,
                "status": "success",
                "end": 1748301459104,
                "result": {
                  "merge": false
                }
              },
              {
                "id": "1748301459708",
                "message": "snapshot",
                "start": 1748301459708,
                "status": "success",
                "end": 1748301465288,
                "result": "6eeb6ce1-3867-1bf2-ee02-61629c335998"
              },
              {
                "data": {
                  "id": "2b919467-704c-4e35-bac9-2d6a43118bda",
                  "isFull": false,
                  "type": "remote"
                },
                "id": "1748301465289",
                "message": "export",
                "start": 1748301465289,
                "status": "success",
                "tasks": [
                  {
                    "id": "1748301469585",
                    "message": "clean-vm",
                    "start": 1748301469585,
                    "status": "success",
                    "end": 1748301471474,
                    "result": {
                      "merge": false
                    }
                  }
                ],
                "end": 1748301471474
              }
            ],
            "infos": [
              {
                "message": "will delete snapshot data"
              },
              {
                "data": {
                  "vdiRef": "OpaqueRef:d93298c1-6b15-75f1-abfe-23d7f09bea18"
                },
                "message": "Snapshot data has been deleted"
              },
              {
                "data": {
                  "vdiRef": "OpaqueRef:4962f8a3-17bf-2724-1b11-52a0b718bc93"
                },
                "message": "Snapshot data has been deleted"
              }
            ],
            "end": 1748301471474,
            "result": {
              "message": "invalid HTTP header in response body",
              "name": "Error",
              "stack": "Error: invalid HTTP header in response body\n    at checkVdiExport (file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/xapi/vdi.mjs:37:19)\n    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n    at async Xapi.exportContent (file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/xapi/vdi.mjs:244:5)\n    at async #getExportStream (file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/xapi/disks/XapiVhdStreamSource.mjs:123:20)\n    at async XapiVhdStreamNbdSource.init (file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/xapi/disks/XapiVhdStreamSource.mjs:135:23)\n    at async XapiVhdStreamNbdSource.init (file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/xapi/disks/XapiVhdStreamNbd.mjs:31:5)\n    at async #openNbdStream (file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/xapi/disks/Xapi.mjs:69:7)\n    at async XapiDiskSource.init (file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/disk-transform/dist/DiskPassthrough.mjs:28:41)\n    at async file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/backups/_incrementalVm.mjs:65:5\n    at async Promise.all (index 1)"
            }
          }
        ],
        "end": 1748301471474
      }
      

      Schedule for backup job "Win10 22H2 (XCP-VM-BU1)"

        {
          id: '006447e3-d632-4088-871d-d31925146165',
          mode: 'delta',
          name: 'Win10 22H2 (XCP-VM-BU1)',
          remotes: { id: '2b919467-704c-4e35-bac9-2d6a43118bda' },
          settings: {
            '': {
              cbtDestroySnapshotData: true,
              fullInterval: 28,
              longTermRetention: {
                daily: { retention: 8, settings: {} },
                monthly: { retention: 13, settings: {} },
                weekly: { retention: 5, settings: {} }
              },
              nbdConcurrency: 3,
              preferNbd: true,
              timezone: 'Europe/Stockholm'
            },
            '49db3524-1fea-4964-9eb9-1ee78c7f1533': { exportRetention: 1 }
          },
          srs: { id: { __or: [] } },
          type: 'backup',
          vms: { id: '8a2b28e8-17fe-1a22-6b69-dc9e70b54440' }
        },
      

      Backup log for job "pfSense"

      {
        "data": {
          "mode": "delta",
          "reportWhen": "failure"
        },
        "id": "1748301471487",
        "jobId": "ed71705b-61aa-4081-b765-bfa5e2bb8dd6",
        "jobName": "pfSense",
        "message": "backup",
        "scheduleId": "2f5fc714-9c4d-4b46-8da6-db771926de68",
        "start": 1748301471487,
        "status": "failure",
        "infos": [
          {
            "data": {
              "vms": [
                "34bb1a0f-d662-669d-6e07-39f3ef102129"
              ]
            },
            "message": "vms"
          }
        ],
        "tasks": [
          {
            "data": {
              "type": "VM",
              "id": "34bb1a0f-d662-669d-6e07-39f3ef102129",
              "name_label": "pfSense"
            },
            "id": "1748301474076",
            "message": "backup VM",
            "start": 1748301474076,
            "status": "failure",
            "tasks": [
              {
                "id": "1748301474081",
                "message": "clean-vm",
                "start": 1748301474081,
                "status": "success",
                "end": 1748301475569,
                "result": {
                  "merge": false
                }
              },
              {
                "id": "1748301476075",
                "message": "snapshot",
                "start": 1748301476075,
                "status": "success",
                "end": 1748301479451,
                "result": "6fb562a0-e544-cc43-92a8-b250f0d6e55f"
              },
              {
                "data": {
                  "id": "2b919467-704c-4e35-bac9-2d6a43118bda",
                  "isFull": false,
                  "type": "remote"
                },
                "id": "1748301479451:0",
                "message": "export",
                "start": 1748301479451,
                "status": "success",
                "tasks": [
                  {
                    "id": "1748301484740",
                    "message": "clean-vm",
                    "start": 1748301484740,
                    "status": "success",
                    "end": 1748301486146,
                    "result": {
                      "merge": false
                    }
                  }
                ],
                "end": 1748301486147
              }
            ],
            "infos": [
              {
                "message": "will delete snapshot data"
              },
              {
                "data": {
                  "vdiRef": "OpaqueRef:9bbe8ff4-5366-e76f-c422-b11c02f2dbeb"
                },
                "message": "Snapshot data has been deleted"
              }
            ],
            "end": 1748301486147,
            "result": {
              "message": "invalid HTTP header in response body",
              "name": "Error",
              "stack": "Error: invalid HTTP header in response body\n    at checkVdiExport (file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/xapi/vdi.mjs:37:19)\n    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n    at async Xapi.exportContent (file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/xapi/vdi.mjs:244:5)\n    at async #getExportStream (file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/xapi/disks/XapiVhdStreamSource.mjs:123:20)\n    at async XapiVhdStreamNbdSource.init (file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/xapi/disks/XapiVhdStreamSource.mjs:135:23)\n    at async XapiVhdStreamNbdSource.init (file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/xapi/disks/XapiVhdStreamNbd.mjs:31:5)\n    at async #openNbdStream (file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/xapi/disks/Xapi.mjs:69:7)\n    at async XapiDiskSource.init (file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/disk-transform/dist/DiskPassthrough.mjs:28:41)\n    at async file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/backups/_incrementalVm.mjs:65:5\n    at async Promise.all (index 0)"
            }
          }
        ],
        "end": 1748301486148
      }
      

      Schedule for backup job "pfSense"

        {
          id: 'ed71705b-61aa-4081-b765-bfa5e2bb8dd6',
          mode: 'delta',
          name: 'pfSense',
          remotes: { id: '2b919467-704c-4e35-bac9-2d6a43118bda' },
          settings: {
            '': {
              cbtDestroySnapshotData: true,
              fullInterval: 28,
              longTermRetention: {
                daily: { retention: 8, settings: {} },
                monthly: { retention: 13, settings: {} },
                weekly: { retention: 5, settings: {} }
              },
              nbdConcurrency: 3,
              preferNbd: true,
              timezone: 'Europe/Stockholm'
            },
            '2f5fc714-9c4d-4b46-8da6-db771926de68': { exportRetention: 1 }
          },
          srs: { id: { __or: [] } },
          type: 'backup',
          vms: { id: '34bb1a0f-d662-669d-6e07-39f3ef102129' }
        },
      

      Backup log for job "unifi"

      {
        "data": {
          "mode": "delta",
          "reportWhen": "failure"
        },
        "id": "1748301486170",
        "jobId": "167accc6-65e1-4782-a7e1-b7af72070286",
        "jobName": "unifi",
        "message": "backup",
        "scheduleId": "25636582-2efd-45a9-b90a-f0e2e96df179",
        "start": 1748301486170,
        "status": "failure",
        "infos": [
          {
            "data": {
              "vms": [
                "95e2e5a4-4f21-bd2c-0af5-0a1216d05d0e"
              ]
            },
            "message": "vms"
          }
        ],
        "tasks": [
          {
            "data": {
              "type": "VM",
              "id": "95e2e5a4-4f21-bd2c-0af5-0a1216d05d0e",
              "name_label": "unifi"
            },
            "id": "1748301488467",
            "message": "backup VM",
            "start": 1748301488467,
            "status": "failure",
            "tasks": [
              {
                "id": "1748301488472",
                "message": "clean-vm",
                "start": 1748301488472,
                "status": "success",
                "end": 1748301490266,
                "result": {
                  "merge": false
                }
              },
              {
                "id": "1748301490816",
                "message": "snapshot",
                "start": 1748301490816,
                "status": "success",
                "end": 1748301494252,
                "result": "513ce90c-c893-45ce-6693-a74e39be90b8"
              },
              {
                "data": {
                  "id": "2b919467-704c-4e35-bac9-2d6a43118bda",
                  "isFull": false,
                  "type": "remote"
                },
                "id": "1748301494253",
                "message": "export",
                "start": 1748301494253,
                "status": "success",
                "tasks": [
                  {
                    "id": "1748301497980",
                    "message": "clean-vm",
                    "start": 1748301497980,
                    "status": "success",
                    "end": 1748301499587,
                    "result": {
                      "merge": false
                    }
                  }
                ],
                "end": 1748301499587
              }
            ],
            "infos": [
              {
                "message": "will delete snapshot data"
              },
              {
                "data": {
                  "vdiRef": "OpaqueRef:062d4f15-9f44-cfb9-840c-4af51ef8a4aa"
                },
                "message": "Snapshot data has been deleted"
              },
              {
                "data": {
                  "vdiRef": "OpaqueRef:67d3668b-bd8d-db05-76a2-f9d0da5db47a"
                },
                "message": "Snapshot data has been deleted"
              }
            ],
            "end": 1748301499587,
            "result": {
              "message": "invalid HTTP header in response body",
              "name": "Error",
              "stack": "Error: invalid HTTP header in response body\n    at checkVdiExport (file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/xapi/vdi.mjs:37:19)\n    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n    at async Xapi.exportContent (file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/xapi/vdi.mjs:244:5)\n    at async #getExportStream (file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/xapi/disks/XapiVhdStreamSource.mjs:123:20)\n    at async XapiVhdStreamNbdSource.init (file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/xapi/disks/XapiVhdStreamSource.mjs:135:23)\n    at async XapiVhdStreamNbdSource.init (file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/xapi/disks/XapiVhdStreamNbd.mjs:31:5)\n    at async #openNbdStream (file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/xapi/disks/Xapi.mjs:69:7)\n    at async XapiDiskSource.init (file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/disk-transform/dist/DiskPassthrough.mjs:28:41)\n    at async file:///opt/xo/xo-builds/xen-orchestra-202505261454/@xen-orchestra/backups/_incrementalVm.mjs:65:5\n    at async Promise.all (index 1)"
            }
          }
        ],
        "end": 1748301499587
      }
      

      Schedule for backup job "unifi"

        {
          id: '167accc6-65e1-4782-a7e1-b7af72070286',
          mode: 'delta',
          name: 'unifi',
          remotes: { id: '2b919467-704c-4e35-bac9-2d6a43118bda' },
          settings: {
            '': {
              cbtDestroySnapshotData: true,
              fullInterval: 28,
              longTermRetention: {
                daily: { retention: 8, settings: {} },
                monthly: { retention: 13, settings: {} },
                weekly: { retention: 5, settings: {} }
              },
              nbdConcurrency: 3,
              preferNbd: true,
              timezone: 'Europe/Stockholm'
            },
            '25636582-2efd-45a9-b90a-f0e2e96df179': { exportRetention: 1 }
          },
          srs: { id: { __or: [] } },
          type: 'backup',
          vms: { id: '95e2e5a4-4f21-bd2c-0af5-0a1216d05d0e' }
        },
      

      The job ("HomeAssistant") I ran manually after the failure, without any changes to it, from my shellscript succeeded. This is the log:

      {
        "data": {
          "mode": "delta",
          "reportWhen": "failure"
        },
        "id": "1748332955709",
        "jobId": "38f0068f-c124-4876-85d3-83f1003db60c",
        "jobName": "HomeAssistant",
        "message": "backup",
        "scheduleId": "dcb1c759-76b8-441b-9dc0-595914e60608",
        "start": 1748332955709,
        "status": "success",
        "infos": [
          {
            "data": {
              "vms": [
                "ed4758f3-de34-7a7e-a46b-dc007d52f5c3"
              ]
            },
            "message": "vms"
          }
        ],
        "tasks": [
          {
            "data": {
              "type": "VM",
              "id": "ed4758f3-de34-7a7e-a46b-dc007d52f5c3",
              "name_label": "HomeAssistant"
            },
            "id": "1748332999004",
            "message": "backup VM",
            "start": 1748332999004,
            "status": "success",
            "tasks": [
              {
                "id": "1748332999009",
                "message": "clean-vm",
                "start": 1748332999009,
                "status": "success",
                "end": 1748333001471,
                "result": {
                  "merge": false
                }
              },
              {
                "id": "1748333024461",
                "message": "snapshot",
                "start": 1748333024461,
                "status": "success",
                "end": 1748333027231,
                "result": "7c1c5df6-5a0f-28c6-eaa3-8cf96b871bf8"
              },
              {
                "data": {
                  "id": "2b919467-704c-4e35-bac9-2d6a43118bda",
                  "isFull": true,
                  "type": "remote"
                },
                "id": "1748333027231:0",
                "message": "export",
                "start": 1748333027231,
                "status": "success",
                "tasks": [
                  {
                    "id": "1748333029816",
                    "message": "transfer",
                    "start": 1748333029816,
                    "status": "success",
                    "end": 1748333658179,
                    "result": {
                      "size": 38681968640
                    }
                  },
                  {
                    "id": "1748333661183",
                    "message": "clean-vm",
                    "start": 1748333661183,
                    "status": "success",
                    "end": 1748333669221,
                    "result": {
                      "merge": false
                    }
                  }
                ],
                "end": 1748333669222
              }
            ],
            "infos": [
              {
                "message": "will delete snapshot data"
              },
              {
                "data": {
                  "vdiRef": "OpaqueRef:e40df229-f55d-e658-5242-cd0746079661"
                },
                "message": "Snapshot data has been deleted"
              },
              {
                "data": {
                  "vdiRef": "OpaqueRef:f03dc466-24a3-5269-3e9b-5634a99219f9"
                },
                "message": "Snapshot data has been deleted"
              }
            ],
            "end": 1748333669222
          }
        ],
        "end": 1748333669223
      }
      
      posted in Backup
      P
      peo
    • RE: Error: invalid HTTP header in response body

      A follow-up on this. Today I received emails about the four backup jobs that are included in the backup sequence scheduled to run during the night. They failed with the same error as before, and I got the same (successful) result when I ran the individual backup jobs manually.

      Is it possible that "sequences" stopped working in some version of XO and noone noticed because there is no user base on this feature?

      Since I have already re-run the four failed jobs manually, I have no more information beyond the backup job logs, which I saved for at least one of the jobs on each night they failed. I'm expecting a failed sequence the coming night too, so if there is something I can do to help you find out what's going wrong, I have a couple of hours to prepare for it now.

      Otherwise, if "sequences" are not supposed to be used, is there a way I can create this functionality using a shell script and the xo-cli command, so I back up a bunch of machines, one machine at a time at a specific time on selected days (just a cron job for each of the sequences) ?

      posted in Backup
      P
      peo