any ideas?

Posts
-
RE: mirror backup to S3
@florent I have a little problem with backups to S3/Wasabi..
For the delta, everything seems OK:
{ "data": { "mode": "delta", "reportWhen": "failure" }, "id": "1751914964818", "jobId": "e4adc26c-8723-4388-a5df-c2a1663ed0f7", "jobName": "Mirror wasabi delta", "message": "backup", "scheduleId": "62a5edce-88b8-4db9-982e-ad2f525c4eb9", "start": 1751914964818, "status": "success", "infos": [ { "data": { "vms": [ "2771e7a0-2572-ca87-97cf-e174a1d35e6f", "b89670f6-b785-7df0-3791-e5e41ec8ee08", "cac6afed-5df8-0817-604c-a047a162093f" ] }, "message": "vms" } ], "tasks": [ { "data": { "type": "VM", "id": "b89670f6-b785-7df0-3791-e5e41ec8ee08" }, "id": "1751914968373", "message": "backup VM", "start": 1751914968373, "status": "success", "tasks": [ { "id": "1751914968742", "message": "clean-vm", "start": 1751914968742, "status": "success", "end": 1751914979708, "result": { "merge": false } }, { "data": { "id": "ea222c7a-b242-4605-83f0-fdcc9865eb88", "type": "remote" }, "id": "1751914984503", "message": "export", "start": 1751914984503, "status": "success", "tasks": [ { "id": "1751914984667", "message": "transfer", "start": 1751914984667, "status": "success", "end": 1751914992365, "result": { "size": 125829120 } }, { "id": "1751914995521", "message": "clean-vm", "start": 1751914995521, "status": "success", "tasks": [ { "id": "1751915004208", "message": "merge", "start": 1751915004208, "status": "success", "end": 1751915018911 } ], "end": 1751915020075, "result": { "merge": true } } ], "end": 1751915020077 } ], "end": 1751915020077 }, { "data": { "type": "VM", "id": "2771e7a0-2572-ca87-97cf-e174a1d35e6f" }, "id": "1751914968380", "message": "backup VM", "start": 1751914968380, "status": "success", "tasks": [ { "id": "1751914968903", "message": "clean-vm", "start": 1751914968903, "status": "success", "end": 1751914979840, "result": { "merge": false } }, { "data": { "id": "ea222c7a-b242-4605-83f0-fdcc9865eb88", "type": "remote" }, "id": "1751914986808", "message": "export", "start": 1751914986808, "status": "success", "tasks": [ { "id": "1751914987416", "message": "transfer", "start": 1751914987416, "status": "success", "end": 1751914993152, "result": { "size": 119537664 } }, { "id": "1751914996024", "message": "clean-vm", "start": 1751914996024, "status": "success", "tasks": [ { "id": "1751915005023", "message": "merge", "start": 1751915005023, "status": "success", "end": 1751915035567 } ], "end": 1751915039414, "result": { "merge": true } } ], "end": 1751915039414 } ], "end": 1751915039415 }, { "data": { "type": "VM", "id": "cac6afed-5df8-0817-604c-a047a162093f" }, "id": "1751915020089", "message": "backup VM", "start": 1751915020089, "status": "success", "tasks": [ { "id": "1751915020443", "message": "clean-vm", "start": 1751915020443, "status": "success", "end": 1751915030194, "result": { "merge": false } }, { "data": { "id": "ea222c7a-b242-4605-83f0-fdcc9865eb88", "type": "remote" }, "id": "1751915034962", "message": "export", "start": 1751915034962, "status": "success", "tasks": [ { "id": "1751915035142", "message": "transfer", "start": 1751915035142, "status": "success", "end": 1751915052723, "result": { "size": 719323136 } }, { "id": "1751915056146", "message": "clean-vm", "start": 1751915056146, "status": "success", "tasks": [ { "id": "1751915064681", "message": "merge", "start": 1751915064681, "status": "success", "end": 1751915116508 } ], "end": 1751915117838, "result": { "merge": true } } ], "end": 1751915117839 } ], "end": 1751915117839 } ], "end": 1751915117839 }
For the full, I'm not sure:
{ "data": { "mode": "full", "reportWhen": "always" }, "id": "1751757492933", "jobId": "35c78a31-67c5-47ba-9988-9c4cb404ed8e", "jobName": "Mirror wasabi full", "message": "backup", "scheduleId": "476b863d-a651-42e5-9bb3-db830dbdac7c", "start": 1751757492933, "status": "success", "infos": [ { "data": { "vms": [ "2771e7a0-2572-ca87-97cf-e174a1d35e6f", "b89670f6-b785-7df0-3791-e5e41ec8ee08", "cac6afed-5df8-0817-604c-a047a162093f" ] }, "message": "vms" } ], "end": 1751757496499 }
XOA sent me an email with this report:
Job ID: 35c78a31-67c5-47ba-9988-9c4cb404ed8e Run ID: 1751757492933 Mode: full Start time: Sunday, July 6th 2025, 1:18:12 am End time: Sunday, July 6th 2025, 1:18:16 am Duration: a few seconds
Four seconds for 203 GB?
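A quick way to see what was actually transferred in a run is to sum the sizes reported by the "transfer" tasks in the raw JSON log; in the full report above there are no transfer tasks at all, which would explain the four-second duration. A minimal sketch, assuming the raw log is saved to a file (backup-log.json is only an example name) and jq is installed:

  # Sum the bytes reported by every "transfer" task in a saved XO log
  jq '[.. | objects | select(.message == "transfer") | .result.size] | add' backup-log.json

On the delta log above this returns the total bytes moved; on the full report it returns null because no transfer ever ran.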
-
RE: mirror backup to S3
Hi @florent, I've cleaned up the backup data, added the correct retention, and now it's fine.
I'm lowering the NBD connections (from 4 to 1); the speed of the "test backup with mirror" is too low -
RE: mirror backup to S3
@florent Hi, I've adjusted the retention parameters and I'm waiting a few days of backup/mirror runs to check.
-
RE: mirror backup to S3
@acebmxer Of course, this is only a test.
The problem is not the scheduling but why the incremental sends all the data every time. -
RE: mirror backup to S3
@acebmxer [Excuse the poor English!]
I now have this situation:
1 backup job with two disabled schedules, one full and one delta, to a NAS
1 full mirror backup to Wasabi (S3)
1 incremental mirror backup
I've created two sequences:
one starting on Sunday for the full backup (the sequence is full backup and then full mirror)
one every 3 hours with the delta backup and then the incremental mirror
The jobs start at the correct hour, but the incremental mirror sends the same amount of data every time..
backup to nas:dns_interno1 (ctx1.tosnet.it) Transfer data using NBD Clean VM directory cleanVm: incorrect backup size in metadata Start: 2025-06-24 16:00 End: 2025-06-24 16:00 Snapshot Start: 2025-06-24 16:00 End: 2025-06-24 16:00 Backup XEN OLD transfer Start: 2025-06-24 16:00 End: 2025-06-24 16:01 Duration: a few seconds Size: 132 MiB Speed: 11.86 MiB/s Start: 2025-06-24 16:00 End: 2025-06-24 16:01 Duration: a minute Start: 2025-06-24 16:00 End: 2025-06-24 16:01 Duration: a minute Type: delta
dns_interno1 (ctx1.tosnet.it) Wasabi transfer Start: 2025-06-24 16:02 End: 2025-06-24 16:15 Duration: 13 minutes Size: 25.03 GiB Speed: 34.14 MiB/s transfer Start: 2025-06-24 16:15 End: 2025-06-24 16:15 Duration: a few seconds Size: 394 MiB Speed: 22.49 MiB/s Start: 2025-06-24 16:02 End: 2025-06-24 16:17 Duration: 15 minutes Wasabi Start: 2025-06-24 16:15 End: 2025-06-24 16:17 Duration: 2 minutes Start: 2025-06-24 16:02 End: 2025-06-24 16:17 Duration: 15 minutes
The job sends 25 GB to Wasabi every time, not just the incremental data.
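One way to confirm what actually lands in the bucket on each run is to list and total the VM's backup prefix directly on Wasabi with the AWS CLI. A minimal sketch, assuming the AWS CLI is configured with the Wasabi credentials; the bucket name, prefix and endpoint URL below are placeholders:

  # List and total everything under the VM's backup prefix on Wasabi
  aws s3 ls "s3://my-backup-bucket/xo-vm-backups/<vm-uuid>/" \
    --recursive --summarize --human-readable \
    --endpoint-url "https://s3.eu-central-1.wasabisys.com"

Running it before and after a mirror run shows whether only new incremental VHDs were added or a whole new full chain was written.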
-
Error in XO task with sequence?
Good morning, the sequence works fine, but I have a long list of tasks that are closed but stuck at 50% (?).
The raw log is correct: { "id": "0mca491c8", "properties": { "name": "Schedule sequence", "userId": "c5ce5e50-29d9-4c00-84e8-402e1063a5c7", "type": "xo:schedule:sequence", "progress": 50 }, "start": 1750744800007, "status": "success", "updatedAt": 1750746259107, "end": 1750746259107 }
Is it only a UI problem?
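For what it's worth, the stored record itself says the task is finished and successful; only the "progress" property stays at 50. A quick check, assuming the raw record is saved to a file (task.json is just an example name) and jq is installed:

  # Status and end timestamp are set even though progress stays at 50
  jq '{status, progress: .properties.progress, end}' task.json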
-
RE: mirror backup to S3
@acebmxer OK, so I don't use the old scheduling for full + delta (one job with two schedules, one full and one delta), but do I have to separate them into two jobs?
-
mirror backup to S3
Good morning, I have some VMs (~30) in four logical groups.
For every group I create a backup (one weekly full and 40 incrementals) and I want to mirror it to Wasabi S3 storage.
How can I start the mirror when one of the full/incremental backups ends?
I don't want to start the mirror while a backup is still running!
Thank you.
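One possible approach, besides the built-in sequences, is to chain the two jobs from a script so the mirror only starts after the backup call returns. This is only a hypothetical sketch: the job/schedule IDs are placeholders and the method names (backupNg.runJob, mirrorBackup.runJob) are assumptions, so check the command list of your xo-cli version for the real names:

  # Run the backup job; only if it succeeds, run the mirror job
  xo-cli backupNg.runJob id=<backup-job-id> schedule=<schedule-id> \
    && xo-cli mirrorBackup.runJob id=<mirror-job-id> schedule=<mirror-schedule-id>
-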
RE: Short VM freeze when migrating to another host
@olivierlambert Oops.. why is that the best topology?
-
RE: Short VM freeze when migrating to another host
@olivierlambert Live migration; the VM is very important (today, during the Christmas holidays, I received some phone calls about the 7 minutes of freeze..)
-
RE: Short VM freeze when migrating to another host
@olivierlambert Hi, today I upgraded my hosts..
The big VM froze for ~7 minutes. It is a big VM (96 GB RAM and 32 CPUs), but 7 minutes is a very long time (for the customer!).
I've set 96/06 in dynamic memory: is that a normal time? -
XOA not showing host patches?
Good morning, today I saw a strange thing in my pool.
I have not updated ctx7..
I logged in to the hosts (ctx7 and ctx6 for comparison), ran a yum update, and I see the same 9 packages: why doesn't XOA see the patches for ctx7?
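To compare what yum itself considers pending on the two hosts, independently of what XOA shows, something like this works (the hostnames are the ones above; the diff variant assumes SSH access from a workstation):

  # List pending updates on each host without applying them
  yum check-update
  # or diff the two hosts from a workstation
  diff <(ssh root@ctx6 'yum -q check-update') <(ssh root@ctx7 'yum -q check-update')
-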
RE: Clarification of "VM Limits" in XO
@olivierlambert said in Clarification of "VM Limits" in XO:
- Static is the global range that can be modified only when the VM is halted. Dynamic is the range where the VM memory can be changed while the VM is running. Obviously, the dynamic range is included inside the static one.
Most of the time, except if you have a very good reason for it, do not use dynamic memory.
OK.. but if I set the static memory limits to 1 GB-16 GB
and the dynamic to 16 GB-16 GB, does XCP assign 16 GB to the VM?
Is the static limit only a "barrier" for the dynamic range?
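For reference, XAPI enforces static-min <= dynamic-min <= dynamic-max <= static-max, and the VM is given an amount within the dynamic range, so with the dynamic range fixed at 16 GiB the VM gets 16 GiB; the static range only bounds what the dynamic range may later be set to. A sketch of setting the limits described above from dom0 (the VM UUID is a placeholder; static values can only be changed while the VM is halted):

  xe vm-memory-limits-set uuid=<vm-uuid> \
    static-min=1GiB dynamic-min=16GiB dynamic-max=16GiB static-max=16GiB
-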
RE: Short VM freeze when migrating to another host
@nikade
In the VM (Linux), running free I see 94 GB of total memory -
RE: update via yum or via xoa?
@bleader said in update via yum or via xoa?:
yes you're basically doing an RPU manually.
But it is indeed odd that the process is stuck at 0%, it should be fairly fast to do the install patches, no errors in the logs?
I've done another "Install all patches" and it installed everything.
Now I'll do a yum update and see if the speed is the same or not -
RE: update via yum or via xoa?
@bleader said in update via yum or via xoa?:
It actually depends if you chose "rolling pool update" or "Install all pool patches", as you're talking about evacuate I assume you went with the first one.
Rolling pool update (RPU) is documented here and Pool updates here.
But to sum it up, "Install all pool patches" button will indeed run the yum update on all servers, so similar as doing it manually, while RPU will do hosts one by one, moving VMs to other hosts in the pool, install updates and reboot host. Therefore it can take way longer to complete, time will vary based on the number of VMs that have to be migrated around, network speed between hosts, etc…
RPU is the recommended way as it allows hosts to restart and therefore take hypervisor, microcode and dom0 kernel updates into account right away with no service interruption. But if you don't really mind shutting down some of the VMs to restart hosts, or if there are no low level updates that require a reboot, you could get away with just the yum update manually. But if the RPU is started already, I would not advise trying to do things manually at the same time.
OK, I did an "Install all patches" from the XOA host page; I wanted some control over moving the VMs to other hosts.
But it's basically the same thing (if I evacuate the host by hand, run a yum update, and reboot the host), right?
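A minimal sketch of that manual per-host flow (the host UUID is a placeholder; the yum update and reboot run on the host itself, and the usual advice is to start with the pool master):

  xe host-disable uuid=<host-uuid>      # stop new VMs from starting on this host
  xe host-evacuate uuid=<host-uuid>     # live-migrate the running VMs away
  yum update -y                         # run on that host
  reboot
  xe host-enable uuid=<host-uuid>       # once the host is back up
-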
update via yum or via xoa?
Hi, I'm updating my pool; I evacuated the master, went to the patches tab, and installed all patches..
For now it's at 0% (11 minutes since the start).
Why is it so slow?
Is it the same if I go to the console and do a yum update on all servers? -
RE: CBT: the thread to centralize your feedback
Too many errors for me..
Time for backup very very long, error at start (with retry) and at the end (with error on backup:Couldn't deleted snapshot data Couldn't deleted snapshot data Retry the VM backup due to an error the writer IncrementalRemoteWriter has failed the step writer.beforeBackup() with error Lock file is already being held. It won't be used anymore in this job execution. Retry the VM backup due to an error Transfer data using NBD will delete snapshot data Snapshot data has been deleted Snapshot data has been deleted Transfer data using NBD Clean VM directory cleanVm: incorrect backup size in metadata Start: 2024-07-11 20:23 End: 2024-07-11 20:24 Snapshot Start: 2024-07-11 20:24 End: 2024-07-11 20:25 Backup XEN OLD Start: 2024-07-11 20:25 End: 2024-07-11 20:27 Duration: 2 minutes Clean VM directory cleanVm: incorrect backup size in metadata Start: 2024-07-12 01:44 End: 2024-07-12 01:44 Snapshot Start: 2024-07-12 01:44 End: 2024-07-12 01:46 Backup XEN OLD transfer Start: 2024-07-12 01:46 End: 2024-07-12 05:39 Duration: 4 hours Size: 397.86 GiB Speed: 29.12 MiB/s Start: 2024-07-12 01:46 End: 2024-07-12 05:39 Duration: 4 hours Start: 2024-07-11 20:23 End: 2024-07-12 05:39 Duration: 9 hours Error: Disk is still attached to DOM0 VM Type: delta
not good..
-
RE: CBT: the thread to centralize your feedback
@rtjdamen said in CBT: the thread to centralize your feedback:
We have this error "stream has ended with not enough data (actual: 446, expected: 512)" on multiple vms in the last few days anyone seeing this issue?
the same for me.
Backup too long (delta for 18 hours, before nbd+cbt less than hour.. is a full, not delta) and at the end many many errors{ "data": { "mode": "delta", "reportWhen": "failure" }, "id": "1720616400003", "jobId": "30159f11-3b54-48d1-ab8b-d32858991349", "jobName": "Delta FtPA", "message": "backup", "scheduleId": "b94e6227-c7b8-4a39-9bf1-b881422971df", "start": 1720616400003, "status": "failure", "infos": [ { "data": { "vms": [ "be0a9812-fd14-be75-e2fa-40c31ce8875c" ] }, "message": "vms" } ], "tasks": [ { "data": { "type": "VM", "id": "be0a9812-fd14-be75-e2fa-40c31ce8875c", "name_label": "FtPA" }, "id": "1720616402582", "message": "backup VM", "start": 1720616402582, "status": "failure", "tasks": [ { "id": "1720616402661", "message": "clean-vm", "start": 1720616402661, "status": "success", "warnings": [ { "data": { "path": "/xo-vm-backups/be0a9812-fd14-be75-e2fa-40c31ce8875c/20240703T070203Z.json", "actual": 356950134784, "expected": 356950138368 }, "message": "cleanVm: incorrect backup size in metadata" } ], "end": 1720616408054, "result": { "merge": false } }, { "id": "1720616410224", "message": "snapshot", "start": 1720616410224, "status": "failure", "end": 1720616668284, "result": { "code": "HANDLE_INVALID", "params": [ "VBD", "OpaqueRef:27874b37-4e3b-4d33-9a68-0d3dbaae7664" ], "task": { "uuid": "1823000a-df0a-970e-db2b-c12be53943fc", "name_label": "Async.VM.snapshot", "name_description": "", "allowed_operations": [], "current_operations": {}, "created": "20240710T13:00:29Z", "finished": "20240710T13:04:28Z", "status": "failure", "resident_on": "OpaqueRef:4706cbe1-12ab-45d9-9001-cbe6ec1270ce", "progress": 1, "type": "<none/>", "result": "", "error_info": [ "HANDLE_INVALID", "VBD", "OpaqueRef:27874b37-4e3b-4d33-9a68-0d3dbaae7664" ], "other_config": {}, "subtask_of": "OpaqueRef:NULL", "subtasks": [], "backtrace": "(((process xapi)(filename ocaml/xapi-client/client.ml)(line 7))((process xapi)(filename ocaml/xapi-client/client.ml)(line 19))((process xapi)(filename ocaml/xapi-client/client.ml)(line 6016))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 35))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 134))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/xapi/rbac.ml)(line 205))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 95)))" }, "message": "HANDLE_INVALID(VBD, OpaqueRef:27874b37-4e3b-4d33-9a68-0d3dbaae7664)", "name": "XapiError", "stack": "XapiError: HANDLE_INVALID(VBD, OpaqueRef:27874b37-4e3b-4d33-9a68-0d3dbaae7664)\n at XapiError.wrap (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/_XapiError.mjs:16:12)\n at default (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/_getTaskResult.mjs:13:29)\n at Xapi._addRecordToCache (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1033:24)\n at file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1067:14\n at Array.forEach (<anonymous>)\n at Xapi._processEvents (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1057:12)\n at Xapi._watchEvents (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/index.mjs:1230:14)\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)" } }, { "id": "1720616668336", "message": "clean-vm", "start": 1720616668336, "status": "success", "end": 1720616674074, "result": { "merge": false } 
}, { "id": "1720616674189", "message": "clean-vm", "start": 1720616674189, "status": "success", "end": 1720616679904, "result": { "merge": false } }, { "id": "1720616681937", "message": "snapshot", "start": 1720616681937, "status": "success", "end": 1720616891256, "result": "4f5c9b99-d96e-4e1f-5f2e-0b9e7fa28952" }, { "data": { "id": "601f8729-7602-4d6f-a018-d4cc525ca371", "isFull": false, "type": "remote" }, "id": "1720616891257", "message": "export", "start": 1720616891257, "status": "success", "tasks": [ { "id": "1720617038363", "message": "clean-vm", "start": 1720617038363, "status": "success", "end": 1720617044510, "result": { "merge": true } } ], "end": 1720617044698 }, { "id": "1720617044736", "message": "clean-vm", "start": 1720617044736, "status": "failure", "tasks": [ { "id": "1720617048047", "message": "merge", "start": 1720617048047, "status": "failure", "end": 1720617656522, "result": { "errno": -2, "code": "ENOENT", "syscall": "open", "path": "/run/xo-server/mounts/601f8729-7602-4d6f-a018-d4cc525ca371/xo-vm-backups/be0a9812-fd14-be75-e2fa-40c31ce8875c/vdis/30159f11-3b54-48d1-ab8b-d32858991349/1f3803c8-0335-4470-8c26-297f98af442c/20240703T070203Z.vhd", "message": "ENOENT: no such file or directory, open '/run/xo-server/mounts/601f8729-7602-4d6f-a018-d4cc525ca371/xo-vm-backups/be0a9812-fd14-be75-e2fa-40c31ce8875c/vdis/30159f11-3b54-48d1-ab8b-d32858991349/1f3803c8-0335-4470-8c26-297f98af442c/20240703T070203Z.vhd'", "name": "Error", "stack": "Error: ENOENT: no such file or directory, open '/run/xo-server/mounts/601f8729-7602-4d6f-a018-d4cc525ca371/xo-vm-backups/be0a9812-fd14-be75-e2fa-40c31ce8875c/vdis/30159f11-3b54-48d1-ab8b-d32858991349/1f3803c8-0335-4470-8c26-297f98af442c/20240703T070203Z.vhd'\nFrom:\n at NfsHandler.addSyncStackTrace (/usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/fs/dist/local.js:21:26)\n at NfsHandler._openFile (/usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/fs/dist/local.js:154:35)\n at NfsHandler.__openFile (/usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/fs/dist/abstract.js:448:51)\n at NfsHandler.openFile (/usr/local/lib/node_modules/xo-server/node_modules/limit-concurrency-decorator/index.js:97:24)\n at VhdFile.open (/usr/local/lib/node_modules/xo-server/node_modules/vhd-lib/Vhd/VhdFile.js:86:30)\n at openVhd (/usr/local/lib/node_modules/xo-server/node_modules/vhd-lib/openVhd.js:15:28)\n at async #openVhds (/usr/local/lib/node_modules/xo-server/node_modules/vhd-lib/merge.js:118:23)\n at async Disposable.<anonymous> (/usr/local/lib/node_modules/xo-server/node_modules/vhd-lib/merge.js:164:39)" } } ], "end": 1720617656522, "result": { "errno": -2, "code": "ENOENT", "syscall": "open", "path": "/run/xo-server/mounts/601f8729-7602-4d6f-a018-d4cc525ca371/xo-vm-backups/be0a9812-fd14-be75-e2fa-40c31ce8875c/vdis/30159f11-3b54-48d1-ab8b-d32858991349/1f3803c8-0335-4470-8c26-297f98af442c/20240703T070203Z.vhd", "message": "ENOENT: no such file or directory, open '/run/xo-server/mounts/601f8729-7602-4d6f-a018-d4cc525ca371/xo-vm-backups/be0a9812-fd14-be75-e2fa-40c31ce8875c/vdis/30159f11-3b54-48d1-ab8b-d32858991349/1f3803c8-0335-4470-8c26-297f98af442c/20240703T070203Z.vhd'", "name": "Error", "stack": "Error: ENOENT: no such file or directory, open '/run/xo-server/mounts/601f8729-7602-4d6f-a018-d4cc525ca371/xo-vm-backups/be0a9812-fd14-be75-e2fa-40c31ce8875c/vdis/30159f11-3b54-48d1-ab8b-d32858991349/1f3803c8-0335-4470-8c26-297f98af442c/20240703T070203Z.vhd'\nFrom:\n at NfsHandler.addSyncStackTrace 
(/usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/fs/dist/local.js:21:26)\n at NfsHandler._openFile (/usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/fs/dist/local.js:154:35)\n at NfsHandler.__openFile (/usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/fs/dist/abstract.js:448:51)\n at NfsHandler.openFile (/usr/local/lib/node_modules/xo-server/node_modules/limit-concurrency-decorator/index.js:97:24)\n at VhdFile.open (/usr/local/lib/node_modules/xo-server/node_modules/vhd-lib/Vhd/VhdFile.js:86:30)\n at openVhd (/usr/local/lib/node_modules/xo-server/node_modules/vhd-lib/openVhd.js:15:28)\n at async #openVhds (/usr/local/lib/node_modules/xo-server/node_modules/vhd-lib/merge.js:118:23)\n at async Disposable.<anonymous> (/usr/local/lib/node_modules/xo-server/node_modules/vhd-lib/merge.js:164:39)" } }, { "id": "1720617656648", "message": "snapshot", "start": 1720617656648, "status": "success", "end": 1720618073493, "result": "b5ae2b32-ef3c-aa99-4e1f-07d42835746a" }, { "data": { "id": "601f8729-7602-4d6f-a018-d4cc525ca371", "isFull": true, "type": "remote" }, "id": "1720618073494", "message": "export", "start": 1720618073494, "status": "success", "tasks": [ { "id": "1720618090517", "message": "transfer", "start": 1720618090517, "status": "success", "end": 1720680654839, "result": { "size": 2496704791552 } }, { "id": "1720680657579", "message": "clean-vm", "start": 1720680657579, "status": "success", "warnings": [ { "data": { "mergeStatePath": "/xo-vm-backups/be0a9812-fd14-be75-e2fa-40c31ce8875c/vdis/30159f11-3b54-48d1-ab8b-d32858991349/9becf507-abfc-47b4-9091-803ef2a1b47c/.20240703T070203Z.vhd.merge.json", "missingVhdPath": "/xo-vm-backups/be0a9812-fd14-be75-e2fa-40c31ce8875c/vdis/30159f11-3b54-48d1-ab8b-d32858991349/9becf507-abfc-47b4-9091-803ef2a1b47c/20240703T070203Z.vhd" }, "message": "orphan merge state" }, { "data": { "path": "/xo-vm-backups/be0a9812-fd14-be75-e2fa-40c31ce8875c/20240703T090201Z.json", "actual": 83366506496, "expected": 360746910208 }, "message": "cleanVm: incorrect backup size in metadata" } ], "end": 1720680685535, "result": { "merge": false } } ], "end": 1720680685538 } ], "warnings": [ { "data": { "attempt": 1, "error": "HANDLE_INVALID(VBD, OpaqueRef:27874b37-4e3b-4d33-9a68-0d3dbaae7664)" }, "message": "Retry the VM backup due to an error" }, { "data": { "error": { "code": "VDI_IN_USE", "params": [ "OpaqueRef:7462ea3f-8b99-444e-9007-07529868daf2", "data_destroy" ], "call": { "method": "VDI.data_destroy", "params": [ "OpaqueRef:7462ea3f-8b99-444e-9007-07529868daf2" ] } }, "vdiRef": "OpaqueRef:7462ea3f-8b99-444e-9007-07529868daf2" }, "message": "Couldn't deleted snapshot data" }, { "data": { "error": { "code": "VDI_IN_USE", "params": [ "OpaqueRef:1dfc4766-a3b1-4540-ba6a-8c0eab4dbaca", "data_destroy" ], "call": { "method": "VDI.data_destroy", "params": [ "OpaqueRef:1dfc4766-a3b1-4540-ba6a-8c0eab4dbaca" ] } }, "vdiRef": "OpaqueRef:1dfc4766-a3b1-4540-ba6a-8c0eab4dbaca" }, "message": "Couldn't deleted snapshot data" }, { "data": { "attempt": 2, "error": "stream has ended with not enough data (actual: 446, expected: 512)" }, "message": "Retry the VM backup due to an error" } ], "infos": [ { "message": "will delete snapshot data" }, { "data": { "vdiRef": "OpaqueRef:18043fd4-1a85-495a-b011-18ce047e46de" }, "message": "Snapshot data has been deleted" }, { "data": { "vdiRef": "OpaqueRef:e4f89ea5-626d-4131-a9ce-ed330d3b2aec" }, "message": "Snapshot data has been deleted" }, { "data": { "vdiRef": 
"OpaqueRef:1b61a681-598b-4d6e-92c3-05c384fa0164" }, "message": "Snapshot data has been deleted" }, { "data": { "vdiRef": "OpaqueRef:ec381ed4-9d2d-407a-b56b-203c8029fcee" }, "message": "Snapshot data has been deleted" }, { "data": { "vdiRef": "OpaqueRef:abc11faa-73b0-46c3-b3dc-ffe8752671a7" }, "message": "Snapshot data has been deleted" }, { "data": { "vdiRef": "OpaqueRef:a387b947-47bf-4339-92c8-0d749803115f" }, "message": "Snapshot data has been deleted" }, { "data": { "vdiRef": "OpaqueRef:2f054a5c-f1f6-4b1d-9d54-f2e8d74b5757" }, "message": "Snapshot data has been deleted" }, { "message": "Transfer data using NBD" } ], "end": 1720680685540, "result": { "generatedMessage": false, "code": "ERR_ASSERTION", "actual": true, "expected": false, "operator": "strictEqual", "message": "Disk is still attached to DOM0 VM", "name": "AssertionError", "stack": "AssertionError [ERR_ASSERTION]: Disk is still attached to DOM0 VM\n at Array.<anonymous> (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/backups/_runners/_vmRunners/_AbstractXapi.mjs:244:20)\n at Function.from (<anonymous>)\n at asyncMap (/usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/async-map/index.js:21:28)\n at Array.<anonymous> (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/backups/_runners/_vmRunners/_AbstractXapi.mjs:233:13)\n at Function.from (<anonymous>)\n at asyncMap (/usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/async-map/index.js:21:28)\n at IncrementalXapiVmBackupRunner._removeUnusedSnapshots (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/backups/_runners/_vmRunners/_AbstractXapi.mjs:219:11)\n at IncrementalXapiVmBackupRunner.run (file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/backups/_runners/_vmRunners/_AbstractXapi.mjs:375:18)\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n at async file:///usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/backups/_runners/VmsXapi.mjs:166:38" } } ], "end": 1720680685540 }