@Tristis-Oris
Thanks, still a problem. I restarted the toolstack and got it "unstuck", but the system still wouldn't migrate a VM from one host to another. So I shut down all the VMs, then shut down the hosts. After power-up the VMs took a long time to go from yellow to green while "starting", but eventually everything came up, mostly. The Server 2025 VM had now lost the management agent (again), so I got tired of fooling with it and deleted the VM.
Then I built a new backup job for a Server 2022 VM that seemed to be working fine and ran it. It got to about 60% and failed. I restarted the job and it got stuck at 5%, showing a VBD_force_unplug that would sit at zero, then go away (error out), then come back. I knew from a previous "successful" backup that this meant it was going to fail and would hit my 3-hour time limit (I didn't want to wait 24 hours this time).
Overall, something is broken, and I don't know if it was the warm migration that got me here or something else. The NAS seems to be working fine: files get created, the lock file gets created, but this last time no image file was ever saved.
I think I'm going to burn it down and start fresh, right down to the OS install and see what's what.

{
  "data": {
    "mode": "delta",
    "reportWhen": "failure"
  },
  "id": "1744897167241",
  "jobId": "e58a6a56-921a-41dd-b195-165b09023c8e",
  "jobName": "Win2022test",
  "message": "backup",
  "scheduleId": "d003e148-c436-42d5-8a8e-25dbd00154ce",
  "start": 1744897167241,
  "status": "failure",
  "infos": [
    {
      "data": {
        "vms": [
          "bcc13ee0-eefa-01c1-07ce-91af06db25d8"
        ]
      },
      "message": "vms"
    }
  ],
  "tasks": [
    {
      "data": {
        "type": "VM",
        "id": "bcc13ee0-eefa-01c1-07ce-91af06db25d8",
        "name_label": "LAB-2022 Test Mule - Warm migration - (20250306T163203Z)"
      },
      "id": "1744897169111",
      "message": "backup VM",
      "start": 1744897169111,
      "status": "failure",
      "tasks": [
        {
          "id": "1744897169132",
          "message": "clean-vm",
          "start": 1744897169132,
          "status": "success",
          "end": 1744897169142,
          "result": {
            "merge": false
          }
        },
        {
          "id": "1744897169334",
          "message": "snapshot",
          "start": 1744897169334,
          "status": "success",
          "end": 1744897174214,
          "result": "1a8a2be6-f358-46a8-edaa-e1b8beeca77d"
        },
        {
          "data": {
            "id": "b6edd4f2-4966-4237-bfb5-f3c1c7c2b66e",
            "isFull": true,
            "type": "remote"
          },
          "id": "1744897174217",
          "message": "export",
          "start": 1744897174217,
          "status": "failure",
          "tasks": [
            {
              "id": "1744897175621",
              "message": "transfer",
              "start": 1744897175621,
              "status": "failure",
              "end": 1744898432582,
              "result": {
                "message": "stream has ended without data",
                "name": "Error",
                "stack": "Error: stream has ended without data\n at readChunkStrict (/opt/xo/xo-builds/xen-orchestra-202504160916/@vates/read-chunk/index.js:80:11)\n at process.processTicksAndRejections (node:internal/process/task_queues:105:5)"
              }
            }
          ],
          "end": 1744898432583,
          "result": {
            "message": "stream has ended without data",
            "name": "Error",
            "stack": "Error: stream has ended without data\n at readChunkStrict (/opt/xo/xo-builds/xen-orchestra-202504160916/@vates/read-chunk/index.js:80:11)\n at process.processTicksAndRejections (node:internal/process/task_queues:105:5)"
          }
        }
      ],
      "infos": [
        {
          "message": "Transfer data using NBD"
        }
      ],
      "end": 1744898501425,
      "result": {
        "message": "stream has ended without data",
        "name": "Error",
        "stack": "Error: stream has ended without data\n at readChunkStrict (/opt/xo/xo-builds/xen-orchestra-202504160916/@vates/read-chunk/index.js:80:11)\n at process.processTicksAndRejections (node:internal/process/task_queues:105:5)"
      }
    }
  ],
  "end": 1744898501426
}
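
For what it's worth, the "stream has ended without data" error in that log comes out of readChunkStrict in @vates/read-chunk, which the stack trace shows XO using while transferring the export (over NBD, per the log). Below is only a rough sketch of what I assume that helper is doing, based on the trace and the error text; the names and behavior are my guesses, not the actual @vates/read-chunk source. It suggests the export stream is closing before any data arrives, and the strict read turns that into this error.

```js
// Rough sketch only -- my reading of the stack trace, not the actual
// @vates/read-chunk implementation; names and behavior are assumptions.
const { Readable } = require('node:stream');

// Stand-in for readChunk: hand back the first chunk from the stream,
// or null if the stream ends without ever producing data.
async function readChunk(stream) {
  for await (const chunk of stream) {
    return chunk;
  }
  return null;
}

// Stand-in for readChunkStrict: same read, but a null result becomes
// the exact error shown in the backup log above.
async function readChunkStrict(stream) {
  const chunk = await readChunk(stream);
  if (chunk === null) {
    throw new Error('stream has ended without data');
  }
  return chunk;
}

// Simulates the failure mode: an export stream that closes before sending anything.
readChunkStrict(Readable.from([]))
  .catch(err => console.error(err.message)); // "stream has ended without data"
```

If that reading is right, it would line up with what I see on the NAS side: the lock file gets created, but no image file is ever written.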