VHD Check Error
-
Below are logs from the most recent backups in my home lab. The previous job before these completed with no errors.
{
"data": { "type": "VM", "id": "fb72a8d7-a039-849f-b547-24fc56f056ba", "name_label": "Work PC" }, "id": "1772820551246", "message": "backup VM", "start": 1772820551246, "status": "success", "tasks": [ { "id": "1772820551378", "message": "clean-vm", "start": 1772820551378, "status": "success", "warnings": [ { "data": { "path": "/xo-vm-backups/fb72a8d7-a039-849f-b547-24fc56f056ba/vdis/ec47112e-6518-4c2c-a59d-e6700d6f4923/468c876b-0054-41e2-840b-631d3c665f5d/20260305T180818Z.vhd", "error": { "parent": "/xo-vm-backups/fb72a8d7-a039-849f-b547-24fc56f056ba/vdis/ec47112e-6518-4c2c-a59d-e6700d6f4923/468c876b-0054-41e2-840b-631d3c665f5d/20260304T180919Z.vhd", "child1": "/xo-vm-backups/fb72a8d7-a039-849f-b547-24fc56f056ba/vdis/ec47112e-6518-4c2c-a59d-e6700d6f4923/468c876b-0054-41e2-840b-631d3c665f5d/20260305T050936Z.vhd", "child2": "/xo-vm-backups/fb72a8d7-a039-849f-b547-24fc56f056ba/vdis/ec47112e-6518-4c2c-a59d-e6700d6f4923/468c876b-0054-41e2-840b-631d3c665f5d/20260305T180818Z.vhd" } }, "message": "VHD check error" } ], "end": 1772820551595, "result": { "merge": false } }, { "id": "1772820552011", "message": "snapshot", "start": 1772820552011, "status": "success", "end": 1772820553464, "result": "1028f3b3-d45b-326c-0c0d-96a452d9ef1c" }, { "id": "1772820943456", "message": "health check", "start": 1772820943456, "status": "success", "infos": [ { "message": "This VM doesn't match the health check's tags for this schedule" } ], "end": 1772820943457 }, { "data": { "id": "a5e54e04-d7e4-48cb-bafc-b2f306d39679", "isFull": false, "type": "remote" }, "id": "1772820553464:0", "message": "export", "start": 1772820553464, "status": "success", "tasks": [ { "id": "1772820556813", "message": "transfer", "start": 1772820556813, "status": "success", "end": 1772820942395, "result": { "size": 30651973632 } }, { "id": "1772820943468", "message": "clean-vm", "start": 1772820943468, "status": "success", "warnings": [ { "data": { "path": 
"/xo-vm-backups/fb72a8d7-a039-849f-b547-24fc56f056ba/vdis/ec47112e-6518-4c2c-a59d-e6700d6f4923/468c876b-0054-41e2-840b-631d3c665f5d/20260305T050936Z.vhd", "error": { "parent": "/xo-vm-backups/fb72a8d7-a039-849f-b547-24fc56f056ba/vdis/ec47112e-6518-4c2c-a59d-e6700d6f4923/468c876b-0054-41e2-840b-631d3c665f5d/20260304T180919Z.vhd", "child1": "/xo-vm-backups/fb72a8d7-a039-849f-b547-24fc56f056ba/vdis/ec47112e-6518-4c2c-a59d-e6700d6f4923/468c876b-0054-41e2-840b-631d3c665f5d/20260305T180818Z.vhd", "child2": "/xo-vm-backups/fb72a8d7-a039-849f-b547-24fc56f056ba/vdis/ec47112e-6518-4c2c-a59d-e6700d6f4923/468c876b-0054-41e2-840b-631d3c665f5d/20260305T050936Z.vhd" } }, "message": "VHD check error" }, { "data": { "backup": "/xo-vm-backups/fb72a8d7-a039-849f-b547-24fc56f056ba/20260305T050936Z.json", "missingVhds": [ "/xo-vm-backups/fb72a8d7-a039-849f-b547-24fc56f056ba/vdis/ec47112e-6518-4c2c-a59d-e6700d6f4923/468c876b-0054-41e2-840b-631d3c665f5d/20260305T050936Z.vhd" ] }, "message": "some VHDs linked to the backup are missing" } ], "end": 1772820944285, "result": { "merge": true } } ], "end": 1772820944299 } ], "end": 1772820944299 }
Previous backup job, marked as Skipped...
{ "data": { "type": "VM", "id": "fb72a8d7-a039-849f-b547-24fc56f056ba", "name_label": "Work PC" }, "id": "1772773302800", "message": "backup VM", "start": 1772773302800, "status": "skipped", "tasks": [ { "id": "1772773302805", "message": "clean-vm", "start": 1772773302805, "status": "success", "warnings": [ { "data": { "path": "/xo-vm-backups/fb72a8d7-a039-849f-b547-24fc56f056ba/vdis/ec47112e-6518-4c2c-a59d-e6700d6f4923/468c876b-0054-41e2-840b-631d3c665f5d/20260305T180818Z.vhd", "error": { "parent": "/xo-vm-backups/fb72a8d7-a039-849f-b547-24fc56f056ba/vdis/ec47112e-6518-4c2c-a59d-e6700d6f4923/468c876b-0054-41e2-840b-631d3c665f5d/20260304T180919Z.vhd", "child1": "/xo-vm-backups/fb72a8d7-a039-849f-b547-24fc56f056ba/vdis/ec47112e-6518-4c2c-a59d-e6700d6f4923/468c876b-0054-41e2-840b-631d3c665f5d/20260305T050936Z.vhd", "child2": "/xo-vm-backups/fb72a8d7-a039-849f-b547-24fc56f056ba/vdis/ec47112e-6518-4c2c-a59d-e6700d6f4923/468c876b-0054-41e2-840b-631d3c665f5d/20260305T180818Z.vhd" } }, "message": "VHD check error" } ], "end": 1772773303762, "result": { "merge": false } }, { "id": "1772773305019", "message": "snapshot", "start": 1772773305019, "status": "skipped", "end": 1772773305121, "result": { "message": "unhealthy VDI chain", "name": "Error", "stack": "Error: unhealthy VDI chain\n at Xapi._assertHealthyVdiChain (file:///opt/xen-orchestra/@xen-orchestra/xapi/vm.mjs:150:15)\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n at async Xapi.assertHealthyVdiChains (file:///opt/xen-orchestra/@xen-orchestra/xapi/vm.mjs:214:7)\n at async file:///opt/xen-orchestra/@xen-orchestra/backups/_runners/_vmRunners/_AbstractXapi.mjs:184:11" } }, { "id": "1772773305238", "message": "clean-vm", "start": 1772773305238, "status": "success", "warnings": [ { "data": { "path": "/xo-vm-backups/fb72a8d7-a039-849f-b547-24fc56f056ba/vdis/ec47112e-6518-4c2c-a59d-e6700d6f4923/468c876b-0054-41e2-840b-631d3c665f5d/20260305T180818Z.vhd", "error": { "parent": 
"/xo-vm-backups/fb72a8d7-a039-849f-b547-24fc56f056ba/vdis/ec47112e-6518-4c2c-a59d-e6700d6f4923/468c876b-0054-41e2-840b-631d3c665f5d/20260304T180919Z.vhd", "child1": "/xo-vm-backups/fb72a8d7-a039-849f-b547-24fc56f056ba/vdis/ec47112e-6518-4c2c-a59d-e6700d6f4923/468c876b-0054-41e2-840b-631d3c665f5d/20260305T050936Z.vhd", "child2": "/xo-vm-backups/fb72a8d7-a039-849f-b547-24fc56f056ba/vdis/ec47112e-6518-4c2c-a59d-e6700d6f4923/468c876b-0054-41e2-840b-631d3c665f5d/20260305T180818Z.vhd" } }, "message": "VHD check error" } ], "end": 1772773307297, "result": { "merge": false } } ], "end": 1772773307301, "result": { "message": "unhealthy VDI chain", "name": "Error", "stack": "Error: unhealthy VDI chain\n at Xapi._assertHealthyVdiChain (file:///opt/xen-orchestra/@xen-orchestra/xapi/vm.mjs:150:15)\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n at async Xapi.assertHealthyVdiChains (file:///opt/xen-orchestra/@xen-orchestra/xapi/vm.mjs:214:7)\n at async file:///opt/xen-orchestra/@xen-orchestra/backups/_runners/_vmRunners/_AbstractXapi.mjs:184:11" } }
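These log dumps are long, so here is a minimal sketch for pulling out just the warnings. It assumes the structure visible in the dumps above (tasks can nest, and each task may carry a "warnings" list); the sample log below is a shortened, hypothetical stand-in for the real files.

```python
import json  # the real logs can be loaded with json.load() from the exported files

def collect_warnings(node, found=None):
    """Recursively gather (message, path) pairs from a backup-log task tree."""
    if found is None:
        found = []
    for w in node.get("warnings", []):
        found.append((w.get("message"), w.get("data", {}).get("path")))
    for task in node.get("tasks", []):
        collect_warnings(task, found)
    return found

# Tiny sample mirroring the shape of the logs above (paths shortened).
log = {
    "message": "backup VM",
    "tasks": [
        {
            "message": "clean-vm",
            "warnings": [
                {
                    "data": {"path": ".../20260305T180818Z.vhd"},
                    "message": "VHD check error",
                }
            ],
        }
    ],
}

for message, path in collect_warnings(log):
    print(f"{message}: {path}")
```

Run against the real exported log files, this lists every flagged VHD in one place instead of having to scan the raw JSON.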
-
I looked into the backup job and realized I had forgotten to enable a few settings when I recreated it.
After re-enabling those settings, I reran the backup job. The VM in question did a full backup, and everything passed.

{ "data": { "type": "VM", "id": "fb72a8d7-a039-849f-b547-24fc56f056ba", "name_label": "Work PC" }, "id": "1772833993844", "message": "backup VM", "start": 1772833993844, "status": "success", "tasks": [ { "id": "1772833993868", "message": "clean-vm", "start": 1772833993868, "status": "success", "end": 1772833994233, "result": { "merge": false } }, { "id": "1772833994652", "message": "snapshot", "start": 1772833994652, "status": "success", "end": 1772833997235, "result": "16f5fe19-207a-4d89-017c-3f9405d22231" }, { "id": "1772834490355:0", "message": "health check", "start": 1772834490355, "status": "success", "infos": [ { "message": "This VM doesn't match the health check's tags for this schedule" } ], "end": 1772834490356 }, { "data": { "id": "a5e54e04-d7e4-48cb-bafc-b2f306d39679", "isFull": true, "type": "remote" }, "id": "1772833997235:0", "message": "export", "start": 1772833997235, "status": "success", "tasks": [ { "id": "1772834004185", "message": "transfer", "start": 1772834004185, "status": "success", "end": 1772834489032, "result": { "size": 119502012416 } }, { "id": "1772834490368", "message": "clean-vm", "start": 1772834490368, "status": "success", "end": 1772834490475, "result": { "merge": false } } ], "end": 1772834490476 } ], "infos": [ { "message": "will delete snapshot data" }, { "data": { "vdiRef": "OpaqueRef:31692cb3-7c43-de83-2cc8-f2e39a0105c8" }, "message": "Snapshot data has been deleted" } ], "end": 1772834490476 } -
Not sure if it's related to this, but now all my VMs are falling back to full backups.
Attached are the log from the first backup where all VMs fell back to full and the most recent backup log. All backups in between were full backups.
2026-03-07T05_00_00.117Z - backup NG.txt
2026-03-10T04_00_03.845Z - backup NG.txt
Backup log from before the VMs fell back to full backups.
2026-03-06T21_42_15.233Z - backup NG.txt
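To spot where the jobs switched from delta to full in the attached logs, each remote export task carries an "isFull" flag in its "data" field. A minimal sketch, assuming the same log shape as the dumps above (the sample below is a shortened stand-in):

```python
def export_modes(vm_log):
    """Return (vm_name, isFull) pairs for the remote export tasks of one VM."""
    name = vm_log.get("data", {}).get("name_label", "?")
    modes = []
    for task in vm_log.get("tasks", []):
        data = task.get("data", {})
        if task.get("message") == "export" and data.get("type") == "remote":
            modes.append((name, data.get("isFull")))
    return modes

# Tiny sample mirroring the successful run above.
sample = {
    "data": {"type": "VM", "name_label": "Work PC"},
    "message": "backup VM",
    "tasks": [
        {"message": "export", "data": {"type": "remote", "isFull": True}},
    ],
}

print(export_modes(sample))
```

Running this over each "backup VM" entry in the three attached files should show exactly when isFull flipped from false to true.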