-
Hello All,
Backups have been running happily for months, and now they are failing with NO_HOSTS_AVAILABLE. I'm not sure how to diagnose this.
Can someone point me towards the logs, the troubleshooting steps, and what this error actually means (it seems to be a generic message)? Please bear in mind that I'm a newbie with open-source Linux-flavoured OSes, so be gentle with your assumptions.
For info, I'm running a Dell R740XD with a TrueNAS SCALE Z2 configuration. I have XCP-ng set up on another Dell server, an R7515, with a standard licence (so no delta backups). The backups are sent to the TrueNAS box, about 1 TB three times a week. There is a weekly snapshot on the TrueNAS with a retention of 2 weeks, and I have auto-delete turned on in the backup settings in XO.
Assuming I've done everything correctly, which should be the case as it has been running fine for months, I can't see what the issue is.
I'm one version behind on XO and up to date on TrueNAS.
Regards
Ryan
-
Hi,
I'm not sure this is the right error message. Can you provide more details? Also, you need to be fully up to date on XO before reporting issues.
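For gathering those details, the xo-server log is usually the place to start. A minimal sketch, assuming a from-sources install managed by systemd (the service name and log location may differ on your setup):

```shell
# Hypothetical sketch: tail the xo-server service log, assuming a
# systemd-managed, from-sources install (the service name can differ).
# Falls back to a note where journalctl is unavailable or unreadable.
log=$(journalctl -u xo-server.service -n 100 --no-pager 2>/dev/null \
      || echo "journalctl unavailable: check your install's log location")
echo "$log"
```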
-
Up to date.
{ "data": { "mode": "full", "reportWhen": "always" }, "id": "1694769218021", "jobId": "feeeefb8-0463-483a-a385-adc3a81be5e7", "jobName": "PS-XCP-01 - Full VM Backup", "message": "backup", "scheduleId": "032cb383-47dd-4dc4-b0b0-3f091f662656", "start": 1694769218021, "status": "failure", "infos": [ { "data": { "vms": [ "89aee0c4-f8f1-6961-1e85-aa76f7644220" ] }, "message": "vms" } ], "tasks": [ { "data": { "type": "VM", "id": "89aee0c4-f8f1-6961-1e85-aa76f7644220", "name_label": "HBW16EXC10" }, "id": "1694769219646", "message": "backup VM", "start": 1694769219646, "status": "failure", "tasks": [ { "id": "1694769220046", "message": "snapshot", "start": 1694769220046, "status": "failure", "end": 1694769220397, "result": { "code": "NO_HOSTS_AVAILABLE", "params": [], "task": { "uuid": "15664390-4605-717d-d394-4f4ca8c0005b", "name_label": "Async.VM.snapshot", "name_description": "", "allowed_operations": [], "current_operations": {}, "created": "20230915T09:13:40Z", "finished": "20230915T09:13:40Z", "status": "failure", "resident_on": "OpaqueRef:b71238e8-bce1-4a59-b9be-870e2de57558", "progress": 1, "type": "<none/>", "result": "", "error_info": [ "NO_HOSTS_AVAILABLE" ], "other_config": {}, "subtask_of": "OpaqueRef:NULL", "subtasks": [], "backtrace": "(((process xapi)(filename ocaml/xapi/xapi_vm_helpers.ml)(line 788))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 1240))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/xapi/rbac.ml)(line 231))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 103)))" }, "message": "NO_HOSTS_AVAILABLE()", "name": "XapiError", "stack": "XapiError: NO_HOSTS_AVAILABLE()\n at XapiError.wrap (/usr/local/lib/node_modules/xo-server/node_modules/xen-api/dist/_XapiError.js:21:12)\n at _default (/usr/local/lib/node_modules/xo-server/node_modules/xen-api/dist/_getTaskResult.js:18:38)\n at Xapi._addRecordToCache 
(/usr/local/lib/node_modules/xo-server/node_modules/xen-api/dist/index.js:752:51)\n at /usr/local/lib/node_modules/xo-server/node_modules/xen-api/dist/index.js:785:14\n at Array.forEach (<anonymous>)\n at Xapi._processEvents (/usr/local/lib/node_modules/xo-server/node_modules/xen-api/dist/index.js:773:12)\n at Xapi._watchEvents (/usr/local/lib/node_modules/xo-server/node_modules/xen-api/dist/index.js:908:14)\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)" } }, { "id": "1694769220412", "message": "clean-vm", "start": 1694769220412, "status": "success", "end": 1694769220417, "result": { "merge": false } } ], "end": 1694769220422, "result": { "code": "NO_HOSTS_AVAILABLE", "params": [], "task": { "uuid": "15664390-4605-717d-d394-4f4ca8c0005b", "name_label": "Async.VM.snapshot", "name_description": "", "allowed_operations": [], "current_operations": {}, "created": "20230915T09:13:40Z", "finished": "20230915T09:13:40Z", "status": "failure", "resident_on": "OpaqueRef:b71238e8-bce1-4a59-b9be-870e2de57558", "progress": 1, "type": "<none/>", "result": "", "error_info": [ "NO_HOSTS_AVAILABLE" ], "other_config": {}, "subtask_of": "OpaqueRef:NULL", "subtasks": [], "backtrace": "(((process xapi)(filename ocaml/xapi/xapi_vm_helpers.ml)(line 788))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 1240))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/xapi/rbac.ml)(line 231))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 103)))" }, "message": "NO_HOSTS_AVAILABLE()", "name": "XapiError", "stack": "XapiError: NO_HOSTS_AVAILABLE()\n at XapiError.wrap (/usr/local/lib/node_modules/xo-server/node_modules/xen-api/dist/_XapiError.js:21:12)\n at _default (/usr/local/lib/node_modules/xo-server/node_modules/xen-api/dist/_getTaskResult.js:18:38)\n at Xapi._addRecordToCache (/usr/local/lib/node_modules/xo-server/node_modules/xen-api/dist/index.js:752:51)\n at 
/usr/local/lib/node_modules/xo-server/node_modules/xen-api/dist/index.js:785:14\n at Array.forEach (<anonymous>)\n at Xapi._processEvents (/usr/local/lib/node_modules/xo-server/node_modules/xen-api/dist/index.js:773:12)\n at Xapi._watchEvents (/usr/local/lib/node_modules/xo-server/node_modules/xen-api/dist/index.js:908:14)\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)" } } ], "end": 1694769220423 }
It seems that some of the VM's have now finished backing up, after forcing them. One of them keeps failing, hence the above error message.
-
Up to date? What's your commit number?
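For reference, the commit can be read straight from the checkout. A minimal sketch, assuming XO was built from sources into `~/xen-orchestra` (adjust the path for your install):

```shell
# Hypothetical sketch: print the running checkout's commit, assuming a
# from-sources XO install in ~/xen-orchestra (adjust for your setup).
commit=$(cd ~/xen-orchestra 2>/dev/null && git rev-parse --short HEAD \
         || echo "unknown: adjust the path to your xen-orchestra checkout")
echo "$commit"
```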
Also, the issue is not caused by Xen Orchestra itself: it happens when XO asks XAPI for a snapshot of your VM (Async.VM.snapshot). This VM likely has a disk that is not usable because it is connected to another host which is shut down or missing.
-