It's a start of something great, but it needs more functions! For example:
- VM controls (start/stop)
- VM Information
- VM Import/Export
- Add/remove storage
- SR Information
- Insert/remove DVD
- Toolstack restart
- Warnings (low memory/storage space)
@gawlo if you can reboot then you can try that, but if the process is started again after the reboot, it will still block your mountpoint.
And the mountpoint is the result of this command:
xe pbd-param-get param-name=device-config uuid=44e9e1b7-5a7e-8e95-c5f1-edeebbc6863c
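If you're not sure where that device-config ends up mounted, a rough cross-check (UUIDs are placeholders here):
xe pbd-list sr-uuid=<SR UUID> params=uuid,device-config
mount | grep sr-mount
The first shows the device-config backing the SR, the second shows where it is actually mounted on the host.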
@technot when you say performance was a bit on the low end when dom0 handled the drive, how low compared to when the controller is passed through?
For ZFS, it is still fine with a small number of disks, but the performance won't be higher than RAID10 until you have more vdevs (which means striping across multiple vdevs). You do need to pass the disks through to dom0, so you'll have to destroy the RAID as you mentioned, and the more RAM the better for ZFS: usually around 1 GB per 1 TB of storage for good caching performance.
One thing to note: you can't use ZFS for dom0 yet, so you still need another drive for XCP-ng itself.
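If you want to try it, the rough flow I'd expect (assuming the ZFS packages and the zfs SR driver are installed; the pool name and path are just examples) is:
zpool create -o ashift=12 tank mirror sdb sdc
xe sr-create host-uuid=<host UUID> type=zfs content-type=user name-label=ZFS-SR device-config:location=/tank/sr
Better to use /dev/disk/by-id paths than sdX names so the pool survives device renumbering.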
@gawlo to check the mountpoint as olivier suggested:
lsof +D /mountpoint
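If lsof isn't giving anything useful, fuser should show a similar view of what is holding the mount:
fuser -vm /mountpoint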
@gawlo have you tried restarting the host? Maybe check if the xapi service is running:
systemctl status xapi.service
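If xapi looks stuck rather than cleanly stopped, a toolstack restart is usually the next thing I'd try:
xe-toolstack-restart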
Machine type? Are you talking about QEMU stuff? If so, that is only for emulation. XCP-ng is virtualization, so you get the same hardware in your VM as the physical hardware on your host.
You can use the following command to display all the PCI devices currently being passed through, and then just set them up again without the one you want to remove.
/opt/xensource/libexec/xen-cmdline --get-dom0 xen-pciback.hide
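Then re-apply the list without the device you want removed, something along these lines (the PCI addresses are placeholders), and reboot the host afterwards:
/opt/xensource/libexec/xen-cmdline --set-dom0 "xen-pciback.hide=(0000:04:00.0)(0000:05:00.0)"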
The 2nd option is basically forcing the static IP through XCP-ng. If you have SSH access to your XOA VM, you can also use option 3 (hint: XOA is based on Debian 10).
The steps for option 2 are:
xe vif-list vm-uuid=<UUID of your XOA VM>
xe vif-configure-ipv4 uuid=<UUID of your XOA VM VIF> mode=static address=<IP address/Subnet mask> gateway=<Gateway address>
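With made-up values, the second command would look something like:
xe vif-configure-ipv4 uuid=<UUID of your XOA VM VIF> mode=static address=192.168.1.50/24 gateway=192.168.1.1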
There is only so much you can do through a web browser; I think your best bet is to click "SSH" or "SSH as..." to open up PuTTY instead.
While we're at it: most CPU icons have "legs" on 4 sides and RAM icons have "legs" on 2 sides; the current CPU icon looks like a RAM chip instead.
I forgot to add that there are other VMs on the same host using the same shared Ceph storage via NFS (same SR), and they are able to back up normally.
Hello,
I'm facing a weird issue: after warm-migrating this VM from an old storage (NFS) to a new Ceph storage (connected via NFS), I am having trouble backing up this particular VM.
{
"data": {
"mode": "delta",
"reportWhen": "failure"
},
"id": "1758981070521",
"jobId": "1642683d-1fa6-436d-8889-64338db97e40",
"jobName": "Daily VM Backup Local",
"message": "backup",
"scheduleId": "85899347-f880-42b8-9c44-82756354084c",
"start": 1758981070521,
"status": "failure",
"infos": [
{
"data": {
"vms": [
"0fcf121c-0849-299a-06b8-cabb0c482dc0"
]
},
"message": "vms"
}
],
"tasks": [
{
"data": {
"type": "VM",
"id": "0fcf121c-0849-299a-06b8-cabb0c482dc0",
"name_label": "SCMP MSSQL + hmailserver"
},
"id": "1758981072370",
"message": "backup VM",
"start": 1758981072370,
"status": "failure",
"tasks": [
{
"id": "1758981072482",
"message": "clean-vm",
"start": 1758981072482,
"status": "success",
"end": 1758981072486,
"result": {
"merge": false
}
},
{
"id": "1758981073026",
"message": "snapshot",
"start": 1758981073026,
"status": "failure",
"end": 1758981074934,
"result": {
"code": "SR_BACKEND_FAILURE_1200",
"params": [
"",
"[Errno 30] Read-only file system: '/var/run/sr-mount/27efe059-c8fd-fc75-e216-1bca3c4b3506/5fe9dbfa-685f-4de9-b099-ccb7ca823269.vhd' -> '/var/run/sr-mount/27efe059-c8fd-fc75-e216-1bca3c4b3506/add9095c-88fd-4226-b7c0-db0018b922ea.vhd'",
""
],
"task": {
"uuid": "fcbffbcb-1437-dce8-e270-42bfb3e63ccb",
"name_label": "Async.VM.snapshot",
"name_description": "",
"allowed_operations": [],
"current_operations": {},
"created": "20250927T13:51:13Z",
"finished": "20250927T13:51:14Z",
"status": "failure",
"resident_on": "OpaqueRef:746b6b26-7640-265b-0f14-fe8f7d857747",
"progress": 1,
"type": "<none/>",
"result": "",
"error_info": [
"SR_BACKEND_FAILURE_1200",
"",
"[Errno 30] Read-only file system: '/var/run/sr-mount/27efe059-c8fd-fc75-e216-1bca3c4b3506/5fe9dbfa-685f-4de9-b099-ccb7ca823269.vhd' -> '/var/run/sr-mount/27efe059-c8fd-fc75-e216-1bca3c4b3506/add9095c-88fd-4226-b7c0-db0018b922ea.vhd'",
""
],
"other_config": {},
"subtask_of": "OpaqueRef:NULL",
"subtasks": [
"OpaqueRef:36429912-8dac-e01a-7fb4-79237a132284"
],
"backtrace": "(((process xapi)(filename ocaml/xapi-client/client.ml)(line 7))((process xapi)(filename ocaml/xapi-client/client.ml)(line 19))((process xapi)(filename ocaml/xapi-client/client.ml)(line 5953))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 144))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 1730))((process xapi)(filename ocaml/xapi/rbac.ml)(line 188))((process xapi)(filename ocaml/xapi/rbac.ml)(line 197))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 77)))"
},
"message": "SR_BACKEND_FAILURE_1200(, [Errno 30] Read-only file system: '/var/run/sr-mount/27efe059-c8fd-fc75-e216-1bca3c4b3506/5fe9dbfa-685f-4de9-b099-ccb7ca823269.vhd' -> '/var/run/sr-mount/27efe059-c8fd-fc75-e216-1bca3c4b3506/add9095c-88fd-4226-b7c0-db0018b922ea.vhd', )",
"name": "XapiError",
"stack": "XapiError: SR_BACKEND_FAILURE_1200(, [Errno 30] Read-only file system: '/var/run/sr-mount/27efe059-c8fd-fc75-e216-1bca3c4b3506/5fe9dbfa-685f-4de9-b099-ccb7ca823269.vhd' -> '/var/run/sr-mount/27efe059-c8fd-fc75-e216-1bca3c4b3506/add9095c-88fd-4226-b7c0-db0018b922ea.vhd', )\n at XapiError.wrap (file:///root/xen-orchestra/packages/xen-api/_XapiError.mjs:16:12)\n at default (file:///root/xen-orchestra/packages/xen-api/_getTaskResult.mjs:13:29)\n at Xapi._addRecordToCache (file:///root/xen-orchestra/packages/xen-api/index.mjs:1073:24)\n at file:///root/xen-orchestra/packages/xen-api/index.mjs:1107:14\n at Array.forEach (<anonymous>)\n at Xapi._processEvents (file:///root/xen-orchestra/packages/xen-api/index.mjs:1097:12)\n at Xapi._watchEvents (file:///root/xen-orchestra/packages/xen-api/index.mjs:1270:14)\n at process.processTicksAndRejections (node:internal/process/task_queues:105:5)"
}
},
{
"id": "1758981075210",
"message": "clean-vm",
"start": 1758981075210,
"status": "success",
"end": 1758981075213,
"result": {
"merge": false
}
},
{
"id": "1758981075261",
"message": "clean-vm",
"start": 1758981075261,
"status": "success",
"end": 1758981075262,
"result": {
"merge": false
}
},
{
"id": "1758981075549",
"message": "snapshot",
"start": 1758981075549,
"status": "failure",
"end": 1758981077254,
"result": {
"code": "SR_BACKEND_FAILURE_1200",
"params": [
"",
"[Errno 30] Read-only file system: '/var/run/sr-mount/27efe059-c8fd-fc75-e216-1bca3c4b3506/5fe9dbfa-685f-4de9-b099-ccb7ca823269.vhd' -> '/var/run/sr-mount/27efe059-c8fd-fc75-e216-1bca3c4b3506/e613d004-dcff-4836-8e0b-5e80ecf4ac9c.vhd'",
""
],
"task": {
"uuid": "4158274c-ef9e-3cef-f399-a614c044b029",
"name_label": "Async.VM.snapshot",
"name_description": "",
"allowed_operations": [],
"current_operations": {},
"created": "20250927T13:51:15Z",
"finished": "20250927T13:51:17Z",
"status": "failure",
"resident_on": "OpaqueRef:78fa9361-95c3-4d0a-b29e-838284ff9cb0",
"progress": 1,
"type": "<none/>",
"result": "",
"error_info": [
"SR_BACKEND_FAILURE_1200",
"",
"[Errno 30] Read-only file system: '/var/run/sr-mount/27efe059-c8fd-fc75-e216-1bca3c4b3506/5fe9dbfa-685f-4de9-b099-ccb7ca823269.vhd' -> '/var/run/sr-mount/27efe059-c8fd-fc75-e216-1bca3c4b3506/e613d004-dcff-4836-8e0b-5e80ecf4ac9c.vhd'",
""
],
"other_config": {},
"subtask_of": "OpaqueRef:NULL",
"subtasks": [
"OpaqueRef:0fadad9b-b5d9-30ed-0f0c-4abcf23b3879"
],
"backtrace": "(((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 77))((process xapi)(filename list.ml)(line 110))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 120))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 128))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 171))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 209))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 220))((process xapi)(filename list.ml)(line 121))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 222))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 455))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/xapi_vm_snapshot.ml)(line 34))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 141))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 1730))((process xapi)(filename ocaml/xapi/rbac.ml)(line 188))((process xapi)(filename ocaml/xapi/rbac.ml)(line 197))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 77)))"
},
"message": "SR_BACKEND_FAILURE_1200(, [Errno 30] Read-only file system: '/var/run/sr-mount/27efe059-c8fd-fc75-e216-1bca3c4b3506/5fe9dbfa-685f-4de9-b099-ccb7ca823269.vhd' -> '/var/run/sr-mount/27efe059-c8fd-fc75-e216-1bca3c4b3506/e613d004-dcff-4836-8e0b-5e80ecf4ac9c.vhd', )",
"name": "XapiError",
"stack": "XapiError: SR_BACKEND_FAILURE_1200(, [Errno 30] Read-only file system: '/var/run/sr-mount/27efe059-c8fd-fc75-e216-1bca3c4b3506/5fe9dbfa-685f-4de9-b099-ccb7ca823269.vhd' -> '/var/run/sr-mount/27efe059-c8fd-fc75-e216-1bca3c4b3506/e613d004-dcff-4836-8e0b-5e80ecf4ac9c.vhd', )\n at XapiError.wrap (file:///root/xen-orchestra/packages/xen-api/_XapiError.mjs:16:12)\n at default (file:///root/xen-orchestra/packages/xen-api/_getTaskResult.mjs:13:29)\n at Xapi._addRecordToCache (file:///root/xen-orchestra/packages/xen-api/index.mjs:1073:24)\n at file:///root/xen-orchestra/packages/xen-api/index.mjs:1107:14\n at Array.forEach (<anonymous>)\n at Xapi._processEvents (file:///root/xen-orchestra/packages/xen-api/index.mjs:1097:12)\n at Xapi._watchEvents (file:///root/xen-orchestra/packages/xen-api/index.mjs:1270:14)\n at process.processTicksAndRejections (node:internal/process/task_queues:105:5)"
}
},
{
"id": "1758981077542",
"message": "clean-vm",
"start": 1758981077542,
"status": "success",
"end": 1758981077544,
"result": {
"merge": false
}
}
],
"warnings": [
{
"data": {
"attempt": 1,
"error": "SR_BACKEND_FAILURE_1200(, [Errno 30] Read-only file system: '/var/run/sr-mount/27efe059-c8fd-fc75-e216-1bca3c4b3506/5fe9dbfa-685f-4de9-b099-ccb7ca823269.vhd' -> '/var/run/sr-mount/27efe059-c8fd-fc75-e216-1bca3c4b3506/add9095c-88fd-4226-b7c0-db0018b922ea.vhd', )"
},
"message": "Retry the VM backup due to an error"
}
],
"end": 1758981077598,
"result": {
"code": "SR_BACKEND_FAILURE_1200",
"params": [
"",
"[Errno 30] Read-only file system: '/var/run/sr-mount/27efe059-c8fd-fc75-e216-1bca3c4b3506/5fe9dbfa-685f-4de9-b099-ccb7ca823269.vhd' -> '/var/run/sr-mount/27efe059-c8fd-fc75-e216-1bca3c4b3506/e613d004-dcff-4836-8e0b-5e80ecf4ac9c.vhd'",
""
],
"task": {
"uuid": "4158274c-ef9e-3cef-f399-a614c044b029",
"name_label": "Async.VM.snapshot",
"name_description": "",
"allowed_operations": [],
"current_operations": {},
"created": "20250927T13:51:15Z",
"finished": "20250927T13:51:17Z",
"status": "failure",
"resident_on": "OpaqueRef:78fa9361-95c3-4d0a-b29e-838284ff9cb0",
"progress": 1,
"type": "<none/>",
"result": "",
"error_info": [
"SR_BACKEND_FAILURE_1200",
"",
"[Errno 30] Read-only file system: '/var/run/sr-mount/27efe059-c8fd-fc75-e216-1bca3c4b3506/5fe9dbfa-685f-4de9-b099-ccb7ca823269.vhd' -> '/var/run/sr-mount/27efe059-c8fd-fc75-e216-1bca3c4b3506/e613d004-dcff-4836-8e0b-5e80ecf4ac9c.vhd'",
""
],
"other_config": {},
"subtask_of": "OpaqueRef:NULL",
"subtasks": [
"OpaqueRef:0fadad9b-b5d9-30ed-0f0c-4abcf23b3879"
],
"backtrace": "(((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 77))((process xapi)(filename list.ml)(line 110))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 120))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 128))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 171))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 209))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 220))((process xapi)(filename list.ml)(line 121))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 222))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 455))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/xapi_vm_snapshot.ml)(line 34))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 141))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 1730))((process xapi)(filename ocaml/xapi/rbac.ml)(line 188))((process xapi)(filename ocaml/xapi/rbac.ml)(line 197))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 77)))"
},
"message": "SR_BACKEND_FAILURE_1200(, [Errno 30] Read-only file system: '/var/run/sr-mount/27efe059-c8fd-fc75-e216-1bca3c4b3506/5fe9dbfa-685f-4de9-b099-ccb7ca823269.vhd' -> '/var/run/sr-mount/27efe059-c8fd-fc75-e216-1bca3c4b3506/e613d004-dcff-4836-8e0b-5e80ecf4ac9c.vhd', )",
"name": "XapiError",
"stack": "XapiError: SR_BACKEND_FAILURE_1200(, [Errno 30] Read-only file system: '/var/run/sr-mount/27efe059-c8fd-fc75-e216-1bca3c4b3506/5fe9dbfa-685f-4de9-b099-ccb7ca823269.vhd' -> '/var/run/sr-mount/27efe059-c8fd-fc75-e216-1bca3c4b3506/e613d004-dcff-4836-8e0b-5e80ecf4ac9c.vhd', )\n at XapiError.wrap (file:///root/xen-orchestra/packages/xen-api/_XapiError.mjs:16:12)\n at default (file:///root/xen-orchestra/packages/xen-api/_getTaskResult.mjs:13:29)\n at Xapi._addRecordToCache (file:///root/xen-orchestra/packages/xen-api/index.mjs:1073:24)\n at file:///root/xen-orchestra/packages/xen-api/index.mjs:1107:14\n at Array.forEach (<anonymous>)\n at Xapi._processEvents (file:///root/xen-orchestra/packages/xen-api/index.mjs:1097:12)\n at Xapi._watchEvents (file:///root/xen-orchestra/packages/xen-api/index.mjs:1270:14)\n at process.processTicksAndRejections (node:internal/process/task_queues:105:5)"
}
}
],
"end": 1758981077598
}
It seems like it failed during snapshotting, so I tried a manual snapshot; that also fails with the same read-only error. I tried stopping/starting the VM but the same issue still occurs. I checked with xe vdi-list and the read-only parameter is false. What do I have to check next?
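For reference, the read-only check I ran was roughly:
xe vdi-list uuid=<VDI UUID> params=read-only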
I have another weird one. The numeric representation is correct, but the bar graph is unintuitive.
@olivierlambert But in my case, I have nothing else in that NFS share except XCP-ng VHDs (a single SR folder with nothing outside it). I checked du -h on my NFS share and I get 572G, and df -h shows about the same usage, 572.1G. If I add up the total allocated disk space from all VMs using this NFS share, it is only 450G, so that means there is 100G+ of extra filesystem usage (from snapshots?).
I can now understand what the numbers represent (the actual physical disk usage, not VM disk usage), but the bar graph still doesn't show the same thing. So just to clarify: does the bar graph actually show VM disk usage and not physical disk usage?
@olivierlambert Thank you for replying. It is indeed what the NFS share reports as free, but why is there such a discrepancy between the NFS file size and the actual VM disk size? Does it have to do with VDI coalescing?
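One thing I still want to check is how much of that space is snapshot or base copies rather than active disks, something like:
xe vdi-list sr-uuid=<SR UUID> params=name-label,is-a-snapshot,virtual-size,physical-utilisation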
@peder The bar is correct, I think: after adding up all the allocated disks I get 450 GiB, but the 715.95 GiB used figure is incorrect. I can't figure out where this number comes from; maybe it also counts backups? But I don't store backups on this NFS share.
Hello, does anyone else have this problem, or is it just my configuration?
The number says 200 GiB free, which should be around 20% free, but the graph clearly shows more than 50% free.
Ok, figured it out in the end: running xe pool-list on the existing pool gives me a list of the hosts in the pool, and there was one that had been taken offline without being detached in XO. So I ran xe host-forget uuid=<host uuid>, because xe pool-eject doesn't work on an offline host. And now I can add the new host just fine. Thanks everyone for helping!
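For anyone hitting the same thing later, the sequence boils down to something like this (the UUID is a placeholder):
xe host-list params=uuid,name-label,enabled
xe host-forget uuid=<offline host uuid>
and then add the new host to the pool as usual.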
Updating the BIOS of the new host made no difference. So I tried to add it to my test pool and, what do you know, it works without any issue, and the test pool master has an even older CPU, so I don't think it's a compatibility issue.
There must be something wrong with my main pool that I'm trying to add the new host to.
@nick-lloyd The new host has a newer CPU; the old host already in the pool has an older CPU. I will try updating the BIOS of the new host and see if it helps. I don't think the old host will have any updates available.