-
@geoffbland The entry point for issues is this repo: https://github.com/xcp-ng/xcp
The sm repo is used for the pull requests.
-
First tests with XOSTOR with newly created VMs have been good.
I'm now trying to migrate some existing VMs from NFS (TrueNAS) to XOSTOR to test "active" VMs.
With the VM running, pressing the Migrate VDI button on the Disks tab pauses the VM as expected, but when the VM restarts the VDI is still on the original SR. The VDI has not been migrated to XOSTOR.
If I first stop the VM and then press the Migrate VDI button on the Disks tab, I do get an error.
vdi.migrate
{
  "id": "8a3520ad-328f-4515-b547-2fb283edbd91",
  "sr_id": "cf896912-cd71-d2b2-488a-5792b7147c87"
}
{
  "code": "SR_BACKEND_FAILURE_46",
  "params": [
    "",
    "The VDI is not available [opterr=Could not load f1ca0b16-ce23-408a-b80e-xxxxxxxxxxxx because: No such file or directory]",
    ""
  ],
  "task": {
    "uuid": "8b3b47ee-4135-fea7-5f30-xxxxxxxxxxxx",
    "name_label": "Async.VDI.pool_migrate",
    "name_description": "",
    "allowed_operations": [],
    "current_operations": {},
    "created": "20220522T12:20:12Z",
    "finished": "20220522T12:20:54Z",
    "status": "failure",
    "resident_on": "OpaqueRef:a1e9a8f3-0a79-4824-b29f-d81b3246d190",
    "progress": 1,
    "type": "<none/>",
    "result": "",
    "error_info": [
      "SR_BACKEND_FAILURE_46",
      "",
      "The VDI is not available [opterr=Could not load f1ca0b16-ce23-408a-b80e-xxxxxxxxxxxx because: No such file or directory]",
      ""
    ],
    "other_config": {},
    "subtask_of": "OpaqueRef:NULL",
    "subtasks": [],
    "backtrace": "(((process xapi)(filename ocaml/xapi-client/client.ml)(line 7))((process xapi)(filename ocaml/xapi-client/client.ml)(line 19))((process xapi)(filename ocaml/xapi-client/client.ml)(line 12325))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 35))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 131))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 35))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/xapi/rbac.ml)(line 231))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 103)))"
  },
  "message": "SR_BACKEND_FAILURE_46(, The VDI is not available [opterr=Could not load f1ca0b16-ce23-408a-b80e-xxxxxxxxxxxx because: No such file or directory], )",
  "name": "XapiError",
  "stack": "XapiError: SR_BACKEND_FAILURE_46(, The VDI is not available [opterr=Could not load f1ca0b16-ce23-408a-b80e-xxxxxxxxxxxx because: No such file or directory], ) at Function.wrap (/opt/xo/xo-builds/xen-orchestra-202204291839/packages/xen-api/src/_XapiError.js:16:12) at _default (/opt/xo/xo-builds/xen-orchestra-202204291839/packages/xen-api/src/_getTaskResult.js:11:29) at Xapi._addRecordToCache (/opt/xo/xo-builds/xen-orchestra-202204291839/packages/xen-api/src/index.js:949:24) at forEach (/opt/xo/xo-builds/xen-orchestra-202204291839/packages/xen-api/src/index.js:983:14) at Array.forEach (<anonymous>) at Xapi._processEvents (/opt/xo/xo-builds/xen-orchestra-202204291839/packages/xen-api/src/index.js:973:12) at Xapi._watchEvents (/opt/xo/xo-builds/xen-orchestra-202204291839/packages/xen-api/src/index.js:1139:14)"
}
Exporting the VDI from NFS and (re)importing as a VM on XOSTOR does work.
I'm guessing this is not a problem with XOSTOR specifically but with XO or NFS; still, I would like to work out what is causing the migration to fail and how to fix it.
Also I have noticed that the underlying volume on XOSTOR/linstor that was created and started to be populated does get cleaned up when the migrate fails.
This is using XO from the sources - updated fairly recently (commit 8ed84) and XO server 5.92.0.
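One way to capture more detail on the next attempt (a sketch, assuming the default XCP-ng log location) is to watch the storage manager log on the host running the VM while the migration runs, then search it afterwards for the VDI UUID (placeholder below):
tail -f /var/log/SMlog
grep -i <vdi-uuid> /var/log/SMlog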
-
It could be interesting to understand why the migration failed the first time. Is there absolutely no error during this first migration?
-
@olivierlambert Thanks for the prompt response.
I am pretty sure there was no error reported, but as I cleared down the logs when I retried via export/import I can't be 100% sure.
So I tested migration on another VM to try and replicate this and this time migration worked OK.
The only difference I can think of is that the failure occurred on a VM created quite a time ago - whilst the working VM had been created recently.
I will do a few more tests and see if I can replicate this again.
-
Okay great. If you can reproduce, it would be even better to try to do the migration with the xe CLI; this way we remove more moving pieces in the middle.
-
@olivierlambert Sorry, it took some time to get around to this. But trying to migrate a VDI from an NFS store to XOSTOR is still failing most of the time. This is a VM that was created some time ago - if I do the same with the VDI of a recently created VM, the migration seems to work OK.
>xe vm-disk-list vm=lb01
Disk 0 VBD:
uuid ( RO) : d9d06048-6f91-1913-714d-xxxxxxxxaece
vm-name-label ( RO): lb01
userdevice ( RW): 0
Disk 0 VDI:
uuid ( RO) : a38f27e8-c6a0-49d3-9fd3-xxxxxxxx10e3
name-label ( RW): lb01_tnc01_hdd
sr-name-label ( RO): XCPNG_VMs_TrueNAS
virtual-size ( RO): 10737418240

>xe sr-list name-label=XOSTOR01
uuid ( RO) : cf896912-cd71-d2b2-488a-xxxxxxxx7c87
name-label ( RW): XOSTOR01
name-description ( RW):
host ( RO): <shared>
type ( RO): linstor
content-type ( RO):

>xe vdi-pool-migrate uuid=a38f27e8-c6a0-49d3-9fd3-xxxxxxxx10e3 sr-uuid=cf896912-cd71-d2b2-488a-xxxxxxxx7c87
Error code: SR_BACKEND_FAILURE_46
Error parameters: , The VDI is not available [opterr=Could not load 735fc2d7-f1f0-4cc6-9d35-xxxxxxxxec6c because: ['XENAPI_PLUGIN_FAILURE', 'getVHDInfo', 'CommandException', 'No such file or directory']],
Running this I see the VM pause as expected for a few minutes and then it just starts up again. VM is still running with no issues - it just did not move the VDI.
What is the resource with UUID 735fc2d7-f1f0-4cc6-9d35-xxxxxxxxec6c that it is trying to find? That UUID does not match the VDI. The VDI must be OK, as the VM is still up and running with no errors.
As this is probably not an XOSTOR issue - should I raise a new topic for this?
-
It's hard to tell. If you can reproduce it when migrating between non-XOSTOR SRs, then it's another issue. If it only happens when XOSTOR is in the loop, then it's relevant here.
-
@geoffbland I can't reproduce your problem, can you send me the SMlog of your hosts please?
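For example, something along these lines should pull out the relevant entries (assuming SMlog is in its default location):
grep -B2 -A8 'linstor-manager' /var/log/SMlog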
-
@ronan-a said in XOSTOR hyperconvergence preview:
@geoffbland I can't reproduce your problem, can you send me the SMlog of your hosts please?
As requested,
May 24 09:13:22 XCPNG01 SM: [18127] ['/usr/bin/vhd-util', 'query', '--debug', '-vsfp', '-n', '/dev/drbd/by-res/xcp-volume-75c11231-0fb8-4b40-9e2e-xxxxxxxx58c0/0']
May 24 09:13:23 XCPNG01 SM: [18127] FAILED in util.pread: (rc 2) stdout: '40960
May 24 09:13:23 XCPNG01 SM: [18127] 2048840192
May 24 09:13:23 XCPNG01 SM: [18127] query failed
May 24 09:13:23 XCPNG01 SM: [18127] hidden: 0
May 24 09:13:23 XCPNG01 SM: [18127] ', stderr: ''
May 24 09:13:23 XCPNG01 SM: [18127] linstor-manager:get_vhd_info error: No such file or directory
May 24 09:13:26 XCPNG01 SM: [18158] ['/usr/bin/vhd-util', 'query', '--debug', '-vsfp', '-n', '/dev/drbd/by-res/xcp-volume-75c11231-0fb8-4b40-9e2e-xxxxxxxx58c0/0']
May 24 09:13:26 XCPNG01 SM: [18158] FAILED in util.pread: (rc 2) stdout: '40960
May 24 09:13:26 XCPNG01 SM: [18158] 2048840192
May 24 09:13:26 XCPNG01 SM: [18158] query failed
May 24 09:13:26 XCPNG01 SM: [18158] hidden: 0
May 24 09:13:26 XCPNG01 SM: [18158] ', stderr: ''
May 24 09:13:26 XCPNG01 SM: [18158] linstor-manager:get_vhd_info error: No such file or directory
May 24 09:13:29 XCPNG01 SM: [18200] ['/usr/bin/vhd-util', 'query', '--debug', '-vsfp', '-n', '/dev/drbd/by-res/xcp-volume-75c11231-0fb8-4b40-9e2e-xxxxxxxx58c0/0']
May 24 09:13:29 XCPNG01 SM: [18200] FAILED in util.pread: (rc 2) stdout: '40960
May 24 09:13:29 XCPNG01 SM: [18200] 2048840192
May 24 09:13:29 XCPNG01 SM: [18200] query failed
May 24 09:13:29 XCPNG01 SM: [18200] hidden: 0
May 24 09:13:29 XCPNG01 SM: [18200] ', stderr: ''
May 24 09:13:29 XCPNG01 SM: [18200] linstor-manager:get_vhd_info error: No such file or directory
May 24 09:13:32 XCPNG01 SM: [18212] ['/usr/bin/vhd-util', 'query', '--debug', '-vsfp', '-n', '/dev/drbd/by-res/xcp-volume-75c11231-0fb8-4b40-9e2e-xxxxxxxx58c0/0']
May 24 09:13:32 XCPNG01 SM: [18212] FAILED in util.pread: (rc 2) stdout: '40960
May 24 09:13:32 XCPNG01 SM: [18212] 2048840192
May 24 09:13:32 XCPNG01 SM: [18212] query failed
May 24 09:13:32 XCPNG01 SM: [18212] hidden: 0
May 24 09:13:32 XCPNG01 SM: [18212] ', stderr: ''
May 24 09:13:32 XCPNG01 SM: [18212] linstor-manager:get_vhd_info error: No such file or directory
May 24 09:13:35 XCPNG01 SM: [18247] ['/usr/bin/vhd-util', 'query', '--debug', '-vsfp', '-n', '/dev/drbd/by-res/xcp-volume-75c11231-0fb8-4b40-9e2e-xxxxxxxx58c0/0']
May 24 09:13:35 XCPNG01 SM: [18247] FAILED in util.pread: (rc 2) stdout: '40960
May 24 09:13:35 XCPNG01 SM: [18247] 2048840192
May 24 09:13:35 XCPNG01 SM: [18247] query failed
May 24 09:13:35 XCPNG01 SM: [18247] hidden: 0
May 24 09:13:35 XCPNG01 SM: [18247] ', stderr: ''
May 24 09:13:35 XCPNG01 SM: [18247] linstor-manager:get_vhd_info error: No such file or directory
May 24 09:13:36 XCPNG01 SM: [18259] ['/usr/bin/vhd-util', 'query', '--debug', '-vsfp', '-n', '/dev/drbd/by-res/xcp-volume-75c11231-0fb8-4b40-9e2e-xxxxxxxx58c0/0']
May 24 09:13:36 XCPNG01 SM: [18259] FAILED in util.pread: (rc 2) stdout: '40960
May 24 09:13:36 XCPNG01 SM: [18259] 2048840192
May 24 09:13:36 XCPNG01 SM: [18259] query failed
May 24 09:13:36 XCPNG01 SM: [18259] hidden: 0
May 24 09:13:36 XCPNG01 SM: [18259] ', stderr: ''
May 24 09:13:36 XCPNG01 SM: [18259] linstor-manager:get_vhd_info error: No such file or directory
-
I found one of the VMs I had been using to test XOSTOR was locked up this morning. I tried to restart it, but it will not start and gives an error about the VDI being missing.
>xe vm-list name-label=test04
uuid ( RO) : 8ec952b4-7229-7a30-81b6-1564a58f6343
name-label ( RW): test04
power-state ( RO): halted

>xe vm-disk-list vm=test04
Disk 0 VBD:
uuid ( RO) : e3c465b8-17a4-d147-6383-527bd9341a16
vm-name-label ( RO): test04
userdevice ( RW): 0
Disk 0 VDI:
uuid ( RO) : 735fc2d7-f1f0-4cc6-9d35-42a049d8ec6c
name-label ( RW): test04_xostor01_vdi
sr-name-label ( RO): XOSTOR01
virtual-size ( RO): 42949672960

>xe vm-start vm=test04
Error code: SR_BACKEND_FAILURE_46
Error parameters: , The VDI is not available [opterr=Could not load 735fc2d7-f1f0-4cc6-9d35-42a049d8ec6c because: ['XENAPI_PLUGIN_FAILURE', 'getVHDInfo', 'CommandException', 'No such file or directory']],
The logs for this are attached as file xostor issue 1.txt
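As an extra data point, the DRBD state of the volume named in the SMlog above could be checked directly on each host (a sketch, assuming the resource name from that log and the standard drbd-utils tooling):
drbdadm status xcp-volume-75c11231-0fb8-4b40-9e2e-xxxxxxxx58c0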
-
@geoffbland Can you execute this command on the other hosts please?
ls -l /dev/drbd/by-res/xcp-volume-75c11231-0fb8-4b40-9e2e-a0665bb758c0/0
Also, I don't have all the info in your previous log - can you send me the previous SMlog files? (Using a private message if you want.)
-
@ronan-a said in XOSTOR hyperconvergence preview:
Can you execute this command on the other hosts please?
As requested
XCPNG01 - Current linstor master
[10:59 XCPNG01 ~]# ls -l /dev/drbd/by-res/xcp-volume-75c11231-0fb8-4b40-9e2e-a0665bb758c0/0
lrwxrwxrwx 1 root root 17 May 22 19:24 /dev/drbd/by-res/xcp-volume-75c11231-0fb8-4b40-9e2e-a0665bb758c0/0 -> ../../../drbd1004
XCPNG02
[11:00 XCPNG02 ~]# ls -l /dev/drbd/by-res/xcp-volume-75c11231-0fb8-4b40-9e2e-a0665bb758c0/0
lrwxrwxrwx 1 root root 17 May 22 19:25 /dev/drbd/by-res/xcp-volume-75c11231-0fb8-4b40-9e2e-a0665bb758c0/0 -> ../../../drbd1004
XCPNG03
[07:31 XCPNG03 ~]# ls -l /dev/drbd/by-res/xcp-volume-75c11231-0fb8-4b40-9e2e-a0665bb758c0/0
lrwxrwxrwx 1 root root 17 May 22 19:24 /dev/drbd/by-res/xcp-volume-75c11231-0fb8-4b40-9e2e-a0665bb758c0/0 -> ../../../drbd1004
XCPNG04
[07:35 XCPNG04 ~]# ls -l /dev/drbd/by-res/xcp-volume-75c11231-0fb8-4b40-9e2e-a0665bb758c0/0
lrwxrwxrwx 1 root root 17 May 22 19:24 /dev/drbd/by-res/xcp-volume-75c11231-0fb8-4b40-9e2e-a0665bb758c0/0 -> ../../../drbd1004
XCPNG05
[10:49 XCPNG05 ~]# ls -l /dev/drbd/by-res/xcp-volume-75c11231-0fb8-4b40-9e2e-a0665bb758c0/0
lrwxrwxrwx 1 root root 17 May 22 19:24 /dev/drbd/by-res/xcp-volume-75c11231-0fb8-4b40-9e2e-a0665bb758c0/0 -> ../../../drbd1004
-
It seems like xe may be getting mixed up between the host a VM is running on and the hosts where the XOSTOR storage is held.
Apologies if I have misunderstood and done something wrong here - but I think this migration should have worked.
I created a new VM on one of my hosts, XCPNG05, using XOSTOR as the VDI SR. I can see that the linstor volumes are on hosts XCPNG01, XCPNG03 and XCPNG05.
XCPNG05 is an Intel server, XCPNG01 and XCPNG03 are AMD. The VM is running on XCPNG05.
Now when I try to migrate the VM's VDI from XOSTOR onto a local SR on the same host the VM is currently running on, I get a warning about incompatible CPUs. To replicate the issue:
Create new VM test05 on XOSTOR.
VM is created on host XCPNG05.
>xe vm-list name-label=test05
uuid ( RO) : d3f8c52d-be3c-3712-0ccc-a526dcc241a5
name-label ( RW): test05
power-state ( RO): running

>xe vm-disk-list vm=test05
Disk 0 VBD:
uuid ( RO) : a337cd1f-04cc-ce46-fbfb-d5d8e290dc03
vm-name-label ( RO): test05
userdevice ( RW): 0
Disk 0 VDI:
uuid ( RO) : f856680c-c00d-44af-ba3f-16d9952ccb2f
name-label ( RW): test05_vdi
sr-name-label ( RO): XOSTOR01
virtual-size ( RO): 34359738368

>xe sr-list name-label=XOSTOR01
uuid ( RO) : cf896912-cd71-d2b2-488a-xxxxxxxx7c87
name-label ( RW): XOSTOR01
name-description ( RW):
host ( RO): <shared>
type ( RO): linstor
content-type ( RO):
Migrate to a local disk (SSD1) on the same host (XCPNG05) - this fails with an incompatible-CPU error, even though the target is the host the VM is already running on (a possible offline cross-check is sketched after the output below).
>xe sr-list name-label=XCPNG05SSD1
uuid ( RO) : c0851501-3a1b-c661-70b9-54373e0d9847
name-label ( RW): XCPNG05SSD1
name-description ( RW):
host ( RO): XCPNG05
type ( RO): lvm
content-type ( RO): user

>xe vdi-pool-migrate uuid=f856680c-c00d-44af-ba3f-16d9952ccb2f sr-uuid=c0851501-3a1b-c661-70b9-54373e0d9847
The VM is incompatible with the CPU features of this host.
vm: d3f8c52d-be3c-3712-0ccc-a526dcc241a5 (test05)
host: 7bd62a77-71d6-4b51-9a86-850dd4ff4b60 (XCPNG05)
reason: VM last booted on a host which had a CPU from a different vendor.
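As a cross-check that separates the CPU-compatibility check from the storage move itself, one option (a sketch, assuming an offline copy is acceptable for the test; vdi-copy creates a copy rather than moving the original VDI) would be:
xe vm-shutdown uuid=d3f8c52d-be3c-3712-0ccc-a526dcc241a5
xe vdi-copy uuid=f856680c-c00d-44af-ba3f-16d9952ccb2f sr-uuid=c0851501-3a1b-c661-70b9-54373e0d9847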
-
@geoffbland Thank you, so ok the VDI is still here on all hosts.
You can try to check the status of the VHD the same way the smapi does, using:
/usr/bin/vhd-util query --debug -vsfp -n /dev/drbd/by-res/xcp-volume-75c11231-0fb8-4b40-9e2e-a0665bb758c0/0
If you have this problem on many hosts, I suspect a problem with DRBD, so maybe there is useful info in daemon.log and/or kern.log.
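For example (assuming the default log paths):
grep -i drbd /var/log/kern.log | tail -n 50
grep -i drbd /var/log/daemon.log | tail -n 50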
-
@ronan-a said in XOSTOR hyperconvergence preview:
You can try to check the status of the VHD the same way the smapi does, using:
/usr/bin/vhd-util query --debug -vsfp -n /dev/drbd/by-res/xcp-volume-75c11231-0fb8-4b40-9e2e-a0665bb758c0/0
useful info in daemon.log and/or kern.log.
This gives the following:
[10:59 XCPNG01 ~]# /usr/bin/vhd-util query --debug -vsfp -n /dev/drbd/by-res/xcp-volume-75c11231-0fb8-4b40-9e2e-a0665bb758c0/0
40960
2061447680
query failed
hidden: 0
I will send logs by direct mail.
-
@geoffbland Okay so it's probably not related to the driver itself, I will take a look at the logs after reception.
-
Failure trying to revert a VM to a snapshot with XOSTOR.
Created a VM with its main VDI on XOSTOR (24GB) and with 6 additional disks also on XOSTOR (2GB each).
All is running OK.
Now create a snapshot of the VM - this takes quite a while but does eventually succeed.
Now using XO (from sources), click "Revert VM to this snapshot". This errors and the VM stops.
vm.revert
{
  "snapshot": "6032fc73-eb7f-cf64-2481-4346b7b57204"
}
{
  "code": "VM_REVERT_FAILED",
  "params": [
    "OpaqueRef:1439fd0f-4e66-44c9-99af-1f8536e59378",
    "OpaqueRef:5ad4c51e-473e-4ab0-877d-2d0dbdb90add"
  ],
  "task": {
    "uuid": "4804fefd-0037-d7dd-9a7c-769230728483",
    "name_label": "Async.VM.revert",
    "name_description": "",
    "allowed_operations": [],
    "current_operations": {},
    "created": "20220527T15:01:42Z",
    "finished": "20220527T15:01:46Z",
    "status": "failure",
    "resident_on": "OpaqueRef:a1e9a8f3-0a79-4824-b29f-d81b3246d190",
    "progress": 1,
    "type": "<none/>",
    "result": "",
    "error_info": [
      "VM_REVERT_FAILED",
      "OpaqueRef:1439fd0f-4e66-44c9-99af-1f8536e59378",
      "OpaqueRef:5ad4c51e-473e-4ab0-877d-2d0dbdb90add"
    ],
    "other_config": {},
    "subtask_of": "OpaqueRef:NULL",
    "subtasks": [],
    "backtrace": "(((process xapi)(filename ocaml/xapi/xapi_vm_snapshot.ml)(line 492))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 35))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 131))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 35))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/xapi/rbac.ml)(line 231))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 103)))"
  },
  "message": "VM_REVERT_FAILED(OpaqueRef:1439fd0f-4e66-44c9-99af-1f8536e59378, OpaqueRef:5ad4c51e-473e-4ab0-877d-2d0dbdb90add)",
  "name": "XapiError",
  "stack": "XapiError: VM_REVERT_FAILED(OpaqueRef:1439fd0f-4e66-44c9-99af-1f8536e59378, OpaqueRef:5ad4c51e-473e-4ab0-877d-2d0dbdb90add) at Function.wrap (/opt/xo/xo-builds/xen-orchestra-202204291839/packages/xen-api/src/_XapiError.js:16:12) at _default (/opt/xo/xo-builds/xen-orchestra-202204291839/packages/xen-api/src/_getTaskResult.js:11:29) at Xapi._addRecordToCache (/opt/xo/xo-builds/xen-orchestra-202204291839/packages/xen-api/src/index.js:949:24) at forEach (/opt/xo/xo-builds/xen-orchestra-202204291839/packages/xen-api/src/index.js:983:14) at Array.forEach (<anonymous>) at Xapi._processEvents (/opt/xo/xo-builds/xen-orchestra-202204291839/packages/xen-api/src/index.js:973:12) at Xapi._watchEvents (/opt/xo/xo-builds/xen-orchestra-202204291839/packages/xen-api/src/index.js:1139:14)"
}
Viewing the VM in XO, the Disks tab now shows no attached disks - the tab is blank.
But linstor appears to still have the disks and the snapshot disks too.
│ XCPNG01 │ xcp-volume-142cb89f-2850-4ac8-a47c-10bb2cfc4692 │ xcp-sr-linstor_group │ 0 │ 1010 │ /dev/drbd1010 │ 2.02 GiB │ Unused │ UpToDate │
│ XCPNG02 │ xcp-volume-142cb89f-2850-4ac8-a47c-10bb2cfc4692 │ xcp-sr-linstor_group │ 0 │ 1010 │ /dev/drbd1010 │ 2.02 GiB │ Unused │ UpToDate │
│ XCPNG03 │ xcp-volume-142cb89f-2850-4ac8-a47c-10bb2cfc4692 │ DfltDisklessStorPool │ 0 │ 1010 │ /dev/drbd1010 │ │ Unused │ Diskless │
│ XCPNG04 │ xcp-volume-142cb89f-2850-4ac8-a47c-10bb2cfc4692 │ xcp-sr-linstor_group │ 0 │ 1010 │ /dev/drbd1010 │ 2.02 GiB │ Unused │ UpToDate │
│ XCPNG01 │ xcp-volume-18fa145a-d36b-44bd-b1b5-af1e9424ea00 │ xcp-sr-linstor_group │ 0 │ 1018 │ /dev/drbd1018 │ 2.02 GiB │ Unused │ UpToDate │
│ XCPNG02 │ xcp-volume-18fa145a-d36b-44bd-b1b5-af1e9424ea00 │ xcp-sr-linstor_group │ 0 │ 1018 │ /dev/drbd1018 │ 2.02 GiB │ Unused │ UpToDate │
│ XCPNG03 │ xcp-volume-18fa145a-d36b-44bd-b1b5-af1e9424ea00 │ DfltDisklessStorPool │ 0 │ 1018 │ /dev/drbd1018 │ │ InUse │ Diskless │
│ XCPNG04 │ xcp-volume-18fa145a-d36b-44bd-b1b5-af1e9424ea00 │ DfltDisklessStorPool │ 0 │ 1018 │ /dev/drbd1018 │ │ Unused │ Diskless │
│ XCPNG05 │ xcp-volume-18fa145a-d36b-44bd-b1b5-af1e9424ea00 │ xcp-sr-linstor_group │ 0 │ 1018 │ /dev/drbd1018 │ 2.02 GiB │ Unused │ UpToDate │
│ XCPNG01 │ xcp-volume-1a6c7272-f718-4c4d-a8b0-ca8419eab314 │ DfltDisklessStorPool │ 0 │ 1024 │ /dev/drbd1024 │ │ Unused │ Diskless │
│ XCPNG02 │ xcp-volume-1a6c7272-f718-4c4d-a8b0-ca8419eab314 │ xcp-sr-linstor_group │ 0 │ 1024 │ /dev/drbd1024 │ 2.02 GiB │ Unused │ UpToDate │
│ XCPNG03 │ xcp-volume-1a6c7272-f718-4c4d-a8b0-ca8419eab314 │ xcp-sr-linstor_group │ 0 │ 1024 │ /dev/drbd1024 │ 2.02 GiB │ Unused │ UpToDate │
│ XCPNG04 │ xcp-volume-1a6c7272-f718-4c4d-a8b0-ca8419eab314 │ xcp-sr-linstor_group │ 0 │ 1024 │ /dev/drbd1024 │ 2.02 GiB │ Unused │ UpToDate │
│ XCPNG05 │ xcp-volume-1a6c7272-f718-4c4d-a8b0-ca8419eab314 │ DfltDisklessStorPool │ 0 │ 1024 │ /dev/drbd1024 │ │ Unused │ Diskless │
│ XCPNG01 │ xcp-volume-2cab6c2d-abf6-42c7-9094-d75351ed8ebb │ xcp-sr-linstor_group │ 0 │ 1016 │ /dev/drbd1016 │ 2.02 GiB │ Unused │ UpToDate │
│ XCPNG02 │ xcp-volume-2cab6c2d-abf6-42c7-9094-d75351ed8ebb │ xcp-sr-linstor_group │ 0 │ 1016 │ /dev/drbd1016 │ 2.02 GiB │ Unused │ UpToDate │
│ XCPNG03 │ xcp-volume-2cab6c2d-abf6-42c7-9094-d75351ed8ebb │ xcp-sr-linstor_group │ 0 │ 1016 │ /dev/drbd1016 │ 2.02 GiB │ Unused │ UpToDate │
│ XCPNG04 │ xcp-volume-2cab6c2d-abf6-42c7-9094-d75351ed8ebb │ DfltDisklessStorPool │ 0 │ 1016 │ /dev/drbd1016 │ │ Unused │ Diskless │
│ XCPNG05 │ xcp-volume-2cab6c2d-abf6-42c7-9094-d75351ed8ebb │ DfltDisklessStorPool │ 0 │ 1016 │ /dev/drbd1016 │ │ Unused │ Diskless │
│ XCPNG01 │ xcp-volume-30bf014b-025d-4f3f-a068-f9a9bf34fab2 │ xcp-sr-linstor_group │ 0 │ 1013 │ /dev/drbd1013 │ 2.02 GiB │ Unused │ UpToDate │
│ XCPNG02 │ xcp-volume-30bf014b-025d-4f3f-a068-f9a9bf34fab2 │ xcp-sr-linstor_group │ 0 │ 1013 │ /dev/drbd1013 │ 2.02 GiB │ Unused │ UpToDate │
│ XCPNG03 │ xcp-volume-30bf014b-025d-4f3f-a068-f9a9bf34fab2 │ xcp-sr-linstor_group │ 0 │ 1013 │ /dev/drbd1013 │ 2.02 GiB │ Unused │ UpToDate │
│ XCPNG01 │ xcp-volume-3bdb2b25-706c-4309-ab8f-df3190f57c43 │ DfltDisklessStorPool │ 0 │ 1021 │ /dev/drbd1021 │ │ Unused │ Diskless │
│ XCPNG02 │ xcp-volume-3bdb2b25-706c-4309-ab8f-df3190f57c43 │ xcp-sr-linstor_group │ 0 │ 1021 │ /dev/drbd1021 │ 2.02 GiB │ Unused │ UpToDate │
│ XCPNG03 │ xcp-volume-3bdb2b25-706c-4309-ab8f-df3190f57c43 │ xcp-sr-linstor_group │ 0 │ 1021 │ /dev/drbd1021 │ 2.02 GiB │ Unused │ UpToDate │
│ XCPNG04 │ xcp-volume-3bdb2b25-706c-4309-ab8f-df3190f57c43 │ DfltDisklessStorPool │ 0 │ 1021 │ /dev/drbd1021 │ │ Unused │ Diskless │
│ XCPNG05 │ xcp-volume-3bdb2b25-706c-4309-ab8f-df3190f57c43 │ xcp-sr-linstor_group │ 0 │ 1021 │ /dev/drbd1021 │ 2.02 GiB │ Unused │ UpToDate │
│ XCPNG01 │ xcp-volume-450f65f7-7fcc-4ffd-893e-761a2f6ac366 │ xcp-sr-linstor_group │ 0 │ 1020 │ /dev/drbd1020 │ 2.02 GiB │ Unused │ UpToDate │
│ XCPNG02 │ xcp-volume-450f65f7-7fcc-4ffd-893e-761a2f6ac366 │ DfltDisklessStorPool │ 0 │ 1020 │ /dev/drbd1020 │ │ Unused │ Diskless │
│ XCPNG03 │ xcp-volume-450f65f7-7fcc-4ffd-893e-761a2f6ac366 │ DfltDisklessStorPool │ 0 │ 1020 │ /dev/drbd1020 │ │ Unused │ Diskless │
│ XCPNG04 │ xcp-volume-450f65f7-7fcc-4ffd-893e-761a2f6ac366 │ xcp-sr-linstor_group │ 0 │ 1020 │ /dev/drbd1020 │ 2.02 GiB │ Unused │ UpToDate │
│ XCPNG05 │ xcp-volume-450f65f7-7fcc-4ffd-893e-761a2f6ac366 │ xcp-sr-linstor_group │ 0 │ 1020 │ /dev/drbd1020 │ 2.02 GiB │ Unused │ UpToDate │
│ XCPNG01 │ xcp-volume-466938db-11f1-4b59-8a90-ad08fa20e085 │ xcp-sr-linstor_group │ 0 │ 1015 │ /dev/drbd1015 │ 2.02 GiB │ Unused │ UpToDate │
│ XCPNG02 │ xcp-volume-466938db-11f1-4b59-8a90-ad08fa20e085 │ xcp-sr-linstor_group │ 0 │ 1015 │ /dev/drbd1015 │ 2.02 GiB │ Unused │ UpToDate │
│ XCPNG03 │ xcp-volume-466938db-11f1-4b59-8a90-ad08fa20e085 │ DfltDisklessStorPool │ 0 │ 1015 │ /dev/drbd1015 │ │ Unused │ Diskless │
│ XCPNG04 │ xcp-volume-466938db-11f1-4b59-8a90-ad08fa20e085 │ xcp-sr-linstor_group │ 0 │ 1015 │ /dev/drbd1015 │ 2.02 GiB │ Unused │ UpToDate │
│ XCPNG05 │ xcp-volume-466938db-11f1-4b59-8a90-ad08fa20e085 │ DfltDisklessStorPool │ 0 │ 1015 │ /dev/drbd1015 │ │ Unused │ Diskless │
│ XCPNG01 │ xcp-volume-470dcf6f-d916-403d-8258-e012c065b8ec │ xcp-sr-linstor_group │ 0 │ 1009 │ /dev/drbd1009 │ 2.02 GiB │ Unused │ UpToDate │
│ XCPNG02 │ xcp-volume-470dcf6f-d916-403d-8258-e012c065b8ec │ xcp-sr-linstor_group │ 0 │ 1009 │ /dev/drbd1009 │ 2.02 GiB │ Unused │ UpToDate │
│ XCPNG03 │ xcp-volume-470dcf6f-d916-403d-8258-e012c065b8ec │ DfltDisklessStorPool │ 0 │ 1009 │ /dev/drbd1009 │ │ Unused │ Diskless │
│ XCPNG04 │ xcp-volume-470dcf6f-d916-403d-8258-e012c065b8ec │ xcp-sr-linstor_group │ 0 │ 1009 │ /dev/drbd1009 │ 2.02 GiB │ Unused │ UpToDate │
│ XCPNG01 │ xcp-volume-551db5b5-7772-407a-9e8c-e549db3a0e5f │ xcp-sr-linstor_group │ 0 │ 1008 │ /dev/drbd1008 │ 2.02 GiB │ Unused │ UpToDate │
│ XCPNG02 │ xcp-volume-551db5b5-7772-407a-9e8c-e549db3a0e5f │ xcp-sr-linstor_group │ 0 │ 1008 │ /dev/drbd1008 │ 2.02 GiB │ Unused │ UpToDate │
│ XCPNG03 │ xcp-volume-551db5b5-7772-407a-9e8c-e549db3a0e5f │ DfltDisklessStorPool │ 0 │ 1008 │ /dev/drbd1008 │ │ Unused │ Diskless │
│ XCPNG04 │ xcp-volume-551db5b5-7772-407a-9e8c-e549db3a0e5f │ xcp-sr-linstor_group │ 0 │ 1008 │ /dev/drbd1008 │ 2.02 GiB │ Unused │ UpToDate │
│ XCPNG01 │ xcp-volume-699871db-2319-4ddd-9a44-0514d2e7aee3 │ xcp-sr-linstor_group │ 0 │ 1025 │ /dev/drbd1025 │ 2.02 GiB │ Unused │ UpToDate │
│ XCPNG02 │ xcp-volume-699871db-2319-4ddd-9a44-0514d2e7aee3 │ DfltDisklessStorPool │ 0 │ 1025 │ /dev/drbd1025 │ │ Unused │ Diskless │
│ XCPNG03 │ xcp-volume-699871db-2319-4ddd-9a44-0514d2e7aee3 │ DfltDisklessStorPool │ 0 │ 1025 │ /dev/drbd1025 │ │ Unused │ Diskless │
│ XCPNG04 │ xcp-volume-699871db-2319-4ddd-9a44-0514d2e7aee3 │ xcp-sr-linstor_group │ 0 │ 1025 │ /dev/drbd1025 │ 2.02 GiB │ Unused │ UpToDate │
│ XCPNG05 │ xcp-volume-699871db-2319-4ddd-9a44-0514d2e7aee3 │ xcp-sr-linstor_group │ 0 │ 1025 │ /dev/drbd1025 │ 2.02 GiB │ Unused │ UpToDate │
│ XCPNG01 │ xcp-volume-6c96822b-7ded-41dd-b4ff-690dc4795ee7 │ xcp-sr-linstor_group │ 0 │ 1023 │ /dev/drbd1023 │ 2.02 GiB │ Unused │ UpToDate │
│ XCPNG02 │ xcp-volume-6c96822b-7ded-41dd-b4ff-690dc4795ee7 │ DfltDisklessStorPool │ 0 │ 1023 │ /dev/drbd1023 │ │ Unused │ Diskless │
│ XCPNG03 │ xcp-volume-6c96822b-7ded-41dd-b4ff-690dc4795ee7 │ xcp-sr-linstor_group │ 0 │ 1023 │ /dev/drbd1023 │ 2.02 GiB │ Unused │ UpToDate │
│ XCPNG04 │ xcp-volume-6c96822b-7ded-41dd-b4ff-690dc4795ee7 │ DfltDisklessStorPool │ 0 │ 1023 │ /dev/drbd1023 │ │ Unused │ Diskless │
│ XCPNG05 │ xcp-volume-6c96822b-7ded-41dd-b4ff-690dc4795ee7 │ xcp-sr-linstor_group │ 0 │ 1023 │ /dev/drbd1023 │ 2.02 GiB │ Unused │ UpToDate │
│ XCPNG01 │ xcp-volume-70004559-a2c4-480f-b7bc-b26dcb95bfba │ xcp-sr-linstor_group │ 0 │ 1027 │ /dev/drbd1027 │ 24.06 GiB │ Unused │ UpToDate │
│ XCPNG02 │ xcp-volume-70004559-a2c4-480f-b7bc-b26dcb95bfba │ xcp-sr-linstor_group │ 0 │ 1027 │ /dev/drbd1027 │ 24.06 GiB │ Unused │ UpToDate │
│ XCPNG03 │ xcp-volume-70004559-a2c4-480f-b7bc-b26dcb95bfba │ DfltDisklessStorPool │ 0 │ 1027 │ /dev/drbd1027 │ │ Unused │ Diskless │
│ XCPNG04 │ xcp-volume-70004559-a2c4-480f-b7bc-b26dcb95bfba │ xcp-sr-linstor_group │ 0 │ 1027 │ /dev/drbd1027 │ 24.06 GiB │ Unused │ UpToDate │
│ XCPNG05 │ xcp-volume-70004559-a2c4-480f-b7bc-b26dcb95bfba │ DfltDisklessStorPool │ 0 │ 1027 │ /dev/drbd1027 │ │ Unused │ Diskless │
│ XCPNG01 │ xcp-volume-707a0158-ad31-4b4b-af2b-20d89e5717de │ DfltDisklessStorPool │ 0 │ 1026 │ /dev/drbd1026 │ │ Unused │ Diskless │
│ XCPNG02 │ xcp-volume-707a0158-ad31-4b4b-af2b-20d89e5717de │ xcp-sr-linstor_group │ 0 │ 1026 │ /dev/drbd1026 │ 24.06 GiB │ Unused │ UpToDate │
│ XCPNG03 │ xcp-volume-707a0158-ad31-4b4b-af2b-20d89e5717de │ xcp-sr-linstor_group │ 0 │ 1026 │ /dev/drbd1026 │ 24.06 GiB │ Unused │ UpToDate │
│ XCPNG04 │ xcp-volume-707a0158-ad31-4b4b-af2b-20d89e5717de │ DfltDisklessStorPool │ 0 │ 1026 │ /dev/drbd1026 │ │ Unused │ Diskless │
│ XCPNG05 │ xcp-volume-707a0158-ad31-4b4b-af2b-20d89e5717de │ xcp-sr-linstor_group │ 0 │ 1026 │ /dev/drbd1026 │ 24.06 GiB │ Unused │ UpToDate │
│ XCPNG02 │ xcp-volume-7aaa7a6e-98c4-4a57-a4f1-4fea0a36b17a │ xcp-sr-linstor_group │ 0 │ 1011 │ /dev/drbd1011 │ 2.02 GiB │ Unused │ UpToDate │
│ XCPNG03 │ xcp-volume-7aaa7a6e-98c4-4a57-a4f1-4fea0a36b17a │ xcp-sr-linstor_group │ 0 │ 1011 │ /dev/drbd1011 │ 2.02 GiB │ Unused │ UpToDate │
│ XCPNG04 │ xcp-volume-7aaa7a6e-98c4-4a57-a4f1-4fea0a36b17a │ xcp-sr-linstor_group │ 0 │ 1011 │ /dev/drbd1011 │ 2.02 GiB │ Unused │ UpToDate │
│ XCPNG01 │ xcp-volume-9320c158-489e-49e7-92b8-85c93c9e3eeb │ xcp-sr-linstor_group │ 0 │ 1022 │ /dev/drbd1022 │ 2.02 GiB │ Unused │ UpToDate │
│ XCPNG02 │ xcp-volume-9320c158-489e-49e7-92b8-85c93c9e3eeb │ xcp-sr-linstor_group │ 0 │ 1022 │ /dev/drbd1022 │ 2.02 GiB │ Unused │ UpToDate │
│ XCPNG03 │ xcp-volume-9320c158-489e-49e7-92b8-85c93c9e3eeb │ DfltDisklessStorPool │ 0 │ 1022 │ /dev/drbd1022 │ │ Unused │ Diskless │
│ XCPNG04 │ xcp-volume-9320c158-489e-49e7-92b8-85c93c9e3eeb │ xcp-sr-linstor_group │ 0 │ 1022 │ /dev/drbd1022 │ 2.02 GiB │ Unused │ UpToDate │
│ XCPNG05 │ xcp-volume-9320c158-489e-49e7-92b8-85c93c9e3eeb │ DfltDisklessStorPool │ 0 │ 1022 │ /dev/drbd1022 │ │ Unused │ Diskless │
│ XCPNG02 │ xcp-volume-b341848b-01d1-4019-a62f-85c6108a53e3 │ xcp-sr-linstor_group │ 0 │ 1006 │ /dev/drbd1006 │ 24.06 GiB │ Unused │ UpToDate │
│ XCPNG03 │ xcp-volume-b341848b-01d1-4019-a62f-85c6108a53e3 │ xcp-sr-linstor_group │ 0 │ 1006 │ /dev/drbd1006 │ 24.06 GiB │ Unused │ UpToDate │
│ XCPNG05 │ xcp-volume-b341848b-01d1-4019-a62f-85c6108a53e3 │ xcp-sr-linstor_group │ 0 │ 1006 │ /dev/drbd1006 │ 24.06 GiB │ Unused │ UpToDate │
│ XCPNG01 │ xcp-volume-bccefe12-9ff5-4317-b05c-515cb44a5710 │ DfltDisklessStorPool │ 0 │ 1014 │ /dev/drbd1014 │ │ Unused │ Diskless │
│ XCPNG02 │ xcp-volume-bccefe12-9ff5-4317-b05c-515cb44a5710 │ xcp-sr-linstor_group │ 0 │ 1014 │ /dev/drbd1014 │ 2.02 GiB │ Unused │ UpToDate │
│ XCPNG03 │ xcp-volume-bccefe12-9ff5-4317-b05c-515cb44a5710 │ xcp-sr-linstor_group │ 0 │ 1014 │ /dev/drbd1014 │ 2.02 GiB │ Unused │ UpToDate │
│ XCPNG04 │ xcp-volume-bccefe12-9ff5-4317-b05c-515cb44a5710 │ xcp-sr-linstor_group │ 0 │ 1014 │ /dev/drbd1014 │ 2.02 GiB │ Unused │ UpToDate │
│ XCPNG05 │ xcp-volume-bccefe12-9ff5-4317-b05c-515cb44a5710 │ DfltDisklessStorPool │ 0 │ 1014 │ /dev/drbd1014 │ │ Unused │ Diskless │
│ XCPNG01 │ xcp-volume-cdc051ae-bc39-4012-9ce0-6e4f855a5063 │ xcp-sr-linstor_group │ 0 │ 1012 │ /dev/drbd1012 │ 2.02 GiB │ Unused │ UpToDate │
│ XCPNG02 │ xcp-volume-cdc051ae-bc39-4012-9ce0-6e4f855a5063 │ xcp-sr-linstor_group │ 0 │ 1012 │ /dev/drbd1012 │ 2.02 GiB │ Unused │ UpToDate │
│ XCPNG03 │ xcp-volume-cdc051ae-bc39-4012-9ce0-6e4f855a5063 │ DfltDisklessStorPool │ 0 │ 1012 │ /dev/drbd1012 │ │ Unused │ Diskless │
│ XCPNG04 │ xcp-volume-cdc051ae-bc39-4012-9ce0-6e4f855a5063 │ xcp-sr-linstor_group │ 0 │ 1012 │ /dev/drbd1012 │ 2.02 GiB │ Unused │ UpToDate │
│ XCPNG01 │ xcp-volume-d5a744ec-d1a1-4116-a576-38608b9dd790 │ DfltDisklessStorPool │ 0 │ 1019 │ /dev/drbd1019 │ │ Unused │ Diskless │
│ XCPNG02 │ xcp-volume-d5a744ec-d1a1-4116-a576-38608b9dd790 │ xcp-sr-linstor_group │ 0 │ 1019 │ /dev/drbd1019 │ 2.02 GiB │ Unused │ UpToDate │
│ XCPNG03 │ xcp-volume-d5a744ec-d1a1-4116-a576-38608b9dd790 │ xcp-sr-linstor_group │ 0 │ 1019 │ /dev/drbd1019 │ 2.02 GiB │ Unused │ UpToDate │
│ XCPNG04 │ xcp-volume-d5a744ec-d1a1-4116-a576-38608b9dd790 │ xcp-sr-linstor_group │ 0 │ 1019 │ /dev/drbd1019 │ 2.02 GiB │ Unused │ UpToDate │
│ XCPNG05 │ xcp-volume-d5a744ec-d1a1-4116-a576-38608b9dd790 │ DfltDisklessStorPool │ 0 │ 1019 │ /dev/drbd1019 │ │ Unused │ Diskless │
│ XCPNG01 │ xcp-volume-f9cf9143-829d-4246-9051-9102f2c4709c │ DfltDisklessStorPool │ 0 │ 1017 │ /dev/drbd1017 │ │ Unused │ Diskless │
│ XCPNG02 │ xcp-volume-f9cf9143-829d-4246-9051-9102f2c4709c │ xcp-sr-linstor_group │ 0 │ 1017 │ /dev/drbd1017 │ 2.02 GiB │ Unused │ UpToDate │
│ XCPNG03 │ xcp-volume-f9cf9143-829d-4246-9051-9102f2c4709c │ xcp-sr-linstor_group │ 0 │ 1017 │ /dev/drbd1017 │ 2.02 GiB │ Unused │ UpToDate │
│ XCPNG04 │ xcp-volume-f9cf9143-829d-4246-9051-9102f2c4709c │ xcp-sr-linstor_group │ 0 │ 1017 │ /dev/drbd1017 │ 2.02 GiB │ Unused │ UpToDate │
│ XCPNG05 │ xcp-volume-f9cf9143-829d-4246-9051-9102f2c4709c │ DfltDisklessStorPool │ 0 │ 1017 │ /dev/drbd1017 │ │ Unused │ Diskless │
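For reference, a listing like the one above can be produced on the LINSTOR controller node with the linstor client (assuming that is how this output was captured):
linstor volume list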
From the VM disks tab if I try to Attach the disks, two of the disks created on XOSTOR are missing (data1 and data4).
Finally, if I go to Storage, bring up the XOSTOR SR and press "Rescan all disks", I get this error (a CLI equivalent is sketched after it):
sr.scan
{
  "id": "cf896912-cd71-d2b2-488a-5792b7147c87"
}
{
  "code": "SR_BACKEND_FAILURE_46",
  "params": [
    "",
    "The VDI is not available [opterr=Could not load 735fc2d7-f1f0-4cc6-9d35-42a049d8ec6c because: ['XENAPI_PLUGIN_FAILURE', 'getVHDInfo', 'CommandException', 'No such file or directory']]",
    ""
  ],
  "task": {
    "uuid": "4dcac885-dfaa-784a-eb2d-02335efde0fb",
    "name_label": "Async.SR.scan",
    "name_description": "",
    "allowed_operations": [],
    "current_operations": {},
    "created": "20220527T16:27:36Z",
    "finished": "20220527T16:27:50Z",
    "status": "failure",
    "resident_on": "OpaqueRef:a1e9a8f3-0a79-4824-b29f-d81b3246d190",
    "progress": 1,
    "type": "<none/>",
    "result": "",
    "error_info": [
      "SR_BACKEND_FAILURE_46",
      "",
      "The VDI is not available [opterr=Could not load 735fc2d7-f1f0-4cc6-9d35-42a049d8ec6c because: ['XENAPI_PLUGIN_FAILURE', 'getVHDInfo', 'CommandException', 'No such file or directory']]",
      ""
    ],
    "other_config": {},
    "subtask_of": "OpaqueRef:NULL",
    "subtasks": [],
    "backtrace": "(((process xapi)(filename lib/backtrace.ml)(line 210))((process xapi)(filename ocaml/xapi/storage_access.ml)(line 32))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 35))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 128))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/xapi/rbac.ml)(line 231))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 103)))"
  },
  "message": "SR_BACKEND_FAILURE_46(, The VDI is not available [opterr=Could not load 735fc2d7-f1f0-4cc6-9d35-42a049d8ec6c because: ['XENAPI_PLUGIN_FAILURE', 'getVHDInfo', 'CommandException', 'No such file or directory']], )",
  "name": "XapiError",
  "stack": "XapiError: SR_BACKEND_FAILURE_46(, The VDI is not available [opterr=Could not load 735fc2d7-f1f0-4cc6-9d35-42a049d8ec6c because: ['XENAPI_PLUGIN_FAILURE', 'getVHDInfo', 'CommandException', 'No such file or directory']], ) at Function.wrap (/opt/xo/xo-builds/xen-orchestra-202204291839/packages/xen-api/src/_XapiError.js:16:12) at _default (/opt/xo/xo-builds/xen-orchestra-202204291839/packages/xen-api/src/_getTaskResult.js:11:29) at Xapi._addRecordToCache (/opt/xo/xo-builds/xen-orchestra-202204291839/packages/xen-api/src/index.js:949:24) at forEach (/opt/xo/xo-builds/xen-orchestra-202204291839/packages/xen-api/src/index.js:983:14) at Array.forEach (<anonymous>) at Xapi._processEvents (/opt/xo/xo-builds/xen-orchestra-202204291839/packages/xen-api/src/index.js:973:12) at Xapi._watchEvents (/opt/xo/xo-builds/xen-orchestra-202204291839/packages/xen-api/src/index.js:1139:14)"
}
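To rule XO out of the rescan path, the same scan can also be triggered directly against xapi with the SR UUID from above:
xe sr-scan uuid=cf896912-cd71-d2b2-488a-5792b7147c87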
-
@ronan-a said in XOSTOR hyperconvergence preview:
Okay so it's probably not related to the driver itself, I will take a look at the logs after reception.
Did you get chance to look at the logs I sent?
-
@geoffbland So, I didn't notice useful info outside of:
FIXME drbd_a_xcp-volu[24302] op clear, bitmap locked for 'set_n_write sync_handshake' by drbd_r_xcp-volu[24231]
...
FIXME drbd_a_xcp-volu[24328] op clear, bitmap locked for 'demote' by drbd_w_xcp-volu[24188]
Like I said in my e-mail, maybe there are more details in another log file. I hope.
-
@ronan-a said in XOSTOR hyperconvergence preview:
Like I said in my e-mail, maybe there are more details in another log file. I hope.
In the end I realised I was more trying to "use" XOSTOR whilst testing rather than properly test it. So I decided to rip it all down, rebuild, and retest - this time properly recording each step so any issues can be replicated. I will let you know how this goes.