-
@ronan-a I will copy those logs soon - Do you have a way I can provide you the logs off-forum, since they're from a production system?
-
Not sure what we're doing wrong - Attempted to add a new host to the linstor SR and it's failing. I've run the install command with the disks we want on the host, but when running the "addHost" function, it fails.
[13:25 ovbh-pprod-xen13 ~]# xe host-call-plugin host-uuid=6e845981-1c12-4e70-b0f7-54431959d630 plugin=linstor-manager fn=addHost args:groupName=linstor_group/thin_device
There was a failure communicating with the plug-in.
status: addHost
stdout: Failure
stderr: ['VDI_IN_USE', 'OpaqueRef:f25cd94b-c948-4c3a-a410-aa29a3749943']
Edit : So it's not documented, but it looks like it's failing because the SR is in use? Does that mean that we can't add or remove hosts from linstor without unmounting all VDIs?
-
@Maelstrom96 No you can add a host with running VMs.
I suppose there is a small issue here... Please send me your logs again (SMlog of each host). -
We were able to finally add our new #4 host to the linstor SR after killing all VMs with attached VDIs. However, we've hit a new bug that we're not sure how to fix.
Once we added the new host, we were curious to see if a live migration to it would work - It did not. It actually just resulted in the VM being in a zombie state and we had to manually destroy the domains on both the source and destination servers, and reset the power state of the VM.
That first bug was most likely caused by our custom linstor configuration: we have set up another linstor node interface on each node and changed their PrefNic. It wasn't applied to the new host, so the DRBD connection wouldn't have worked.
[16:51 ovbh-pprod-xen10 lib]# linstor --controllers=10.2.0.19,10.2.0.20,10.2.0.21 node interface list ovbh-pprod-xen12
+----------------------------------------------------------------------+
| ovbh-pprod-xen12 | NetInterface | IP        | Port | EncryptionType  |
|======================================================================|
| + StltCon        | default      | 10.2.0.21 | 3366 | PLAIN           |
| +                | stornet      | 10.2.4.12 |      |                 |
+----------------------------------------------------------------------+

[16:41 ovbh-pprod-xen10 lib]# linstor --controllers=10.2.0.19,10.2.0.20,10.2.0.21 node list-properties ovbh-pprod-xen12
+--------------------------------------+
| Key             | Value              |
|======================================|
| Aux/xcp-ng.node | ovbh-pprod-xen12   |
| Aux/xcp-ng/node | ovbh-pprod-xen12   |
| CurStltConnName | default            |
| NodeUname       | ovbh-pprod-xen12   |
| PrefNic         | stornet            |
+--------------------------------------+
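For completeness, the extra interface and PrefNic shown above were applied with commands along these lines. The node name and storage-network IP below are placeholders for the new host, and the commands are echoed rather than executed since they need a live LINSTOR controller:

```shell
# Placeholder values (assumptions) -- substitute the real node name and
# the new host's IP on the storage network.
NODE="ovbh-pprod-xen13"
STORNET_IP="10.2.4.13"

# Create the second node interface, then prefer it for DRBD traffic.
# Echoed for review; drop the "echo" to actually run them.
echo linstor node interface create "$NODE" stornet "$STORNET_IP"
echo linstor node set-property "$NODE" PrefNic stornet
```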
However, once the VM was down and all the linstor configuration was updated to match the rest of the cluster, I tried to manually start that VM on the new host, but it's not working. It looks like linstor is never asked to create the volume on the host as a diskless resource, since the volume doesn't exist there.
SMLog:
Feb 28 17:01:31 ovbh-pprod-xen13 SM: [25108] lock: opening lock file /var/lock/sm/a8b860a9-5246-0dd2-8b7f-4806604f219a/sr
Feb 28 17:01:31 ovbh-pprod-xen13 SM: [25108] lock: acquired /var/lock/sm/a8b860a9-5246-0dd2-8b7f-4806604f219a/sr
Feb 28 17:01:31 ovbh-pprod-xen13 SM: [25108] call-plugin on ff631fff-1947-4631-a35d-9352204f98d9 (linstor-manager:lockVdi with {'groupName': 'linstor_group/thin_device', 'srUuid': 'a8b860a9-5246-0dd2-8b7f-4806604f219a', 'vdiUuid': '02ca1b5b-fef4-47d4-8736-40908385739c', 'locked': 'True'}) returned: True
Feb 28 17:01:33 ovbh-pprod-xen13 SM: [25108] ['/usr/bin/vhd-util', 'query', '--debug', '-vsfp', '-n', '/dev/drbd/by-res/xcp-volume-fb565237-b169-434d-b694-4707e6f51f4c/0']
Feb 28 17:01:33 ovbh-pprod-xen13 SM: [25108] FAILED in util.pread: (rc 2) stdout: 'error opening /dev/drbd/by-res/xcp-volume-fb565237-b169-434d-b694-4707e6f51f4c/0: -2
Feb 28 17:01:33 ovbh-pprod-xen13 SM: [25108] ', stderr: ''
Feb 28 17:01:33 ovbh-pprod-xen13 SM: [25108] Got exception: No such file or directory. Retry number: 0
Feb 28 17:01:35 ovbh-pprod-xen13 SM: [25108] ['/usr/bin/vhd-util', 'query', '--debug', '-vsfp', '-n', '/dev/drbd/by-res/xcp-volume-fb565237-b169-434d-b694-4707e6f51f4c/0']
Feb 28 17:01:35 ovbh-pprod-xen13 SM: [25108] FAILED in util.pread: (rc 2) stdout: 'error opening /dev/drbd/by-res/xcp-volume-fb565237-b169-434d-b694-4707e6f51f4c/0: -2
Feb 28 17:01:35 ovbh-pprod-xen13 SM: [25108] ', stderr: ''
Feb 28 17:01:35 ovbh-pprod-xen13 SM: [25108] Got exception: No such file or directory. Retry number: 1
Feb 28 17:01:37 ovbh-pprod-xen13 SM: [25108] ['/usr/bin/vhd-util', 'query', '--debug', '-vsfp', '-n', '/dev/drbd/by-res/xcp-volume-fb565237-b169-434d-b694-4707e6f51f4c/0']
Feb 28 17:01:37 ovbh-pprod-xen13 SM: [25108] FAILED in util.pread: (rc 2) stdout: 'error opening /dev/drbd/by-res/xcp-volume-fb565237-b169-434d-b694-4707e6f51f4c/0: -2
Feb 28 17:01:37 ovbh-pprod-xen13 SM: [25108] ', stderr: ''
Feb 28 17:01:37 ovbh-pprod-xen13 SM: [25108] Got exception: No such file or directory. Retry number: 2
Feb 28 17:01:39 ovbh-pprod-xen13 SM: [25108] ['/usr/bin/vhd-util', 'query', '--debug', '-vsfp', '-n', '/dev/drbd/by-res/xcp-volume-fb565237-b169-434d-b694-4707e6f51f4c/0']
Feb 28 17:01:39 ovbh-pprod-xen13 SM: [25108] FAILED in util.pread: (rc 2) stdout: 'error opening /dev/drbd/by-res/xcp-volume-fb565237-b169-434d-b694-4707e6f51f4c/0: -2
Feb 28 17:01:39 ovbh-pprod-xen13 SM: [25108] ', stderr: ''
Feb 28 17:01:39 ovbh-pprod-xen13 SM: [25108] Got exception: No such file or directory. Retry number: 3
Feb 28 17:01:41 ovbh-pprod-xen13 SM: [25108] ['/usr/bin/vhd-util', 'query', '--debug', '-vsfp', '-n', '/dev/drbd/by-res/xcp-volume-fb565237-b169-434d-b694-4707e6f51f4c/0']
Feb 28 17:01:41 ovbh-pprod-xen13 SM: [25108] FAILED in util.pread: (rc 2) stdout: 'error opening /dev/drbd/by-res/xcp-volume-fb565237-b169-434d-b694-4707e6f51f4c/0: -2
Feb 28 17:01:41 ovbh-pprod-xen13 SM: [25108] ', stderr: ''
Feb 28 17:01:41 ovbh-pprod-xen13 SM: [25108] Got exception: No such file or directory. Retry number: 4
Feb 28 17:01:41 ovbh-pprod-xen13 SM: [25108] ['/usr/bin/vhd-util', 'query', '--debug', '-vsfp', '-n', '/dev/drbd/by-res/xcp-volume-fb565237-b169-434d-b694-4707e6f51f4c/0']
Feb 28 17:01:41 ovbh-pprod-xen13 SM: [25108] FAILED in util.pread: (rc 2) stdout: 'error opening /dev/drbd/by-res/xcp-volume-fb565237-b169-434d-b694-4707e6f51f4c/0: -2
Feb 28 17:01:41 ovbh-pprod-xen13 SM: [25108] ', stderr: ''
Feb 28 17:01:41 ovbh-pprod-xen13 SM: [25108] failed to execute locally vhd-util (sys 2)
Feb 28 17:01:42 ovbh-pprod-xen13 SM: [25108] call-plugin (getVHDInfo with {'devicePath': '/dev/drbd/by-res/xcp-volume-fb565237-b169-434d-b694-4707e6f51f4c/0', 'groupName': 'linstor_group/thin_device', 'includeParent': 'True'}) returned: {"uuid": "02ca1b5b-fef4-47d4-8736-40908385739c", "parentUuid": "1ad76dd3-14af-4636-bf5d-6822b81bfd0c", "sizeVirt": 53687091200, "sizePhys": 1700033024, "parentPath": "/dev/drbd/by-res/xcp-v$
Feb 28 17:01:42 ovbh-pprod-xen13 SM: [25108] VDI 02ca1b5b-fef4-47d4-8736-40908385739c loaded! (path=/dev/drbd/by-res/xcp-volume-fb565237-b169-434d-b694-4707e6f51f4c/0, hidden=0)
Feb 28 17:01:42 ovbh-pprod-xen13 SM: [25108] lock: released /var/lock/sm/a8b860a9-5246-0dd2-8b7f-4806604f219a/sr
Feb 28 17:01:42 ovbh-pprod-xen13 SM: [25108] vdi_epoch_begin {'sr_uuid': 'a8b860a9-5246-0dd2-8b7f-4806604f219a', 'subtask_of': 'DummyRef:|3f01e26c-0225-40e1-9683-bffe5bb69490|VDI.epoch_begin', 'vdi_ref': 'OpaqueRef:f25cd94b-c948-4c3a-a410-aa29a3749943', 'vdi_on_boot': 'persist', 'args': [], 'vdi_location': '02ca1b5b-fef4-47d4-8736-40908385739c', 'host_ref': 'OpaqueRef:3cd7e97c-4b79-473e-b925-c25f8cb393d8', 'session_ref': '$
Feb 28 17:01:42 ovbh-pprod-xen13 SM: [25108] call-plugin on ff631fff-1947-4631-a35d-9352204f98d9 (linstor-manager:lockVdi with {'groupName': 'linstor_group/thin_device', 'srUuid': 'a8b860a9-5246-0dd2-8b7f-4806604f219a', 'vdiUuid': '02ca1b5b-fef4-47d4-8736-40908385739c', 'locked': 'False'}) returned: True
Feb 28 17:01:42 ovbh-pprod-xen13 SM: [25278] lock: opening lock file /var/lock/sm/a8b860a9-5246-0dd2-8b7f-4806604f219a/sr
Feb 28 17:01:42 ovbh-pprod-xen13 SM: [25278] lock: acquired /var/lock/sm/a8b860a9-5246-0dd2-8b7f-4806604f219a/sr
Feb 28 17:01:43 ovbh-pprod-xen13 SM: [25278] call-plugin on ff631fff-1947-4631-a35d-9352204f98d9 (linstor-manager:lockVdi with {'groupName': 'linstor_group/thin_device', 'srUuid': 'a8b860a9-5246-0dd2-8b7f-4806604f219a', 'vdiUuid': '02ca1b5b-fef4-47d4-8736-40908385739c', 'locked': 'True'}) returned: True
Feb 28 17:01:44 ovbh-pprod-xen13 SM: [25278] ['/usr/bin/vhd-util', 'query', '--debug', '-vsfp', '-n', '/dev/drbd/by-res/xcp-volume-fb565237-b169-434d-b694-4707e6f51f4c/0']
Feb 28 17:01:44 ovbh-pprod-xen13 SM: [25278] FAILED in util.pread: (rc 2) stdout: 'error opening /dev/drbd/by-res/xcp-volume-fb565237-b169-434d-b694-4707e6f51f4c/0: -2
Feb 28 17:01:44 ovbh-pprod-xen13 SM: [25278] ', stderr: ''
Feb 28 17:01:44 ovbh-pprod-xen13 SM: [25278] Got exception: No such file or directory. Retry number: 0
Feb 28 17:01:46 ovbh-pprod-xen13 SM: [25278] ['/usr/bin/vhd-util', 'query', '--debug', '-vsfp', '-n', '/dev/drbd/by-res/xcp-volume-fb565237-b169-434d-b694-4707e6f51f4c/0']
Feb 28 17:01:46 ovbh-pprod-xen13 SM: [25278] FAILED in util.pread: (rc 2) stdout: 'error opening /dev/drbd/by-res/xcp-volume-fb565237-b169-434d-b694-4707e6f51f4c/0: -2
Feb 28 17:01:46 ovbh-pprod-xen13 SM: [25278] ', stderr: ''
[...]
The folder /dev/drbd/by-res/ doesn't exist currently.

Also, not sure why, but it seems like when adding the new host, a new storage pool linstor_group_thin_device for its local storage wasn't provisioned automatically, although we can see that a diskless storage pool was provisioned.

[17:26 ovbh-pprod-xen10 lib]# linstor --controllers=10.2.0.19,10.2.0.20,10.2.0.21 storage-pool list
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| StoragePool                      | Node                                     | Driver   | PoolName                  | FreeCapacity | TotalCapacity | CanSnapshots | State | SharedName |
|========================================================================================================================================================================================|
| DfltDisklessStorPool             | ovbh-pprod-xen10                         | DISKLESS |                           |              |               | False        | Ok    |            |
| DfltDisklessStorPool             | ovbh-pprod-xen11                         | DISKLESS |                           |              |               | False        | Ok    |            |
| DfltDisklessStorPool             | ovbh-pprod-xen12                         | DISKLESS |                           |              |               | False        | Ok    |            |
| DfltDisklessStorPool             | ovbh-pprod-xen13                         | DISKLESS |                           |              |               | False        | Ok    |            |
| DfltDisklessStorPool             | ovbh-vprod-k8s04-worker01.floatplane.com | DISKLESS |                           |              |               | False        | Ok    |            |
| DfltDisklessStorPool             | ovbh-vprod-k8s04-worker02.floatplane.com | DISKLESS |                           |              |               | False        | Ok    |            |
| DfltDisklessStorPool             | ovbh-vprod-k8s04-worker03.floatplane.com | DISKLESS |                           |              |               | False        | Ok    |            |
| DfltDisklessStorPool             | ovbh-vtest-k8s02-worker01.floatplane.com | DISKLESS |                           |              |               | False        | Ok    |            |
| DfltDisklessStorPool             | ovbh-vtest-k8s02-worker02.floatplane.com | DISKLESS |                           |              |               | False        | Ok    |            |
| DfltDisklessStorPool             | ovbh-vtest-k8s02-worker03.floatplane.com | DISKLESS |                           |              |               | False        | Ok    |            |
| xcp-sr-linstor_group_thin_device | ovbh-pprod-xen10                         | LVM_THIN | linstor_group/thin_device | 3.00 TiB     | 3.49 TiB      | True         | Ok    |            |
| xcp-sr-linstor_group_thin_device | ovbh-pprod-xen11                         | LVM_THIN | linstor_group/thin_device | 3.03 TiB     | 3.49 TiB      | True         | Ok    |            |
| xcp-sr-linstor_group_thin_device | ovbh-pprod-xen12                         | LVM_THIN | linstor_group/thin_device | 3.06 TiB     | 3.49 TiB      | True         | Ok    |            |
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
[17:32 ovbh-pprod-xen13 ~]# lsblk
NAME                                MAJ:MIN RM    SIZE RO TYPE  MOUNTPOINT
nvme0n1                             259:0    0    3.5T  0 disk
|-nvme0n1p1                         259:1    0      1T  0 part
| `-md128                             9:128  0 1023.9G  0 raid1
`-nvme0n1p2                         259:2    0    2.5T  0 part
  |-linstor_group-thin_device_tdata 252:1    0      5T  0 lvm
  | `-linstor_group-thin_device     252:2    0      5T  0 lvm
  `-linstor_group-thin_device_tmeta 252:0    0     80M  0 lvm
    `-linstor_group-thin_device     252:2    0      5T  0 lvm
sdb                                   8:16   1  447.1G  0 disk
`-md127                               9:127  0  447.1G  0 raid1
  |-md127p5                         259:10   0      4G  0 md    /var/log
  |-md127p3                         259:8    0  405.6G  0 md
  | `-XSLocalEXT--ea64a6f6--9ef2--408a--039f--33b119fbd7e8-ea64a6f6--9ef2--408a--039f--33b119fbd7e8 252:3 0 405.6G 0 lvm /run/sr-mount/ea64a6f6-9ef2-408a-039f-33b119fbd7e8
  |-md127p1                         259:6    0     18G  0 md    /
  |-md127p6                         259:11   0      1G  0 md    [SWAP]
  |-md127p4                         259:9    0    512M  0 md    /boot/efi
  `-md127p2                         259:7    0     18G  0 md
nvme1n1                             259:3    0    3.5T  0 disk
|-nvme1n1p2                         259:5    0    2.5T  0 part
| `-linstor_group-thin_device_tdata 252:1    0      5T  0 lvm
|   `-linstor_group-thin_device     252:2    0      5T  0 lvm
`-nvme1n1p1                         259:4    0      1T  0 part
  `-md128                             9:128  0 1023.9G  0 raid1
sda                                   8:0    1  447.1G  0 disk
`-md127                               9:127  0  447.1G  0 raid1
  |-md127p5                         259:10   0      4G  0 md    /var/log
  |-md127p3                         259:8    0  405.6G  0 md
  | `-XSLocalEXT--ea64a6f6--9ef2--408a--039f--33b119fbd7e8-ea64a6f6--9ef2--408a--039f--33b119fbd7e8 252:3 0 405.6G 0 lvm /run/sr-mount/ea64a6f6-9ef2-408a-039f-33b119fbd7e8
  |-md127p1                         259:6    0     18G  0 md    /
  |-md127p6                         259:11   0      1G  0 md    [SWAP]
  |-md127p4                         259:9    0    512M  0 md    /boot/efi
  `-md127p2                         259:7    0     18G  0 md
-
@Maelstrom96 said in XOSTOR hyperconvergence preview:
The folder /dev/drbd/by-res/ doesn't exist currently.
You're lucky, I just produced a fix yesterday to fix this kind of problem on pools with more than 3 machines: https://github.com/xcp-ng/sm/commit/f916647f44223206b24cf70d099637882c53fee8
Unfortunately, I can't release a new version right away, but I think this change can be applied to your pool.
In the worst case I'll see if I can release a new version without all the fixes in progress... -
@ronan-a said in XOSTOR hyperconvergence preview:
You're lucky, I just produced a fix yesterday to fix this kind of problem on pools with more than 3 machines: https://github.com/xcp-ng/sm/commit/f916647f44223206b24cf70d099637882c53fee8
Unfortunately, I can't release a new version right away, but I think this change can be applied to your pool.
In the worst case I'll see if I can release a new version without all the fixes in progress...

Thanks, that does look like it would fix the missing drbd/by-res/ volumes.

Do you have an idea about the missing StoragePool for the new host that was added using linstor-manager.addHost? I've checked the code and it seems like it might only provision the SP on sr.create?

Also, I'm not sure how feasible it would be for SM, but having a nightly-style build process for those cases seems like it would be really useful for hotfix testing.
-
Hello guys,
Awesome realisation, works like a charm! But security is at a low level: anyone who wants to break the disk cluster/HC storage can do it. After installing, I see the Linstor controller ports are open to the whole world. Is there any solution to close the external port (when management is on a global IP) and communicate through the storage network instead? -
Hmm, in theory I would say it should only listen on the management network (or the storage network, but not on everything)
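Until the controller can be bound to specific networks, firewall rules are one possible stopgap. A sketch only: the management subnet, storage subnet, and the controller REST API port (LINSTOR's documented default is 3370) are all assumptions to adapt, and the commands are echoed rather than executed since they would modify a live firewall:

```shell
# Assumed values -- adapt to your environment.
MGMT_NET="10.2.0.0/24"
STOR_NET="10.2.4.0/24"
CTRL_PORT="3370"   # LINSTOR controller REST API default port (assumption)

# Allow the controller port only from the internal subnets, drop the rest.
# Echoed for review; drop the "echo" to actually apply the rules.
echo iptables -A INPUT -p tcp --dport "$CTRL_PORT" -s "$MGMT_NET" -j ACCEPT
echo iptables -A INPUT -p tcp --dport "$CTRL_PORT" -s "$STOR_NET" -j ACCEPT
echo iptables -A INPUT -p tcp --dport "$CTRL_PORT" -j DROP
```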
-
@ronan-a Any news on when the new version of linstor SM will be released? We're actually hard blocked by the behavior with 4 nodes right now so we can't move forward with a lot of other tests we want to do.
We also worked on doing a custom build of linstor-controller and linstor-satellite to support CentOS 7 with its lack of setsid -w support, and we might want to see if we could get a satisfactory PR merged into linstor-server master so that people using XCP-ng can also use linstor's built-in snapshot shipping. Since the K8s linstor snapshotter uses that functionality to provide volume backups, using K8s with linstor on XCP-ng is not really possible unless this is fixed.

Would that be something that you guys could help us push to linstor?
-
Do you have an idea about the missing StoragePool for the new host that was added using linstor-manager.addHost? I've checked the code and it seems like it might just provision the SP on sr.create?
If I remember correctly, this script is only there to add the PBDs of the new host and configure the services. If you want to add a new device, you must manually create a new LVM VG and add it via a linstor command.
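A sketch of those manual steps for a hypothetical new host and device (the node name and device path are assumptions; the storage-pool name just mirrors how the existing nodes are named). The host- and cluster-touching commands are echoed rather than executed:

```shell
# Assumed values -- substitute the real node name and backing device.
NODE="ovbh-pprod-xen13"
DEVICE="/dev/nvme0n1p2"
GROUP="linstor_group/thin_device"

# Derive the LINSTOR storage-pool name the same way the existing nodes
# are named: "xcp-sr-" prefix plus the group name with "/" replaced by "_".
SP_NAME="xcp-sr-$(echo "$GROUP" | tr '/' '_')"

# Echoed for review; drop the "echo" to actually run them on the host.
echo vgcreate linstor_group "$DEVICE"            # create the backing VG
echo lvcreate -l '100%FREE' -T "$GROUP"          # create the thin pool
echo linstor storage-pool create lvmthin "$NODE" "$SP_NAME" "$GROUP"
```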
Also, I'm not sure how feasible it would be for SM but having a nightly-style build process for those cases seems like it would be really useful for hotfix testing.
The branch was in a bad state (many fixes to test, regressions, etc). I was able to clean all that up, it should be easier to do releases now.
Any news on when the new version of linstor SM will be released?
Today.
-
WARNING: I just pushed new packages (they should be available in our repo in a few minutes) and I made an important change in the driver which requires manual intervention. minidrbdcluster is no longer used to start the controller; instead we use drbd-reactor, which is more robust.

To update properly, you must:

- Disable minidrbdcluster on each host: systemctl disable --now minidrbdcluster.
- Install the new LINSTOR packages using yum:
  - blktap-3.37.4-1.0.1.0.linstor.1.xcpng8.2.x86_64.rpm
  - xcp-ng-linstor-1.0-1.xcpng8.2.noarch.rpm
  - xcp-ng-release-linstor-1.2-1.xcpng8.2.noarch.rpm
  - http-nbd-transfer-1.2.0-1.xcpng8.2.x86_64.rpm
  - sm-2.30.7-1.3.0.linstor.7.xcpng8.2.x86_64.rpm
- On each host, edit /etc/drbd-reactor.d/sm-linstor.toml (note: it's probably necessary to create the folder /etc/drbd-reactor.d/ with mkdir) and add this content:

[[promoter]]
[promoter.resources.xcp-persistent-database]
start = [ "var-lib-linstor.service", "linstor-controller.service" ]

- After that you can manually start drbd-reactor on each machine: systemctl enable --now drbd-reactor.

You can reuse your SR again.
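If it helps, the whole per-host sequence can be collapsed into one sketch. The service and yum commands are echoed rather than executed, and the TOML is written to /tmp here instead of /etc/drbd-reactor.d/, purely so the sketch is safe to run as-is:

```shell
# 1. Stop using minidrbdcluster (echoed for review).
echo systemctl disable --now minidrbdcluster

# 2. Install the new packages (echoed for review).
echo yum install blktap http-nbd-transfer sm xcp-ng-linstor xcp-ng-release-linstor

# 3. Write the drbd-reactor promoter config. Written under /tmp in this
#    sketch; the real target is /etc/drbd-reactor.d/sm-linstor.toml.
mkdir -p /tmp/drbd-reactor.d
cat > /tmp/drbd-reactor.d/sm-linstor.toml <<'EOF'
[[promoter]]
[promoter.resources.xcp-persistent-database]
start = [ "var-lib-linstor.service", "linstor-controller.service" ]
EOF

# 4. Enable the replacement service (echoed for review).
echo systemctl enable --now drbd-reactor
```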
-
@ronan-a Just to be sure: IF you install it from scratch you can still use the installation instructions from the top of this thread, correct?
-
@Swen Yes you can still use the installation script. I just changed a line to install the new blktap, so redownload it if necessary.
-
@ronan-a perfect, thx! Is this the new release Olivier was talking about? Can you provide some information when to expect the first stable release?
-
@ronan-a Just to be clear on what I have to do:

1. Disable minidrbdcluster on each host (systemctl disable --now minidrbdcluster). No issue here.

2. Install new LINSTOR packages. How do we do that? Do we run the installer again by running:
wget https://gist.githubusercontent.com/Wescoeur/7bb568c0e09e796710b0ea966882fcac/raw/1707fbcfac22e662c2b80c14762f2c7d937e677c/gistfile1.txt -O install && chmod +x install
or
./install update
Or do I simply install the new RPMs without running the installer?
wget --no-check-certificate blktap-3.37.4-1.0.1.0.linstor.1.xcpng8.2.x86_64.rpm
wget --no-check-certificate xcp-ng-linstor-1.0-1.xcpng8.2.noarch.rpm
wget --no-check-certificate xcp-ng-release-linstor-1.2-1.xcpng8.2.noarch.rpm
wget --no-check-certificate http-nbd-transfer-1.2.0-1.xcpng8.2.x86_64.rpm
wget --no-check-certificate sm-2.30.7-1.3.0.linstor.7.xcpng8.2.x86_64.rpm
yum install *.rpm
Where do we get these files from? What is the URL?

3. On each host, edit /etc/drbd-reactor.d/sm-linstor.toml. No problem here.

Can you please confirm which procedure to use for step 2?

Thank you.
-
-
@ronan-a is that the correct URL?
-
@Swen said in XOSTOR hyperconvergence preview:
@ronan-a perfect, thx! Is this the new release Olivier was talking about? Can you provide some information when to expect the first stable release?
If we don't have a new critical bug, normally in a few weeks.

Or do I simply install the new RPMs without running the installer?

You can update the packages just using yum if you already have the xcp-ng-linstor yum repo config. There is no reason to download the packages manually from koji.
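With the repo already configured, the update is a single yum transaction. A sketch, echoed rather than executed so it is safe to run anywhere; the package list is taken from the WARNING post above:

```shell
# One transaction per host, pulling the new builds from the configured
# xcp-ng-linstor repo. Echoed for review; drop the "echo" to run it.
CMD="yum update blktap http-nbd-transfer sm xcp-ng-linstor xcp-ng-release-linstor"
echo "$CMD"
```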
-
@ronan-a ok, I managed to install using yum:
yum install blktap.x86_64
yum install xcp-ng-linstor.noarch
yum install xcp-ng-release-linstor.noarch
yum install http-nbd-transfer.x86_64
But I cannot find sm-2.30.7-1.3.0.linstor.7.xcpng8.2.x86_64.rpm. Is it yum install sm-core-libs.noarch? -
@fred974 What's the output of yum update sm? -
systemctl enable --now drbd-reactor
Job for drbd-reactor.service failed because the control process exited with error code. See "systemctl status drbd-reactor.service" and "journalctl -xe" for details.
systemctl status drbd-reactor.service
[12:21 uk xostortmp]# systemctl status drbd-reactor.service
* drbd-reactor.service - DRBD-Reactor Service
   Loaded: loaded (/usr/lib/systemd/system/drbd-reactor.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Thu 2023-03-23 12:12:33 GMT; 9min ago
     Docs: man:drbd-reactor
           man:drbd-reactorctl
           man:drbd-reactor.toml
  Process: 8201 ExecStart=/usr/sbin/drbd-reactor (code=exited, status=1/FAILURE)
 Main PID: 8201 (code=exited, status=1/FAILURE)
journalctl -xe has no useful information, but the SMlog file has the following:
Mar 23 12:29:27 uk SM: [17122] Raising exception [47, The SR is not available [opterr=Unable to find controller uri...]]
Mar 23 12:29:27 uk SM: [17122] lock: released /var/lock/sm/a20ee08c-40d0-9818-084f-282bbca1f217/sr
Mar 23 12:29:27 uk SM: [17122] ***** generic exception: sr_scan: EXCEPTION <class 'SR.SROSError'>, The SR is not available [opterr=Unable to find controller uri...]
Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/SRCommand.py", line 110, in run
Mar 23 12:29:27 uk SM: [17122]     return self._run_locked(sr)
Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/SRCommand.py", line 159, in _run_locked
Mar 23 12:29:27 uk SM: [17122]     rv = self._run(sr, target)
Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/SRCommand.py", line 364, in _run
Mar 23 12:29:27 uk SM: [17122]     return sr.scan(self.params['sr_uuid'])
Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/LinstorSR", line 634, in wrap
Mar 23 12:29:27 uk SM: [17122]     return load(self, *args, **kwargs)
Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/LinstorSR", line 560, in load
Mar 23 12:29:27 uk SM: [17122]     raise xs_errors.XenError('SRUnavailable', opterr=str(e))
Mar 23 12:29:27 uk SM: [17122]
Mar 23 12:29:27 uk SM: [17122] ***** LINSTOR resources on XCP-ng: EXCEPTION <class 'SR.SROSError'>, The SR is not available [opterr=Unable to find controller uri...]
Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/SRCommand.py", line 378, in run
Mar 23 12:29:27 uk SM: [17122]     ret = cmd.run(sr)
Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/SRCommand.py", line 110, in run
Mar 23 12:29:27 uk SM: [17122]     return self._run_locked(sr)
Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/SRCommand.py", line 159, in _run_locked
Mar 23 12:29:27 uk SM: [17122]     rv = self._run(sr, target)
Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/SRCommand.py", line 364, in _run
Mar 23 12:29:27 uk SM: [17122]     return sr.scan(self.params['sr_uuid'])
Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/LinstorSR", line 634, in wrap
Mar 23 12:29:27 uk SM: [17122]     return load(self, *args, **kwargs)
Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/LinstorSR", line 560, in load
Mar 23 12:29:27 uk SM: [17122]     raise xs_errors.XenError('SRUnavailable', opterr=str(e))
Is it normal that the XOSTOR SR is still visible in XO?