-
@ronan-a perfect, thx! Is this the new release Olivier was talking about? Can you provide some information when to expect the first stable release?
-
@ronan-a Just to be clear on what I have to do...
-
- Disable minidrbdcluster on each host:
systemctl disable --now minidrbdcluster
No issue here... (see the verification sketch after the list)
- Install new LINSTOR packages. How do we do that? Do we run the installer again by running:
wget https://gist.githubusercontent.com/Wescoeur/7bb568c0e09e796710b0ea966882fcac/raw/1707fbcfac22e662c2b80c14762f2c7d937e677c/gistfile1.txt -O install && chmod +x install
or:
./install update
Or do I simply install the new RPMs without running the installer?
wget --no-check-certificate blktap-3.37.4-1.0.1.0.linstor.1.xcpng8.2.x86_64.rpm
wget --no-check-certificate xcp-ng-linstor-1.0-1.xcpng8.2.noarch.rpm
wget --no-check-certificate xcp-ng-release-linstor-1.2-1.xcpng8.2.noarch.rpm
wget --no-check-certificate http-nbd-transfer-1.2.0-1.xcpng8.2.x86_64.rpm
wget --no-check-certificate sm-2.30.7-1.3.0.linstor.7.xcpng8.2.x86_64.rpm
yum install *.rpm
Where do we get these files from? What is the URL?
- On each host, edit /etc/drbd-reactor.d/sm-linstor.toml: no problem here...
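For step 1, a quick way to verify the unit is really off on every host (a minimal sketch using the unit name from above):
systemctl is-enabled minidrbdcluster   # expect "disabled"
systemctl is-active minidrbdcluster    # expect "inactive"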
Can you please confirm which procedure to use for step 2?
Thank you.
-
-
@ronan-a is that the correct URL?
-
@Swen said in XOSTOR hyperconvergence preview:
@ronan-a perfect, thx! Is this the new release Olivier was talking about? Can you provide some information when to expect the first stable release?
If we don't have a new critical bug, normally in a few weeks.
Or do I simply install the new RPMs without running the installer?
You can update the packages using yum alone if you already have the xcp-ng-linstor yum repo config. There is no reason to download the packages manually from koji.
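For example, something like this should pull everything from the repo in one go (just a sketch; it assumes the xcp-ng-linstor repo is enabled and uses the package names mentioned earlier in the thread):
yum clean metadata   # make sure fresh repo metadata is fetched
yum update blktap http-nbd-transfer sm xcp-ng-linstor   # update the LINSTOR-related packages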
-
@ronan-a ok,
I managed to install using yum:
yum install blktap.x86_64
yum install xcp-ng-linstor.noarch
yum install xcp-ng-release-linstor.noarch
yum install http-nbd-transfer.x86_64
But I cannot find sm-2.30.7-1.3.0.linstor.7.xcpng8.2.x86_64.rpm.
Is it
yum install sm-core-libs.noarch
?
-
@fred974 What's the output of
yum update sm
?
-
systemctl enable --now drbd-reactor
Job for drbd-reactor.service failed because the control process exited with error code. See "systemctl status drbd-reactor.service" and "journalctl -xe" for details.
systemctl status drbd-reactor.service
[12:21 uk xostortmp]# systemctl status drbd-reactor.service
* drbd-reactor.service - DRBD-Reactor Service
   Loaded: loaded (/usr/lib/systemd/system/drbd-reactor.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Thu 2023-03-23 12:12:33 GMT; 9min ago
     Docs: man:drbd-reactor
           man:drbd-reactorctl
           man:drbd-reactor.toml
  Process: 8201 ExecStart=/usr/sbin/drbd-reactor (code=exited, status=1/FAILURE)
 Main PID: 8201 (code=exited, status=1/FAILURE)
journalctl -xe has no useful information, but the SMlog log file has the following:
Mar 23 12:29:27 uk SM: [17122] Raising exception [47, The SR is not available [opterr=Unable to find controller uri...]]
Mar 23 12:29:27 uk SM: [17122] lock: released /var/lock/sm/a20ee08c-40d0-9818-084f-282bbca1f217/sr
Mar 23 12:29:27 uk SM: [17122] ***** generic exception: sr_scan: EXCEPTION <class 'SR.SROSError'>, The SR is not available [opterr=Unable to find controller uri...]
Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/SRCommand.py", line 110, in run
Mar 23 12:29:27 uk SM: [17122]     return self._run_locked(sr)
Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/SRCommand.py", line 159, in _run_locked
Mar 23 12:29:27 uk SM: [17122]     rv = self._run(sr, target)
Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/SRCommand.py", line 364, in _run
Mar 23 12:29:27 uk SM: [17122]     return sr.scan(self.params['sr_uuid'])
Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/LinstorSR", line 634, in wrap
Mar 23 12:29:27 uk SM: [17122]     return load(self, *args, **kwargs)
Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/LinstorSR", line 560, in load
Mar 23 12:29:27 uk SM: [17122]     raise xs_errors.XenError('SRUnavailable', opterr=str(e))
Mar 23 12:29:27 uk SM: [17122]
Mar 23 12:29:27 uk SM: [17122] ***** LINSTOR resources on XCP-ng: EXCEPTION <class 'SR.SROSError'>, The SR is not available [opterr=Unable to find controller uri...]
Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/SRCommand.py", line 378, in run
Mar 23 12:29:27 uk SM: [17122]     ret = cmd.run(sr)
Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/SRCommand.py", line 110, in run
Mar 23 12:29:27 uk SM: [17122]     return self._run_locked(sr)
Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/SRCommand.py", line 159, in _run_locked
Mar 23 12:29:27 uk SM: [17122]     rv = self._run(sr, target)
Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/SRCommand.py", line 364, in _run
Mar 23 12:29:27 uk SM: [17122]     return sr.scan(self.params['sr_uuid'])
Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/LinstorSR", line 634, in wrap
Mar 23 12:29:27 uk SM: [17122]     return load(self, *args, **kwargs)
Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/LinstorSR", line 560, in load
Mar 23 12:29:27 uk SM: [17122]     raise xs_errors.XenError('SRUnavailable', opterr=str(e))
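In case it helps, here is what I checked next (a rough sketch; my understanding, which may be wrong, is that "Unable to find controller uri" means no running LINSTOR controller could be reached):
drbd-reactorctl status                 # what drbd-reactor thinks of its plugins
systemctl status linstor-controller    # is a controller service running on this host?
linstor node list                      # only answers on the host currently running the controller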
Is it normal that the XOSTOR SR is still visible in XO?
-
@ronan-a said in XOSTOR hyperconvergence preview:
What's the output of yum update sm?
[12:33 uk ~]# yum update sm
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Excluding mirror: updates.xcp-ng.org
 * xcp-ng-base: mirrors.xcp-ng.org
Excluding mirror: updates.xcp-ng.org
 * xcp-ng-linstor: mirrors.xcp-ng.org
Excluding mirror: updates.xcp-ng.org
 * xcp-ng-updates: mirrors.xcp-ng.org
No packages marked for update
-
@fred974 And
rpm -qa | grep sm
? Because the sm LINSTOR package update is in our repo. So I suppose you already installed it using koji URLs.
-
@ronan-a said in XOSTOR hyperconvergence preview:
@fred974 And
sudo rpm -qa | grep sm
? Because the sm LINSTOR package update is in our repo. So I suppose you already installed it using koji URLs.

microsemi-smartpqi-1.2.10_025-2.xcpng8.2.x86_64
smartmontools-6.5-1.el7.x86_64
sm-rawhba-2.30.7-1.3.0.linstor.7.xcpng8.2.x86_64
ssmtp-2.64-14.el7.x86_64
sm-cli-0.23.0-7.xcpng8.2.x86_64
sm-2.30.7-1.3.0.linstor.7.xcpng8.2.x86_64
libsmbclient-4.10.16-15.el7_9.x86_64
psmisc-22.20-15.el7.x86_64
Yes, I installed it from the koji URLs before seeing your reply.
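For what it's worth, this is how I would double-check where an installed package came from (a sketch; the "From repo" field is only recorded for packages installed through yum):
yum info sm | grep -i "from repo"   # repo the installed sm package came from
rpm -qi sm                          # full package metadata as a fallback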
-
@fred974 I just repaired your pool, there was a small error in the conf that I gave in my previous post.
-
@ronan-a said in XOSTOR hyperconvergence preview:
I just repaired your pool, there was a small error in the conf that I gave in my previous post.
Thank you very much, I really appreciate you fixing this for me.
-
@ronan-a said in XOSTOR hyperconvergence preview:
@Swen said in XOSTOR hyperconvergence preview:
@ronan-a perfect, thx! Is this the new release Olivier was talking about? Can you provide some information when to expect the first stable release?
If we don't have a new critical bug, normally in few weeks.
Fingers crossed!
-
@ronan-a: After doing the installation from scratch with freshly installed XCP-ng hosts, all up to date, I need to repair the SR (via XCP-ng Center) after running xe sr-create, because the SR is in state: Broken and the pool master is in state: Unplugged.
I am not really sure what XCP-ng Center is doing when I click repair, but it works.
I can reproduce this issue, it happens on every installation.
regards,
Swen
-
I don't remember if
sr-create
is also plugging the PBD by default. Repair is just a
xe pbd-plug
IIRC.
-
@olivierlambert it looks like sr-create is doing it, because on all other nodes the SR is attached; only on the pool master (or maybe the node you run sr-create from) the pbd-plug does not work.
-
What's the error message when you try to plug it?
-
@olivierlambert I need to be more clear about this: when doing the sr-create for the LINSTOR storage no error is shown, but the PBD will not be plugged on the pool master. On every other host in the cluster it works automatically. After doing a pbd-plug for the pool master the SR will be plugged. No error is shown at all.
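In case it is useful to others hitting the same thing, this is the manual workaround I use (a sketch; the placeholder UUIDs are not real values):
xe pbd-list sr-uuid=<sr-uuid> params=uuid,host-uuid,currently-attached   # find the PBD that stayed unplugged
xe pbd-plug uuid=<pbd-uuid>                                              # plug it on the pool master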
-
Okay I see, thanks
-
Is there an easy way to map a LINSTOR resource volume to the virtual disk on XCP-ng? When doing linstor volume list I get a resource name back from LINSTOR like this:
xcp-volume-23d07d99-9990-4046-8e7d-020bd61c1883
The last part looks like a UUID to me, but I am unable to find this UUID when using the xe commands.
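What I tried so far (a sketch; the UUID is just the suffix from the example above, and I am not sure it is even supposed to match a VDI UUID directly):
xe vdi-list uuid=23d07d99-9990-4046-8e7d-020bd61c1883 params=all   # direct lookup of the suffix as a VDI UUID
xe vdi-list params=uuid,name-label,sm-config | grep -i 23d07d99    # search all VDI records for the string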