-
@ronan-a Any news on when the new version of linstor SM will be released? We're actually hard blocked by the behavior with 4 nodes right now so we can't move forward with a lot of other tests we want to do.
We also worked on a custom build of linstor-controller and linstor-satellite to support CentOS 7 despite its lack of setsid -w support, and we might want to see if we could get a satisfactory PR merged into linstor-server master so that people using XCP-ng can also use linstor's built-in snapshot shipping. Since the K8s linstor snapshotter uses that functionality to provide volume backups, using K8s with linstor on XCP-ng is not really possible unless this is fixed. Would that be something that you guys could help us push to linstor?
-
Do you have an idea about the missing StoragePool for the new host that was added using linstor-manager.addHost? I've checked the code, and it seems like the StoragePool might only be provisioned on sr.create?
If I remember correctly, this script is only there to add the PBDs of the new host and configure the services. If you want to add a new device, you must manually create a new LVM VG and add it via a linstor command.
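For example, something along these lines should work (an illustration only; the device path, VG name and node/pool names below are placeholders, not values from this thread):
# create an LVM volume group on the new disk of the host
vgcreate linstor_group /dev/sdb
# then register it as a LINSTOR storage pool on that node
linstor storage-pool create lvm <node-name> <storage-pool-name> linstor_group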
Also, I'm not sure how feasible it would be for SM, but having a nightly-style build process for those cases seems like it would be really useful for hotfix testing.
The branch was in a bad state (many fixes to test, regressions, etc). I was able to clean all that up, it should be easier to do releases now.
Any news on when the new version of linstor SM will be released?
Today.
-
WARNING: I just pushed new packages (they should be available in our repo in a few minutes) and I made an important change in the driver which requires manual intervention.
minidrbdcluster is no longer used to start the controller; instead we use drbd-reactor, which is more robust.
To update properly, you must:
- Disable minidrbdcluster on each host: systemctl disable --now minidrbdcluster.
- Install the new LINSTOR packages using yum:
- blktap-3.37.4-1.0.1.0.linstor.1.xcpng8.2.x86_64.rpm
- xcp-ng-linstor-1.0-1.xcpng8.2.noarch.rpm
- xcp-ng-release-linstor-1.2-1.xcpng8.2.noarch.rpm
- http-nbd-transfer-1.2.0-1.xcpng8.2.x86_64.rpm
- sm-2.30.7-1.3.0.linstor.7.xcpng8.2.x86_64.rpm
- On each host, edit /etc/drbd-reactor.d/sm-linstor.toml. (Note: it's probably necessary to create the /etc/drbd-reactor.d/ folder with mkdir.) Add this content:
[[promoter]]
[promoter.resources.xcp-persistent-database]
start = [ "var-lib-linstor.service", "linstor-controller.service" ]
- After that you can manually start drbd-reactor on each machine: systemctl enable --now drbd-reactor.
You can reuse your SR again.
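To recap the steps above as a single per-host sketch (assumptions: the packages are pulled from the xcp-ng-linstor yum repo, and /etc/drbd-reactor.d does not exist yet):
# stop using the old controller manager
systemctl disable --now minidrbdcluster
# pull the new packages (yum install also upgrades already-installed ones)
yum install blktap xcp-ng-linstor xcp-ng-release-linstor http-nbd-transfer sm
# write the drbd-reactor promoter config from the post above
mkdir -p /etc/drbd-reactor.d
cat > /etc/drbd-reactor.d/sm-linstor.toml << 'EOF'
[[promoter]]
[promoter.resources.xcp-persistent-database]
start = [ "var-lib-linstor.service", "linstor-controller.service" ]
EOF
# start the new controller manager
systemctl enable --now drbd-reactor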
-
@ronan-a Just to be sure: IF you install it from scratch you can still use the installation instructions from the top of this thread, correct?
-
@Swen Yes you can still use the installation script. I just changed a line to install the new blktap, so redownload it if necessary.
-
@ronan-a perfect, thx! Is this the new release Olivier was talking about? Can you provide some information when to expect the first stable release?
-
@ronan-a Just to be clear on what I have to do..
- Disable minidrbdcluster on each host: systemctl disable --now minidrbdcluster. No issue here...
- Install new LINSTOR packages. How do we do that? Do we run the installer again by running:
wget https://gist.githubusercontent.com/Wescoeur/7bb568c0e09e796710b0ea966882fcac/raw/1707fbcfac22e662c2b80c14762f2c7d937e677c/gistfile1.txt -O install && chmod +x install
or
./install update
Or do I simply install the new RPMs without running the installer?
wget --no-check-certificate blktap-3.37.4-1.0.1.0.linstor.1.xcpng8.2.x86_64.rpm
wget --no-check-certificate xcp-ng-linstor-1.0-1.xcpng8.2.noarch.rpm
wget --no-check-certificate xcp-ng-release-linstor-1.2-1.xcpng8.2.noarch.rpm
wget --no-check-certificate http-nbd-transfer-1.2.0-1.xcpng8.2.x86_64.rpm
wget --no-check-certificate sm-2.30.7-1.3.0.linstor.7.xcpng8.2.x86_64.rpm
yum install *.rpm
Where do we get these files from? What is the URL?
- On each host, edit /etc/drbd-reactor.d/sm-linstor.toml. No problem here...
Can you please confirm which procedure to use for step 2?
Thank you.
-
@ronan-a is that the correct URL?
-
@Swen said in XOSTOR hyperconvergence preview:
@ronan-a perfect, thx! Is this the new release Olivier was talking about? Can you provide some information when to expect the first stable release?
If we don't have a new critical bug, normally in a few weeks.
Or do I simply install the new RPMs without running the installer?
You can update the packages just using yum if you already have the xcp-ng-linstor yum repo config. There is no reason to manually download the packages from koji.
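As a rough illustration of that (not an official procedure), you can check the repo is present and then let yum pull the new packages from it:
# confirm the LINSTOR repo is configured on the host
yum repolist | grep -i linstor
# update the already-installed packages; use yum install for any package from the list above that is missing
yum update blktap http-nbd-transfer sm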
-
@ronan-a ok,
I managed to install using yum:
yum install blktap.x86_64
yum install xcp-ng-linstor.noarch
yum install xcp-ng-release-linstor.noarch
yum install http-nbd-transfer.x86_64
But I cannot find sm-2.30.7-1.3.0.linstor.7.xcpng8.2.x86_64.rpm.
Is it yum install sm-core-libs.noarch?
-
@fred974 What's the output of yum update sm?
-
systemctl enable --now drbd-reactor
Job for drbd-reactor.service failed because the control process exited with error code. See "systemctl status drbd-reactor.service" and "journalctl -xe" for details.
systemctl status drbd-reactor.service
[12:21 uk xostortmp]# systemctl status drbd-reactor.service
* drbd-reactor.service - DRBD-Reactor Service
   Loaded: loaded (/usr/lib/systemd/system/drbd-reactor.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Thu 2023-03-23 12:12:33 GMT; 9min ago
     Docs: man:drbd-reactor
           man:drbd-reactorctl
           man:drbd-reactor.toml
  Process: 8201 ExecStart=/usr/sbin/drbd-reactor (code=exited, status=1/FAILURE)
 Main PID: 8201 (code=exited, status=1/FAILURE)
journalctl -xe has no useful information, but the SMlog file has the following:
Mar 23 12:29:27 uk SM: [17122] Raising exception [47, The SR is not available [opterr=Unable to find controller uri...]]
Mar 23 12:29:27 uk SM: [17122] lock: released /var/lock/sm/a20ee08c-40d0-9818-084f-282bbca1f217/sr
Mar 23 12:29:27 uk SM: [17122] ***** generic exception: sr_scan: EXCEPTION <class 'SR.SROSError'>, The SR is not available [opterr=Unable to find controller uri...]
Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/SRCommand.py", line 110, in run
Mar 23 12:29:27 uk SM: [17122]     return self._run_locked(sr)
Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/SRCommand.py", line 159, in _run_locked
Mar 23 12:29:27 uk SM: [17122]     rv = self._run(sr, target)
Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/SRCommand.py", line 364, in _run
Mar 23 12:29:27 uk SM: [17122]     return sr.scan(self.params['sr_uuid'])
Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/LinstorSR", line 634, in wrap
Mar 23 12:29:27 uk SM: [17122]     return load(self, *args, **kwargs)
Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/LinstorSR", line 560, in load
Mar 23 12:29:27 uk SM: [17122]     raise xs_errors.XenError('SRUnavailable', opterr=str(e))
Mar 23 12:29:27 uk SM: [17122]
Mar 23 12:29:27 uk SM: [17122] ***** LINSTOR resources on XCP-ng: EXCEPTION <class 'SR.SROSError'>, The SR is not available [opterr=Unable to find controller uri...]
Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/SRCommand.py", line 378, in run
Mar 23 12:29:27 uk SM: [17122]     ret = cmd.run(sr)
Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/SRCommand.py", line 110, in run
Mar 23 12:29:27 uk SM: [17122]     return self._run_locked(sr)
Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/SRCommand.py", line 159, in _run_locked
Mar 23 12:29:27 uk SM: [17122]     rv = self._run(sr, target)
Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/SRCommand.py", line 364, in _run
Mar 23 12:29:27 uk SM: [17122]     return sr.scan(self.params['sr_uuid'])
Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/LinstorSR", line 634, in wrap
Mar 23 12:29:27 uk SM: [17122]     return load(self, *args, **kwargs)
Mar 23 12:29:27 uk SM: [17122]   File "/opt/xensource/sm/LinstorSR", line 560, in load
Mar 23 12:29:27 uk SM: [17122]     raise xs_errors.XenError('SRUnavailable', opterr=str(e))
Is it normal that the XOSTOR SR is still visible in XO?
-
@ronan-a said in XOSTOR hyperconvergence preview:
What's the output of yum update sm?
[12:33 uk ~]# yum update sm
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Excluding mirror: updates.xcp-ng.org
 * xcp-ng-base: mirrors.xcp-ng.org
Excluding mirror: updates.xcp-ng.org
 * xcp-ng-linstor: mirrors.xcp-ng.org
Excluding mirror: updates.xcp-ng.org
 * xcp-ng-updates: mirrors.xcp-ng.org
No packages marked for update
-
@fred974 And rpm -qa | grep sm? Because the sm LINSTOR package update is in our repo. So I suppose you already installed it using koji URLs.
-
@ronan-a said in XOSTOR hyperconvergence preview:
@fred974 And sudo rpm -qa | grep sm? Because the sm LINSTOR package update is in our repo. So I suppose you already installed it using koji URLs.
microsemi-smartpqi-1.2.10_025-2.xcpng8.2.x86_64
smartmontools-6.5-1.el7.x86_64
sm-rawhba-2.30.7-1.3.0.linstor.7.xcpng8.2.x86_64
ssmtp-2.64-14.el7.x86_64
sm-cli-0.23.0-7.xcpng8.2.x86_64
sm-2.30.7-1.3.0.linstor.7.xcpng8.2.x86_64
libsmbclient-4.10.16-15.el7_9.x86_64
psmisc-22.20-15.el7.x86_64
Yes, I installed it from the koji URLs before seeing your reply.
-
@fred974 I just repaired your pool, there was a small error in the conf that I gave in my previous post.
-
@ronan-a said in XOSTOR hyperconvergence preview:
I just repaired your pool, there was a small error in the conf that I gave in my previous post.
Thank you very much. I really appreciate you fixing this for me
-
@ronan-a said in XOSTOR hyperconvergence preview:
@Swen said in XOSTOR hyperconvergence preview:
@ronan-a perfect, thx! Is this the new release Olivier was talking about? Can you provide some information when to expect the first stable release?
If we don't have a new critical bug, normally in a few weeks.
Fingers crossed!
-
@ronan-a: After doing the installation from scratch with newly installed XCP-ng hosts, all up to date, I need to do a repair (via XCP-ng Center) of the SR after doing the xe sr-create, because the SR is in state: Broken and the pool master is in state: Unplugged.
I am not really sure what XCP-ng Center is doing when I click repair, but it works.
I can reproduce this issue, it happens every installation.
regards,
Swen -
I don't remember if sr-create is also plugging the PBD by default. Repair is just a xe pbd-plug IIRC.
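For anyone who prefers the CLI to the XCP-ng Center button, the same repair can likely be done by plugging the unplugged PBD manually (the UUIDs below are placeholders):
# list the PBDs of the SR and spot the one with currently-attached=false
xe pbd-list sr-uuid=<sr-uuid> params=uuid,host-uuid,currently-attached
# plug it back in
xe pbd-plug uuid=<pbd-uuid>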