Posts
-
RE: Unable to enable HA with XOSTOR
@dslauter FYI, you can test the new RPMs from the testing repository: sm-3.2.3-1.14.xcpng8.3 and http-nbd-transfer-1.5.0-1.xcpng8.3.
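For reference, they can be pulled with yum by enabling the testing repository (assuming the standard xcp-ng-testing repository definition is present on the host):
# Update the two packages from the testing repository only.
yum update --enablerepo=xcp-ng-testing sm http-nbd-transfer
# Check the installed versions.
rpm -q sm http-nbd-transfer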
-
RE: Unable to enable HA with XOSTOR
@dslauter Just for your information, I will update http-nbd-transfer and sm in a few weeks. I fixed many issues with HA activation in 8.3 caused by an incorrect migration of specific Python code from version 2 to version 3.
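To give an idea of the class of bug involved (an illustrative example, not the actual sm code): integer division changed between the two versions, which silently breaks size or offset computations.
# Python 2 truncates integer division; Python 3 returns a float.
python2 -c 'print(5 / 2)'   # 2
python3 -c 'print(5 / 2)'   # 2.5
python3 -c 'print(5 // 2)'  # 2, the explicit truncating operator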
-
RE: XOSTOR on 8.3?
@fatek We have important fixes to release before the end of the year concerning the HA problems I corrected, plus other changes on the smapi side. I recently discussed releasing a stable XOSTOR 8.3 version early next year, in order to move forward with important projects regarding smapi v3. But I can't be categorical on a date: we lack user feedback.
-
RE: Unable to enable HA with XOSTOR
@dslauter Are you using XCP-ng 8.3? If so, I think there is a porting problem concerning Python 3...
-
RE: Unable to enable HA with XOSTOR
@dslauter said in Unable to enable HA with XOSTOR:
I don't see any error here. Can you check the other hosts? And how did you create the SR? Is the shared=true flag set?
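For reference, both points can be checked with xe (SR_UUID is a placeholder):
# Check whether the SR is flagged as shared.
xe sr-param-get uuid=<SR_UUID> param-name=shared
# List the SR's PBDs: there should be one per host, all attached.
xe pbd-list sr-uuid=<SR_UUID> params=host-uuid,currently-attached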
-
RE: Unable to enable HA with XOSTOR
@dslauter Can you check the SMlog/kernel.log/daemon.log traces? Without these details, it is not easy to investigate. Thanks!
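These logs live under /var/log on each host; a quick way to pull the storage-related entries (the grep pattern is only a starting point):
# Extract recent LINSTOR-related traces from the SM log.
grep -i linstor /var/log/SMlog | tail -n 100
# And the tail of the kernel and daemon logs.
tail -n 100 /var/log/kern.log /var/log/daemon.log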
-
RE: XOSTOR hyperconvergence preview
@olivierlambert @Jonathon Unfortunately we don't maintain this package, so it's not available in our repositories; the simplest thing is to raise this problem directly with LINBIT. Maybe there is a regression or something else?
-
RE: XOSTOR on 8.3?
@fatek Use the XOA method directly: it correctly installs the dependencies and is safer regarding disk selection.
-
RE: XOSTOR on 8.3?
@fatek Is it OK now? It may be a yum cache issue; the RPM is in the right repo: https://updates.xcp-ng.org/8/8.3/base/x86_64/Packages/
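If it is a cache problem, refreshing the yum metadata should be enough (standard yum commands):
# Drop the cached repository metadata and rebuild it.
yum clean all
yum makecache
# The expected version should now be visible.
yum info sm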
-
RE: XOSTOR on 8.3?
@fatek Just for information, the current 8.3 version is not usable without major problems. However, I recently rebased all the LINSTOR sm changes from XCP-ng 8.2 to 8.3 in a new package: sm-3.2.3-1.7.xcpng8.3.x86_64.rpm, and we passed the driver tests without too many problems. This RPM should be available during October. Even after its release, we will not consider it stable enough for production use until we have enough user feedback (but of course this new RPM includes all the fixes and improvements from the 8.2 version).
EDIT: Released on October 25; we originally planned to wait a bit longer.
-
RE: XOSTOR and mdadm software RAID
@OhSoNoob I don't see a good reason to continue using RAID10 below DRBD. Your disks will be put to better use in a linear LVM volume.
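For example, a linear volume group over the raw disks can be created with standard LVM commands, leaving replication to DRBD (device names are placeholders; adapt to your disks):
# Create physical volumes and a single linear VG for LINSTOR.
pvcreate /dev/sdb /dev/sdc
vgcreate linstor_group /dev/sdb /dev/sdc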
-
RE: XOSTOR hyperconvergence preview
@ferrao said in XOSTOR hyperconvergence preview:
May I ask now a licensing issue: if we upgrade to Vates VM, does the deployment mode on the first message is considered supported or everything will need to be done again from XOA?
Regarding XOSTOR support licenses: in general, we prefer our users to start with a trial license through XOA, and if they are interested, they can subscribe to a commercial license.
To be more precise: the manual steps in this thread are still valid for configuring a LINSTOR SR; there is no difference from the XOA commands. However, if you wish to subscribe to a support license for a pool without XOA or a trial license, we are quite strict about the infrastructure being in a stable state.
-
RE: XOSTOR hyperconvergence preview
@ferrao said in XOSTOR hyperconvergence preview:
Does XOSTOR needs a fully functional DNS setup to work? Or the failure was local due to the local change of the hostname?
No. But your LINSTOR node name must match the hostname. We use IPs to communicate between nodes and in our driver.
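A quick way to verify (standard commands; run on any host with access to the controller):
# The node names listed by LINSTOR must match each host's hostname.
linstor node list
hostname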
-
RE: Three-node Networking for XOSTOR
@ha_tu_su
Regarding your previous message, for a LINSTOR SR to be functional:
- Each node in the pool must have a PBD attached.
- A node may not have a local physical disk.
- ip route should not be used manually; LINSTOR has an API for using dedicated network interfaces (see the sketch after this list).
- XOSTOR effectively supports configurations with hosts without disks, and quorum can still be used.
Could you list the interfaces using:
linstor node interface list <NODE>
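For reference, this is roughly how a dedicated interface is declared and preferred through the LINSTOR API (a sketch; node, NIC, IP and pool names are placeholders):
# Declare a dedicated network interface on a node.
linstor node interface create <NODE> <NIC_NAME> <IP>
# Prefer that interface for the DRBD traffic of a storage pool.
linstor storage-pool set-property <NODE> <POOL_NAME> PrefNic <NIC_NAME>
-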
RE: XOSTOR hyperconvergence preview
@Maelstrom96 Well there is no simple helper to do that using the CLI.
So you can create a new node using:
linstor node create --node-type Combined <NAME> <IP>
Then you must evacuate the old node to preserve the replication count:
linstor node evacuate <OLD_NAME>
Next, you can change your hostname and restart the services on each host:
systemctl stop linstor-controller
systemctl restart linstor-satellite
Finally you can delete the node:
linstor node delete <OLD_NAME>
After that, you must recreate the diskless resources if necessary. Run:
linstor advise r
to see the commands to execute.
-
RE: XOSTOR hyperconvergence preview
@Maelstrom96 Oh! This explanation makes sense, thank you. Yes, in case of a hostname change, the LINSTOR node name must also be modified; otherwise the path to the database resource will not be found.
-
RE: XOSTOR hyperconvergence preview
@Maelstrom96 It sounds like a race condition or a bad mount of the database. But I'm not sure, so I will add more logs for the next RPM. We plan to release it in a few weeks.
-
RE: XOSTOR hyperconvergence preview
@Maelstrom96 Thank you for the logs, I'm trying to understand the issue.
For the moment, I don't see a problem regarding the status of the services.
-
RE: XOSTOR hyperconvergence preview
@Theoi-Meteoroi said in XOSTOR hyperconvergence preview:
You lost quorum.
Not a quorum issue:
exists device name:xcp-persistent-database volume:0 minor:1000 backing_dev:/dev/linstor_group/xcp-persistent-database_00000 disk:UpToDate client:no quorum:yes
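For reference, the line above comes from DRBD's event stream; the current state (including quorum) can be dumped at any time with:
# Print the current state of the XOSTOR database resource and exit.
drbdsetup events2 --now xcp-persistent-database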
-
RE: XOSTOR hyperconvergence preview
@Maelstrom96 said in XOSTOR hyperconvergence preview:
However, after rebooting the master, it seems like the SR doesn't want to allow any disk migration, and manual Scan are failing.
What's the output of these commands on each host?
systemctl status linstor-controller
systemctl status linstor-satellite
systemctl status drbd-reactor
mountpoint /var/lib/linstor
drbdsetup events2
Also please share your SMlog files.
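To make collection easier, something like this on each host would gather everything in one file (a sketch; output paths are examples):
# Capture the service states and DRBD events, then copy SMlog.
{
  systemctl status linstor-controller linstor-satellite drbd-reactor
  mountpoint /var/lib/linstor
  drbdsetup events2 --now
} > /tmp/xostor-status-$(hostname).txt 2>&1
cp /var/log/SMlog /tmp/SMlog-$(hostname).txt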