Posts made by ronan-a
-
RE: Unable to enable HA with XOSTOR
@dslauter Are you using XCP-ng 8.3? If so, I think there is a porting problem related to Python 3...
-
RE: Unable to enable HA with XOSTOR
@dslauter said in Unable to enable HA with XOSTOR:
I don't see any error here. Can you check the other hosts? And how did you create the SR? Is the shared=true flag set?
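For reference, a LINSTOR SR is normally created with the shared=true flag set, along the lines of this sketch (the group name, redundancy and provisioning values are placeholders to adapt to your setup):
xe sr-create type=linstor name-label=XOSTOR host-uuid=<MASTER_UUID> \
  device-config:group-name=linstor_group/thin_device \
  device-config:redundancy=2 device-config:provisioning=thin shared=true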
-
RE: Unable to enable HA with XOSTOR
@dslauter Can you check the SMlog/kernel.log/daemon.log traces? Without these details, it is not easy to investigate. Thanks!
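If it helps, a hedged way to pull out the relevant entries (standard XCP-ng log paths; adjust the patterns as needed):
grep -i linstor /var/log/SMlog
grep -iE 'linstor|drbd' /var/log/daemon.log
grep -i drbd /var/log/kern.log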
-
RE: XOSTOR hyperconvergence preview
@olivierlambert @Jonathon Unfortunately we don't maintain this package, so it's not available in our repositories. The simplest thing is to report this problem directly to LINBIT. Maybe there is a regression or something else?
-
RE: XOSTOR on 8.3?
@fatek Use the XOA method directly: it correctly installs the dependencies and is safer regarding disk selection.
-
RE: XOSTOR on 8.3?
@fatek Is it OK now? It could be a yum cache issue; the RPM is in the right repo: https://updates.xcp-ng.org/8/8.3/base/x86_64/Packages/
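If it was a cache issue, refreshing the yum metadata is usually enough (the package name below is a placeholder):
yum clean all
yum info <package>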
-
RE: XOSTOR on 8.3?
@fatek Just for information, XOSTOR on the current 8.3 release is not usable without major problems. However, I recently rebased all the LINSTOR sm changes from XCP-ng 8.2 to 8.3 in a new package: sm-3.2.3-1.7.xcpng8.3.x86_64.rpm, and we passed the driver tests without too many problems. This RPM should be available during the month of October. Even after its release, we consider that it is not stable enough for production use until we have enough user feedback (but of course this new RPM carries all the fixes and improvements of the 8.2 version).
EDIT: Released on October 25; we originally planned to wait a bit.
-
RE: XOSTOR and mdadm software RAID
@OhSoNoob I don't see a good reason to continue using RAID10 below DRBD. Your disks will be best used in a linear LVM volume.
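In other words, a single linear volume group built on the raw disks, something like this sketch (device names are examples; linstor_group is the group name visible elsewhere in these threads):
vgcreate linstor_group /dev/sdb /dev/sdc /dev/sdd /dev/sde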
-
RE: XOSTOR hyperconvergence preview
@ferrao said in XOSTOR hyperconvergence preview:
May I ask now about a licensing issue: if we upgrade to Vates VM, is the deployment method from the first message considered supported, or will everything need to be done again from XOA?
Regarding XOSTOR support licenses: in general, we prefer our users to start with a trial license through XOA. And if they are interested, they can subscribe to a commercial license.
To be more precise: the manual steps in this thread are still valid to configure a LINSTOR SR, with no difference from the XOA commands. However, if you wish to subscribe to a support license for a pool without XOA or a trial license, we are quite strict about the infrastructure being in a stable state.
-
RE: XOSTOR hyperconvergence preview
@ferrao said in XOSTOR hyperconvergence preview:
Does XOSTOR need a fully functional DNS setup to work? Or was the failure local, due to the local change of the hostname?
No. But your LINSTOR node name must match the hostname. We use IPs to communicate between nodes and in our driver.
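A quick way to check that the names line up, using only the standard CLI:
hostname
linstor node list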
-
RE: Three-node Networking for XOSTOR
@ha_tu_su
Regarding your previous message, for a LINSTOR SR to be functional:
- Each node in the pool must have a PBD attached.
- A node is not required to have a local physical disk.
- ip route should not be used manually; LINSTOR has an API for using dedicated network interfaces (see the sketch below).
- XOSTOR supports configurations with diskless hosts, and quorum can still be used.
Could you list the interfaces using linstor node interface list <NODE>?
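Here is the sketch of that dedicated-interface API (node, interface and pool names plus the IP are placeholders; the PrefNic property follows the upstream LINSTOR user guide, so double-check it against your LINSTOR version):
# declare the dedicated interface with its IP on this node
linstor node interface create <NODE> storage_nic <STORAGE_IP>
# prefer it for DRBD traffic on a given storage pool
linstor storage-pool set-property <NODE> <POOL_NAME> PrefNic storage_nic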
-
RE: XOSTOR hyperconvergence preview
@Maelstrom96 Well there is no simple helper to do that using the CLI.
So you can create a new node using:
linstor node create --node-type Combined <NAME> <IP>
Then you must evacuate the old node to preserve the replication count:
linstor node evacuate <OLD_NAME>
Next, you can change your hostname and restart the services on each host:
systemctl stop linstor-controller
systemctl restart linstor-satellite
Finally you can delete the node:
linstor node delete <OLD_NAME>
After that, you must recreate the diskless resources if necessary. Run
linstor advise r
to see the commands to execute.
-
RE: XOSTOR hyperconvergence preview
@Maelstrom96 Oh! This explanation makes sense, thank you. Yes, if the hostname is changed, the LINSTOR node name must also be modified; otherwise the path to the database resource will not be found.
-
RE: XOSTOR hyperconvergence preview
@Maelstrom96 It sounds like a race condition or a bad mount of the database. But I'm not sure, so I will add more logs for the next RPM. We plan to release it in a few weeks.
-
RE: XOSTOR hyperconvergence preview
@Maelstrom96 Thank you for the logs, I'm trying to understand the issue.
For the moment I don't see a problem regarding the status of the services.
-
RE: XOSTOR hyperconvergence preview
@Theoi-Meteoroi said in XOSTOR hyperconvergence preview:
You lost quorum.
Not a quorum issue:
exists device name:xcp-persistent-database volume:0 minor:1000 backing_dev:/dev/linstor_group/xcp-persistent-database_00000 disk:UpToDate client:no quorum:yes
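That line is drbdsetup output; to reproduce a one-shot snapshot for the same resource (name as shown above):
drbdsetup events2 --now xcp-persistent-database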
-
RE: XOSTOR hyperconvergence preview
@Maelstrom96 said in XOSTOR hyperconvergence preview:
However, after rebooting the master, it seems like the SR doesn't want to allow any disk migration, and manual scans are failing.
What's the status of these commands on each host?
systemctl status linstor-controller
systemctl status linstor-satellite
systemctl status drbd-reactor
mountpoint /var/lib/linstor
drbdsetup events2
Also please share your SMlog files.
-
RE: XOSTOR hyperconvergence preview
@fatek No. I removed this parameter; it's not needed anymore.
-
RE: Hosts fencing after latest 8.2.1 update
@jasonmap There are no recent changes regarding HA in our repositories, so it could be a connection issue. As Olivier said, you can get more info using dmesg, or from daemon.log/SMlog (it depends on the shared SR used).
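A hedged starting point for digging, since the exact messages depend on the setup (the search patterns below are only guesses to adapt):
dmesg -T | grep -iE 'fence|quorum|heartbeat'
grep -iE 'fail|timeout' /var/log/daemon.log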
-
RE: Multiple Volumes Alternate Primary
@David I think the complexity is being able to offer a simple interface/API for users to configure multiple storages. Maybe through SMAPIv3.
In any case, we currently only support one storage pool; the sm driver would have to be reworked to support several. It also probably requires visibility from XOA's point of view. There are lots of points to discuss, so I will create a card on our internal Kanban.
Regarding multiple XOSTOR SRs, we must:
- Add a way to move the controller volume to a specific storage pool.
- Ensure the controller is still accessible for the remaining SRs despite the destruction of an SR.
- Use a lock mechanism to protect the LINSTOR env.
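For context, a hedged way to see where the controller/database volume currently sits (the xcp-persistent-database resource name is the one visible earlier in this thread):
linstor resource list | grep -i xcp-persistent-database
linstor storage-pool list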