-
@ronan-a - I've been lurking on this forum thread for too long. I've finally implemented the scripts across three of my hosts, added the "Storage" network modifications explained by @Swen, and it is working beautifully. Failover is handled by XCP-ng bonded networking if a switch fails, and hosts can reboot without any loss of speed or data.
You may recall several years ago I was interested in seeing CEPH implemented natively, but your LINSTOR implementation is so much simpler to manage. Thanks and keep up the good work.
-
I've also been watching this thread for a while, and I noticed there is an impending launch of an RC version. I am actively looking for a hyperconverged solution for the corp I am engaged with, to get off a SPOF SAN and onto a multi-node cluster. Our corp is looking to implement this change very soon (a couple of months) regardless of what we use, but after much research this seems highly anticipated and exactly what I'm looking for... thank you!!
-
@Swen said in XOSTOR hyperconvergence preview:
@andersonalipio said in XOSTOR hyperconvergence preview:
Hello all,
I got it working just fine so far on all lab tests, but one thing I couldn't find here or in other posts is how to use a dedicated storage network other than the management network. Can it be modified? In this lab, we have 2 hosts with 2 network cards each: one for management and VM external traffic, and the second should be exclusive to storage, since it is way faster.
We are using a separate network in our lab. What we do is this:
- get the node list from the running controller via
linstor node list
- take a look at the node interface list via
linstor node interface list <node name>
- modify each node's interface via
linstor node interface modify <node name> default --ip <ip>
- check the addresses via
linstor node list
Hope that helps!
Another option:
- Create additional interface
linstor node interface create <node name> storage_nic <ip>
- Set preferred interface for each node
linstor node set-property <node name> PrefNic storage_nic
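Putting the two options above together, here is how the whole sequence could look on a small pool. This is only a sketch: the host names (xcp-host1..3), the interface name storage_nic and the 192.168.100.x addresses are placeholders for your own values, and the commands are run wherever the LINSTOR controller is reachable.
# create a storage interface entry per node (addresses are examples)
linstor node interface create xcp-host1 storage_nic 192.168.100.1
linstor node interface create xcp-host2 storage_nic 192.168.100.2
linstor node interface create xcp-host3 storage_nic 192.168.100.3
# tell LINSTOR to prefer that interface for replication traffic
linstor node set-property xcp-host1 PrefNic storage_nic
linstor node set-property xcp-host2 PrefNic storage_nic
linstor node set-property xcp-host3 PrefNic storage_nic
# verify the result
linstor node interface list xcp-host1
linstor node list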
-
@olivierlambert any update on this? thank you
-
We started to work on the initial UI. The CLI works pretty well now, so we're almost there. We can make you a demo install inside your infrastructure if you want.
-
@olivierlambert Thank you, I have many questions - is there a call/demo you could do?
-
Go there and ask for a preview access on your hardware: https://vates.tech/contact/
-
@olivierlambert Thank you for pointing that direction! I went ahead and made a request.
-
I have been working with XenServer/Citrix Hypervisor and Citrix products like Virtual Apps for years.
Meanwhile, I have also had XCP-ng running on a test server for a while.
Well, I have now decided to build a new small cluster with XCP-ng. One reason is also the XOSTOR option. This new pool is planned with 3 nodes and multiple SSD disks (not yet NVMe) in each host.
I am wondering how XOSTOR creates the LV on a VG with, let's say, 4 physical drives:
Will it be a linear LV? Is there any option for striping or other RAID levels available/planned? Looking forward to your reply.
Thanks a lot for all the good work in a challenging environment. -
We don't need/want RAID levels or things like that, since the data is already replicated to other hosts; that would make it too redundant. So yes, it will be like a linear LV.
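If you want to double-check this on a host, a quick read-only look at the layout is possible with the LVM tools. A small sketch, assuming the volume group created by the install script is named linstor_group (adjust to whatever your VG is actually called):
# physical drives backing the volume group
pvs
# logical volumes with their layout (linear vs striped) and stripe count
lvs -o +lv_layout,stripes linstor_group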
-
@olivierlambert thank you for the quick answer.
To be really on the safe side, this means a replication count not lower than 3 would be useful (from my perspective). What would happen if a node of a 3-node cluster with replication count 3 (so all nodes have a copy) fails?
Would everything stop because the replication count is higher than the number of available nodes?
(I refer to post https://xcp-ng.org/forum/post/54086) -
@JensH No. You can continue to use your pool. New resources can still be created, and LINSTOR can sync volumes when the connection to the lost node is re-established.
As long as there is no split brain and you have 3 hosts online, it's OK; that's why we recommend using 4 machines.
With a pool of 3 machines, if you lose a node, you increase the risk of split brain on a resource, but you can continue to create and use them.
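If you want to see what LINSTOR thinks while a node is down (or after it comes back), a few read-only checks, as a sketch (the resource names will be the auto-generated ones LINSTOR created for your volumes):
# node connectivity as seen by the controller
linstor node list
# per-resource replica states (UpToDate, Outdated, Connecting, ...)
linstor resource list
# DRBD's own view on the local host
drbdadm status
-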
Also, keep in mind that LINSTOR puts things in read-only as soon as you are under your replication target.
It means, in a 3-host scenario:
- if you have a replication count of 3, any host that is unreachable will trigger read-only mode on the 2 others
- if you have a replication count of 2, you can lose one host without any consequence
So for 3 machines, replication 2 is a sweet spot in terms of availability.
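For reference, the replication count is the redundancy you choose when the SR is created, and you can check what the cluster is currently applying from any host. A sketch only: the device-config parameter name below is how I remember it from the preview install instructions earlier in this thread, and the remaining sr-create arguments are deliberately omitted, so follow those instructions for the full command.
# show the place-count (replication) LINSTOR is currently using
linstor resource-group list
# replication 2 is chosen at SR creation time, e.g. on a 3-host pool:
xe sr-create type=linstor name-label=XOSTOR shared=true device-config:redundancy=2 <other options from the install instructions>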
-
Hi,
I've run the install script on an XCP-ng 8.2.1 host. The output of the following command:
rpm -qa | grep -E "^(sm|xha)-.*linstor.*"
should list:
sm-2.30.8-2.1.0.linstor.5.xcpng8.2.x86_64
xha-10.1.0-2.2.0.linstor.1.xcpng8.2.x86_64
but the xha package is missing, because xha is already installed in version:
xha-10.1.0-2.1.xcpng8.2.x86_64
from XCP-ng itself.
Is this package still needed from the linstor repo?
Should I uninstall it and re-run the install script?
BR,
Wilken -
question for @ronan-a
-
@Wilken The modified version of the xha package is no longer needed. You can use the latest version without the linstor tag.
It's not necessary to reinstall your XOSTOR SR.
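In case it is useful to others hitting the same thing, a possible way to check and move back to the stock package, as a sketch (whether a downgrade or an update is the right move depends on which xha builds your enabled repos currently offer):
# see which xha build is installed
rpm -qa | grep '^xha-'
# return to the stock XCP-ng build if the .linstor. build is still installed
yum downgrade xha
# or pick up the newest non-linstor build once your repos provide one
yum update xha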
-
Thank you @olivierlambert and @ronan-a for the quick answer and clarification!
BR,
Wilken -
-
Hi !
Before I test this, I have a small question:
If the VM is encrypted and the XOSTOR SR is enabled, are the VM + memory replicated, or just the VDI?
Once the 1st node is down, will the 2nd node take over as-is, or will the 2nd node go to the 'boot' stage where it asks for the decryption password? Thanks
-
@gb-123 How is the VM encrypted? Only the VDIs are replicated.
-
The VMs would be using LUKS encryption.
So if only the VDI is replicated and, hypothetically, I lose the master node or any other node actually running the VM, will I have to create the VM again using the replicated disk? Or would it be something like DRBD, where there are actually 2 VMs running in Active/Passive mode with an automatic switchover? Or would it be that one VM is running and the second gets started automatically when the 1st is down?
Sorry for the noob questions. I just wanted to be sure of the implementation.