Multiple Volumes Alternate Primary
-
This looks really cool, I'm looking forward to testing it. A quick question...
I was wondering, in a two-node system, whether there is a performance benefit for VMs hosted on the node where the DRBD volume is master and, if so, whether I can have multiple XOSTOR volumes using separate disks on each node, e.g. linstor_group1 / linstor_group2, where node1 is primary for linstor_group1 and node2 is primary for linstor_group2.
In this case I could organise VM home servers according to where their DRBD master is.
So I guess the questions are:
- Can I have multiple XOSTOR volumes?
- Can I have alternate primaries?
Thanks
-
-
Set up a two-node cluster, hv-01 & hv-02, and created a VM.
With the VM on hv-01
hv-01 shows:
xcp-volume-a5fd7961-af4c-47ed-a24b-99d432330107 role:Primary disk:UpToDate hv-02 role:Secondary peer-disk:UpToDate
hv-02 shows:
xcp-volume-a5fd7961-af4c-47ed-a24b-99d432330107 role:Secondary disk:UpToDate hv-01 role:Primary peer-disk:UpToDate
Migrated VM to hv-02
hv-01 shows:
xcp-volume-a5fd7961-af4c-47ed-a24b-99d432330107 role:Secondary disk:UpToDate hv-02 role:Primary peer-disk:UpToDate
hv-02 shows:
xcp-volume-a5fd7961-af4c-47ed-a24b-99d432330107 role:Primary disk:UpToDate hv-01 role:Secondary peer-disk:UpToDate
It looks like the VM's disk is automagically assigned primary on its local node, mind blown! Am I reading this right?
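For reference, the roles above can be checked on each host with something like the following (assuming drbd-utils and the LINSTOR client are installed on the hosts, which should be the case with an XOSTOR deployment):

  # On either host: show role and disk state for the volume from the listings above.
  drbdadm status xcp-volume-a5fd7961-af4c-47ed-a24b-99d432330107

  # Or ask the LINSTOR controller where each resource sits and whether it is in use.
  linstor resource list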
-
-
Question for @ronan-a
-
@David I'm not sure I totally understand your questions. A DRBD "master" (i.e. a primary) is never fixed; it's just a state at a given time.
A DRBD volume becomes primary simply because it is opened by a process on a machine, nothing more. In the case of XCP-ng, the volume can be opened by a tapdisk instance today and opened on another host the next day. However, regarding performance, there is a feature, not used at the moment, to force diskful usage locally and reduce network usage: https://linbit.com/drbd-user-guide/linstor-guide-1_0-en/#s-linstor-auto-diskful

"In this case I could organise VM home servers according to where their DRBD master is."
Considering what I said, this is not a good idea. It would be more interesting to use the auto-diskful functionality, but it becomes complex to use when VHD snapshots are involved: for each snapshot, a diskful/diskless DRBD resource is created on any host; there is no user choice during creation.

"It looks like the VM's disk is automagically assigned primary on its local node, mind blown! Am I reading this right?"

In both logs, a5fd7961 is open on the host the VM is running on, no surprise.
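To illustrate the auto-diskful feature mentioned above, this is roughly how it could be enabled through the LINSTOR client; the property names come from the linked guide, the 5-minute threshold is illustrative, and XOSTOR does not set this today:

  # Illustrative only: turn a diskless resource into a diskful one after it has been
  # primary on a node for more than 5 minutes (see the LINBIT guide linked above).
  linstor resource-definition set-property xcp-volume-a5fd7961-af4c-47ed-a24b-99d432330107 DrbdOptions/auto-diskful 5

  # Optionally allow LINSTOR to clean up replicas added this way once they exceed the placement count.
  linstor resource-definition set-property xcp-volume-a5fd7961-af4c-47ed-a24b-99d432330107 DrbdOptions/auto-diskful-allow-cleanup true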
-
@ronan-a Thanks for taking the time to explain, it is appreciated. I will do some further reading to improve my understanding.
Do you know if it is possible to add multiple XOSTOR SRs so that I have the option of separating disk types e.g. one XOSTOR for NVMe SSD and one for SATA SSD?
-
@David For the moment, only one XOSTOR SR can be used in a pool. We currently have no plans to lift this limitation, at least not while we have tickets relating to bugs or performance issues. I think it will come one day.
-
@ronan-a I think it would be a really useful option, hopefully we'll see it in the future. Thanks again for taking the time to answer my questions.
-
@David I think the complexity is in offering a simple interface / API for users to configure multiple storages. Maybe through SMAPIv3.
In any case, we currently only support one storage pool; the sm driver would have to be reworked to support several. It also probably requires visibility from XOA's point of view. Lots of points to discuss. I will create a card on our internal Kanban.

Regarding multiple XOSTOR SRs, we must:
- Add a way to move the controller volume to a specific storage pool.
- Ensure the controller is still accessible for the remaining SRs despite the destruction of an SR.
- Use a lock mechanism to protect the LINSTOR env.
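To make that concrete, at the pure LINSTOR level (not something the current sm driver exposes), separating disk classes would look roughly like one storage pool per class, each backed by its own LVM volume group; the node, VG and pool names below are made up:

  # Hypothetical layout: vg_nvme / vg_sata are pre-existing volume groups on each node.
  linstor storage-pool create lvm node1 pool_nvme vg_nvme
  linstor storage-pool create lvm node2 pool_nvme vg_nvme
  linstor storage-pool create lvm node1 pool_sata vg_sata
  linstor storage-pool create lvm node2 pool_sata vg_sata

  # Resource groups pinning volume placement to one pool or the other.
  linstor resource-group create rg_nvme --storage-pool pool_nvme --place-count 2
  linstor resource-group create rg_sata --storage-pool pool_sata --place-count 2

Each XOSTOR SR would then roughly map to one resource group, which is a big part of why the sm driver and XOA would need rework.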