Ceph (qemu-dp) in XCP-ng 7.6
-
Hi,
Over the last week, I tried to understand how the RBDSR plugin works (or, at the moment, doesn't work) in XCP-ng together with qemu-dp.
I see how the plugin works together with Ceph and how it communicates with qemu-dp, but I don't understand the current state of the qemu-dp package.
At the moment I'm able to create a disk image in Ceph, but starting a VM with this image results in a qemu error saying the rbd protocol is unknown.
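A quick way to check whether the installed qemu-dp build even has rbd support compiled in (the binary path below is just a guess, adjust it to your install):
# path is an assumption -- point it at wherever qemu-dp lives on your host
ldd /usr/lib64/qemu-dp/bin/qemu-dp | grep librbd
If this prints nothing (and no separate block-rbd module ships with the package), the build most likely has no rbd support at all, which would match the error.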
Maybe someone can explain to me the current state of the qemu-dp package in XCP-ng 7.6? I see that there is a version in the xcp-ng-extras repo, which I
have installed at the moment, but it seems that it doesn't support the rbd protocol. For 7.5 there was an extra qemu-dp package built for Ceph; should this be the
same package as the one in the 7.6 extras repo? I also see that there is work being done in the smapiv3-changes branch of this repo:
https://github.com/xcp-ng/qemu-dp/tree/smapiv3-changes
As there is a file block/rbd.c, I would assume this is a qemu-dp version which supports the rbd protocol. Is this correct?
It would be great if someone could give me an explanation of how qemu-dp is handled in XCP-ng and what your plans are.
-
Hi. See https://github.com/xcp-ng-rpms/qemu-dp/issues/5 for the issues about rbd support in XCP-ng 7.6. I did not merge the patch because there were unsolved issues and I was waiting for more information. It would likely also involve patching glibc, which I'm not sure I'm ready to do in XCP-ng 7.6. In the end, nothing was done due to the lack of answers to my questions.
About the future of qemu-dp and SMAPI v3, one of our developers is working on understanding it so that we are able to use it in the future and provide our own drivers, including one for Ceph sooner or later. See https://xcp-ng.org/forum/topic/1036/dev-diaries-1-analyzing-storage-perf-smapiv3. The version of qemu provided in 7.6 has performance issues, but version 8.0 will see various packages updated, so it's likely that it will perform better and we can move on.
-
Thanks for the update. Then I will go with an LVMoRBDSR as a workaround for the moment. Maybe I'll find some time to build a development environment and do some tests with the smapiv3-changes branch.
-
The XS 8.0 (well, Citrix Hypervisor 8.0) release is imminent, and with it, new SMAPIv3 code. As soon as it's out, we'll test/bench it, and if the perfs are OK, we'll probably start to roll out more drivers for it. So Ceph integration will clearly be something doable.
-
I'm looking forward to the SMAPIv3 integration. I already took a look at the concept and I really like the separation of the volume and datapath plugins. I think this will make a lot of things easier in the future.
-
You are right, this is clearly the future. It's still incomplete, but we hope to get something really great in the medium/long run.
-
Any update on this? Did you test with the non-patched qemu-dp package?
With XCP-ng 8 everything should be OK for rbd support: kernel version, etc.
-
I don't know if anyone in the community is actively working on this. On our side, we are still working on SMAPIv3.
Feel free to contribute
-
@Emmenemoi Are you looking for the RBDSR plugin, or do you want to use Ceph in any way possible?
-
RBD, any way possible. I'll try with qemu-rbd directly, which might work with upstream and the latest kernels... but I haven't tried it yet.
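For reference, plain upstream qemu-img addresses RBD images with the rbd: protocol syntax, something like this (pool/image names are placeholders, nothing XCP-ng specific):
qemu-img create -f raw rbd:my-pool/my-image 10G
qemu-img info rbd:my-pool/my-image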
-
You can map the RBD using standard commands
rbd map ..
and then create an SR on it using type=lvm shared=true
and using the device from /dev/rbd/...
Since the kernel is 4.19, most of the Ceph RBD features are supported. And by using the LVM backend, you would retain XCP functionality.
However, you will lose native snapshots and other RBD-based image manipulation features due to the absence of a dedicated RBDSR plugin.
Edit: I don't know if such an SR can be created from XO, but you can surely create it from the CLI using the
xe sr-create
command (rough example below).
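Something along these lines; the pool/image names, the size and the host UUID are placeholders, and I haven't tested this exact sequence:
rbd create xcp-pool/xcp-lvm --size 1T        # create the backing image (placeholder names/size)
rbd map xcp-pool/xcp-lvm                     # map it; the device shows up under /dev/rbd/<pool>/<image>
xe sr-create host-uuid=<host-uuid> name-label="RBD-backed LVM SR" \
    type=lvm shared=true content-type=user \
    device-config:device=/dev/rbd/xcp-pool/xcp-lvm
Keep in mind that rbd map is per host and not persistent across reboots by default, so with shared=true you would have to map the image on every host (e.g. via the rbdmap service) before the SR can be plugged.
-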
I did contribute to the RBDSR plugin (v1). Creating an LVMoRBD SR would be easy. The other good point of using LVM over RBD is the native fsfreeze when doing snapshots and the like, which is not the case with RBD snapshots: the VM has to be paused (which issues fsfreeze) before taking an RBD snapshot (sequence sketched below).
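Roughly, the pause-then-snapshot dance looks like this (the VM UUID and pool/image/snapshot names are placeholders):
xe vm-pause uuid=<vm-uuid>                     # quiesce the VM before the snapshot
rbd snap create xcp-pool/vm-disk@manual-snap   # take the RBD snapshot
xe vm-unpause uuid=<vm-uuid>                   # resume the VM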
But LVMoRBD would put all the VMs on one single RBD image, which is difficult to maintain with regard to device size.
The best option would be qemu-rbd. I used it some 10 years ago on native Xen and it worked fine (it ensures fsfreeze, etc.).
There was a tech preview using libvirt with XenServer. I didn't test whether it works on upstream version 8.