CephFS Storage Driver
-
@jmccoy555 I have the `rpcbind` service running. Can you check on your Ceph node?
-
@r1 yep, that's it, `rpcbind` is needed. I have a very minimal Debian 10 VM hosting my Ceph (Docker containers), as is now the way with Octopus. I also had to swap `host-uuid=` with `shared=true` for it to connect to all hosts within the pool (might be useful for the notes). Will test, also check that everything is still good after a reboot, and report back.
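For reference, the SR creation that worked for me looked roughly like this (a sketch only: the monitor hostname, subdirectory and SR label are placeholders, and the exact `device-config` keys may differ by version, so check the docs):

```shell
# Create a CephFS SR shared across all hosts in the pool
# (shared=true instead of host-uuid=<uuid>, so every host mounts it).
xe sr-create \
  type=cephfs \
  name-label="CephFS storage" \
  shared=true \
  device-config:server=ceph-mon-host \
  device-config:serverpath=/xcp
```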
-
Just to report back..... So far so good. I've moved over a few VDIs and not had any problems.
I've rebooted hosts and Ceph nodes and all is good.
NFS is also all good now.
Hope this gets merged soon so I don't have to worry about updates.
On a side note, I've also set up two pools, one of SSDs and one of HDDs using File Layouts to assign different directories (VM SRs) to different pools.
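For anyone curious, pinning a directory (and therefore a VM SR) to a pool with File Layouts is just an extended attribute on the mounted CephFS. A sketch, with placeholder pool and path names:

```shell
# Pin everything created under a directory to a given data pool.
# The pool must already be attached to the filesystem, e.g.:
#   ceph fs add_data_pool <fs-name> ssd-pool
setfattr -n ceph.dir.layout.pool -v ssd-pool /mnt/cephfs/sr-ssd
setfattr -n ceph.dir.layout.pool -v hdd-pool /mnt/cephfs/sr-hdd

# Inspect the layout that new files in the directory will inherit:
getfattr -n ceph.dir.layout /mnt/cephfs/sr-ssd
```

Existing files keep their old layout; only files created after the attribute is set land in the new pool.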
-
@jmccoy555 glad to know. I don't have much knowledge on File Layouts but that looks good.
NFS edits won't be merged as that was just a POC. Working on a dedicated CephFS SR driver which hopefully won't be impacted by `sm` or other upgrades. Keep watching this space. -
We can write a simple "driver" like we did for Gluster
-
@olivierlambert With (experimental) CephFS driver added in 8.2.0. Reading the documentation
> WARNING: This way of using Ceph requires installing ceph-common inside dom0 from outside the official XCP-ng repositories. It is reported to be working by some users, but isn't recommended officially (see Additional packages). You will also need to be **careful about system updates and upgrades.**
are there any plans to put ceph-common into the official XCP-ng repositories to make updates less scary?
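(For the record, what I did in dom0 was roughly the following; the CentOS Storage SIG package names are from memory and may have changed, so double-check before running this:)

```shell
# In dom0: pull ceph-common from the CentOS 7 Storage SIG,
# which is outside the official XCP-ng repositories.
yum install --enablerepo=base,extras centos-release-ceph-nautilus
yum install --enablerepo=base,extras,centos-ceph-nautilus ceph-common
```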
I have been testing this for almost 8 months now. First with only one or two VMs, now with about 8-10 smaller VMs. The Ceph cluster itself runs as 3 VMs (themselves not stored on CephFS) with SATA controllers passed through on 3 different hosts.
This has been working great, except when the XCP-ng hosts are unable to reach the Ceph cluster. At one point the Ceph nodes had crashed (my fault), but I was unable to restart them because all VM operations were blocked, taking forever without ever succeeding, even though the Ceph nodes themselves are not stored on the inaccessible SR. It seems the XCP-ng hosts endlessly retry the connection and never time out, which makes them unresponsive.
-
Hi,
Short term: no. Longer term when we have SMAPIv3: very likely, yes, at least as a community driver.
What about performance? Can you describe your setup and config in more detail?
-
I'm about to deploy the latest Ceph on 45Drives hardware and will use 8.2, finally with a decent amount of network backbone, to start building a new virtual world. I've been using NFS over CephFS on a single gigabit public and a single gigabit private network, and it performs OK for what we do, but I cannot do any failover or live migration of VMs. This should alleviate those issues as well as give me lots more options for snapshots and recovery.
So on the latest 8.2 patches and updates, what do I need to do other than install ceph-common? Will the Ceph repository show up in XCP-ng Center or Xen Orchestra?
I've had power outages and UPS failures and this stuff just self-heals; the only issue has been mounting the CephFS after boot and then restarting NFS to recover the NFS repositories, and then it just comes up. It's scalable and way less trouble to deal with than Fibre Channel SANs or iSCSI.
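(For context, the boot-time mount on my NFS gateway is just an fstab entry; monitor address, credentials and paths below are placeholders:)

```shell
# /etc/fstab entry: mount CephFS at boot, with _netdev so it waits
# for the network instead of blocking early boot:
#   mon1:6789:/  /mnt/cephfs  ceph  name=admin,secretfile=/etc/ceph/admin.secret,_netdev,noatime  0 0

# Equivalent one-off mount, then re-export over NFS:
mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
systemctl restart nfs-server
```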
-
@scboley https://xcp-ng.org/docs/storage.html#cephfs
Once you do the manual stuff it will show up like any other SR in Xen Orchestra etc.
-
@jmccoy555 should I go ahead and update 8.2 to the latest patches first before doing this? I have yet to run a single patch on XCP-ng over many years; is it straightforward?
-
@scboley I would assume so, but I can't say yes. I don't think it was available before 8.2 without following the above.
-
@jmccoy555 I'm talking about 8.2.1, 8.2.2 and so forth. Is that a simple yum update on the system? I've just left it at the default version and never updated; I was on 7.6 for a long time and just took it all to 8.2, with one straggler XenServer 6.5 still in production. I've loved the stability I've had with XCP-ng without even messing with it at all.
-
@scboley Yes, it's mostly just a matter of doing yum update: https://xcp-ng.org/docs/updates.html
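(Roughly, per the linked docs, on each host in the pool, master first:)

```shell
# Apply all pending updates, then reboot if kernel/Xen were updated.
yum update
reboot
```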
-
OK, I see the package listed in the documentation is still Nautilus. Has that been updated to any newer Ceph version yet? @olivierlambert
-
I don't think we updated anything on that aspect, since Ceph isn't a "main" supported SR
-
@olivierlambert what are the plans to elevate it? I have a feeling it's really starting to gain traction in the storage world.
-
Very likely when the platform is more modern (upgraded kernel, platform and SMAPIv3)
-
@olivierlambert OK, I see that even with 8.x you are still based on CentOS 7. When is it going up to 8? I'd assume Rocky would be the choice, given the Red Hat Stream snafu, cough cough.
-
No, not really, see https://xcp-ng.org/blog/2020/12/17/centos-and-xcpng-future/ (so no biggie)
-
@olivierlambert so what are your plans for going to a Stream 8 version, which would give the updated kernel platform and hopefully, soon after, SMAPIv3? IO throughput on 8 over 7 is vastly superior, and the changes are not near as big as the 6 to 7 ones were.