CephFS Storage Driver
Short term: no. Longer term, when we have SMAPIv3: very likely, yes, at least as a community driver.
What about performance? Can you describe your setup and config in more detail?
I'm about to deploy the latest Ceph on 45Drives hardware and will use 8.2, finally with a decent network backbone, to start building a new virtual world. I've been using NFS over CephFS on a single gigabit public and a single gigabit private network, and it performs OK for what we do, but I can't do any failover or live migration of VMs. This should alleviate those issues as well as give me lots more options for snapshots and recovery.
So, on the latest 8.2 patches and updates, what do I need to do other than install ceph-common? Will the Ceph repository show up in XCP-ng Center or Xen Orchestra?
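For context, a minimal sketch of the documented 8.2-era flow, assuming the stock CentOS base and extras repos are still what ship the Ceph client (check the current XCP-ng docs for your release):

# Install the Ceph client tools; the needed repos are disabled by default on XCP-ng
yum install ceph-common --enablerepo=base,extras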
I've had power outages and UPS failures, and this stuff just self-heals; the only issue has been mounting the CephFS after boot and then restarting NFS to recover the NFS repositories, and it just comes up. It's scalable and way less trouble to deal with than fibre SANs or iSCSI.
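The post-boot recovery described above would look roughly like this, a hedged sketch assuming a kernel CephFS mount re-exported over NFS (the monitor address, mount point, and service name are placeholders):

# Remount CephFS after boot (kernel client; address and paths are examples)
mkdir -p /mnt/cephfs
mount -t ceph 172.30.254.23:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
# Restart the NFS server so the re-exported repositories come back
systemctl restart nfs-server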
@jmccoy555 should I go ahead and update 8.2 to the latest patches first before doing this? I have yet to run a single patch on XCP-ng over many years; is it straightforward?
@scboley I would assume so, but I can't say yes. I don't think it was available before 8.2 without following the above.
@jmccoy555 I'm talking about 8.2.1 and 8.2.2 and so forth. Is that a simple yum update on the system? I've just left it at the default version and never updated. I was on 7.6 for a long time and just took it all to 8.2, with one straggler XenServer 6.5 still in production. I've loved the stability I've had with XCP-ng without even touching it.
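For what it's worth, within-release updates on XCP-ng are indeed the documented yum flow; a minimal sketch (master first, then each slave, rebooting when low-level packages changed):

# Apply all pending XCP-ng updates on this host
yum update
# Reboot if the update touched the kernel, Xen, or other low-level components
reboot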
OK, I see the package listed in the documentation is still Nautilus. Has that been updated to any newer Ceph version yet? @olivierlambert
I don't think we updated anything on that aspect, since Ceph isn't a "main" supported SR
@olivierlambert what are the plans to elevate it? I have a feeling it's really starting to gain traction in the storage world.
Very likely when the platform is more modern (upgraded kernel, newer base platform, and SMAPIv3)
@olivierlambert Ok I see even with 8.x you are still based on centos 7 when is it going up to 8 and I'd assume rocky would be the choice since the redhat streaming snafu cough cough.
No, not really, see https://xcp-ng.org/blog/2020/12/17/centos-and-xcpng-future/ (so no biggie)
@olivierlambert so what are your plans for going to a Stream 8 base, which would give the updated kernel platform and, hopefully soon after, SMAPIv3? I/O throughput on 8 is vastly superior to 7, even if the jump isn't as big as the 6-to-7 changes were.
We don't use any kernel from the CentOS project (nor the Xen package). We only use "the rest".
So in order, it will be:
- newer Xen version (easiest thing)
- more recent kernel (some patches are needed at different places)
- more recent user space/base distro (bigger work, but started already, like migrating all Python 2 stuff to Python 3!)
SMAPIv3 is being worked on in parallel, together with the XenServer team.
We use an officially supported kernel (4.19, an LTS) and yes, we sometimes even backport stuff to it specifically for XCP-ng.
A kernel isn't "linked" to a distro; it's up to the distro maintainers to choose which kernel they want. We do that for XCP-ng and XenServer (with Citrix).
In short: we make our own choices regarding Xen and the kernel, entirely outside CentOS project.
OK, I've got this set up. I have a cluster serving the CephFS, and here are my errors:
xe sr-create type=cephfs name-label=ceph device-config:server=172.30.254.23,172.30.254.24,172.30.254.25 device-config:serverport=6789 device-config:serverpath=/fsgw/xcpsr device-config:options=name=admin,secretfile=/etc/ceph/admin.secret
Error code: SR_BACKEND_FAILURE_111
Error parameters: , CephFS mount error [opterr=mount failed with return code 1],
@scboley I finally figured it out. I used another key created by the cluster and got it to connect and mount the CephFS.
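A hedged sketch of that kind of fix, exporting a cephx key into the secret file the SR options point at (client.admin is just an assumption; use whichever client the cluster actually created for you):

# On a Ceph admin node: export the key for the client the SR authenticates as
ceph auth get-key client.admin > /etc/ceph/admin.secret
chmod 600 /etc/ceph/admin.secret
# Then copy /etc/ceph/admin.secret to every XCP-ng host that will plug the SR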
@olivierlambert I'm adding another host to the pool and it fails to connect to the Ceph shared storage:
Nov 21 09:57:48 xcp4-1 xapi: [debug||116026 /var/lib/xcp/xapi|SR.scan R:05af02328263|helpers] Waiting for up to 12.902806 seconds before retrying...
Nov 21 09:57:59 xcp4-1 xapi: [debug||116027 /var/lib/xcp/xapi||dummytaskhelper] task dispatch:session.logout D:79aefd48b34b created by task D:f32e5efdeec8
Nov 21 09:57:59 xcp4-1 xapi: [ info||116027 /var/lib/xcp/xapi|session.logout D:67032978d90c|xapi_session] Session.destroy trackid=c8a5d1fe7e932298b267edb677909a4b
Nov 21 09:57:59 xcp4-1 xapi: [debug||116028 /var/lib/xcp/xapi||dummytaskhelper] task dispatch:session.slave_login D:0366d884ee46 created by task D:f32e5efdeec8
Nov 21 09:57:59 xcp4-1 xapi: [ info||116028 /var/lib/xcp/xapi|session.slave_login D:b39585e0b07e|xapi_session] Session.create trackid=fc78c651286146c61742b0ca74212bb9 pool=true uname= originator=xapi is_local_superuser=true auth_user_sid= parent=trackid=9834f5af41c964e225f24279aefe4e49
Nov 21 09:57:59 xcp4-1 xapi: [debug||116029 /var/lib/xcp/xapi||dummytaskhelper] task dis
Nov 21 09:59:34 xcp4-1 xapi: [ info||116009 HTTPS 192.168.254.101->|Async.PBD.plug R:631710626e67|xapi_session] Session.destroy trackid=726402fee499e51bb72de7fd054a93d0
Nov 21 09:59:34 xcp4-1 xapi: [debug||116009 HTTPS 192.168.254.101->|Async.PBD.plug R:631710626e67|message_forwarding] Unmarking SR after PBD.plug (task=OpaqueRef:63171062-6e67-4cbd-b3be-91bb534a94bf)
Nov 21 09:59:34 xcp4-1 xapi: [error||116009 ||backtrace] Async.PBD.plug R:631710626e67 failed with exception Server_error(SR_BACKEND_FAILURE_12, [ ; mount failed with return code 32; ])
Nov 21 09:59:34 xcp4-1 xapi: [error||116009 ||backtrace] Raised Server_error(SR_BACKEND_FAILURE_12, [ ; mount failed with return code 32; ])
Nov 21 09:59:34 xcp4-1 xapi: [error||116009 ||backtrace] 1/1 xapi Raised at file (Thread 116009 has no backtrace table. Was with_backtraces called?, line 0
Nov 21 09:59:34 xcp4-1 xapi: [error||116009 ||backtrace]
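Mount return code 32 is mount(8)'s generic "mount failure", so a reasonable first step is to reproduce the mount by hand on the new host and confirm it has the same client bits as the rest of the pool; a hedged sketch mirroring the sr-create command above (the mount point and monitor ports are assumptions):

# On the failing pool member: check the Ceph client and secret are in place
rpm -q ceph-common
ls -l /etc/ceph/admin.secret
# Attempt the same mount XAPI performs, by hand, to see the real error
mkdir -p /mnt/test
mount -t ceph 172.30.254.23:6789,172.30.254.24:6789,172.30.254.25:6789:/fsgw/xcpsr /mnt/test -o name=admin,secretfile=/etc/ceph/admin.secret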