CEPH FS Storage Driver

  • As a side experiment, I was able to extract and test the CEPH FS kernel module for XCP-NG 7.4+ and mount a multi-MON Ceph cluster (Luminous).

    Once the ceph.ko module is in place, XCP-NG can mount a Ceph FS mount point much like an NFS export.

    e.g. in NFSSR we use # mount.nfs4 addr:remotepath localpath

    while for CEPHFSSR we can use # mount.ceph addr1,addr2,addr3,addr4:remotepath localpath
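    As a concrete sketch of the difference (the MON addresses, export path, and mount point below are hypothetical placeholders), the Ceph device string simply lists every monitor:

```shell
# Hypothetical MON addresses; substitute your cluster's monitors.
MONS="10.0.0.1:6789,10.0.0.2:6789,10.0.0.3:6789"
# An NFS SR mounts a single server:
#   mount.nfs4 10.0.0.10:/export/vms /var/run/sr-mount/<sr-uuid>
# A Ceph FS SR can list every MON in the device string, so the mount
# survives the loss of any single monitor:
echo "mount.ceph ${MONS}:/ /var/run/sr-mount/<sr-uuid>"
```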

    I'm currently looking at NFSSR.py to create CEPHFSSR.py and will share it once ready. Meanwhile, if anyone wants to help by testing ceph.ko or developing CEPHFSSR.py, kindly ping here. Like EXT4 and XFS, this too will have some cleanup issues.

    If this works as expected, I'd request @olivierlambert and @borzel to look at the possibility of integrating a CEPH FS option in XO/XC next to the NFS SR.

    Note: This is completely different from RBDSR, which is developed by @rposudnevskiy and uses the RBD image protocol of Ceph.

    Bonus: if someone knows how nfs-ganesha fits into this, buzzz... e.g. each CEPH FS node can run an NFS server while each XCP-NG host mounts all of them using NFS 4.1 (pNFS), bypassing the need for ceph.ko. We could also rebuild the NFS module, as CONFIG_NFSD_PNFS is not set by default.
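    For reference, a minimal nfs-ganesha export of CephFS via its CEPH FSAL might look like the fragment below. This is a sketch only: the Export_Id, paths, and layout are placeholder values, and each Ceph node would run its own ganesha instance.

```
# /etc/ganesha/ganesha.conf (fragment) -- hypothetical values
EXPORT {
    Export_Id = 1;
    Path = /;               # CephFS path to export
    Pseudo = /cephfs;       # NFSv4 pseudo-fs path that clients mount
    Access_Type = RW;
    Protocols = 4;          # NFSv4.x only
    FSAL {
        Name = CEPH;        # serve the export from CephFS
    }
}
```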

    Caveats : CEPH FS caching is not fully explored.

    State: Ready. See post below for instructions on how to use.

  • So, taking inspiration from and making small modifications to the existing NFS driver, the patch allows you to use both NFS and Ceph FS at the same time.

    Note: The patch is applied against XCP-NG 7.6.0

    # cd /
    # wget "https://gist.githubusercontent.com/rushikeshjadhav/af53bb5747365875f0ab21bd3a64c6fe/raw/59ef7a4b54574e4163da1ac39acd640554bd0d24/ceph.patch"
    # patch -p0 < ceph.patch

    Apart from the patch, ceph.ko and ceph-common are needed.

    To install ceph-common on XCP-NG:
    # yum install centos-release-ceph-luminous --enablerepo=extras
    # yum install ceph-common

    To install and load ceph.ko:
    # wget -O ceph-4.4.52-4.0.12.x86_64.rpm "https://github.com/rushikeshjadhav/ceph-4.4.52/blob/master/RPMS/x86_64/ceph-4.4.52-4.0.12.x86_64.rpm?raw=true"
    # yum install ceph-4.4.52-4.0.12.x86_64.rpm
    # modprobe -v ceph    # optional

    User experience:

    Screen Shot 2019-02-24 at 2.27.12 AM.png

    Screen Shot 2019-02-24 at 2.27.31 AM.png

    In Share Name, you can put multiple addresses of your Ceph MONs.
    In Advanced Options, you need to add your cephx user details, such as the name and the location of its secret file. These are passed to mount.ceph.

    Note: Keep the secret file in /etc/ceph/ with permissions 600.
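    For instance, with a hypothetical cephx user "admin" and its secret stored as noted above, the resulting mount command carries the credentials via -o (the MON addresses and mount point are placeholders):

```shell
# Hypothetical cephx user and MON addresses; adjust for your cluster.
CEPH_USER="admin"
SECRET="/etc/ceph/admin.secret"     # kept with permissions 600, per the note
MONS="10.0.0.1:6789,10.0.0.2:6789"
# mount.ceph receives the credentials through -o name=...,secretfile=...
echo "mount -t ceph ${MONS}:/ /mnt/cephfs -o name=${CEPH_USER},secretfile=${SECRET}"
```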

    The current SR detection patch relies on the presence of the word "ceph" in the Advanced Options to discriminate between NFS and CEPH FS.
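    A rough shell sketch of that discrimination (the option strings are hypothetical; note that a cephx secret file kept under /etc/ceph/ already contains the word "ceph"):

```shell
# Sketch of the detection: the word "ceph" anywhere in the Advanced
# Options string selects mount.ceph; anything else falls through to NFS.
detect() {
  case "$1" in
    *ceph*) echo "mount.ceph" ;;
    *)      echo "mount.nfs"  ;;
  esac
}
detect "name=admin,secretfile=/etc/ceph/admin.secret"  # prints mount.ceph
detect "vers=4.1"                                      # prints mount.nfs
```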

    Screen Shot 2019-02-24 at 2.34.01 AM.png

    Screen Shot 2019-02-24 at 2.39.55 AM.png

    Screen Shot 2019-02-24 at 2.40.14 AM.png

    Log location: /var/log/SMlog

    Edit: Created RPM installer for ceph.ko

  • [Edited to remove huge quote]

    WOW @r1
    This sounds really good.

    CephFS is the one with the worst performance.
    But at least if it works, it's better than nothing.
    This also doesn't seem so disruptive, just a small patch.
    So I guess: fewer changes, fewer issues with the standard installation of XCP.

    good job!
    I'll start to test it at the beginning of March.
    Count on me.

  • Thanks @maxcuttins. Let me know how it goes.

  • RBD is faster because of client-side caching. Ceph FS does not do that, but it can be made faster by enabling fscache. Once you have your numbers, we will retest with fscache 🙂
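    A sketch of what such a retest could look like, assuming a kernel built with CONFIG_CEPH_FSCACHE and the cachefilesd daemon installed (the addresses, user, and mount point are placeholders):

```shell
# Hypothetical: enabling FS-Cache for a CephFS mount.
# 1. Start the local cache daemon, which backs the cache on local disk:
#      systemctl enable --now cachefilesd
# 2. Add the "fsc" mount option so the ceph client uses FS-Cache:
echo "mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret,fsc"
```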

  • Ok, I have mine in a very restricted environment and do not have any logins defined for the Ceph access.
    That said, if I don't put any user authentication in the options, will it still work? I had issues with your plugin around that same problem.
    I'm also still on 7.5, will that be an issue? And my Ceph is Mimic.

  • Can you show your mount command by doing an example mount? e.g. # mount.ceph addr1:remotepath /mnt. Does this work as-is?

    The advanced options are necessary for now to discriminate between NFS and Ceph, but I think once I know your working command, I will be able to generalize it.

    BTW - I hope you installed the ceph.ko as mentioned in earlier steps.

  • @r1 I'm in the planning phase right now, since these boxes are actually production and I'll have to figure out how to do this without it being a possible danger to my production environment. The box that needs it the worst I still haven't upgraded from XenServer 6.5. I need to set up a fully dev box for this. I've got plans for upgrading my infrastructure for Ceph and a virtual public network, and this would help tremendously.

  • Sadly, XS 6.5 uses a 3.10 kernel which does not support Ceph outright, and you would most likely need to update this host to a recent XCP-NG version.

  • @r1 Yeah, I just haven't had time yet since it has one of our heaviest-used virtuals on it, and it's not good to upset the masses lol.

  • @scboley said in CEPH FS Storage Driver:

    and not good to upset the masses

    this is the reason I worked at night the last two weeks at work 🌃

  • @r1 sudo mount -t ceph sanadmin.nams.net:6789:/ /mnt/nfsmigrate

  • @scboley nice. It should work straight away, as described in the 2nd post of this thread.
