XCP-ng

    CEPH FS Storage Driver

    • scboley @r1

      @r1 Yeah, I just haven't had time yet, since it hosts one of our most heavily used virtuals and it's not good to upset the masses lol.

      • borzel XCP-ng Center Team @scboley

        @scboley said in CEPH FS Storage Driver:

        and not good to upset the masses

        This is the reason I've been working nights at work for the last two weeks 🌃

        • scboley @r1

          @r1 sudo mount -t ceph sanadmin.nams.net:6789:/ /mnt/nfsmigrate
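
          (For reference, a quick way to confirm such a mount took effect; the paths follow the example above:)

          # list active CephFS mounts and check the mounted capacity
          mount -t ceph
          df -h /mnt/nfsmigrate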

          • r1 XCP-ng Team

            @scboley nice. It should work straight away, as mentioned in the 2nd post of this thread.

            • maxcuttins

              OK,

              I started my adventure but soon ran into an issue.

              This command installs the Nautilus release repo:

              yum install centos-release-ceph-nautilus --enablerepo=extras
              

              and it worked fine.
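
              (To double-check that the new repo is actually enabled, a generic yum query works:)

              yum repolist enabled | grep -i ceph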

              This one should install ceph-common, but it cannot find many dependencies:

              yum install ceph-common
              Loaded plugins: fastestmirror
              centos-ceph-nautilus                                                                                                                                | 2.9 kB  00:00:00
              centos-ceph-nautilus/7/x86_64/primary_db                                                                                                            |  62 kB  00:00:00
              Loading mirror speeds from cached hostfile
               * centos-ceph-nautilus: mirrors.prometeus.net
              Resolving Dependencies
              --> Running transaction check
              ---> Package ceph-common.x86_64 2:14.2.0-1.el7 will be installed
              --> Processing Dependency: python-rgw = 2:14.2.0-1.el7 for package: 2:ceph-common-14.2.0-1.el7.x86_64
              
              ... [CUT] ...
              
              Error: Package: 2:librgw2-14.2.0-1.el7.x86_64 (centos-ceph-nautilus)
                         Requires: libibverbs.so.1()(64bit)
              Error: Package: leveldb-1.12.0-5.el7.1.x86_64 (centos-ceph-nautilus)
                         Requires: libsnappy.so.1()(64bit)
              Error: Package: 2:libcephfs2-14.2.0-1.el7.x86_64 (centos-ceph-nautilus)
                         Requires: librdmacm.so.1()(64bit)
              Error: Package: 2:ceph-common-14.2.0-1.el7.x86_64 (centos-ceph-nautilus)
                         Requires: liblz4.so.1()(64bit)
              Error: Package: 2:libradosstriper1-14.2.0-1.el7.x86_64 (centos-ceph-nautilus)
                         Requires: libibverbs.so.1()(64bit)
              Error: Package: 2:librbd1-14.2.0-1.el7.x86_64 (centos-ceph-nautilus)
                         Requires: libibverbs.so.1()(64bit)
              Error: Package: 2:ceph-common-14.2.0-1.el7.x86_64 (centos-ceph-nautilus)
                         Requires: python-requests
              Error: Package: 2:librgw2-14.2.0-1.el7.x86_64 (centos-ceph-nautilus)
                         Requires: librdmacm.so.1()(64bit)
              Error: Package: 2:librados2-14.2.0-1.el7.x86_64 (centos-ceph-nautilus)
                         Requires: librdmacm.so.1()(64bit)
              Error: Package: 2:ceph-common-14.2.0-1.el7.x86_64 (centos-ceph-nautilus)
                         Requires: libibverbs.so.1()(64bit)
              Error: Package: 2:libcephfs2-14.2.0-1.el7.x86_64 (centos-ceph-nautilus)
                         Requires: libibverbs.so.1()(64bit)
              Error: Package: 2:ceph-common-14.2.0-1.el7.x86_64 (centos-ceph-nautilus)
                         Requires: libfuse.so.2()(64bit)
              Error: Package: 2:ceph-common-14.2.0-1.el7.x86_64 (centos-ceph-nautilus)
                         Requires: libsnappy.so.1()(64bit)
              Error: Package: 2:librbd1-14.2.0-1.el7.x86_64 (centos-ceph-nautilus)
                         Requires: librdmacm.so.1()(64bit)
              Error: Package: 2:librados2-14.2.0-1.el7.x86_64 (centos-ceph-nautilus)
                         Requires: librdmacm.so.1(RDMACM_1.0)(64bit)
              Error: Package: 2:ceph-common-14.2.0-1.el7.x86_64 (centos-ceph-nautilus)
                         Requires: librabbitmq.so.4()(64bit)
              Error: Package: 2:libradosstriper1-14.2.0-1.el7.x86_64 (centos-ceph-nautilus)
                         Requires: librdmacm.so.1()(64bit)
              Error: Package: 2:librgw2-14.2.0-1.el7.x86_64 (centos-ceph-nautilus)
                         Requires: librabbitmq.so.4()(64bit)
              Error: Package: 2:librados2-14.2.0-1.el7.x86_64 (centos-ceph-nautilus)
                         Requires: libibverbs.so.1()(64bit)
              Error: Package: 2:ceph-common-14.2.0-1.el7.x86_64 (centos-ceph-nautilus)
                         Requires: libtcmalloc.so.4()(64bit)
              Error: Package: 2:librados2-14.2.0-1.el7.x86_64 (centos-ceph-nautilus)
                         Requires: libibverbs.so.1(IBVERBS_1.0)(64bit)
              Error: Package: 2:librados2-14.2.0-1.el7.x86_64 (centos-ceph-nautilus)
                         Requires: libibverbs.so.1(IBVERBS_1.1)(64bit)
              Error: Package: 2:ceph-common-14.2.0-1.el7.x86_64 (centos-ceph-nautilus)
                         Requires: librdmacm.so.1()(64bit)
               You could try using --skip-broken to work around the problem
               You could try running: rpm -Va --nofiles --nodigest
              
              

              Of course, I'm using a higher version of Ceph than in your post.
              Any hints anyway?

              • maxcuttins @maxcuttins

                OK, solved.
                Of course Ceph needs the EPEL and Base repos enabled.

                This worked:

                yum install epel-release -y --enablerepo=extras
                yum install ceph-common --enablerepo='centos-ceph-nautilus' --enablerepo='epel' --enablerepo='base'
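
                 A quick sanity check that the client bits landed (generic commands, not from the original post):

                 rpm -q ceph-common
                 ceph --version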
                
                • maxcuttins

                  @r1 said in CEPH FS Storage Driver:

                  patch -p0 < ceph.patch

                   This didn't go so smoothly; it reported an unexpected end:

                  patch -p0 < ceph.patch
                  patching file /opt/xensource/sm/nfs.py
                  patching file /opt/xensource/sm/NFSSR.py
                  patch unexpectedly ends in middle of line
                  Hunk #3 succeeded at 197 with fuzz 1.
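
                   (That warning usually just means the patch file lacks a trailing newline, e.g. after a copy-paste; a minimal check-and-fix, assuming the file is named ceph.patch:)

                   # the last byte should be \n; anything else confirms a truncated final line
                   tail -c1 ceph.patch | od -c
                   # append the missing newline
                   printf '\n' >> ceph.patch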
                  
                  • r1 XCP-ng Team

                    Hi @maxcuttins - what's your XCP-ng version, and can you share the checksum of your NFSSR.py file?
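
                    (For anyone following along, one way to produce those checksums:)

                    md5sum /opt/xensource/sm/NFSSR.py /opt/xensource/sm/nfs.py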

                    • Rainer

                      Hi,

                      I'm also playing around with XCP-ng and Ceph and just tried to use CephFS as a storage repository. I am using XCP-ng 8.0.0 with Ceph Nautilus. Before this, I tried to use LVM on top of an RBD, which basically works, but performance is really bad that way.

                      So I downloaded your patch and applied it on my test 8.0.0 host; all hunks succeeded. Ceph is also installed, and modprobe ceph works. Next, I tried to add a new CephFS repository just as you described above.
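
                      (For anyone reproducing this, that kernel-side check amounts to:)

                      modprobe ceph && lsmod | grep ceph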

                      In the end I see an error box popping up in XenCenter: "SM has thrown a generic python exception". And that's all.

                      In /var/log/SMlog I see the log attached below.
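
                      (A convenient way to capture that log is to watch it while retrying the SR creation:)

                      tail -f /var/log/SMlog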

                      Thanks
                      Rainer

                      /var/log/SMlog:

                      Sep 26 09:42:54 rzinstal4 SM: [18075] lock: opening lock file /var/lock/sm/d63d0d49-522d-6160-a1eb-a9d5f34cec1e/sr
                      Sep 26 09:42:54 rzinstal4 SM: [18075] lock: acquired /var/lock/sm/d63d0d49-522d-6160-a1eb-a9d5f34cec1e/sr
                      Sep 26 09:42:54 rzinstal4 SM: [18075] sr_create {'sr_uuid': 'd63d0d49-522d-6160-a1eb-a9d5f34cec1e', 'subtask_of': 'DummyRef:|55cbc4ef-c6e9-4dba-aa5f-8f56b2473443|SR.create', 'args': ['0'], 'host_ref': 'OpaqueRef:0c03f516-7ed5-48ac-895b-51833177aed5', 'session_ref': 'OpaqueRef:d472846e-c6aa-467d-ac8b-0f401fbc698c', 'device_config': {'server': 'ip_of_mon', 'SRmaster': 'true', 'serverpath': '/sds', 'options': 'name=admin,secretfile=/etc/ceph/admin.secret'}, 'command': 'sr_create', 'sr_ref': 'OpaqueRef:9fb35d75-a45c-4ce4-b9f4-8ed8072ad6ad'}
                      Sep 26 09:42:54 rzinstal4 SM: [18075] ['/usr/sbin/rpcinfo', '-p', 'ip_of_mon']
                      Sep 26 09:42:54 rzinstal4 SM: [18075] FAILED in util.pread: (rc 1) stdout: '', stderr: 'rpcinfo: can't contact portmapper: RPC: Remote system error - Connection refused
                      Sep 26 09:42:54 rzinstal4 SM: [18075] '
                      Sep 26 09:42:54 rzinstal4 SM: [18075] Unable to obtain list of valid nfs versions
                      Sep 26 09:42:54 rzinstal4 SM: [18075] lock: released /var/lock/sm/d63d0d49-522d-6160-a1eb-a9d5f34cec1e/sr
                      Sep 26 09:42:54 rzinstal4 SM: [18075] ***** generic exception: sr_create: EXCEPTION <type 'exceptions.TypeError'>, not all arguments converted during string formatting
                      Sep 26 09:42:54 rzinstal4 SM: [18075]   File "/opt/xensource/sm/SRCommand.py", line 110, in run
                      Sep 26 09:42:54 rzinstal4 SM: [18075]     return self._run_locked(sr)
                      Sep 26 09:42:54 rzinstal4 SM: [18075]   File "/opt/xensource/sm/SRCommand.py", line 159, in _run_locked
                      Sep 26 09:42:54 rzinstal4 SM: [18075]     rv = self._run(sr, target)
                      Sep 26 09:42:54 rzinstal4 SM: [18075]   File "/opt/xensource/sm/SRCommand.py", line 323, in _run
                      Sep 26 09:42:54 rzinstal4 SM: [18075]     return sr.create(self.params['sr_uuid'], long(self.params['args'][0]))
                      Sep 26 09:42:54 rzinstal4 SM: [18075]   File "/opt/xensource/sm/NFSSR", line 216, in create
                      Sep 26 09:42:54 rzinstal4 SM: [18075]     raise exn
                      Sep 26 09:42:54 rzinstal4 SM: [18075]
                      Sep 26 09:42:54 rzinstal4 SM: [18075] ***** NFS VHD: EXCEPTION <type 'exceptions.TypeError'>, not all arguments converted during string formatting
                      Sep 26 09:42:54 rzinstal4 SM: [18075]   File "/opt/xensource/sm/SRCommand.py", line 372, in run
                      Sep 26 09:42:54 rzinstal4 SM: [18075]     ret = cmd.run(sr)
                      Sep 26 09:42:54 rzinstal4 SM: [18075]   File "/opt/xensource/sm/SRCommand.py", line 110, in run
                      Sep 26 09:42:54 rzinstal4 SM: [18075]     return self._run_locked(sr)
                      Sep 26 09:42:54 rzinstal4 SM: [18075]   File "/opt/xensource/sm/SRCommand.py", line 159, in _run_locked
                      Sep 26 09:42:54 rzinstal4 SM: [18075]     rv = self._run(sr, target)
                      Sep 26 09:42:54 rzinstal4 SM: [18075]   File "/opt/xensource/sm/SRCommand.py", line 323, in _run
                      Sep 26 09:42:54 rzinstal4 SM: [18075]     return sr.create(self.params['sr_uuid'], long(self.params['args'][0]))
                      Sep 26 09:42:54 rzinstal4 SM: [18075]   File "/opt/xensource/sm/NFSSR", line 216, in create
                      Sep 26 09:42:54 rzinstal4 SM: [18075]     raise exn
                      Sep 26 09:42:54 rzinstal4 SM: [18075]
                      Sep 26 09:42:54 rzinstal4 SM: [18075] lock: closed /var/lock/sm/d63d0d49-522d-6160-a1eb-a9d5f34cec1e/sr
                      
                      • olivierlambert Vates 🪐 Co-Founder CEO

                        Hi @Rainer and welcome 🙂

                        Please use Markdown syntax for console/code blocks:

                        ```
                        my test
                        ```
                        
                        • Rainer @olivierlambert

                          Hello olivierlambert,
                          thanks for the hint. I edited my post to correct the formatting.

                          • r1 XCP-ng Team @Rainer

                            @Rainer Please share your NFSSR.py so I can check. Are you able to manually mount the SR using # mount.ceph addr1,addr2,addr3,addr4:remotepath localpath?

                            • Rainer

                              Hello,

                              thanks for your answer. I can mount the CephFS filesystem on the xenserver this way:

                              mount.ceph  1.2.3.4:/base /mnt -o "name=admin,secretfile=/etc/ceph/admin.secret"
                              

                              where 1.2.3.4 is the IP of the active Ceph monitor.
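
                              (For anyone following along: the secretfile holds only the bare key, which can be extracted on a Ceph node; client.admin is assumed here:)

                              ceph auth get-key client.admin > /etc/ceph/admin.secret
                              chmod 600 /etc/ceph/admin.secret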

                              In XenCenter's New SR dialog I put the "1.2.3.4:/base" part in the "Share Name" input field and the options name=admin,secretfile=/etc/ceph/admin.secret in the "Advanced Options" field.
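
                              (Judging from the device_config visible in the SMlog earlier in this thread, the xe CLI equivalent of that dialog would be roughly the following - an untested sketch:)

                              xe sr-create name-label=Ceph type=nfs shared=true content-type=user \
                                  device-config:server=1.2.3.4 device-config:serverpath=/base \
                                  device-config:options=name=admin,secretfile=/etc/ceph/admin.secret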

                              Below I attached my patched /opt/xensource/sm/NFSSR.py file:

                              NFSSR.py.txt

                              • r1 XCP-ng Team

                                Just so you know - the original patch was for XCP-ng 7.6. I'm checking it for XCP-ng 8.

                                • r1 XCP-ng Team

                                  Can you share /opt/xensource/sm/nfs.py as well? It was also supposed to be patched.

                                  • r1 XCP-ng Team

                                    Updated: new patch for XCP-ng 8.

                                    • olivierlambert Vates 🪐 Co-Founder CEO

                                      Note that these are all hacks: they won't survive any upgrade and aren't supported 😉 But it's fine for a proof of concept.

                                      • r1 XCP-ng Team

                                        Agreed - unless we have enough interest from the community to package it or make it maintainable, it will stay a hack.

                                        Also mentioning the command to apply the patch: # patch -d/ -p1 < cephfs-8.patch
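
                                        (A dry run can confirm the strip level before touching anything:)

                                        patch --dry-run -d/ -p1 < cephfs-8.patch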

                                        Edit: updated patch strip level.

                                        • Rainer

                                          Hello, and thanks for the new patch. I understand that this is only a kind of hack, or rather a proof of concept. I don't want to use this for a production system; I only wanted to give it a try with Nautilus and see what performance I can expect, should this hack hopefully develop into a supported integration of Ceph.

                                          I hoped that the patch might work for XCP-ng 8.0 as well, but it seems it doesn't, at least not that easily. I could also install a XenServer 7.6 host and use the original patch to test how it works for me. I don't want you to invest too much work in the 8.0 adaptation, so if you think my testing of the 8.0 patch doesn't really help you in any way, please don't hesitate to tell me. Otherwise, if you are also interested in whether it works with 8.0, here is what I found out:

                                          The new patch does not apply completely; there is a "patch unexpectedly ends in middle of line" warning. On a freshly installed system:

                                          # cd /
                                          # patch --verbose -p0  < /cephfs-8.patch
                                          Hmm...  Looks like a unified diff to me...
                                          The text leading up to this was:
                                          --------------------------
                                          |--- /opt/xensource/sm/nfs.py   2019-09-27 16:45:11.918853933 +0530
                                          |+++ /opt/xensource/sm/nfs.py   2019-09-27 16:49:26.910379515 +0530
                                          --------------------------
                                          patching file /opt/xensource/sm/nfs.py
                                          Using Plan A...
                                          Hunk #1 succeeded at 137.
                                          Hunk #2 succeeded at 159.
                                          Hmm...  The next patch looks like a unified diff to me...
                                          The text leading up to this was:
                                          --------------------------
                                          |--- /opt/xensource/sm/NFSSR.py 2019-09-27 16:44:56.437557255 +0530
                                          |+++ /opt/xensource/sm/NFSSR.py 2019-09-27 17:07:45.651188670 +0530
                                          --------------------------
                                          patching file /opt/xensource/sm/NFSSR.py
                                          Using Plan A...
                                          Hunk #1 succeeded at 137.
                                          Hunk #2 succeeded at 160.
                                          patch unexpectedly ends in middle of line
                                          Hunk #3 succeeded at 197 with fuzz 1.
                                          done
                                          

                                          I can do a mount.ceph just as described before, but when I instead try to create a new CephFS SR named "Ceph" in XenCenter 8.0.0, I still get the error that SM has thrown a generic python exception.

                                          The SMlog indicates that an NFS repository creation is attempted with the given parameters, not a CephFS one:

                                          ...
                                          Sep 30 13:03:05 rzinstal4 SM: [975] sr_create {'sr_uuid': '79cc55db-ee9f-9cc0-8d94-8cdb2915363b', 'subtask_of': 'DummyRef:|6456510f-b2dd-4dcd-b6ab-eb5eb40ceb9c|SR.create', 'args': ['0'], 'host_ref': 'OpaqueRef:72898e54-20b2-4046-9825-94553457fa60', 'session_ref': 'OpaqueRef:8a0819af-8b91-4086-a9cf-04ae37dc8e79', 'device_config': {'server': '1.2.3.4', 'SRmaster': 'true', 'serverpath': '/base', 'options': 'name=admin,secretfile=/etc/ceph/admin.secret'}, 'command': 'sr_create', 'sr_ref': 'OpaqueRef:91c592f6-47a0-40d6-8a33-7fe01831145f'}
                                          Sep 30 13:03:05 rzinstal4 SM: [975] ['/usr/sbin/rpcinfo', '-p', '1.2.3.4']
                                          Sep 30 13:03:05 rzinstal4 SM: [975] FAILED in util.pread: (rc 1) stdout: '', stderr: 'rpcinfo: can't contact portmapper: RPC: Remote system error - Connection refused
                                          Sep 30 13:03:05 rzinstal4 SM: [975] '
                                          Sep 30 13:03:05 rzinstal4 SM: [975] Unable to obtain list of valid nfs versions
                                          ...
                                          

                                          I attached my /opt/xensource/sm/nfs.py file as well.
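
                                          (One generic way to sanity-check that the CephFS hunks actually landed in both installed files, assuming the hunks mention ceph by name:)

                                          grep -n ceph /opt/xensource/sm/nfs.py /opt/xensource/sm/NFSSR.py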

                                          Thanks a lot

                                          Rainer
                                          nfs.py.txt

                                          • r1 XCP-ng Team

                                            @r1 said in CEPH FS Storage Driver:

                                            patch -d/ -p0 < cephfs-8.patch

                                            Please try # patch -d/ -p0 < cephfs-8.patch
