XCP-ng
    Posts by scboley
    • RE: Migrate from VXRail to xcp-ng and then install xcp-ng on the vxrail servers

      @Danp @olivierlambert So I'm going to build this out next week. How can I do the build-out with the trials, and after that, how much would a 4-node XOA-managed HCI with XOSTOR run for licensing?

      Also, are there any guides on best practices for deploying the above configuration, i.e., what type of network configs for XOSTOR and the pools?

      Also, the VMware environment has a Citrix deployment on it, so do you have a connector that lets Citrix interface with the pool the same way it does with VMware?

      My R730 servers have eight 10Gb fiber ports each, so take that into consideration for the XOSTOR configuration.

      posted in Migrate to XCP-ng
    • RE: Migrate from VXRail to xcp-ng and then install xcp-ng on the vxrail servers

      @Danp I currently run production 8.2 with a CephFS backend and XO built from source, and I've had zero issues. So if I did the same with 8.2.1, XOSTOR, and XOA, what is the cost of per-incident support from Vates?

      posted in Migrate to XCP-ng
    • RE: Migrate from VXRail to xcp-ng and then install xcp-ng on the vxrail servers

      @olivierlambert OK, I'm about to go to the parent office and will build a 3- or 4-node Dell R730 setup with XOSTOR as the HCI and a ton of 7.2k spinners for the storage. I will build XO from source and run it with that for this testing phase. Are there any actual guides for configuring the XOSTOR/LINSTOR HCI? This is strictly a test for getting off the VMware VxRail vSAN system. So would you recommend 8.2.1 or 8.3? I'm not sure what the status of XOSTOR on 8.3 is currently; will that be addressed in 8.3.1 or just via updates?

      posted in Migrate to XCP-ng
    • RE: Migrate from VXRail to xcp-ng and then install xcp-ng on the vxrail servers

      @olivierlambert I guess I just need to understand what the difference between XOSAN and XOSTOR is, plus how that relates to VMware's shared storage. Good that you're busy; it keeps the lights on and the company growing.

      posted in Migrate to XCP-ng
    • RE: Migrate from VXRail to xcp-ng and then install xcp-ng on the vxrail servers

      OK, it gets better: I have four R730s, each with six 10GbE fiber ports and 512GB of RAM, with a SAN backend to use for the migration. Then, once I have the migration tested and working, I'll move off the VMware, rebuild the Dell P570F servers to XCP-ng with XOSTOR, and move it all back. Then I'll have some serious playing space for snapshots and all kinds of fun. Does anyone see any gotchas in this project?

      posted in Migrate to XCP-ng
    • RE: Migrate from VXRail to xcp-ng and then install xcp-ng on the vxrail servers

      @scboley OK, I've been doing a little reading, as I haven't looked at the XCP-ng offerings in a while. So XOSTOR is the hyperconverged vSAN replacement. Would it be possible to take a 5-node Dell VxRail, purchase one extra node, configure it with 8.2, XO, and XOSTOR, then start moving things onto it, take one of the five, load it with XCP-ng, move some more, and eventually replace all of the VMware? Has anyone done something like this? Is XOSTOR a vSAN-type offering, since it's what is replacing XOSAN? @olivierlambert, thoughts?

      posted in Migrate to XCP-ng
    • Migrate from VXRail to xcp-ng and then install xcp-ng on the vxrail servers

      I have a unique opportunity with my parent company to get them off a VxRail vSAN/vSphere setup and onto XCP-ng. I will build an interim pool to live-migrate from vSphere to XCP-ng, and then here's my question: has anyone taken a VxRail system and used XCP-ng on it, and how would you implement the shared-storage aspects of it? In my little world I've been using XCP-ng for several years with a 45Drives Ceph cluster on the backend and love it.

      posted in Migrate to XCP-ng
    • RE: CEPH FS Storage Driver

      @olivierlambert So are you completely on your own release schedule now, or are you still tied to Citrix version releases? I've used XCP-ng since the 7.x versions and have had zero issues. I still have one 6.5 host I'm going to migrate a VM off of; it was on local storage and is quite large, and I didn't want to redo it because of the storage changes from 6 to 7. Then, after I patch up my other 8.2 hosts and rebuild the 6.5 to 8.2, I'll have an all-8.2 pool.

      posted in Development
    • RE: CEPH FS Storage Driver

      @olivierlambert That's right, and it's up and running using the driver you guys built, taking a good load with no issues. Hats off to you and your team for making this work. I used NFS for exporting off my old Ceph system, but the importing is strictly on your native CephFS driver. On my old setup I was on XCP-ng 7.6 with NFS sitting over CephFS on just a gigabit network; it worked, but with no live migration at all. Now I can live-migrate with no problems over bonded dual 10Gb fiber, with single 10Gb fiber to the hosts.

      posted in Development
    • RE: CEPH FS Storage Driver

      @olivierlambert This is holding up quite well. I've pounded on it hard, doing three exports and an import simultaneously, and maintained about 60 MB/s writes while doing asynchronous reads and writes, as the imports and exports are on the same CephFS. I had an issue with the Ceph SR moving to the pool when I created the pool, and I wound up hosing my pool master trying to fix it, but after setting a new pool master and rebuilding the other host it has been flawless since. I've built XO from source, and it has come a long way from the last time I looked at it. We went with a small vendor out of Canada called 45Drives for the Ceph hardware and deployment; they have a nice Ansible-derived delivery and config package if anyone is looking for a supported solution. It's pretty slick.

      posted in Development
    • RE: CEPH FS Storage Driver

      @olivierlambert Never mind, I fixed it. I had forgotten to add the public side of my Ceph network to the new host, and then it all scanned correctly. Thanks for being responsive and for a little education; it always helps.
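
      For anyone who hits the same thing later, here is a minimal sketch of giving a new host an address on the Ceph public network from dom0; the host name, NIC, UUID placeholder, and address are hypothetical, so adapt them to your subnet:

      # Find the PIF on the NIC facing the Ceph public network (hypothetical host/NIC names)
      xe pif-list host-name-label=xcp4-2 device=eth1
      # Assign it a static address on the Ceph public subnet
      xe pif-reconfigure-ip uuid=<pif-uuid> mode=static IP=172.30.254.60 netmask=255.255.255.0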

      posted in Development
    • RE: CEPH FS Storage Driver

      @olivierlambert

      Nov 21 10:20:11 xcp4-1 SM: [30250] vhd=/var/run/sr-mount/51b80ad1-820d-c29a-1f9c-a50d6454f927/*.vhd scan-error=-5 error-message='failure scanning target'
      Nov 21 10:20:11 xcp4-1 SM: [30250] scan failed: -5
      Nov 21 10:20:11 xcp4-1 SM: [30250] ', stderr: ''
      Nov 21 10:20:12 xcp4-1 SM: [30250] ['/usr/bin/vhd-util', 'scan', '-f', '-m', '/var/run/sr-mount/51b80ad1-820d-c29a-1f9c-a50d6454f927/*.vhd']
      Nov 21 10:20:12 xcp4-1 SM: [30250] FAILED in util.pread: (rc 5) stdout: 'vhd=/var/run/sr-mount/51b80ad1-820d-c29a-1f9c-a50d6454f927 scan-error=2 error-message='failure scanning target'
      Nov 21 10:20:12 xcp4-1 SM: [30250] vhd=/var/run/sr-mount/51b80ad1-820d-c29a-1f9c-a50d6454f927/*.vhd scan-error=-5 error-message='failure scanning target'
      Nov 21 10:20:12 xcp4-1 SM: [30250] scan failed: -5

      posted in Development
    • RE: CEPH FS Storage Driver

      @olivierlambert I'm adding another host to the pool, and it fails to connect to the Ceph shared storage:

      Nov 21 09:57:48 xcp4-1 xapi: [debug||116026 /var/lib/xcp/xapi|SR.scan R:05af02328263|helpers] Waiting for up to 12.902806 seconds before retrying...
      Nov 21 09:57:59 xcp4-1 xapi: [debug||116027 /var/lib/xcp/xapi||dummytaskhelper] task dispatch:session.logout D:79aefd48b34b created by task D:f32e5efdeec8
      Nov 21 09:57:59 xcp4-1 xapi: [ info||116027 /var/lib/xcp/xapi|session.logout D:67032978d90c|xapi_session] Session.destroy trackid=c8a5d1fe7e932298b267edb677909a4b
      Nov 21 09:57:59 xcp4-1 xapi: [debug||116028 /var/lib/xcp/xapi||dummytaskhelper] task dispatch:session.slave_login D:0366d884ee46 created by task D:f32e5efdeec8
      Nov 21 09:57:59 xcp4-1 xapi: [ info||116028 /var/lib/xcp/xapi|session.slave_login D:b39585e0b07e|xapi_session] Session.create trackid=fc78c651286146c61742b0ca74212bb9 pool=true uname= originator=xapi is_local_superuser=true auth_user_sid= parent=trackid=9834f5af41c964e225f24279aefe4e49
      Nov 21 09:57:59 xcp4-1 xapi: [debug||116029 /var/lib/xcp/xapi||dummytaskhelper] task dis

      Nov 21 09:59:34 xcp4-1 xapi: [ info||116009 HTTPS 192.168.254.101->|Async.PBD.plug R:631710626e67|xapi_session] Session.destroy trackid=726402fee499e51bb72de7fd054a93d0
      Nov 21 09:59:34 xcp4-1 xapi: [debug||116009 HTTPS 192.168.254.101->|Async.PBD.plug R:631710626e67|message_forwarding] Unmarking SR after PBD.plug (task=OpaqueRef:63171062-6e67-4cbd-b3be-91bb534a94bf)
      Nov 21 09:59:34 xcp4-1 xapi: [error||116009 ||backtrace] Async.PBD.plug R:631710626e67 failed with exception Server_error(SR_BACKEND_FAILURE_12, [ ; mount failed with return code 32; ])
      Nov 21 09:59:34 xcp4-1 xapi: [error||116009 ||backtrace] Raised Server_error(SR_BACKEND_FAILURE_12, [ ; mount failed with return code 32; ])
      Nov 21 09:59:34 xcp4-1 xapi: [error||116009 ||backtrace] 1/1 xapi Raised at file (Thread 116009 has no backtrace table. Was with_backtraces called?, line 0
      Nov 21 09:59:34 xcp4-1 xapi: [error||116009 ||backtrace]
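
      If anyone is debugging the same failure, a manual mount from the new host's dom0 should surface the real error hidden behind "mount failed with return code 32". A rough sketch, assuming the same monitors, path, and secret file as the sr-create later in this thread:

      # Run in the new host's dom0; monitors, path, and secret mirror the sr-create below
      mkdir -p /mnt/cephtest
      mount -t ceph 172.30.254.23,172.30.254.24,172.30.254.25:6789:/fsgw/xcpsr /mnt/cephtest \
          -o name=admin,secretfile=/etc/ceph/admin.secret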

      posted in Development
    • RE: CEPH FS Storage Driver

      @scboley I figured it out finally. I used another key created by the cluster and got it to connect and mount the Ceph filesystem.
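
      For reference, a rough sketch of what creating a dedicated client key looks like on the Ceph side; the client name, caps, and pool are hypothetical, so adapt them to your cluster:

      # On a Ceph admin node: create a CephFS client (hypothetical name, caps, and pool)
      ceph auth get-or-create client.xcp mon 'allow r' mds 'allow rw' osd 'allow rw pool=cephfs_data'
      # Store just the base64 key where the XCP-ng host expects its secret file
      ceph auth get-key client.xcp > /etc/ceph/xcp.secret

      The SR then gets device-config:options=name=xcp,secretfile=/etc/ceph/xcp.secret instead of the admin credentials used in the sr-create below.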

      posted in Development
    • RE: CEPH FS Storage Driver

      OK, I've got this set up: I have a cluster serving the CephFS, and here are my errors:

      xe sr-create type=cephfs name-label=ceph \
          device-config:server=172.30.254.23,172.30.254.24,172.30.254.25 \
          device-config:serverport=6789 \
          device-config:serverpath=/fsgw/xcpsr \
          device-config:options=name=admin,secretfile=/etc/ceph/admin.secret

      Error code: SR_BACKEND_FAILURE_111
      Error parameters: , CephFS mount error [opterr=mount failed with return code 1],
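
      Given the fix in my follow-up above (another key), one quick sanity check for auth failures like this, sketched under the assumption the admin key is the one in play: compare the secret file against the cluster's key, since the file must contain only the raw base64 key and nothing else.

      # On a Ceph node: print the key for client.admin
      ceph auth get-key client.admin
      # On the XCP-ng host: the file must hold exactly that key, no section headers or whitespace
      cat /etc/ceph/admin.secret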

      posted in Development
    • RE: CEPH FS Storage Driver

      @olivierlambert I know kernel.org maintains a lot of very new kernels for CentOS versions, way newer than the locked-down, backported mess the default kernels are. So do you build off that base, change out the virtualization parts, and put them in your builds?

      posted in Development
    • RE: CEPH FS Storage Driver

      @olivierlambert So what are your plans for moving to a Stream 8 base, which would give you the updated kernel platform and, hopefully soon after, SMAPIv3? I/O throughput on 8 over 7 is vastly superior, and the changes aren't nearly as big as the 6-to-7 changes were.

      posted in Development
    • RE: CEPH FS Storage Driver

      @olivierlambert OK, I see that even with 8.x you are still based on CentOS 7. When is it going up to 8? I'd assume Rocky would be the choice, given the Red Hat streaming snafu, cough cough.

      posted in Development
    • RE: CEPH FS Storage Driver

      @olivierlambert What are the plans to elevate it? I have a feeling it's really starting to gain traction in the storage world.

      posted in Development
    • RE: CEPH FS Storage Driver

      OK, I see the package listed in the documentation is still Nautilus. Has that been updated to any newer Ceph version as of yet? @olivierlambert

      posted in Development