Best posts made by scboley
-
RE: CEPH FS Storage Driver
@olivierlambert this is holding up quite well. I've pounded on it, doing three exports and an import simultaneously, and maintained about 60 MB/s writes while doing asynchronous reads and writes, since the imports and exports are on the same CephFS. I had an issue with the Ceph repository moving to the pool when I created the pool, and wound up hosing my pool master trying to fix it, but after setting a new pool master and rebuilding the other, it has been flawless. I've built XO from source, and it has come a long way since the last time I looked at it. We went with a small vendor out of Canada called 45Drives for the Ceph hardware and deployment; they have a nice Ansible-derived delivery and config package that is pretty slick, if anyone is looking for a supported solution.
-
RE: CEPH FS Storage Driver
@olivierlambert never mind, I fixed it: I had forgotten to add the public side of my Ceph network to the new host, and then it all scanned correctly. Thanks for being responsive, and for the bit of education; it always helps.
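A minimal sketch of the pre-flight check implied above: before rescanning the SR on a freshly joined host, confirm it can actually reach the Ceph monitors over the public network. The monitor address and PIF details are hypothetical placeholders, not values from this thread.

```shell
#!/bin/sh
# Hypothetical values -- substitute a real monitor on your Ceph public network.
MON_IP="10.10.0.1"
MON_PORT=6789   # default Ceph mon v1 port (3300 for msgr2)

# The SR scan can only succeed if the new host can reach the mons, e.g.:
#   ping -c1 "$MON_IP" && nc -z "$MON_IP" "$MON_PORT"
# If that fails, give the host an IP on the Ceph public network first, e.g.:
#   xe pif-reconfigure-ip uuid=<pif-uuid> mode=static \
#     IP=10.10.0.50 netmask=255.255.255.0

echo "check reachability of ${MON_IP}:${MON_PORT} before rescanning the SR"
```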
Latest posts made by scboley
-
RE: Migrate from VXRail to xcp-ng and then install xcp-ng on the vxrail servers
@Danp @olivierlambert I'm going to build this out next week. How can I do the build-out on the trial licenses, and after that, how much would a 4-node XOA-managed HCI with XOSTOR run for licensing?
Also, are there any guides on best practices for deploying the above configuration, i.e. what type of network configs for XOSTOR and the pools?
Also, the VMware environment has a Citrix deployment on it, so do you have a connection aspect for Citrix to interface with the pool, the same as VMware has?
My R730 servers have eight 10 Gb fiber ports each, so take that into consideration for the XOSTOR configuration.
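One common pattern for dedicating some of those 10 Gb ports to storage traffic is an LACP bond on a separate network. This is only a hedged sketch of the `xe` invocation, not an official recommendation from this thread; all UUIDs are placeholders you'd get from `xe pif-list` and `xe network-create`.

```shell
#!/bin/sh
# Placeholder UUIDs -- on a real host, pull these from `xe pif-list`
# (the two 10 Gb PIFs) and `xe network-create` (the new storage network).
NET_UUID="<uuid-of-new-storage-network>"
PIF1="<uuid-of-first-10gb-pif>"
PIF2="<uuid-of-second-10gb-pif>"

# LACP bond for storage traffic (requires matching switch-side config):
BOND_CMD="xe bond-create network-uuid=$NET_UUID pif-uuids=$PIF1,$PIF2 mode=lacp"
echo "$BOND_CMD"
```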
-
RE: Migrate from VXRail to xcp-ng and then install xcp-ng on the vxrail servers
@Danp I currently run 8.2 in production with a CephFS backend and XO compiled from source, and have had zero issues. So if I did the same with 8.2.1, XOSTOR, and XO, what is the cost of per-incident support from Vates?
-
RE: Migrate from VXRail to xcp-ng and then install xcp-ng on the vxrail servers
@olivierlambert OK, I'm about to go to the parent office and will build a 3- or 4-node Dell R730 pool with XOSTOR as the HCI and a ton of 7.2k spinners for the storage. I will compile XO from source and run it for this testing phase. Are there any actual guides for configuring the XOSTOR (LINSTOR) HCI? This is strictly a test for getting off the VMware VxRail vSAN system. So would you recommend 8.2.1 or 8.3? I'm not sure what the status of XOSTOR on 8.3 currently is, and whether that will be addressed in 8.3.1 or just through updates.
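For reference, creating the XOSTOR SR once the disks are prepared looks roughly like the following. This is a sketch based on my reading of the published XOSTOR examples, not a verified procedure; the host UUID and LVM group name are placeholders, and the group name in particular depends on how the install script set up your disks.

```shell
#!/bin/sh
# Placeholder values -- host UUID from `xe host-list`, group name from
# whatever volume group the XOSTOR install step created on the disks.
HOST_UUID="<master-host-uuid>"
SR_CMD="xe sr-create type=linstor name-label=XOSTOR host-uuid=$HOST_UUID \
  device-config:group-name=linstor_group/thin_device \
  device-config:redundancy=2 device-config:provisioning=thin shared=true"
echo "$SR_CMD"
```

With spinners behind it, `redundancy=2` keeps write amplification lower than 3-way replication, at the cost of tolerating only one node failure.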
-
RE: Migrate from VXRail to xcp-ng and then install xcp-ng on the vxrail servers
@olivierlambert I guess I just need to understand the difference between XOSAN and XOSTOR, and how that relates to VMware's shared storage. Good that you're busy; it keeps the lights on and the company growing.
-
RE: Migrate from VXRail to xcp-ng and then install xcp-ng on the vxrail servers
OK, it gets better: I have four R730s with six 10 GbE fiber ports and 512 GB RAM each, with a SAN backend to use for the migration. Once I have the migration tested and working, I'll move off VMware, rebuild the Dell P570F servers with XCP-ng and XOSTOR, and move everything back; then I'll have some serious playing space for snapshots and all kinds of fun. Does anyone see any gotchas with this project?
-
RE: Migrate from VXRail to xcp-ng and then install xcp-ng on the vxrail servers
@scboley OK, I've been doing a little reading, as I haven't looked at the XCP-ng offerings in a while. So XOSTOR is the hyperconverged vSAN replacement. Would it be possible to take a 5-node Dell VxRail, purchase one extra node, configure it with 8.2, XO, and XOSTOR, then start moving things onto it, take one of the five, load it with XCP-ng, move some more, and eventually replace all of the VMware? Has anyone done something like this? Is XOSTOR a dual-type vSAN offering, since it's what's replacing XOSAN? @olivierlambert thoughts?
-
Migrate from VXRail to xcp-ng and then install xcp-ng on the vxrail servers
I have a unique opportunity with my parent company to get them off a VxRail vSAN/vSphere setup and onto XCP-ng. I will build an interim pool to live-migrate from vSphere to XCP-ng, and then here's my question: has anyone taken a VxRail system and used XCP-ng on it, and how would you implement the shared-storage aspects? In my little world, I've been using XCP-ng for several years with a 45Drives Ceph cluster on the backend, and I love it.
-
RE: CEPH FS Storage Driver
@olivierlambert so are you completely on your own release schedule now, or are you still tied to Citrix version releases? I've used you since the 7.x versions and have had zero issues. I still have one 6.5 host I'm going to migrate a VM off of; it was on local storage and it's quite large, and I didn't want to redo it because of the storage changes from 6 to 7. Then I'll have an all-8.2 pool, after I patch up my other 8.2 hosts and rebuild the 6.5 to 8.2.
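Moving that large VM off local storage into the 8.2 pool can be done with cross-pool storage motion. A hedged sketch of the command, with every value a placeholder (and no claim about which source versions support it; check compatibility for a 6.5 source first):

```shell
#!/bin/sh
# Placeholder values -- VM UUID from `xe vm-list` on the source pool,
# destination SR UUID from `xe sr-list` on the 8.2 pool.
VM_UUID="<vm-uuid>"
DEST_MASTER="<8.2-pool-master-ip>"
DEST_SR="<shared-sr-uuid>"
MIGRATE_CMD="xe vm-migrate uuid=$VM_UUID remote-master=$DEST_MASTER \
  remote-username=root remote-password=<password> \
  destination-sr-uuid=$DEST_SR live=true"
echo "$MIGRATE_CMD"
```

`destination-sr-uuid` is what moves the disks off local storage as part of the migration, rather than just moving the running VM.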
-
RE: CEPH FS Storage Driver
@olivierlambert that's right, and it's up and running using the driver you guys built, taking a good load with no issues. Hats off to you and your team for making this work. I used NFS for exporting off my old Ceph system, but the importing is strictly on your native CephFS driver. On my old setup I was on XCP-ng 7.6 with NFS sitting over CephFS on just a gigabit network; it worked, but with no live movement at all. Now I can live-migrate with no problems, with bonded dual 10 Gb fiber and single 10 Gb fiber to the hosts.
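For anyone following along, attaching a CephFS SR with the native driver looks roughly like this. It's a sketch, not the exact command from this deployment; the monitor address, mount path, CephX user name, and secret-file location are all hypothetical placeholders.

```shell
#!/bin/sh
# Placeholder values -- use a real mon address, the CephFS subdirectory
# you exported, and the secret file for your CephX client.
MON="10.10.0.1"
SR_CMD="xe sr-create type=cephfs name-label=cephfs-sr shared=true \
  device-config:server=$MON \
  device-config:serverpath=/xcp-sr \
  device-config:options=name=xcp,secretfile=/etc/ceph/xcp.secret"
echo "$SR_CMD"
```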