Switching to XCP-NG, want to hear your problems
-
@nikade
For the NFS thing, how's your performance compared to just iSCSI or whatever else you have underneath NFS. We thought it sounded a bit overcomplicated to put NFS servers in front of storage for that many VMs and that critical an infrastructure piece, and were concerned about performance. -
@crazyadm1n It's pretty good; I'm not able to max it out with XCP-ng on either iSCSI or NFS, so it doesn't really matter.
The PowerStore supports a native NFS server, so it is really easy to set up a new filesystem with an NFS server; it is actually easier to set up than iSCSI with multipathing. -
I struggled a bit with network setup. It's super easy to set up a bond, but you have to remove it again to join hosts into a pool. This was a bit challenging for us, as the Juniper switches in our co-location don't support LACP fallback and we don't have direct switch access. There was also an issue where LACP was working in 8.2 but not in 8.3; it was fixed with a firmware update.
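For anyone hitting the same wall: once the hosts have joined the pool, the bond can be recreated from the CLI on the pool master. A rough sketch, assuming two NICs on eth0/eth1 (the UUIDs are placeholders you'd fill in from the list commands; verify against your own environment):

```shell
# Create the network that will sit on top of the bond
xe network-create name-label=bond0

# Find the PIF UUIDs of the physical NICs to enslave
xe pif-list device=eth0 params=uuid
xe pif-list device=eth1 params=uuid

# Create the bond in LACP mode (the switch ports must be configured to match)
xe bond-create network-uuid=<network-uuid> pif-uuids=<pif-uuid-1>,<pif-uuid-2> mode=lacp
```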
We come from a setup with multiple 2-node vSAN clusters in 3 sites with vSphere Replication for DR, running more Windows VMs than anything else. XCP-ng with XOSTOR and the built-in backup was a 1:1 replacement for us. We are very happy with how everything turned out. The VMware migration worked fine as well. Support is great too, and much more responsive compared to what we had at VMware.
-
Yeah some setups are easy to migrate, some are not. But I agree with the support, Vates is top class and it seems like they really understand that response time and quality is key.
VMware support is slow, and sometimes I wonder if they even know English or if they are using some Google Translate... -
@crazyadm1n Over the past few months, we transitioned from VMware to XCP. It was a significant project with its challenges, but overall, I'm very satisfied with the move. We didn't utilize many complex VMware features, mainly using iSCSI storage. Within XCP, we tested both NFS and iSCSI, ultimately experiencing much better performance with iSCSI. There are still some improvements to be made in backup, but Vates is working hard on this. I agree with others that Vates' support makes a big difference—they respond quickly and provide the necessary partner support. So far, we are very pleased with the transition. If you have specific questions or issues regarding Windows VMs, let me know; we have gained considerable experience recently.
-
I moved four sites from ESXi to XCP-ng (there are about twenty-ish virtual machines).
Once I figured out how to make XCP-ng work, it was relatively easy. I began by installing it on an unused old Dell.
My comments are from the standpoint of a general contractor (construction) who also does IT work, so take some of my terminology with a grain (boulder) of salt.
Things that gave me some pause:
-
Figuring out how XOA works vs XO was somewhat confusing. I ended up watching two of Tom's videos at Lawrence Systems (shown below) to get me started.
https://lawrence.technology/xcp-ng-and-xen-orchestra-tutorials/
https://lawrence.technology/virtualization/getting-started-tutorial-building-an-open-source-xcp-ng-8-xen-orchestra-virtualization-lab/ -
NIC failover - this was much easier in ESXi. It took me a night to figure out how to do the bonding thing.
-
The whole "NIC2 has to be the same NIC2 on every machine" thing was a pain in the a##. Again, the way ESXi does it is easier.
-
Figuring out the proper terminology to properly create a local repository:
Find the disk ID of the "sdb" or "cciss/c0d1" disk:
ll /dev/disk/by-id
Use gdisk to create partitions, then create the SR:
xe sr-create host-uuid=c691140b-966e-43b1-8022-1d1e05081b5b content-type=user name-label="Local EXT4 SR-SSD1" shared=false device-config:device=/dev/disk/by-id/scsi-364cd98f07c7ef8002d2c3c86296c4242-part1 type=ext
-
Expanding an existing drive (i.e. after you grow a RAID array) was tough (I have a post on this site that shows how I did it).
-
Moving a VM from ESXi to XCP-ng was just long, and a few vomited in the process and had to be re-done. In some cases I used the built-in XCP-ng migration and, in others (the huge VMs), I figured out how to do it via Clonezilla (much, much faster once I got the hang of it).
-
Having to shut down a running VM to increase the disk size is a bit of a PITA, but it's not that big of a deal.
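For what it's worth, the resize itself is one command once the VM is halted; a hedged sketch (the UUID and size are placeholders):

```shell
# With the VM shut down, grow the virtual disk to 100 GiB
xe vdi-resize uuid=<vdi-uuid> disk-size=100GiB

# The filesystem inside the guest still has to be grown separately after boot
```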
-
Over-committing memory... I still don't have a great grasp on that one.
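From what I gather, overcommit in XCP-ng is driven by the gap between a VM's dynamic and static memory limits; a hedged sketch (the VM name and sizes are made up):

```shell
# Let the VM balloon between 2 GiB and 8 GiB of RAM
# (constraint: static-min <= dynamic-min <= dynamic-max <= static-max)
xe vm-memory-limits-set vm=my-vm \
  static-min=2GiB dynamic-min=2GiB dynamic-max=8GiB static-max=8GiB
```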
Before I made the move, I did a ton of speed tests of ESXi vs XCP-ng. About 60% were slightly faster on ESXi and 40% were faster on XCP-ng. In the end, the differences were negligible.
With all that said, I think XCP-ng is much easier to use than ESXi and I like it better. vCenter seemed to last about six months and then always died and had to be rebuilt (and the restore utility was about as reliable as gas station sushi). XOA, on the other hand, always works and is much faster than vCenter.
The backup is awesome. With ESXi I was using Nakivo.
Just my two cents!
-
-
@archw The network issues might be a deal breaker for us. It's comical how there's no automation or even well-written instructions on how to "match" network interfaces. To say that "in order to have a pool, all the host networks must be identical" is just insane. We don't have identical hosts or network cards throughout our infrastructure, because we upgrade it over the years. I've been trying in a test environment to re-order and match up network interfaces, but it's proving impossible. I don't see how anyone could use this product outside of a home lab scenario, which also makes me believe that's where it's used most.
-
@fakenuze we had the same challenge but managed to take care of it by changing the NIC names.
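Roughly, the renaming is done by forgetting the PIF and re-introducing it under the device name you want; a hedged sketch (UUIDs, MAC, and device names are placeholders, and the host may need a reboot afterwards):

```shell
# Note the current device-to-MAC mapping
xe pif-list params=uuid,device,MAC,host-name-label

# Forget the PIF that is in the wrong slot...
xe pif-forget uuid=<pif-uuid>

# ...then re-introduce the same MAC under the desired device name
xe pif-introduce host-uuid=<host-uuid> mac=aa:bb:cc:dd:ee:ff device=eth1
```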
Check out this article
https://support.citrix.com/s/article/CTX135809-how-to-change-order-of-nics-in-xenserver?language=en_US -
XCP-ng support/licensing ended up being too expensive for us to justify. We would have been paying about the same amount as we were paying for VMware before the price hike.
To get the multi-year pricing you have to pay for all those years up front. VMware did not have the same restriction pre-price-hike.
-
In the FWIW department, I've now done four sites. The network thing I mentioned above took about an hour to figure out and was a bit confusing. Once I did one site (and made a little cheat sheet on how I did it), the rest were easy.
Arch
-
Our three main problems are:
snapshots snapshots snapshots
Just a pain on a thick LVM iSCSI SR.
-
@rfx77 said in Switching to XCP-NG, want to hear your problems:
Our three main problems are:
snapshots snapshots snapshots
Just a pain on a thick LVM iSCSI SR.
We switched from iSCSI to NFS and never looked back; the performance is pretty good, we get thin provisioning, snapshots coalesce pretty fast, and life is good.
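For reference, creating an NFS SR is a single command with no multipath tuning; a hedged sketch (the server address, export path, and NFS version are assumptions for illustration):

```shell
xe sr-create type=nfs shared=true content-type=user name-label="NFS SR" \
  device-config:server=192.168.1.50 \
  device-config:serverpath=/export/xcp \
  device-config:nfsversion=4.1
```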
The setup is rather easy as well, as there is no need to tweak XCP-ng's multipathing configuration. -
@rfx77 said in Switching to XCP-NG, want to hear your problems:
Our three main problems are:
snapshots snapshots snapshots
Just a pain on a thick LVM iSCSI SR.
Thin-provisioned SRs are recommended for a reason: they give better performance and also take up less storage space. You can get really good performance using NFS 4.0 or higher (best with NFS 4.2, using pNFS to its fullest extent).
With an all-flash storage target using NFS 4.2 and pNFS to its fullest extent, you'll be able to benefit from features that were developed specifically to be paired with SSD storage.
-
Also CBT might help to reduce the coalesce work needed in general.
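CBT is toggled per VDI from the CLI; a hedged sketch (the UUID is a placeholder):

```shell
# Enable changed block tracking so incremental backups only read changed blocks
xe vdi-enable-cbt uuid=<vdi-uuid>

# Confirm it took effect
xe vdi-param-get uuid=<vdi-uuid> param-name=cbt-enabled
```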
-
@john-c
NFS is no solution compared to fast iSCSI storage. To get into the performance range of our iSCSI FlashSystem you have to buy NetApp, which costs three times as much. If you have to pay that much for storage to get the same features as before, you can stay on VMware.
-
@rfx77 said in Switching to XCP-NG, want to hear your problems:
@john-c
NFS is no solution compared to fast iSCSI storage. To get into the performance range of our iSCSI FlashSystem you have to buy NetApp, which costs three times as much. If you have to pay that much for storage to get the same features as before, you can stay on VMware.
Does your current storage support NFS?
If yes, you should give it a try and at least benchmark it; maybe you'll be surprised. -
@rfx77 Use CBT backups and there are no more coalesce and snapshot issues.
-
@nikade No, our current storage does not support NFS. But which enterprise-grade storage besides top-end NetApp and Dell PowerStore really does?
-
CBT is not supported with CommVault.
CBT is beta in XO. Other backup vendors cannot compete when you have a broad spectrum of agent needs.
-
@rfx77 said in Switching to XCP-NG, want to hear your problems:
@nikade No, our current storage does not support NFS. But which enterprise-grade storage besides top-end NetApp and Dell PowerStore really does?
The TrueNAS storage products: either the physical TrueNAS hardware from iXsystems, or TrueNAS Scale downloaded and installed on a server of your own choosing. The recent release of TrueNAS Scale 24.04.2 is likely to do really well as an SR for XCP-ng. Its performance is second to none, it protects data integrity, and it can do both iSCSI and NFS.
https://www.truenas.com/truenas-scale/
https://www.truenas.com/f-series/
https://www.truenas.com/m-series/
https://www.truenas.com/r-series/