Best posts made by TS79
-
RE: Please review - XCP-ng Reference Architecture
@nikade Thanks again for your input, much appreciated.
-
RE: Please review - XCP-ng Reference Architecture
@olivierlambert Thank you - all makes sense
-
RE: Please review - XCP-ng Reference Architecture
@nikade Thanks for your comments and thoughts. We're repurposing existing HP DL380 servers for the hosts, and I was going to try to repurpose our Nimble AF40 arrays as well, but they only do iSCSI, which means thick provisioning and therefore a capacity challenge for us (some of our VMs have been provisioned with 2-4TB virtual disks but are only using 100-300GB, so recreating smaller disks and cloning the data over would be tedious but necessary).
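To put that provisioning gap into rough numbers, here's a quick illustration; the VM sizes below are made-up placeholders in the same ballpark as ours, not our real inventory:
```python
# Rough illustration of the thick-vs-thin provisioning gap on an iSCSI (LVM) SR.
# The disk sizes below are hypothetical examples, not a real VM inventory.
vms = [
    {"name": "vm-a", "provisioned_gb": 4096, "used_gb": 300},
    {"name": "vm-b", "provisioned_gb": 2048, "used_gb": 150},
    {"name": "vm-c", "provisioned_gb": 2048, "used_gb": 100},
]

thick_gb = sum(vm["provisioned_gb"] for vm in vms)  # what thick provisioning consumes on the array
thin_gb = sum(vm["used_gb"] for vm in vms)          # roughly what a thin-provisioned SR would consume

print(f"Thick provisioned: {thick_gb / 1024:.1f} TB")
print(f"Actually used:     {thin_gb / 1024:.2f} TB")
print(f"Capacity tied up:  {(thick_gb - thin_gb) / 1024:.1f} TB")
```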
TrueNAS is my 'gold prize', assuming it provides enough uptime and performance. Our IOPS and throughput requirements aren't huge; we only go above 500MB/sec and a few thousand IOPS during backup jobs.
Replicating XOA is definitely a 'default', but from my lab tests, redeploying it and restoring the config is quick to do, so I'm not too fussed about 'losing' XOA. I'd back up the config to on-premises 'remotes' and to cloud-based object storage.
Much appreciate your time and feedback, thank you!
-
RE: n100 based hosts? AMD 5800h?
Hi @Greg_E. I've set up a few homelabs with XCP-ng using older and newer mini PCs, so thought I'd share some of my experiences.
First pass, I used Lenovo Tiny M710q PCs, bought for around £100 each on eBay. They had either the i5-6400T or i5-6500T processor. I added 32GB of Crucial RAM, fitted the SATA drive tray for a boot drive, and put a 1TB NVMe in each for storage. Since I don't use WiFi on these, I removed the M.2 WiFi card and added a cheap 2.5GbE NIC (https://www.amazon.co.uk/gp/product/B09YG8J7BP).
XCP-ng 8.2.1 works perfectly, no customisation or challenges. I did see the exact same storage performance trends as you, and I see that @CJ has already correctly pointed out the limitation in the current storage API (SMAPIv1).
I've also built a homelab with the Trigkey G5 N100 mini PCs. Again, XCP-ng 8.2.1 works perfectly on the four E-cores of the N100. This G5 model has dual 2.5GbE NICs, which is perfect for giving VMs a 2.5GbE link to the world and a separate 2.5GbE link for the host to use for storage. Be aware that if you split networking this way, Xen Orchestra needs to be present on both networks (management to talk to the XCP-ng hosts over HTTPS, and storage to talk to NFS and/or CIFS for backups/replication).
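To make the "XO on both networks" point concrete, here's a minimal reachability sketch you could run from the Xen Orchestra VM. The IP addresses are made-up examples; port 443 is the hosts' HTTPS management interface and 2049 is NFS:
```python
import socket

# Hypothetical addresses for illustration only - substitute your own.
CHECKS = [
    ("XCP-ng host (management network, HTTPS)", "192.168.10.11", 443),
    ("NFS backup target (storage network)", "192.168.20.50", 2049),
]

def reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Run from the Xen Orchestra VM: both checks need to pass, otherwise either
# host management or the backup remote will be unreachable.
for label, host, port in CHECKS:
    status = "OK" if reachable(host, port) else "UNREACHABLE"
    print(f"{label:45s} {host}:{port}  {status}")
```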
I've not measured the power draw much, but typically the Lenovos use around 15-25W and the Trigkey G5s about 10-18W. Fan noise on both is very low - I have them on a shelf in my desk, so I sit next to them all day. My daily driver is a dead-silent Mac Mini M2, so I'm very aware of surrounding noise, and there's nearly none.
The only challenge I had with the N100 was that Windows VMs seemed to think they only had a clock speed of 800MHz, so performance was poor. I didn't get around to trying any performance settings in the BIOS to force higher clock speeds: in my view that would just mean more power draw, unwanted heat and extra fan noise.
If you build a homelab with 3 XCP-ng hosts, slap a 1TB NVMe in each and trial XOSTOR as an alternative to network shared storage. In my case, I went down to running my workloads on a single Lenovo M710q, stored locally on NVMe. Xen Orchestra (a VM on that Lenovo) backs up and replicates VMs to an NFS host (another Trigkey G5 with a 4TB NVMe, running Ubuntu Server and its native NFS server).
Typical network performance during backups / DR is around 150-200MB/sec on the 2.5GbE.
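For context, a quick back-of-the-envelope comparison against the 2.5GbE line rate (rough numbers, ignoring protocol overhead details):
```python
# Back-of-the-envelope: observed backup throughput vs the 2.5GbE line rate.
link_gbps = 2.5                      # 2.5GbE line rate, gigabits per second
raw_mb_per_s = link_gbps * 1000 / 8  # ~312.5 MB/s before any protocol overhead
observed_mb_per_s = (150, 200)       # what I typically see during backup/DR jobs

for mbps in observed_mb_per_s:
    print(f"{mbps} MB/s is about {mbps / raw_mb_per_s:.0%} of the raw 2.5GbE line rate")
```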
Hope that helps!
-
RE: Introduce yourself!
Hi. I'm a cloud solutions architect with around 25 years of experience in servers, storage and networking (your typical infrastructure stuff), and about 20 years in virtualisation. I started a homelab many years ago and, through (too) many evolutions, I've ended up with Lenovo M710q mini PCs running XCP-ng, with another mini PC providing NFS storage (with backup and replication to cater for problems and failures).
Absolutely love XCP-ng and am promoting it wherever I can. I've architected and kicked off a project at my employer to replace VMware with XCP-ng, so I'm keen to use the forum to read other people's real-world experiences with storage and host specs, hurdles to avoid, and any tips & tricks.
Looking forward to interacting with the community more and more.