XCP-ng

    TS79

    @TS79

    Reputation: 10 · Profile views: 9 · Posts: 35 · Followers: 0 · Following: 0
    Location: Hertfordshire, UK


    Best posts made by TS79

    • RE: How to do Simple Backup to Local USB Drive?

      EDIT: DISCLAIMER: I'm using the below config in a home lab, using spare parts, and where there are no financial or service-level consequences if it all burns down.

      @TechGrips From one of your posts I see your XO is separate from the XCP-ng host (running on VirtualBox). I tried something similar, and went through some of the same pains as you when it came to backup remote (BR) setups. In the end, I used a spare mini-PC running Ubuntu, installed XO from sources, attached a USB drive formatted as EXT4, mounted it (/mnt/USB1 or something) and used that as a 'Local BR'.

      I then noticed that I sometimes saw 0 bytes free - it turned out the USB drive was going to sleep. Rather than spend time on proper power management, I wrote a cron job to simply write the date to a text file on /mnt/USB1 every 5 minutes... Lazy, but it kept the USB drive awake and XO backups worked great.
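
      For reference, the keepawake is trivial - roughly this, run from cron every 5 minutes (a sketch assuming the /mnt/USB1 path above; my actual version was a shell one-liner):

      # keepawake.py - run from cron every 5 minutes so the USB drive never idles to sleep
      from datetime import datetime

      # Overwrite a small file on the mount; the write activity keeps the drive awake
      with open("/mnt/USB1/keepawake.txt", "w") as f:
          f.write(datetime.now().isoformat() + "\n")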

      The biggest risk is that, if the USB drive disconnects / fails / unmounts, the /mnt/USB1 folder still exists on the root filesystem, and could fill up if a backup job consumes all the free space - so definitely look into controls for that (perhaps quotas, or something more intelligent than my lazy keepawake method) 😂
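
      If I ever tidy that up, the guard would look something along these lines (a hypothetical sketch - the path is from above, but the script name and free-space threshold are made up):

      # br_guard.py - refuse to run if the BR isn't really mounted or is low on space,
      # so a backup can't silently fill the root filesystem via /mnt/USB1
      import os
      import shutil
      import sys

      MOUNT = "/mnt/USB1"
      MIN_FREE_GB = 50  # arbitrary safety margin

      if not os.path.ismount(MOUNT):
          sys.exit(f"{MOUNT} is not a mount point - aborting")

      free_gb = shutil.disk_usage(MOUNT).free / 1024**3
      if free_gb < MIN_FREE_GB:
          sys.exit(f"only {free_gb:.1f}GB free on {MOUNT}, below the {MIN_FREE_GB}GB margin")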

      Side note: please remember that many people on this forum speak languages other than English, so sometimes their written messages don't carry the full context or intention. Sorry to see you've felt condescended to. I've found it helps me stay positive to assume that people's intentions are to help, even if their message seems blunt or odd, or if they're asking questions that seem to blame (they're probably just trying to get more info to help). We're all here to help each other, and I've only had good, helpful service from the Vates team (@danp specifically) to date.

      posted in Backup
    • RE: Please review - XCP-ng Reference Architecture

      @nikade Thanks again for your input, much appreciated.

      posted in Share your setup!
    • RE: Please review - XCP-ng Reference Architecture

      @olivierlambert Thank you - all makes sense

      posted in Share your setup!
    • RE: Please review - XCP-ng Reference Architecture

      @nikade Thanks for your comments and thoughts. We're repurposing existing HP DL380 servers for the hosts, and were going to try to repurpose our Nimble AF40 arrays, but they only do iSCSI, which means thick provisioning, which creates a capacity challenge for us (some of our VMs were provisioned with 2-4TB virtual disks but only use 100-300GB... so recreating smaller disks and cloning the data across would be tedious but necessary).

      TrueNAS is my 'gold prize', assuming it provides enough uptime and performance. Our IOPS and throughput requirements aren't huge; they only exceed 500MB/sec and a few thousand IOPS during backup jobs.

      Replicating XOA is definitely a 'default'. But from my lab tests, redeploying and restoring the config is quick too, so I'm not too fussed about 'losing' XOA. I'd back up the config to on-premises 'remotes' and to cloud-based object storage.

      Much appreciate your time and feedback, thank you!

      posted in Share your setup!
    • RE: n100 based hosts? AMD 5800h?

      Hi @Greg_E. I've set up a few homelabs with XCP-ng using older and newer mini PCs, so thought I'd share some of my experiences.
      First pass, I used Lenovo Tiny M710q PCs, bought for around £100 each on eBay. They had either the i5-6400T or i5-6500T processor. I added 32GB of Crucial RAM, added the SATA drive tray for a boot drive, and added a 1TB NVMe in each for storage. Since I don't use Wi-Fi on these, I removed the M.2 Wi-Fi card and added a cheap 2.5GbE NIC (https://www.amazon.co.uk/gp/product/B09YG8J7BP).
      XCP-ng 8.2.1 works perfectly, no customisation or challenges. I did see the exact same storage performance trends as you, and see that @CJ has already correctly pointed out the limitation in the current storage API (SMAPIv1).

      I've also built a homelab with the Trigkey G5 N100 mini PCs. Again, XCP-ng 8.2.1 works perfectly on the N100's four E-cores. This G5 model has dual 2.5GbE NICs, which is perfect for giving VMs a 2.5GbE link to the world and a separate 2.5GbE link for the host to use for storage. Be aware, if you split networking this way, Xen Orchestra needs to be present on both networks (management to talk to the XCP-ng hosts over HTTPS, and storage to talk to NFS and/or CIFS for backups/replication).

      I've not measured the power draw much, but typically the Lenovos use around 15-25W, and the Trigkey G5s about 10-18W. Fan noise on both is very low - I have them on a shelf at my desk, so I sit next to them all day. My daily driver is a dead-silent Mac Mini M2, so I'm very aware of surrounding noise, and there's nearly none.

      The only challenge I had with the N100 was that Windows VMs seemed to think they only had a clock speed of 800MHz, so performance was poor. I didn't get around to trying any BIOS performance settings to force higher clock speeds: in my view that would mean extra power draw, unwanted heat, and more fan noise.

      If you build a homelab with 3 XCP-ng hosts, slap a 1TB NVMe in each and trial XOSTOR as an alternative to network shared storage. In my case, I went down to running my workloads on a single Lenovo M710q with local NVMe storage. Xen Orchestra (a VM on the Lenovo) backs up and replicates the VMs to an NFS host (another Trigkey G5 with Ubuntu Server, a 4TB NVMe, and Ubuntu-native NFS).

      Typical network performance during backups / DR is around 150-200MB/sec on the 2.5GbE.

      Hope that helps!

      posted in Hardware
    • RE: Introduce yourself!

      Hi. I'm a cloud solutions architect, with around 25 years of working experience in servers, storage, networking (your typical infrastructure stuff) and about 20 years of virtualisation. I started up a homelab many years ago, and through (too) many evolutions, I've ended up with Lenovo M710q mini PCs running XCP-ng, with another mini PC providing NFS storage (with backup and replication to cater for problems and failures).

      Absolutely love XCP-ng and am promoting it wherever I can. I've architected and kicked off a project at my employer to replace VMware with XCP-ng, so I'm keen to use the forum to read other people's real-world experiences with storage and host specs, hurdles to avoid, and any tips & tricks.

      Looking forward to interacting with the community more and more.

      posted in Off topic

    Latest posts made by TS79

    • RE: Recommended CPU & RAM for physical hypervisor node

      @yzgulec As people have rightly said above, there's no hard and fast rule. But when it comes to sizing compute for mixed (or unknown) workloads, I have used a 1:4 (pCPU:vCPU) contention ratio.

      I also try to size 2GB of memory per vCPU (so 8GB RAM per pCPU), and with this 'rough guide' I find that my CPU and memory consumption scale nicely.
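
      To make the arithmetic concrete, here's the rough guide as a quick sketch (the 16-core host is just an example, not a recommendation):

      # sizing.py - 1:4 pCPU:vCPU contention, 2GB RAM per vCPU (so 8GB per pCPU)
      def size_host(physical_cores: int) -> tuple[int, int]:
          vcpus = physical_cores * 4   # 1:4 contention ratio
          ram_gb = vcpus * 2           # 2GB of memory per vCPU
          return vcpus, ram_gb

      print(size_host(16))  # (64, 128): a 16-core host carries ~64 vCPUs and wants ~128GB RAM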

      Obviously it gets difficult when someone has a workload that needs very low CPU but high memory (e.g. I've seen a 2-core VM with 64GB memory) and conversely high CPU and low relative memory (e.g. 8-core VM with 4GB RAM)... but these are typically exceptions.

      Please share your decisions and build configs here, I'd be interested to see how it worked out in terms of compute:memory, costs, etc.

      posted in Hardware
    • RE: Question on NDB backups

      @archw

      Hi. The only input I have is around your first question.

      When anything stores multiple smaller files (especially thousands of files per GB of data), you will have filesystem overhead and 'wasted space' on your storage device (backup target). A lot of additional capacity can be used up, and if you're already near capacity, that overhead could become a big problem.

      A dumbed-down example using round numbers:

      • On a filesystem with a 4K block size, a 1-byte file (e.g. a TXT file with the letter "a" in it) will consume 4K of disk capacity.
      • Scale this up to a thousand files and you are consuming 4,000,000 bytes of disk capacity for only 1,000 bytes of data.

      Also, if you are using any other apps/utils that scan, monitor, or sync at the filesystem level (for example a sync tool, anti-malware, or checksums), they will need to process many thousands of files instead of just a hundred or so, which adds latency.

      Again, it depends on scale, so another round-number example:

      • Assume an app/util needs 200 milliseconds to open and close each file per operation.
      • If you have 100 files, you have 20 seconds of 'wait time'.
      • If you have 1,000,000 files, you are looking at about 55 hours of 'wait time'.

      Not a very realistic example, but just something to be aware of when you explode data into many, many smaller file containers.
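
      If you want to play with the numbers yourself, both round-number examples boil down to a couple of lines (same assumptions as above, nothing more):

      # small_file_overhead.py - the two round-number examples from this post
      BLOCK = 4_000                          # '4K' block size, kept as a round number
      print(1_000 * BLOCK, "bytes on disk")  # 4,000,000 bytes consumed for 1,000 bytes of data

      PER_FILE_S = 0.2                               # 200ms to open/close each file
      print(100 * PER_FILE_S, "seconds")             # 100 files -> 20 seconds of 'wait time'
      print(1_000_000 * PER_FILE_S / 3600, "hours")  # 1,000,000 files -> ~55 hours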

      posted in Backup
    • RE: All drop-down options are empty

      @Andrew said in All drop-down options are empty:

      XO install video

      Thanks Andrew - 100% agree 🙂
      Tom's video is the exact one that introduced me to the ronivay script, which is what I've used for all XO-from-source installs without problems.

      posted in Xen Orchestra
    • RE: Question on Mirror backups

      @manilx Sorry, I haven’t played around with sequences and retention yet, so I can’t help… Still learning 🙂 But I’ll try to play with it over the weekend, and if I find an answer I’ll post it here.

      posted in Backup
    • RE: Question on Mirror backups

      @manilx Hi - I can answer this quickly, as I was just helping someone else with mirror backups. Always check the official doc 🙂 But yes, you will need to create one mirror job for replicating full backups, and another mirror job for replicating incremental backups.

      Official tip (straight from the doc) is:
      If you have full and incremental backups on a remote, you must configure 2 mirror backup jobs, one full and one incremental.

      Hope that helps!

      posted in Backup
    • RE: Backup failed with "Body Timeout Error"

      @archw Actually I think XO will clean up a failed backup - I’ve had a few failures but haven’t noticed any ‘junk’ backup clutter on my remote.

      posted in Backup
    • RE: Backup failed with "Body Timeout Error"

      @archw That’s a good question - unfortunately I don’t know the answers, as I’ve not used this mirror job before and the XO documentation doesn’t explain it in a lot of technical detail. But definitely have a read of the mirror backup doc.

      I’ll try to set up a mirror job over the weekend to test how the data flows, and what impact failed jobs would have. I assume all data will flow via the XO appliance and the NFS remotes. Which is good, as it removes load from the XCP-ng hosts and their NICs (assuming the XO appliance is installed outside that pool).

      The positive aspect is that, even if the mirror fails AND somehow corrupts the remote-2 copy, you’ve still got the remote-1 healthy copy, and the actual source VM (so 2 healthy copies of your data).

      Retrying a mirror job is much easier and lower impact than rerunning the actual VM backup 🙂

      posted in Backup
    • RE: Backup failed with "Body Timeout Error"

      @archw One suggestion would be to break your backup jobs down into smaller units.

      For example, set your existing backup jobs to write to only one of the two NFS remotes. This could reduce runtime and hopefully avoid any timeouts.

      Then set up 'mirror' jobs to copy the backups from the first remote to the second remote. This way, your backup job can hopefully complete sooner, and you can optimise the traffic flow between hosts & remotes.

      Example of the Mirror backup job below (please note that if you use incremental backups, you'll likely need a job to mirror Full backups, and another job to mirror Incremental backups). I don't use this feature myself, as my homelab isn't critical.

      [screenshot: Mirror backup job configuration in XO]

      posted in Backup
    • RE: Backup failed with "Body Timeout Error"

      @archw can you share how large this VM's virtual disks are? It could be that the backup is timing out because of how long that specific VM/disk is taking to complete.

      You mentioned NFS targets for the backup remote: what network throughput and NFS storage throughput do you have?

      As a very rough indicator, I back up around 50GB of VMs, stored on a host NVMe storage repository, over 2.5GbE to a TrueNAS NFS target, and that takes around 10-12 minutes.
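
      For what it's worth, that runtime reconciles with the link speed - a trivial sketch of the arithmetic (the throughput figure is just the midpoint I observe, not anything from XO):

      # backup_eta.py - naive duration estimate: data size / sustained throughput
      size_gb = 50           # total VM data in the job
      throughput_mb_s = 75   # sustained MB/sec to the NFS remote (2.5GbE tops out around 280MB/sec)
      minutes = size_gb * 1024 / throughput_mb_s / 60
      print(f"~{minutes:.0f} minutes")  # ~11 minutes, matching the 10-12 I see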

      Do you have a timeout set in the XO backup job's config? There is a field for that. If that's empty, then perhaps Vates can confirm if there's a default 4-hour timeout built into XO's code (sorry I can't check myself, I'm not code- or API-savvy).

      posted in Backup
    • RE: All drop-down options are empty

      @Andrew @JamfoFL - another fan of the ronivay script here, as it offers a rollback feature.

      I noticed the same 'empty dropdown' problem; for me it was while trying to create a new backup job. I ran the script and rolled back, which fixed the issue and allowed me to complete the backup job.

      I waited about a week, noticed that the number of commits behind had increased, then updated again. This time it all went through fine.

      posted in Xen Orchestra