    XCP-ng 7.5 -> 8.3: Best VM Migration Path?

    • Laytman

      I have a 3-node cluster running an old version: XCP-ng 7.5.0 with Xen Orchestra built from source (commit d217c; master, commit a4ab2).

      We bought a new server that should replace all three old hosts. It is installed with XCP-ng 8.3 and XO (commit 2592f; master, commit a4ab2) and configured as a separate standalone host: the software versions are not compatible, so I can't simply add it to the current cluster. Installing the old XCP-ng version first, joining the cluster, and then upgrading doesn't work either, because the old version doesn't support the new server's storage controller.

      On the old cluster there are some local storage repositories, and a few VMs use them, but most VM disks are stored on an attached NFS storage (a Dell PowerEdge T420). The goal is to gradually and carefully migrate VMs from the old cluster to the new server with minimal downtime, because the VMs are in constant use.

      As far as I understand, connecting the same SR is not an option, because the same SR cannot be used by two different clusters/servers at the same time.

      Creating a new NFS export/path is also not an option because there is almost no free space on the external storage.

      The only idea I currently have is to create a new SR on a local RAID1 volume on the new server (8 TB) and migrate VMs to it using snapshots. The old external storage is 6.4 TB. Since some VMs also use local disks on the old cluster, I probably won't be able to migrate 100% of everything this way, but it should cover most of the migration. The new server has a separate volume reserved for the virtualization system itself.
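
      For reference, such a local SR can be created either from the Xen Orchestra UI or with the xe CLI on the new host. A rough sketch, where the host UUID, the SR name and the device path (/dev/md127 for the RAID1 array) are only placeholders for this setup:

          # List hosts to find the new host's UUID
          xe host-list params=uuid,name-label

          # Create a local EXT SR on the RAID1 volume (device path assumed to be /dev/md127)
          xe sr-create host-uuid=<host-uuid> \
              name-label="Local RAID1 SR" \
              type=ext content-type=user shared=false \
              device-config:device=/dev/md127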

      Could you please share advice/experience and suggest the best migration strategy in this situation?

      • olivierlambert (Vates 🪐 Co-Founder & CEO)

        Hi,

        IMHO, the least risky strategy with acceptable downtime is to use warm migration to move everything to the new host, regardless of which storage you use (local or shared).
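
        For comparison, warm migration is driven from the Xen Orchestra UI, so there is nothing to script. The cold path it improves on would look roughly like the following xe sketch (VM UUID, scratch path and target SR UUID are placeholders):

            # Cold migration: stop the VM, export it, import it on the 8.3 host
            xe vm-shutdown uuid=<vm-uuid>
            xe vm-export vm=<vm-uuid> filename=/mnt/scratch/myvm.xva
            # copy the .xva file to the new host, then import it there:
            xe vm-import filename=/mnt/scratch/myvm.xva sr-uuid=<target-sr-uuid>

        Warm migration automates roughly the same copy but keeps the VM running during the bulk transfer and only needs a short shutdown for the final sync, which is where the downtime saving comes from.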

        • Laytman @olivierlambert

          @olivierlambert Thanks for the tip — it’s a very interesting mechanism. I’m going to read the docs now 🙂

          • Laytman

            @olivierlambert thanks for the great advice 👍 And thanks to the developers for such an awesome migration mechanism 🙏 everything went smoothly and quickly 😊

            The only thing I ran into (it might be useful for others) is that the disk name label matters: the [NOBAK] tag in a disk's name label lets you exclude it from backups, which can be handy for cache disks, for example. Warm migration respects this label as well.
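
            For anyone doing the same, the [NOBAK] tag is just part of the VDI's name label, so it can be set from the XO UI or with xe. A small sketch, where the disk name "cache-disk" is only an example:

                # Find the cache disk's VDI, then add [NOBAK] to its name label
                xe vdi-list name-label="cache-disk" params=uuid,name-label
                xe vdi-param-set uuid=<vdi-uuid> name-label="[NOBAK] cache-disk"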

            • olivierlambert (Vates 🪐 Co-Founder & CEO)

              Ah indeed, labels are honored even in the case of a warm migration.

              We used it ourselves when we migrated from Intel to AMD CPUs; it was really the best compromise between live and cold migration (especially when you change the CPU vendor).

              Glad to see it's helping many people and companies now 🙂
