XCP-ng

    NOT_SUPPORTED_DURING_UPGRADE()

    Management
    7 Posts 3 Posters
    • paco

      I am running into the same issue as in this post. But I'm confused as to how one upgrades a cluster of hosts from 8.2.1 to 8.3 without massive downtime. I have 3 hosts, A, B, and C. A is the master. I moved all the workloads off of A, and then upgraded it to 8.3.

      I'd like to move workloads off one of the slaves, so the slave can take as long as necessary to upgrade. The upgrade is not quick.

      The only way to upgrade from 8.2.1 to 8.3 is to boot from the ISO, which is fine. But once a node is upgraded, I can't migrate workloads to it from the non-upgraded nodes. How do I roll this upgrade through the cluster without just taking an entire host and all its workloads offline for 45 minutes while it upgrades?

      I have been able to move workloads from old to new by shutting down a VM on an old node, using the copy function in Xen Orchestra to copy it to the upgraded master, and then booting the new copy. But that takes the VM offline for the duration of the copy. A few of my VMs can tolerate that, but not many.

      What am I missing?
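      For reference, the stop-copy-boot workaround described above can also be done with xe instead of the Xen Orchestra copy function. A minimal sketch, with placeholder UUIDs and a hypothetical VM name; the VM stays down for the whole copy:

```shell
# Offline copy of a VM to an SR reachable from the upgraded host.
# <vm-uuid>, <target-sr-uuid> and <new-vm-uuid> are placeholders.
xe vm-shutdown uuid=<vm-uuid>
xe vm-copy vm=<vm-uuid> new-name-label="my-vm-copy" sr-uuid=<target-sr-uuid>

# vm-copy prints the new VM's UUID; start the copy on the new host:
xe vm-start uuid=<new-vm-uuid>
```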

    • olivierlambert (Vates 🪐 Co-Founder & CEO)

        The upgrade process doesn't involve downtime.

        1. Start with the master: evacuate all its VMs to the slaves.
        2. Upgrade the master to 8.3 via the ISO.
        3. When done, target one slave: move all of its VMs elsewhere (including to the up-to-date master), and repeat the process.

        In short, you always migrate VMs BEFORE upgrading a node, and you always start with the master. The order of the slaves doesn't matter.
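        The rolling sequence above can be sketched with xe. This is a minimal sketch: `xe host-evacuate` live-migrates every resident VM off a host and assumes the remaining hosts can accept them; the UUIDs below are placeholders.

```shell
# 1. Evacuate the master (disable it first so no new VMs land on it).
MASTER_UUID=$(xe pool-list params=master --minimal)
xe host-disable uuid="$MASTER_UUID"
xe host-evacuate uuid="$MASTER_UUID"
# ...boot the master from the 8.3 ISO and upgrade it, then:
xe host-enable uuid="$MASTER_UUID"

# 2./3. Repeat for each slave in turn (<slave-uuid> is a placeholder):
xe host-disable uuid=<slave-uuid>
xe host-evacuate uuid=<slave-uuid>
# ...upgrade the slave from the ISO, then:
xe host-enable uuid=<slave-uuid>
```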

      • paco @olivierlambert

          @olivierlambert Are you saying that I will be able to move from an 8.2.1 slave to an 8.3 slave, but I can't move from an 8.2.1 slave to the 8.3 master?

          My context (I was too brief): I upgraded the pool master to 8.3. The pool is up and mostly seems normal. When I try to move a VM from an 8.2.1 slave in the same pool to the 8.3 master, I get this NOT_SUPPORTED_DURING_UPGRADE() error. I'm clicking migrate in Xen Orchestra on a stopped or a running VM on an 8.2.1 slave, targeting the 8.3 master. If I could migrate to the master, I would be fine. Any 2 of my 3 hosts can run everything while the third does its upgrade.

          Maybe something is bugged in my setup. It sounds like this is unexpected.

          My Xen Orchestra is built from the open-source sources: Xen Orchestra commit 88b88, master commit 1640a.
          The master XCP-ng host is running 8.3.0, fully patched as of yesterday.
          Both slave XCP-ng hosts are running 8.2.1, fully patched.

          What should I check?

        • paco @paco

            One more bit of data: it might be that one specific host has a problem. Because I'm changing racks, I was only trying to evacuate that one host, and it's the one that is failing. The other 8.2.1 slave in the cluster can migrate to the 8.3 master just fine. If B (8.2.1) => A (8.3) and C (8.2.1) => B (8.2.1), then C is empty and can be upgraded. It's convoluted, but if it works, that's fine. I'll know in a couple of hours whether this at least gives me a path forward.

          • olivierlambert (Vates 🪐 Co-Founder & CEO)

              Live migration should work from old to new: as soon as your master is on 8.3, if you have a slave on 8.2.1 for example, you can evacuate VMs from that host to any other host.

              So you should be able to live migrate all VMs from a slave on 8.2.1 toward the master on 8.3.

              NOT_SUPPORTED_DURING_UPGRADE() means there's something else in play; I would try to live migrate with the xe CLI to get more details on the issue.
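              For reference, a minimal intra-pool live migration with the xe CLI looks like this (VM and host UUIDs are placeholders, and "my-vm"/"A" are hypothetical names); xe prints the full failure reason rather than the condensed error shown in Xen Orchestra:

```shell
# Look up the UUIDs first:
xe vm-list name-label="my-vm" params=uuid --minimal
xe host-list name-label="A" params=uuid --minimal

# Live-migrate the VM to the target host in the same pool:
xe vm-migrate uuid=<vm-uuid> host-uuid=<host-uuid> live=true
```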

            • paco @olivierlambert

                @olivierlambert Thanks.

                I found this Citrix page ("Live migration within a pool that doesn't have shared storage by using the xe CLI") that seems to correspond to what I'm doing (the two hosts have no shared storage).

                I ran:
                xe vm-migrate uuid=00b4cf39-f954-6ab3-9977-c4c2809c5324 remote-master=<A> remote-username=root remote-password='stuff' host-uuid=<A's uuid>

                I got the following results:

                Performing a storage live migration. Your VM's VDIs will be migrated with the VM.
                
                Will migrate to remote host: A, using remote network: internal. Here is the VDI mapping:
                VDI 8e8a2679-cf0d-44c1-a3dd-f69edc82d849 -> SR 5bb37e13-61d7-69b3-7de3-091a7866c4d8
                VDI 3c9f7815-0547-4237-949d-27ac3d80b4a6 -> SR 5bb37e13-61d7-69b3-7de3-091a7866c4d8
                
                The requested operation is not allowed for VDIs with CBT enabled or VMs having such VDIs, and CBT is enabled for the specified VDI.
                vdi: 8e8a2679-cf0d-44c1-a3dd-f69edc82d849 (XO CloudConfigDrive omd)
                

                So I ran: xe vdi-list uuid=8e8a2679-cf0d-44c1-a3dd-f69edc82d849

                uuid ( RO)                : 8e8a2679-cf0d-44c1-a3dd-f69edc82d849
                          name-label ( RW): XO CloudConfigDrive omd
                    name-description ( RW):
                             sr-uuid ( RO): e258dec5-d1b1-ceef-b489-f2a2d219bf9b
                        virtual-size ( RO): 10485760
                            sharable ( RO): false
                           read-only ( RO): false
                

                The VDI does have CBT enabled. The VM has 2 VDIs. Both have CBT enabled. Neither VDI has any current snapshots.

                I ran xe vdi-disable-cbt uuid=8e8a2679-cf0d-44c1-a3dd-f69edc82d849 (and the same for the other VDI). For both VDIs I get:

                This operation is not supported during an upgrade.
                

                Any thoughts?
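                For anyone hitting the same wall, CBT status can be inventoried per VDI with xe before deciding what to do (SR and VDI UUIDs below are placeholders):

```shell
# List CBT status for every VDI on a given SR:
xe vdi-list sr-uuid=<sr-uuid> params=uuid,name-label,cbt-enabled

# Or query a single VDI:
xe vdi-param-get uuid=<vdi-uuid> param-name=cbt-enabled
```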

              • Pilow @paco

                  @paco That seems to be the 10 MB cloud-config drive left over after template deployment.

                  You could delete it if it is no longer in use (perhaps you forgot about it?).

                  But beware: do not delete anything before being sure what you are deleting.
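                  One way to be sure before deleting is to check whether any VM still has the VDI attached (the UUID is a placeholder; empty output from vbd-list means nothing references it):

```shell
# Any VBDs pointing at this VDI? No output => not attached anywhere.
xe vbd-list vdi-uuid=<vdi-uuid>

# Only once it is confirmed unused (vdi-destroy is irreversible):
# xe vdi-destroy uuid=<vdi-uuid>
```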
