XCP-ng

    NOT_SUPPORTED_DURING_UPGRADE()

    • olivierlambert (Vates 🪐 Co-Founder CEO)

      The upgrade process doesn't involve downtime.

      1. You start with the master: evacuate all VMs to the slaves
      2. Upgrade the master to 8.3 via the ISO
      3. When done, target one slave: move all its VMs elsewhere (including to the now up-to-date master), and repeat the process.

      In short, you always migrate VMs BEFORE upgrading a node, always starting with the master. The order of the slaves doesn't matter.
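
      As a rough sketch with the xe CLI (the host UUID is a placeholder), draining a host before its upgrade looks like this:

      xe host-disable uuid=<host-uuid>
      xe host-evacuate uuid=<host-uuid>

      Then boot that host on the 8.3 ISO to upgrade it, and re-enable it (xe host-enable) before moving on to the next one.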

      • paco @olivierlambert

        @olivierlambert Are you saying that I will be able to move from an 8.2.1 slave to an 8.3 slave, but I can't move from an 8.2.1 slave to the 8.3 master?

        My context (I was too brief) is that I upgraded the pool master to 8.3. The pool is up and mostly seems normal. When I try to move a VM from an 8.2.1 slave in the same pool to the 8.3 master, I get this NOT_SUPPORTED_DURING_UPGRADE() error. I'm clicking migrate in Xen Orchestra on a stopped or a running VM on an 8.2.1 slave, targeting the 8.3 master. If I could migrate to the master, I would be fine: any 2 of my 3 hosts can run everything while the third does its upgrade.

        Maybe something is bugged in my setup. It sounds like this is unexpected.

        My Xen Orchestra is built from source: Xen Orchestra commit 88b88, Master commit 1640a.
        The master XCP-ng host is running 8.3.0, fully patched as of yesterday.
        Both slave XCP-ng hosts are running 8.2.1, fully patched.

        What should I check?

        • paco @paco

          One more bit of data. It might be that a specific host has a problem. Because I'm changing racks, I was only trying to evacuate one specific host, and that one host is the one failing. The other 8.2.1 slave in the cluster can migrate to the 8.3 master just fine. If B (8.2.1) => A (8.3) and C (8.2.1) => B (8.2.1), then C is empty and can be upgraded. It's convoluted, but if it works, that's fine. I'll know in a couple of hours whether this at least gives me a path forward.
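
          In xe terms the plan would be roughly this, repeated per VM (VM and host names are placeholders):

          xe vm-migrate vm=<vm-on-B> host=<A> live=true
          xe vm-migrate vm=<vm-on-C> host=<B> live=true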

          • olivierlambert (Vates 🪐 Co-Founder CEO)

            Live migration should work from old to new, so as soon as your master is on 8.3, if you have a slave on 8.2.1, for example, you can evacuate VMs from that host to any other host.

            So you should be able to live migrate all VMs from a slave on 8.2.1 toward the master on 8.3.

            NOT_SUPPORTED_DURING_UPGRADE() means there's something else in play; I would try a live migration with the xe CLI to get more details on the issue.
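
            For a plain in-pool live migration, something along these lines (VM and host UUIDs are placeholders) should surface the underlying error:

            xe vm-migrate vm=<vm-uuid> host=<destination-host-uuid> live=true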

            • paco @olivierlambert

              @olivierlambert Thanks.

              I found this Citrix page ("Live migration within a pool that doesn't have shared storage by using the xe CLI") that seems to correspond to what I'm doing. (The two hosts have no shared storage.)

              I ran:
              xe vm-migrate uuid=00b4cf39-f954-6ab3-9977-c4c2809c5324 remote-master=<A> remote-username=root remote-password='stuff' host-uuid=<A's uuid>

              I got the following results:

              Performing a storage live migration. Your VM's VDIs will be migrated with the VM.
              
              Will migrate to remote host: A, using remote network: internal. Here is the VDI mapping:
              VDI 8e8a2679-cf0d-44c1-a3dd-f69edc82d849 -> SR 5bb37e13-61d7-69b3-7de3-091a7866c4d8
              VDI 3c9f7815-0547-4237-949d-27ac3d80b4a6 -> SR 5bb37e13-61d7-69b3-7de3-091a7866c4d8
              
              The requested operation is not allowed for VDIs with CBT enabled or VMs having such VDIs, and CBT is enabled for the specified VDI.
              vdi: 8e8a2679-cf0d-44c1-a3dd-f69edc82d849 (XO CloudConfigDrive omd)
              

              So I ran: xe vdi-list uuid=8e8a2679-cf0d-44c1-a3dd-f69edc82d849

              uuid ( RO)                : 8e8a2679-cf0d-44c1-a3dd-f69edc82d849
                        name-label ( RW): XO CloudConfigDrive omd
                  name-description ( RW):
                           sr-uuid ( RO): e258dec5-d1b1-ceef-b489-f2a2d219bf9b
                      virtual-size ( RO): 10485760
                          sharable ( RO): false
                         read-only ( RO): false
              

              The VDI does have CBT enabled. The VM has 2 VDIs. Both have CBT enabled. Neither VDI has any current snapshots.
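
              For reference, CBT status can be confirmed per VDI with something like:

              xe vdi-param-get uuid=8e8a2679-cf0d-44c1-a3dd-f69edc82d849 param-name=cbt-enabled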

              I ran xe vdi-disable-cbt uuid=8e8a2679-cf0d-44c1-a3dd-f69edc82d849 (and the same for the other VDI). For both VDIs I get:

              This operation is not supported during an upgrade.
              

              Any thoughts?

              • Pilow @paco

                @paco That seems to be the 10 MB cloud config drive left over after template deployment.

                You could delete it, if it is not in use anymore (did you forget about it?).

                Beware: do not delete anything before being sure of what you are deleting.

                • paco @Pilow

                  @Pilow I detached that VDI from the VM and the command failed for the same reason, just complaining about the other VDI.

                  • acebmxer @paco

                    @paco Go to the VM in question, open its storage tab, and disable CBT. Once migrated you can re-enable it, or the backup job will re-enable it.

                    I had issues where I could not move to a new SR when CBT was enabled.
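
                    The xe CLI equivalent would be roughly this (placeholder UUIDs): list the VM's disks, then disable CBT on each VDI:

                    xe vbd-list vm-uuid=<vm-uuid> type=Disk params=vdi-uuid
                    xe vdi-disable-cbt uuid=<vdi-uuid>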

                    • olivierlambert (Vates 🪐 Co-Founder CEO)

                      Wait, you don't have shared storage in your pool?

                      • paco @olivierlambert

                        @olivierlambert Sadly, no. And now that I'm in the middle of upgrading, I can't create SRs either. I could stand up an NFS server with some shared storage to help, but every attempt at creating an SR (NFS or otherwise) also results in NOT_SUPPORTED_DURING_UPGRADE(). If I had created shared storage before I upgraded the master, I could use it; now that I'm part-way through the upgrade process, I can't.
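
                        For example, even an NFS SR creation along these lines (server and export path are placeholders) is the kind of attempt that fails:

                        xe sr-create type=nfs content-type=user shared=true name-label=tmp-nfs device-config:server=<nfs-server> device-config:serverpath=<export-path>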
