XCP-ng

    NOT_SUPPORTED_DURING_UPGRADE()

    • paco @olivierlambert

      @olivierlambert Thanks.

      I found this Citrix page ("Live migration within a pool that doesn't have shared storage by using the xe CLI") that seems to correspond to what I'm doing. (The 2 hosts have no shared storage)

      I ran:
      xe vm-migrate uuid=00b4cf39-f954-6ab3-9977-c4c2809c5324 remote-master=<A> remote-username=root remote-password='stuff' host-uuid=<A's uuid>

      I got the following results:

      Performing a storage live migration. Your VM's VDIs will be migrated with the VM.
      
      Will migrate to remote host: A, using remote network: internal. Here is the VDI mapping:
      VDI 8e8a2679-cf0d-44c1-a3dd-f69edc82d849 -> SR 5bb37e13-61d7-69b3-7de3-091a7866c4d8
      VDI 3c9f7815-0547-4237-949d-27ac3d80b4a6 -> SR 5bb37e13-61d7-69b3-7de3-091a7866c4d8
      
      The requested operation is not allowed for VDIs with CBT enabled or VMs having such VDIs, and CBT is enabled for the specified VDI.
      vdi: 8e8a2679-cf0d-44c1-a3dd-f69edc82d849 (XO CloudConfigDrive omd)
      

      So I ran: xe vdi-list uuid=8e8a2679-cf0d-44c1-a3dd-f69edc82d849

      uuid ( RO)                : 8e8a2679-cf0d-44c1-a3dd-f69edc82d849
                name-label ( RW): XO CloudConfigDrive omd
          name-description ( RW):
                   sr-uuid ( RO): e258dec5-d1b1-ceef-b489-f2a2d219bf9b
              virtual-size ( RO): 10485760
                  sharable ( RO): false
                 read-only ( RO): false
      

      The VDI does have CBT enabled. The VM has 2 VDIs. Both have CBT enabled. Neither VDI has any current snapshots.
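
      (In case it helps anyone else checking the same thing, here is a rough sketch of how the VM's disks and their CBT flags can be listed with the xe CLI; the VM and VDI UUIDs are just the ones from above:)

      # list the VDI UUIDs of the disks attached to the VM
      xe vbd-list vm-uuid=00b4cf39-f954-6ab3-9977-c4c2809c5324 type=Disk params=vdi-uuid --minimal
      # then, for each VDI UUID, check whether CBT is enabled
      xe vdi-param-get uuid=8e8a2679-cf0d-44c1-a3dd-f69edc82d849 param-name=cbt-enabled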

      I ran xe vdi-disable-cbt uuid=8e8a2679-cf0d-44c1-a3dd-f69edc82d849 (and for the other VDI). For both VDIs I get

      This operation is not supported during an upgrade.
      

      Any thoughts?
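
      (For what it's worth, I assume the block comes from the pool still being flagged as mid-rolling-upgrade. A sketch of how to check that flag; the exact other-config key name is an assumption on my part:)

      # dump the pool's other-config; during a rolling pool upgrade it should
      # contain a rolling_upgrade_in_progress marker (assumed key name)
      xe pool-param-get uuid=$(xe pool-list --minimal) param-name=other-config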

      • Pilow @paco

        @paco That seems to be the 10 MB cloud-config drive left over after template deployment.

        You could delete it if it's not in use anymore (maybe you forgot about it?).

        But beware: don't delete anything before being sure of what you're deleting.

        • paco @Pilow

          @Pilow I detached that VDI from the VM and the command failed for the same reason, just complaining about the other VDI.

          • acebmxer @paco

            @paco Go to the VM in question, open its storage tab, and disable CBT. Once migrated you can re-enable it, or the backup job will re-enable it anyway.

            I had issues where I could not move a VM to a new SR while CBT was enabled.
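
            (A rough CLI equivalent, in case the UI route is blocked too; this is just a sketch and <vm-uuid> is a placeholder:)

            # disable CBT on every disk VDI attached to one VM
            for vdi in $(xe vbd-list vm-uuid=<vm-uuid> type=Disk params=vdi-uuid --minimal | tr ',' ' '); do
                xe vdi-disable-cbt uuid="$vdi"
            done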

            • olivierlambert Vates 🪐 Co-Founder CEO

              Wait, you don't have shared storage in your pool?

              • paco @olivierlambert

                @olivierlambert Sadly, no. And now that I'm in the middle of upgrading, I can't create SRs either. I could stand up an NFS server with some shared storage to help, but every attempt at creating an SR (NFS or otherwise) results in NOT_SUPPORTED_DURING_UPGRADE(). If I had created shared storage before I upgraded the master, I could use it. But now that I'm part-way through the upgrade process, I can't.
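
                (For reference, the kind of command that hits the error; a sketch with placeholder server/path values:)

                # creating a shared NFS SR fails with NOT_SUPPORTED_DURING_UPGRADE
                # while the rolling upgrade is in progress
                xe sr-create type=nfs shared=true content-type=user name-label="NFS shared" \
                    device-config:server=<nfs-server> device-config:serverpath=</export/path>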

                • olivierlambert Vates 🪐 Co-Founder CEO

                  Ah, indeed, a pool is generally meant to be used with a shared SR. I didn't suspect that you didn't have one.

                  Let me ask around.

                  • stormi Vates 🪐 XCP-ng Team

                    @paco I think this is the first time someone has asked about this, which surprises me, because CBT enabled + local storage may not be such a rare combination.

                    I wasn't aware of this blocking situation. We'll need to evaluate it, document it, and if possible find a way to avoid it.

                    In your situation, if all you've done is upgrade the pool master, I would advise booting the upgrade ISO again and using it to restore the 8.2 backup that was made automatically during the upgrade. Then boot the master again, disable CBT on all your disks, and start the upgrade over.
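
                    (Something along these lines should catch them all once the master is back on 8.2; a sketch, untested:)

                    # disable CBT on every VDI that currently has it enabled
                    for vdi in $(xe vdi-list cbt-enabled=true --minimal | tr ',' ' '); do
                        xe vdi-disable-cbt uuid="$vdi"
                    done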

                    • paco @stormi

                      @stormi Thanks. I appreciate it. But unfortunately, I'm unable to move workloads off the master in order to take it offline because of this situation.

                      If the solution is to turn off a host while it has live workloads, then I'm just going to shut down the 8.2.1 slave and upgrade it to 8.3. Then I'll have 2 members in the pool and it will be fully upgraded.

                      Let me tell you another edge case I encountered. There are some clear mistakes in here that I made, but it is related to this issue. When I took C offline and upgraded it to 8.3, I took the opportunity to convert it to UEFI boot. That meant reformatting the boot drive, not upgrading it. I wasn't worried about that. I took it out of the pool, reformatted it, and created a one-node 8.3 pool that has just node C in it. No biggie, right? I'll just have it join the pool with the 8.3 master and all is well. No, that's not going to work. Can I at least move some workloads onto it? Nope.

                      When you do a fresh install, pool-enable-certificate-verification defaults to yes. When you upgrade a pool, pool-enable-certificate-verification defaults to no. So I have a half-upgraded pool with 2 nodes with certificate verification disabled, and a single-node 8.3 pool with certificate verification enabled.

                      If I try to enable certificate verification on my half-upgraded pool? Our good friend NOT_SUPPORTED_DURING_UPGRADE() comes back to say "hi". As far as I can tell, it is not possible to disable certificate verification on the single-node 8.3 pool.

                      So I have a one-node pool where I can't turn verification off and a 2-node, half-upgraded pool where I can't turn it on. That makes it really difficult for the two pools to interoperate.

                      If I have to be known for something, let me be known as a cautionary tale. 😀
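
                      (For completeness, a sketch of how the two pools' verification state could be compared; I'm assuming the pool field is exposed to xe as tls-verification-enabled:)

                      # run on a host in each pool
                      xe pool-param-get uuid=$(xe pool-list --minimal) param-name=tls-verification-enabled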

                      • stormi Vates 🪐 XCP-ng Team

                        We do warn about the certificate situation in the 8.3 release notes, indeed, but it's easy to get caught by that.

                        There's a way to temporarily disable TLS verification on the new host in order to join it to the existing pool.

                        See https://docs.xcp-ng.org/releases/release-8-3/#certificate-verification-xs which in turn points to https://docs.xenserver.com/en-us/xenserver/8/hosts-pools/certificate-verification where you'll find that command.
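
                        (If memory serves, it boils down to something like the following, run on the freshly installed host before the pool join; double-check against the linked page:)

                        # on the new 8.3 host, temporarily disable certificate verification
                        xe host-emergency-disable-tls-verification
                        # then join the pool, and re-enable verification afterwards as described in the linked docs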

                        Regarding your initial situation, I'm not 100% sure, but I think Warm Migration might be a way to migrate your VMs off your slave hosts while minimizing downtime. I don't know exactly how it plays with CBT and a heterogeneous pool state, though.
