XCP-ng

    "NOT_SUPPORTED_DURING_UPGRADE()" error after yesterday's update

• Danp Pro Support Team @john.c

      @john.c said in "NOT_SUPPORTED_DURING_UPGRADE()" error after yesterday's update:

      Please be careful of tenses (past tense etc),

      Yeah... I was trying to be helpful, but I was clearly in too much of a hurry.

      Mea culpa

• john.c @Danp

        @Danp said in "NOT_SUPPORTED_DURING_UPGRADE()" error after yesterday's update:

        @john.c said in "NOT_SUPPORTED_DURING_UPGRADE()" error after yesterday's update:

        Please be careful of tenses (past tense etc),

        Yeah... I was trying to be helpful, but I was clearly in too much of a hurry.

        Mea culpa

It’s alright; it’s great that you’re willing to help, even during your own time. I posted what I did in order to help you get better with the English language.

Next time you think you’re in too much of a hurry, say to yourself (or in your mind) the old saying: “Haste makes waste”.

• archw

          Mystery solved:
There are numerous hosts under one master in the pool. I ran the yum update command on the master first, then rebooted it. I then ran yum update on the rest and rebooted all but one; I could not reboot the last one until this morning (Saturday).

          I rebooted the last one a few minutes ago and all is well.

Because I think it's interesting, I'd love someone knowledgeable to explain why the other hosts could not take a snapshot just because a different host in the pool had not yet been rebooted.
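
For reference, the per-host sequence described above boils down to two commands run over SSH on each host, master first (a sketch of what was run, not the full official procedure; the reboot is what actually brings the updated binaries into use):

    yum update    # pull the latest XCP-ng packages onto this host
    reboot        # needed so the updated xapi / xen / kernel are actually running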

• olivierlambert Vates 🪐 Co-Founder CEO

Because doing an update without rebooting doesn't reload the updated main programs, like XAPI. A host is only fully updated after a reboot.
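
One way to see this on a given host is to compare what yum has installed on disk with what the running toolstack reports (a rough sketch; the param-key below assumes the usual "xapi" entry in the host's software-version map):

    rpm -qa | grep -i xapi                             # xapi packages installed on disk by yum
    xe host-param-get uuid=<host-uuid> \
        param-name=software-version param-key=xapi     # xapi version the running toolstack reports

If the two disagree, the host is still running the old binaries and needs a reboot before the pool leaves the "during upgrade" state.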

• magicker @olivierlambert

              @olivierlambert said in "NOT_SUPPORTED_DURING_UPGRADE()" error after yesterday's update:

Because doing an update without rebooting doesn't reload the updated main programs, like XAPI. A host is only fully updated after a reboot.


Hi there,
Is it just me, or is this a chicken-and-egg situation?

You upgrade the master... now the pool is in the NOT_SUPPORTED_DURING_UPGRADE() state. You can't move VMs off the master, so all you can do is shut down the VMs, reboot, and pray.

Then you move on to a non-master... you can't move the VMs off there either (NOT_SUPPORTED_DURING_UPGRADE()), so you have to do the same.

Needless to say, I hit issues on each reboot, which caused 30-60 minute delays in getting VMs back up and running.

Can you warm migrate, or is that dead as well? (Too scared to test.)
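
A quick way to see which hosts are still out of step (and therefore keeping the pool in that state) is to list every host's version info from the master; this is plain xe, nothing exotic:

    xe host-list params=name-label,software-version    # compare build / xapi versions across hosts

Any host whose versions differ from the master's still needs its reboot.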

• MajorP93 @magicker

                @magicker said in "NOT_SUPPORTED_DURING_UPGRADE()" error after yesterday's update:

                @olivierlambert said in "NOT_SUPPORTED_DURING_UPGRADE()" error after yesterday's update:

Because doing an update without rebooting doesn't reload the updated main programs, like XAPI. A host is only fully updated after a reboot.


Hi there,
Is it just me, or is this a chicken-and-egg situation?

You upgrade the master... now the pool is in the NOT_SUPPORTED_DURING_UPGRADE() state. You can't move VMs off the master, so all you can do is shut down the VMs, reboot, and pray.

Then you move on to a non-master... you can't move the VMs off there either (NOT_SUPPORTED_DURING_UPGRADE()), so you have to do the same.

Needless to say, I hit issues on each reboot, which caused 30-60 minute delays in getting VMs back up and running.

Can you warm migrate, or is that dead as well? (Too scared to test.)

For me, this workflow has worked every time upgrades were available (rough xe equivalents are sketched after the list):

- disable HA at the pool level
- disable the load balancer plugin
- upgrade the master
- upgrade all other nodes
- restart the toolstack on the master
- restart the toolstack on all other nodes
- live migrate all VMs running on the master to other node(s)
- reboot the master
- reboot the next node (live migrate all VMs running on that particular node away before doing so)
- repeat until all nodes have been rebooted (one node at a time)
- re-enable HA at the pool level
- re-enable the load balancer plugin

Never had any issues with that, and no downtime for any of the VMs.
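
Roughly the same sequence in xe terms (a sketch only: the host UUIDs and the HA heartbeat SR UUID are placeholders, and the load balancer plugin is toggled in Xen Orchestra rather than on the hosts themselves):

    xe pool-ha-disable                               # HA off at the pool level
    # (disable the load balancer plugin in Xen Orchestra)
    yum update                                       # run on the master, then on every other node
    xe-toolstack-restart                             # run on the master, then on every other node
    xe host-disable  uuid=<master-uuid>              # keep new VMs off the master
    xe host-evacuate uuid=<master-uuid>              # live migrate its VMs to the other nodes
    xe host-reboot   uuid=<master-uuid>              # reboot the master first
    xe host-enable   uuid=<master-uuid>              # re-enable it once it is back up
    # repeat disable / evacuate / reboot / enable for each remaining node, one at a time
    xe pool-ha-enable heartbeat-sr-uuids=<sr-uuid>   # HA back on
    # (re-enable the load balancer plugin in Xen Orchestra)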

• olivierlambert Vates 🪐 Co-Founder CEO

Exactly, and that's what the Rolling Pool Update feature does 🙂
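
If you'd rather kick that off from a shell than from the Xen Orchestra UI, something like the following should work; note that the pool.rollingUpdate method name and the xo-cli call style are assumptions to verify against your XO version:

    # after registering xo-cli against your Xen Orchestra instance:
    xo-cli pool.rollingUpdate pool=<pool-uuid>   # assumed method name: XO then updates,
                                                 # evacuates and reboots each host in turn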

• shorian @olivierlambert

As an observation: I'd like to draw attention to @MajorP93's point about rebooting the servers only after ALL nodes have been upgraded.

Historically we would move all VMs off the master, upgrade the master, restart its toolstack, then reboot it, then move the VMs from Node 1 onto the master so we could begin the upgrade on Node 1. That normally works OK, but last time around it caused all sorts of problems. Previously it had felt right to upgrade the master in its entirety, including the reboot, before moving on to the next host and rinsing and repeating - but this cost us a lot of time, corruption and pain.

TL;DR: Perhaps add a footnote to the docs that, when upgrading a pool, the reboots should take place as a final step across the pool, only after all nodes have been updated.

• MajorP93 @shorian

                      @shorian The documentation never stated otherwise… https://docs.xcp-ng.org/management/updates/#-how-to-apply-the-updates

The steps I mentioned previously in this thread were taken from the official XCP-ng documentation. If you pay attention to the numbers in front of the sentences in the document I just linked, and follow them in numerical order, you will end up with exactly my routine.

• shorian @MajorP93

@MajorP93 Yeah, that's me being daft. I confess I took "consider rebooting the hosts, starting with the pool master" as a parallel rather than a serial task; my bad, and my pain as a result.

                        Fair point sir 👍

• olivierlambert Vates 🪐 Co-Founder CEO

We could probably make the doc even more precise. Adding @thomas-dkmt to the loop for that.
