XCP-ng

    Live Migrate between two stand alone pools fails

    • DustinB @Danp

      @Danp

      xe vm-list uuid=dfecebe0-6ac1-b511-85ee-b43aa0223578
      uuid ( RO)           : dfecebe0-6ac1-b511-85ee-b43aa0223578
           name-label ( RW): Lab - Redhat8
          power-state ( RO): running
      • DustinB @Danp

        @Danp Which logs in particular do you think might have some information? Nothing stands out, but the logs are pretty busy.

        I've not tried a warm migration

        • DustinB @DustinB

          @DustinB said in Live Migrate between two stand alone pools fails:

          @Danp Which logs in particular do you think might have some information? Nothing stands out, but the logs are pretty busy.

          I've not tried a warm migration

          I also don't have a warm migration option, since these hosts are all standalone pools.

          • Danp (Pro Support Team)

            I would check /var/log/xensource.log first. If the error occurs immediately, then you should be able to tail the file and watch for the error to occur.
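
            For example, a minimal way to do that is to follow the log on the source host while retrying the migration from XO; the grep filter below is only an illustration and can be adjusted or dropped:

            # Run in dom0 on the source host; press Ctrl+C to stop following
            tail -f /var/log/xensource.log | grep -iE "error|migrate"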

            • Danp (Pro Support Team) @DustinB

              @DustinB said in Live Migrate between two stand alone pools fails:

              I also don't have a warm migration option

              Did you look under the VM's Advanced tab in XO?

              • DustinB @Danp

                @Danp said in Live Migrate between two stand alone pools fails:

                I would check /var/log/xensource.log first. If the error occurs immediately, then you should be able to tail the file and watch for the error to occur.

                Yeah, it's not instant; it just kind of times out.

                All VMs are using a shared NFS storage repo as well. I might just tell the team that in order to complete my work I need a window where I can power everything down and start it on a different host.
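
                For what it's worth, the same cross-pool migration can also be attempted from the source host's CLI, which sometimes surfaces a clearer error than XO. A rough sketch with placeholder values (parameter names are worth confirming against "xe help vm-migrate" on your version):

                # Attempt a live cross-pool migration of the VM from this thread; everything in <> is a placeholder
                xe vm-migrate uuid=dfecebe0-6ac1-b511-85ee-b43aa0223578 \
                    remote-master=<destination master IP> remote-username=root \
                    remote-password=<password> host-uuid=<destination host UUID> live=true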

                • DustinB @Danp

                  @Danp said in Live Migrate between two stand alone pools fails:

                  @DustinB said in Live Migrate between two stand alone pools fails:

                  I also don't have a warm migration option

                  Did you look under the VM's Advanced tab in XO?

                  Yeah

                  • Danp (Pro Support Team)

                    It should be there unless your XO is way out of date. 🤔

                    • DustinB @Danp

                      @Danp said in Live Migrate between two stand alone pools fails:

                      It should be there unless your XO is way out of date. 🤔

                      That is very possible; there are currently 3 different XOs running, for reasons I won't go into.

                      I'm trying to patch the hosts all to the same level and consolidate everything under the latest XO, and ideally create a pool out of all 3 hosts. I'm not sure that I'll be able to pool them all if there are VMs running on any given host.
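
                      For reference, joining a standalone host to an existing pool is normally done from the CLI of the host that is joining, roughly as sketched below (placeholder values); the joining host generally has to have its VMs shut down or moved off first, which is likely the constraint in question here:

                      # Run on the host that should join the pool; <master IP> and <password> are placeholders
                      xe pool-join master-address=<master IP> master-username=root master-password=<password>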

                        • Danp (Pro Support Team)

                          You could always try searching for the error in the logs with something like the following --

                          cat /var/log/xensource.log | grep "VM_BAD_POWER_STATE"

                          • DustinB @Danp

                            @Danp

                            cat /var/log/xensource.log | grep "VM_BAD_POWER_STATE"
                            Jan 11 05:42:59 xen-vm xapi: [error|xen-vm|26628549 |Async.VM.migrate_send R:b60d2c889e8c|xapi] Caught Server_error(VM_BAD_POWER_STATE, [ OpaqueRef:92efd4b1-f1c4-4760-91e9-8c7f744cdeea; halted, suspended; running ]) while destroying VM uuid=4ea00c91-c306-c551-87a3-ba0c0fd8bf61 on destination host
                            Jan 11 08:28:03 xen-vm xapi: [error|xen-vm|26719641 |Async.VM.migrate_send R:cc817c26ddbd|backtrace] VM.assert_can_migrate D:f6152f70a65f failed with exception Server_error(VM_BAD_POWER_STATE, [ OpaqueRef:5008bd88-786a-4bde-a428-b137113a94d6; Halted; Running ])
                            Jan 11 08:28:03 xen-vm xapi: [error|xen-vm|26719641 |Async.VM.migrate_send R:cc817c26ddbd|backtrace] Raised Server_error(VM_BAD_POWER_STATE, [ OpaqueRef:5008bd88-786a-4bde-a428-b137113a94d6; Halted; Running ])
                            Jan 11 08:28:03 xen-vm xapi: [error|xen-vm|26719641 ||backtrace] Async.VM.migrate_send R:cc817c26ddbd failed with exception Server_error(VM_BAD_POWER_STATE, [ OpaqueRef:5008bd88-786a-4bde-a428-b137113a94d6; Halted; Running ])
                            Jan 11 08:28:03 xen-vm xapi: [error|xen-vm|26719641 ||backtrace] Raised Server_error(VM_BAD_POWER_STATE, [ OpaqueRef:5008bd88-786a-4bde-a428-b137113a94d6; Halted; Running ])
                            
                            
                            • DustinB

                              And searching for the UUID I'm migrating.

                              cat /var/log/xensource.log | grep "dfecebe0-6ac1-b511-85ee-b43aa0223578"
                              Jan 11 05:42:02 xen-vm xenopsd-xc: [debug|xen-vm|39 |Async.VM.migrate_send R:b60d2c889e8c|xenops] EVENT on other VM: dfecebe0-6ac1-b511-85ee-b43aa0223578
                              Jan 11 08:28:02 xen-vm xapi: [debug|xen-vm|26719641 |Async.VM.migrate_send R:cc817c26ddbd|audit] VM.migrate_send: VM = 'dfecebe0-6ac1-b511-85ee-b43aa0223578 (Lab - Redhat8)'
                              Jan 11 08:28:02 xen-vm xapi: [debug|xen-vm|26719641 |VM.assert_can_migrate D:f6152f70a65f|audit] VM.assert_can_migrate: VM = 'dfecebe0-6ac1-b511-85ee-b43aa0223578 (Lab - Redhat8)'
                              Jan 11 08:28:02 xen-vm xapi: [debug|xen-vm|587 |xapi events D:4a4835d5324c|xenops] Event on VM dfecebe0-6ac1-b511-85ee-b43aa0223578; resident_here = true
                              Jan 11 08:28:02 xen-vm xapi: [ info|xen-vm|26719650 ||export] VM.export_metadata: VM = 'dfecebe0-6ac1-b511-85ee-b43aa0223578' ('Lab - Redhat8'); with_snapshot_metadata = 'true'; include_vhd_parents = 'false'; preserve_power_state = 'true
                              Jan 11 08:28:02 xen-vm xapi: [debug|xen-vm|587 |xapi events D:4a4835d5324c|xenops] Event on VM dfecebe0-6ac1-b511-85ee-b43aa0223578; resident_here = true
                              Jan 11 08:28:03 xen-vm xapi: [debug|xen-vm|587 |xapi events D:4a4835d5324c|xenops] Event on VM dfecebe0-6ac1-b511-85ee-b43aa0223578; resident_here = true
                              Jan 11 09:36:07 xen-vm xapi: [debug|xen-vm|26725037 UNIX /var/lib/xcp/xapi||cli] xe vm-list uuid=dfecebe0-6ac1-b511-85ee-b43aa0223578 username=root password=(omitted)
                              Jan 11 09:36:14 xen-vm xapi: [debug|xen-vm|26725079 UNIX /var/lib/xcp/xapi||cli] xe vm-list uuid=dfecebe0-6ac1-b511-85ee-b43aa0223578 username=root password=(omitted)
                              
                              
                              • Danp (Pro Support Team)

                                Server_error(VM_BAD_POWER_STATE, [ OpaqueRef:5008bd88-786a-4bde-a428-b137113a94d6; Halted; Running ]

                                Based on the description here, it appears to be detecting a halted VM, but the VM is expected to be running.

                                Are you able to locate a VM with the UUID of 5008bd88-786a-4bde-a428-b137113a94d6 on either server?

                                P.S. You could also try this from one of your other XO VMs or even spin up a new one to see if that solves the problem.
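
                                A quick way to check that on each standalone host, as a sketch (the UUID is simply the value taken from the error above):

                                # Run in dom0 on each host; no output means no VM with that UUID is known there
                                xe vm-list uuid=5008bd88-786a-4bde-a428-b137113a94d6 params=uuid,name-label,power-state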

                                • DustinB @Danp

                                  @Danp said in Live Migrate between two stand alone pools fails:

                                  Server_error(VM_BAD_POWER_STATE, [ OpaqueRef:5008bd88-786a-4bde-a428-b137113a94d6; Halted; Running ]

                                  Based on the description here, it appears to be detecting a halted VM, but the VM is expected to be running.

                                  Are you able to locate a VM with the UUID of 5008bd88-786a-4bde-a428-b137113a94d6 on either server?

                                  P.S. You could also try this from one of your other XO VMs or even spin up a new one to see if that solves the problem.

                                  Good find, the UUID 5008bd88-786a-4bde-a428-b137113a94d6 is somehow attached to a different VM.

                                  But when I look at the VM itself, it has a different UUID (within XO).

                                  • DustinB

                                    Okay, now I'm just confused, and this must be an XO version bug: I added the 3rd host (the one I wanted to migrate these VMs to) and it shows the VM there.

                                    So maybe this is just a display bug.

                                    • Danp (Pro Support Team)

                                      It's difficult to know for sure where the problem lies since your XO isn't up to date. Wouldn't the host need to already exist in XO in order for it to be a migration target?

                                      • DustinB @Danp

                                        @Danp said in Live Migrate between two stand alone pools fails:

                                        It's difficult to know for sure where the problem lies since your XO isn't up to date. Wouldn't the host need to already exist in XO in order for it to be a migration target?

                                        Yeah, I had two hosts connected to this XO instance, just to help get insight into what is where.

                                        I've disconnected the target host, and can see the VM on the desired host.

                                        • DustinB has marked this topic as solved.