XCP-ng

    VM Migrate problem

    mal

      I have 2 hosts, each in its own pool.
      I have 9 VMs, 7 running.
      I can migrate some VMs without a problem, back and forth.
      I have (at least) one VM which will migrate when shut down, but not when it is running. (XO(A) reports: Management agent 7.30.0-11)
      Through XO(A) I get the dreaded "INTERNAL_ERROR" message, which doesn't really help.

      vm.migrate
      {
        "vm": "<VM_UUID>",
        "mapVifsNetworks": {
          "<vif_0>": "<pool_vif_0_remote>",
          "<vif_1>": "<pool_vif_1_remote>"
        },
        "migrationNetwork": "<migration_network>",
        "sr": "<remote_storage_SR>",
        "targetHost": "<target_host>"
      }
      {
        "code": 21,
        "data": {
          "objectId": "<VM_UUID>",
          "code": "INTERNAL_ERROR"
        },
        "message": "operation failed",
        "name": "XoError",
        "stack": "XoError: operation failed
          at operationFailed (/opt/xo/xo-builds/xen-orchestra-202210181910/packages/xo-common/api-errors.js:26:11)
          at file:///opt/xo/xo-builds/xen-orchestra-202210181910/packages/xo-server/src/api/vm.mjs:497:15
          at runMicrotasks (<anonymous>)
          at runNextTicks (node:internal/process/task_queues:61:5)
          at processImmediate (node:internal/timers:437:9)
          at process.callbackTrampoline (node:internal/async_hooks:130:17)
          at Xo.migrate (file:///opt/xo/xo-builds/xen-orchestra-202210181910/packages/xo-server/src/api/vm.mjs:483:3)
          at Api.#callApiMethod (file:///opt/xo/xo-builds/xen-orchestra-202210181910/packages/xo-server/src/xo-mixins/api.mjs:394:20)"
      }
      

      Using the CLI I get a slightly more informative message about invalid arguments:

      xe vm-migrate uuid=<VM_UUID> \
      remote-master=<remote_master> \
      remote-username=<remote_user> \
      remote-password=<remote_password> \
      host-uuid=<target_host> \
      vif:<vif_0>=<pool_vif_0_remote> \
      vif:<vif_1>=<pool_vif_1_remote>
      
      Performing a Storage XenMotion migration. Your VM's VDIs will be migrated with the VM.
      Will migrate to remote host: <REMOTE_HOSTNAME>, using remote network: <MIGRATION_NETWORK_NAME>. Here is the VDI mapping:
      VDI <vm_vdi> -> SR <remote_storage_SR>
      
      The server failed to handle your request, due to an internal error. The given message may give details useful for debugging the problem.
      message: Xenops_interface.Xenopsd_error([S(Internal_error);S(Domain.Emu_manager_failure("Received error from emu-manager: xenguest Invalid argument"))])
      

      Any thoughts please?

      Danp (Pro Support Team)

        From here --

        Check your dynamic memory settings; that's often the issue.

        If that isn't it, then I can only suggest that you provide additional details about your environment.
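
        For reference, a minimal sketch of reading those settings back from the CLI; <VM_UUID> is a placeholder:

        # Show the four memory limits xapi tracks for a VM (values are in bytes).
        xe vm-list uuid=<VM_UUID> \
          params=memory-static-min,memory-dynamic-min,memory-dynamic-max,memory-static-max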

        mal @Danp

          @Danp - Thanks for your response
          Dynamic memory is at 1.4 GB.

          The receiving host has 14 GB free.
          The sending host has 732 MB free; however, shutting down or migrating other VMs makes no difference.

          Receiving host: v8.2.1 release/yangtze/master/58
          Sending host: v8.2.1 release/yangtze/master/58

          It doesn't happen with all VMs; others (using more memory) are fine.

          What else would you like to know?

          Thanks in advance

          olivierlambert (Vates 🪐 Co-Founder & CEO)

            Double-check that you have dynamic min = dynamic max = static max (so "static" memory in the end).
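
            A minimal sketch of doing that with xe, assuming a 2 GiB VM; <VM_UUID> is a placeholder:

            # Set all four limits in one call, which keeps the required ordering
            # static-min <= dynamic-min <= dynamic-max <= static-max valid throughout.
            # (memory-static-max can only be changed while the VM is halted.)
            xe vm-memory-limits-set uuid=<VM_UUID> \
              static-min=2GiB dynamic-min=2GiB dynamic-max=2GiB static-max=2GiB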

            mal @olivierlambert

              @olivierlambert - Thank you for your response

              That worked!! Brilliant!

              So, with the deprecation of DMC, it appears that migration "breaks" if you don't have dynamic = static.

              Can I assume that memory "overcommit" will never be a "thing", unlike on other hypervisors?

              olivierlambert (Vates 🪐 Co-Founder & CEO)

                No, it's not like that. Most of the time, it's because you set up a dynamic min that's too low.

                As soon as you migrate, Xen will "balloon out" the memory and go down to dynamic min.

                However, if your VM's system is squeezed down to 1.5 GiB (in your case) and can't handle it, it will just fail. So DMC would probably have worked if your dynamic min had been higher.

                mal @olivierlambert

                  @olivierlambert
                  Understood, and thanks for the explanation.

                  I also appreciate the brilliant effort you and the team put in.

                  mal @olivierlambert

                    @olivierlambert

                    I have a similar issue with a different VM where it fails with:

                    Not enough server memory is available to perform this operation.
                    needed: 2167406592
                    available: 1543491584
                    

                    However, the host has 9.5 GB free (according to XO), and the VM has static = dynamic = 2 GB.

                    Sorry, but what am I missing this time?
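
                    (For scale: 2167406592 bytes is about 2.0 GiB and 1543491584 bytes about 1.4 GiB, so xapi's own free-memory accounting clearly disagrees with the 9.5 GB XO shows. A minimal sketch for querying it directly; <HOST_UUID> is a placeholder:)

                    # Ask xapi how much host memory it considers free for placing VMs;
                    # per the explanation further down, this accounts for the dynamic max
                    # of every resident VM, not just memory actually in use.
                    xe host-compute-free-memory uuid=<HOST_UUID>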

                    mal @mal

                      @mal

                      Restarting the toolstack allowed me to move the VM. More of a workaround than a fix, I feel!
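
                      (A minimal sketch of that workaround for anyone finding this later; the restart runs on the XCP-ng host itself, e.g. over SSH, not through the remote xe CLI:)

                      # Restart xapi and its helper daemons on the host.
                      # Running VMs are not affected by a toolstack restart.
                      xe-toolstack-restart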

                      olivierlambert (Vates 🪐 Co-Founder & CEO)

                        This means your host didn't have enough memory (only ~1.4 GiB free once the dynamic max of all the VMs on the host is added up). That's the catch with dynamic memory: it quickly gets messy.

                        Can you confirm that you have other VMs with higher dynamic max on this host?
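
                        A minimal sketch for checking that from the CLI; <HOST_UUID> is a placeholder:

                        # List the dynamic max of every VM resident on the host (dom0 excluded);
                        # this is what gets summed when computing the host's free memory.
                        xe vm-list resident-on=<HOST_UUID> is-control-domain=false \
                          params=name-label,memory-dynamic-max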

                        mal @olivierlambert

                          @olivierlambert

                          Unfortunately, I've moved VMs around (this is all testing at the moment); however, there were other VMs with static = dynamic = 2 GB that I could move.

                          Could you either explain, or point me to the documentation, so I can understand how this memory accounting works, please?

    mal marked this topic as a question
    mal marked this topic as solved