
    VM metadata import fail & stuck

• henri9813

I have a pool that is being updated.

I'm evacuating a host to another pool so I can update it.

But the operation fails and remains stuck in the XO tasks:

      VM metadata import (on new-hypervisor) 0%
      

Here are the details of the vm.migrate API call:

      vm.migrate
      {
        "vm": "30bb4942-c7fd-a3b3-2690-ae6152d272c5",
        "mapVifsNetworks": {
          "7e1ad49f-d4df-d9d7-2a74-0d00486ae5ff": "b3204067-a3fd-bd19-7214-7856e637d076"
        },
        "migrationNetwork": "e31e7aea-37de-2819-83fe-01bd33509855",
        "sr": "3070cc36-b869-a51f-38ee-bd5de5e4cb6c",
        "targetHost": "36a07da2-7493-454d-836d-df8ada5b958f"
      }
      {
        "code": "INTERNAL_ERROR",
        "params": [
          "Http_client.Http_error(\"500\", \"{ frame = false; method = GET; uri = /export_metadata?export_snapshots=true&ref=OpaqueRef:75e166f7-5056-a662-f7ff-25c09aee5bec; query = [  ]; content_length = [  ]; transfer encoding = ; version = 1.0; cookie = [ (value filtered) ]; task = ; subtask_of = OpaqueRef:9976b5f2-3381-e79e-a6dd-0c7a20621501; content-type = ; host = ; user_agent = xapi/25.33; }\")"
        ],
        "task": {
          "uuid": "9c87e615-5dca-c714-0c55-5da571ad8fa5",
          "name_label": "Async.VM.assert_can_migrate",
          "name_description": "",
          "allowed_operations": [],
          "current_operations": {},
          "created": "20260131T08:16:14Z",
          "finished": "20260131T08:16:14Z",
          "status": "failure",
          "resident_on": "OpaqueRef:37858c1b-fa8c-5733-ed66-dcd4fc7ae88c",
          "progress": 1,
          "type": "<none/>",
          "result": "",
          "error_info": [
            "INTERNAL_ERROR",
            "Http_client.Http_error(\"500\", \"{ frame = false; method = GET; uri = /export_metadata?export_snapshots=true&ref=OpaqueRef:75e166f7-5056-a662-f7ff-25c09aee5bec; query = [  ]; content_length = [  ]; transfer encoding = ; version = 1.0; cookie = [ (value filtered) ]; task = ; subtask_of = OpaqueRef:9976b5f2-3381-e79e-a6dd-0c7a20621501; content-type = ; host = ; user_agent = xapi/25.33; }\")"
          ],
          "other_config": {},
          "subtask_of": "OpaqueRef:NULL",
          "subtasks": [],
          "backtrace": "(((process xapi)(filename ocaml/libs/http-lib/http_client.ml)(line 215))((process xapi)(filename ocaml/libs/http-lib/http_client.ml)(line 228))((process xapi)(filename ocaml/libs/http-lib/xmlrpc_client.ml)(line 375))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/importexport.ml)(line 313))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/xapi_vm_migrate.ml)(line 1920))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 2551))((process xapi)(filename ocaml/xapi/rbac.ml)(line 229))((process xapi)(filename ocaml/xapi/rbac.ml)(line 239))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 78)))"
        },
        "message": "INTERNAL_ERROR(Http_client.Http_error(\"500\", \"{ frame = false; method = GET; uri = /export_metadata?export_snapshots=true&ref=OpaqueRef:75e166f7-5056-a662-f7ff-25c09aee5bec; query = [  ]; content_length = [  ]; transfer encoding = ; version = 1.0; cookie = [ (value filtered) ]; task = ; subtask_of = OpaqueRef:9976b5f2-3381-e79e-a6dd-0c7a20621501; content-type = ; host = ; user_agent = xapi/25.33; }\"))",
        "name": "XapiError",
        "stack": "XapiError: INTERNAL_ERROR(Http_client.Http_error(\"500\", \"{ frame = false; method = GET; uri = /export_metadata?export_snapshots=true&ref=OpaqueRef:75e166f7-5056-a662-f7ff-25c09aee5bec; query = [  ]; content_length = [  ]; transfer encoding = ; version = 1.0; cookie = [ (value filtered) ]; task = ; subtask_of = OpaqueRef:9976b5f2-3381-e79e-a6dd-0c7a20621501; content-type = ; host = ; user_agent = xapi/25.33; }\"))
          at XapiError.wrap (file:///etc/xen-orchestra/packages/xen-api/_XapiError.mjs:16:12)
          at default (file:///etc/xen-orchestra/packages/xen-api/_getTaskResult.mjs:13:29)
          at Xapi._addRecordToCache (file:///etc/xen-orchestra/packages/xen-api/index.mjs:1078:24)
          at file:///etc/xen-orchestra/packages/xen-api/index.mjs:1112:14
          at Array.forEach (<anonymous>)
          at Xapi._processEvents (file:///etc/xen-orchestra/packages/xen-api/index.mjs:1102:12)
          at Xapi._watchEvents (file:///etc/xen-orchestra/packages/xen-api/index.mjs:1275:14)"
      }
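
The 500 is raised by the source host's /export_metadata HTTP handler, so it can help to replay that request directly and watch the XAPI log while doing it. A diagnostic sketch, not a fix: <source-host> is a placeholder, and I'm assuming the handler accepts basic auth with the host root credentials like the other XAPI HTTP handlers:

    # Replay the failing request against the source host; the ref is the
    # OpaqueRef taken from the error above.
    curl -vk -u root \
      "https://<source-host>/export_metadata?export_snapshots=true&ref=OpaqueRef:75e166f7-5056-a662-f7ff-25c09aee5bec"

    # Then look for the matching failure in the XAPI log on that host:
    grep 'export_metadata' /var/log/xensource.log | tail -20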
      

My XO is up to date.

I have already updated the master; I'm now working on the slave.

But neither the master (updated) nor the slave (not updated) can migrate VMs to an updated pool.

Do you have an idea?

• Danp Pro Support Team @henri9813

        @henri9813 said in VM metadata import fail & stuck:

I'm evacuating a host to another pool so I can update it.

        I'm unsure what you mean by this. Please provide additional details so that we can better understand the end goal.

        • Are you trying to move the XCP-ng host to a different pool?
• Have all hosts been rebooted after patching was performed?
        • etc
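
For the second point, a quick check is to compare versions across the pool and confirm each host was rebooted since its last update. A sketch (run from dom0; nothing here is specific to your pool):

    # Versions of every host in the pool; they should all match once the
    # upgrade is complete:
    xe host-list params=name-label,software-version

    # On each host: are updates still pending?
    yum check-update

    # Was the host rebooted since the last package was installed?
    uptime -s                  # last boot time
    rpm -qa --last | head -5   # most recently installed packages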
• henri9813 @Danp

Hello @Danp,

My pool is mid-upgrade; not all nodes are updated yet.

1. I evacuated the master host's VMs to the slave node, except 3-4 VMs which are not very important and can be shut down for a few minutes.
2. I upgraded the master node successfully.
3. I WANT to move all VMs back to the master, but it doesn't have enough disk space, so I tried the following:
  a. Migrate the unimportant VMs to the slave node (not updated) to free enough space to move the "important" VMs.
  b. Move VMs from the updated master to ANOTHER pool.
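
For reference, a sketch of the usual per-host rolling-update sequence on XCP-ng, which is what the evacuation above corresponds to (<host-uuid> is a placeholder):

    xe host-disable uuid=<host-uuid>    # stop new VMs from starting here
    xe host-evacuate uuid=<host-uuid>   # live-migrate resident VMs to other pool members
    yum update -y                       # run on the host itself
    reboot
    # once the host is back up:
    xe host-enable uuid=<host-uuid>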

          I tried both:

          • VM running
          • VM halted

Thanks!

• nikade Top contributor @henri9813

@henri9813 Did you reboot the master after it was updated? If yes, I think you should be able to migrate the VMs back to the master and then continue patching the rest of the hosts.

• henri9813

Hello,

I was able to perform a "warm" migrate from either the slave or the master.

Yes, the master was rebooted.

@nikade said in VM metadata import fail & stuck:

  Did you reboot the master after it was updated? If yes, I think you should be able to migrate the VMs back to the master and then continue patching the rest of the hosts.

No, I wanted to migrate VMs back from the MASTER (updated) to the slave (not updated), but that wasn't working.

Only warm migrations work.
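
Since direct migration fails the same way for running and halted VMs while warm migration succeeds, the XAPI log on the source master should show the failing metadata export. A sketch of what to grep for (the task UUID is the one from the error above; the log path is the default):

    # On the source pool master, around the time of the failure:
    grep 'export_metadata' /var/log/xensource.log

    # And the task that raised the 500:
    grep '9c87e615-5dca-c714-0c55-5da571ad8fa5' /var/log/xensource.log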

• nikade Top contributor @henri9813

@henri9813 Maybe I'm not understanding the problem here? A warm migration is an online migration, in other words a live migration without shutting the VM down, and that is exactly how it should work.

• henri9813 @nikade

Hello @nikade,

I agree, but here is my case:

Try to migrate a running VM: error 500.
Try to migrate a halted VM: error 500.
Warm migrate: it's okay.

I don't understand the difference myself, except that warm migration doesn't transfer the "VM" but recreates the VM and imports the VDIs (so, much the same thing); there may be a slight difference. I don't know how "warm migration" works under the hood.
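
One way to take XO out of the picture is to try the same cross-pool migration with xe directly; if XAPI itself returns the 500, the problem sits between the two pools rather than in XO. A sketch, with every value a placeholder (the vdi: mapping sends each disk to an SR on the target pool):

    xe vm-migrate uuid=<vm-uuid> \
      remote-master=<target-pool-master-ip> \
      remote-username=root remote-password=<password> \
      host-uuid=<target-host-uuid> \
      vdi:<vdi-uuid>=<target-sr-uuid> \
      live=true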

• florent Vates 🪐 XO Team @henri9813

@henri9813 The warm mode is driven more manually by XO, and it can replicate from almost any version to any version, or any CPU to any CPU.
Direct migration has some preconditions: you must always migrate to the same or a superior version. Live migration also adds conditions on the CPU brand and generation.
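
To check that version precondition before a direct migration, you can compare the product version of the source and target hosts; a sketch with placeholder UUIDs:

    # The target's version must be the same as or newer than the source's.
    xe host-param-get uuid=<source-host-uuid> param-name=software-version param-key=product_version
    xe host-param-get uuid=<target-host-uuid> param-name=software-version param-key=product_version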

• nikade Top contributor @henri9813

                      @henri9813 Ahh alright, I understand now! Thanks for clarifying.
