XCP-ng
    jmannik

    Posts

    • RE: Unable to enable High Availability - INTERNAL_ERROR(Not_found)

      said in Unable to enable High Availability - INTERNAL_ERROR(Not_found):

      @psafont Would designating a new pool master do the same thing?
      I ran the above command and it's had no effect

      Well, I tried changing the pool master, and when VMHost11 was the master I was able to enable HA.
      Switching back to VMHost13 as the master now, so we'll see how that goes.

      Everything is working as expected/hoped.

      So, for anyone reading through this who wants a TL;DR:

      • The issue was related to the pool master setting. Changing the pool master to a different host and then back to the original fixed the incorrect settings, allowing HA to be enabled (see the sketch below).
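
      For reference, a minimal sketch of the master switch described above, run from an SSH session on a pool member. The UUIDs below are hypothetical placeholders; substitute your own host and SR UUIDs:

      # List the hosts so you can grab the UUIDs of the current and temporary master
      xe host-list params=uuid,name-label,address

      # Designate a different host (VMHost11 in this example) as pool master...
      xe pool-designate-new-master host-uuid=<uuid-of-VMHost11>

      # ...then, once the pool has settled, switch back to the original master
      xe pool-designate-new-master host-uuid=<uuid-of-VMHost13>

      # Finally, retry enabling HA against the heartbeat SR
      xe pool-ha-enable heartbeat-sr-uuids=<heartbeat-sr-uuid>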
      posted in XCP-ng
      jmannik
    • RE: Unable to enable High Availability - INTERNAL_ERROR(Not_found)

      said in Unable to enable High Availability - INTERNAL_ERROR(Not_found):

      @psafont Would designating a new pool master do the same thing?
      I ran the above command and it's had no effect

      Well, I tried changing the pool master, and when VMHost11 was the master I was able to enable HA.
      Switching back to VMHost13 as the master now, so we'll see how that goes.

      posted in XCP-ng
      jmannik
    • RE: Unable to enable High Availability - INTERNAL_ERROR(Not_found)

      @psafont Would designating a new pool master do the same thing?
      I ran the above command and it's had no effect

      posted in XCP-ng
      jmannik
    • RE: Unable to enable High Availability - INTERNAL_ERROR(Not_found)

      @psafont said in Unable to enable High Availability - INTERNAL_ERROR(Not_found):

      @jmannik The IPs match, and now I don't have an explanation for why this is happening. I'll take another look at the code path, but that'll have to take a while, as work is piling up.

      Ah, but they don't match:
      VMHost13 lists 192.168.10.13
      VMHost12 and VMHost11 list 192.168.30.13
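
      For anyone trying to spot the same mismatch, a quick sketch of how to compare what the pool database records for each host with what each member has cached locally (output omitted):

      # Address recorded for every host in the pool database
      xe host-list params=name-label,address

      # Master address this particular member believes in (run on each host)
      cat /etc/xensource/pool.conf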

      posted in XCP-ng
      jmannik
    • RE: Unable to enable High Availability - INTERNAL_ERROR(Not_found)

      OK, so in this process I have come across a recurring issue I have had with XCP-ng where it will have the wrong order for the Ethernet interfaces.
      Each of my hosts has a 1 Gbit interface onboard, then a 4-port 10 Gbit card.
      It SHOULD be ordering the interfaces like so:
      ETH0 1gbit
      ETH1 10gbit
      ETH2 10gbit
      ETH3 10gbit
      ETH4 10gbit

      But it will randomly decide upon install (VMHost11 was recently rebuilt due to an id10t pebkac issue) to order them as below, for no apparent reason:

      ETH0 10gbit
      ETH1 1gbit
      ETH2 10gbit
      ETH3 10gbit
      ETH4 10gbit

      And re-ordering the interfaces is just a lot more difficult than I think it should be.
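
      A quick way to see how the installer mapped devices to NICs, run from the host's console (a sketch; the device names here are whatever your install produced):

      # Which MAC each ethN device is bound to, per host, according to xapi
      xe pif-list params=device,MAC,host-name-label

      # Link speed as the kernel sees it, to tell the 1 Gbit port from the 10 Gbit ones
      ethtool eth0 | grep Speed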

      posted in XCP-ng
      jmannik
    • RE: Unable to enable High Availability - INTERNAL_ERROR(Not_found)
      [22:27 vmhost12 ~]# xe pool-param-get uuid=213186d2-e3ba-154f-d371-4122388deb83 param-name=master | xargs -I _ xe host-param-get uuid=_ param-name=address
      192.168.10.13
      [22:27 vmhost12 ~]# cat /etc/xensource/pool.conf
      slave:192.168.30.13[22:27 vmhost12 ~]#
      
      [22:27 vmhost11 ~]# xe pool-param-get uuid=213186d2-e3ba-154f-d371-4122388deb83  param-name=master | xargs -I _ xe host-param-get uuid=_ param-name=address
      192.168.10.13
      [22:28 vmhost11 ~]# cat /etc/xensource/pool.conf
      slave:192.168.30.13[22:28 vmhost11 ~]#
      

      I think I see where the issue is; not sure how to solve it, though.

      posted in XCP-ng
      jmannik
    • RE: Unable to enable High Availability - INTERNAL_ERROR(Not_found)

      @psafont
      [22:13 vmhost13 ~]# xe pool-param-get uuid=213186d2-e3ba-154f-d371-4122388deb83 param-name=master | xargs -I _ xe host-param-get uuid=_ param-name=address
      192.168.10.13
      [22:13 vmhost13 ~]# cat /etc/xensource/pool.conf
      master[22:14 vmhost13 ~]#

      posted in XCP-ng
      jmannik
    • RE: Unable to enable High Availability - INTERNAL_ERROR(Not_found)

      Well, this is what I'm getting now:

      {
        "id": "0mhbgkupy",
        "properties": {
          "method": "pool.enableHa",
          "params": {
            "pool": "213186d2-e3ba-154f-d371-4122388deb83",
            "heartbeatSrs": [
              "381caeb2-5ad9-8924-365d-4b130c67c064"
            ],
            "configuration": {}
          },
          "name": "API call: pool.enableHa",
          "userId": "71d48027-d471-4b01-83f9-830df4279f7e",
          "type": "api.call"
        },
        "start": 1761709884550,
        "status": "failure",
        "updatedAt": 1761709923544,
        "end": 1761709923544,
        "result": {
          "code": "INTERNAL_ERROR",
          "params": [
            "unable to gather the coordinator's UUID: Not_found"
          ],
          "call": {
            "duration": 38993,
            "method": "pool.enable_ha",
            "params": [
              "* session id *",
              [
                "OpaqueRef:a83a416f-c97d-1ed8-c7fc-213af89b8f86"
              ],
              {}
            ]
          },
          "message": "INTERNAL_ERROR(unable to gather the coordinator's UUID: Not_found)",
          "name": "XapiError",
          "stack": "XapiError: INTERNAL_ERROR(unable to gather the coordinator's UUID: Not_found)\n    at Function.wrap (file:///opt/xen-orchestra/packages/xen-api/_XapiError.mjs:16:12)\n    at file:///opt/xen-orchestra/packages/xen-api/transports/json-rpc.mjs:38:21\n    at runNextTicks (node:internal/process/task_queues:65:5)\n    at processImmediate (node:internal/timers:453:9)\n    at process.callbackTrampoline (node:internal/async_hooks:130:17)"
        }
      }
      
      posted in XCP-ng
      jmannik
    • RE: Unable to enable High Availability - INTERNAL_ERROR(Not_found)

      @psafont
      That is done now. I tried to enable HA again and it was unsuccessful; what would you like me to do now?

      posted in XCP-ng
      jmannik
    • RE: Unable to enable High Availability - INTERNAL_ERROR(Not_found)

      @tjkreidl This hasn't been my experience so far; enabling HA has just enabled HA, no reboot needed.

      @psafont I am patching all my hosts now and will try the above test packages on Sunday night (it is Friday afternoon at the time of this post).

      posted in XCP-ng
      jmannik
    • RE: Unable to enable High Availability - INTERNAL_ERROR(Not_found)

      @andriy.sultanov @psafont
      https://drive.google.com/file/d/1aJyCYSAuRIBb0X-23gJ6ORtrHSciYH8a/view?usp=sharing
      Here is the log file

      posted in XCP-ng
      jmannik
    • RE: Unable to enable High Availability - INTERNAL_ERROR(Not_found)

      @olivierlambert

      [18:15 vmhost13 ~]# xe pool-ha-enable heartbeat-sr-uuids=381caeb2-5ad9-8924-365d-4b130c67c064
      The server failed to handle your request, due to an internal error. The given message may give details useful for debugging the problem.
      message: Not_found

      posted in XCP-ng
      jmannik
    • Unable to enable High Availability - INTERNAL_ERROR(Not_found)

      Good morning all,

      I'm running into an issue with my pool where it won't let me enable HA, and I can't figure out why. It starts enabling HA and then just stops; the below shows up in the logs for the task list.

      {
        "id": "0mgsn7vq8",
        "properties": {
          "method": "pool.enableHa",
          "params": {
            "pool": "213186d2-e3ba-154f-d371-4122388deb83",
            "heartbeatSrs": [
              "381caeb2-5ad9-8924-365d-4b130c67c064"
            ],
            "configuration": {}
          },
          "name": "API call: pool.enableHa",
          "userId": "71d48027-d471-4b01-83f9-830df4279f7e",
          "type": "api.call"
        },
        "start": 1760572179296,
        "status": "failure",
        "updatedAt": 1760572219231,
        "end": 1760572219230,
        "result": {
          "code": "INTERNAL_ERROR",
          "params": [
            "Not_found"
          ],
          "call": {
            "duration": 39934,
            "method": "pool.enable_ha",
            "params": [
              "* session id *",
              [
                "OpaqueRef:a83a416f-c97d-1ed8-c7fc-213af89b8f86"
              ],
              {}
            ]
          },
          "message": "INTERNAL_ERROR(Not_found)",
          "name": "XapiError",
          "stack": "XapiError: INTERNAL_ERROR(Not_found)\n    at Function.wrap (file:///opt/xen-orchestra/packages/xen-api/_XapiError.mjs:16:12)\n    at file:///opt/xen-orchestra/packages/xen-api/transports/json-rpc.mjs:38:21\n    at runNextTicks (node:internal/process/task_queues:65:5)\n    at processImmediate (node:internal/timers:453:9)\n    at process.callbackTrampoline (node:internal/async_hooks:130:17)"
        }
      }
      
      posted in XCP-ng
      jmannik
    • RE: Unable to Migrate VM's to newly added host in pool

      @olivierlambert Appreciated; hopefully my frustration doesn't come across as rude, only as frustration.

      The biggest question mark for me is why the 1 Gbit interface decided to be eth1 despite all the other interfaces being on the one network card.
      Ideally (in my opinion), re-assigning the interface order in the future should be just like other config options: you click on the interface assignment in Xen Orchestra, type in the new name of the interface, and it is done. It would probably mean having to rename eth0 to eth5, renaming eth1 to eth0, and then renaming eth5 to eth1... but that is what I would expect.

      Another thing that would make things easier/better is the ability to reassign networks to different PIFs via something similar to the above.
      In an ideal world I don't want to have to drop to a CLI to make configuration changes unless I'm scripting a large number of things or doing something I would consider out of the ordinary.

      posted in Management
      jmannik
    • RE: Unable to Migrate VM's to newly added host in pool

      Just to be clear, everything is working as I want it right now... so I don't want to poke it, for fear of it breaking again.

      posted in Management
      jmannik
    • RE: Unable to Migrate VM's to newly added host in pool

      So, just to update on this whole thing: after building a new pool and migrating everything over, it is all sorted and working as expected now.
      I don't know what is different, because I could not find any difference between the old pool and the new pool.

      Something that was incredibly frustrating, and that I found rather backwards to set up and get working correctly, was the interface order.
      Each host has 1x 1 Gbit and 4x 10 Gbit network interfaces. Two of the three hosts had the interfaces ordered as expected, with the 1 Gbit interface being eth0 (the network cards are Chelsio T540-CR cards; the 1 Gbit port is the onboard Realtek).
      One host decided to have the 1 Gbit interface on eth1 for no apparent reason.
      The process of renaming and reordering the interfaces was the most convoluted, frustrating experience with configuring basic networking I've had in a long time.
      I do not understand why it is done the way it is. Why does it take so much work to do this? It seems like a very basic thing that should be easy. The process I had to follow to fix this was as follows:

      • SSH into the host
      • Re-assign the management interface to eth4
      • Reboot the host
      • SSH back into the host via eth4
      • Shut down the interfaces eth0 and eth1
      • Run interface-rename --rename eth0=<MAC of the 1 Gbit interface> eth1=<MAC of the first 10 Gbit port> (see the sketch after this list)
      • Run the same command with --update instead of --rename (if I did not run both commands in that order it would NOT work)
      • Re-enable the two interfaces and reboot again.
      • Remove and recreate the PIFs for the above two interfaces.
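
      A rough sketch of steps 6, 7 and 9 above. The MAC addresses and UUIDs are placeholders, and the pif commands are only my reading of "remove and recreate the PIFs"; adjust to your own hardware:

      # Bind eth0 to the onboard 1 Gbit NIC and eth1 to the first 10 Gbit port (placeholder MACs)
      interface-rename --rename eth0=aa:bb:cc:dd:ee:01 eth1=aa:bb:cc:dd:ee:02

      # Run the same mapping again with --update (both passes, in this order, were needed here)
      interface-rename --update eth0=aa:bb:cc:dd:ee:01 eth1=aa:bb:cc:dd:ee:02

      # Recreate the PIF records so xapi picks up the renamed devices
      xe pif-forget uuid=<old-pif-uuid>
      xe pif-scan host-uuid=<host-uuid>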

      I spent an entire weekend working out this process.

      I don't understand why this had to be this hard to do, nor do I understand why the interfaces were in the wrong order on one host of the three. Identical motherboards and configuration; the only difference is that one of the two boxes with the correct interface order has a 3900XT instead of a 5900X CPU.

      posted in Management
      jmannik
    • RE: Unable to Migrate VM's to newly added host in pool

      @olivierlambert
      I would have thought that too... except it does it for every VM

      At this stage I am just going to rebuild the pool and migrate the VMs and other hosts across to it instead.

      posted in Management
      jmannik
    • Unable to Migrate VM's to newly added host in pool

      I am trying to live migrate VMs to a freshly added host in the pool.
      I am getting the below error:

      vm.migrate
      {
        "vm": "5270ed06-85dc-1050-dc26-789a39a6ca0a",
        "migrationNetwork": "b012ab5c-bbe8-ccca-bbe7-596468bb04cf",
        "targetHost": "b2857df4-26dd-4e93-9ab0-60cc1f753bd1"
      }
      {
        "code": "VM_LACKS_FEATURE",
        "params": [
          "OpaqueRef:10dfa2af-92c1-194a-a787-8ffa0c44adee"
        ],
        "task": {
          "uuid": "cd8fa000-92d5-3b9e-578a-c3359f5c3814",
          "name_label": "Async.VM.migrate_send",
          "name_description": "",
          "allowed_operations": [],
          "current_operations": {},
          "created": "20250531T11:53:50Z",
          "finished": "20250531T11:53:50Z",
          "status": "failure",
          "resident_on": "OpaqueRef:aa4ba7f4-b2d8-6068-e502-a8bc50a06177",
          "progress": 1,
          "type": "<none/>",
          "result": "",
          "error_info": [
            "VM_LACKS_FEATURE",
            "OpaqueRef:10dfa2af-92c1-194a-a787-8ffa0c44adee"
          ],
          "other_config": {},
          "subtask_of": "OpaqueRef:NULL",
          "subtasks": [],
          "backtrace": "(((process xapi)(filename ocaml/xapi/xapi_vm_lifecycle.ml)(line 743))((process xapi)(filename ocaml/xapi/xapi_vm_helpers.ml)(line 1652))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/helpers.ml)(line 1706))((process xapi)(filename ocaml/xapi/xapi_vm_helpers.ml)(line 1651))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 2612))((process xapi)(filename ocaml/xapi/rbac.ml)(line 188))((process xapi)(filename ocaml/xapi/rbac.ml)(line 197))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 77)))"
        },
        "message": "VM_LACKS_FEATURE(OpaqueRef:10dfa2af-92c1-194a-a787-8ffa0c44adee)",
        "name": "XapiError",
        "stack": "XapiError: VM_LACKS_FEATURE(OpaqueRef:10dfa2af-92c1-194a-a787-8ffa0c44adee)
          at Function.wrap (file:///opt/xen-orchestra/packages/xen-api/_XapiError.mjs:16:12)
          at default (file:///opt/xen-orchestra/packages/xen-api/_getTaskResult.mjs:13:29)
          at Xapi._addRecordToCache (file:///opt/xen-orchestra/packages/xen-api/index.mjs:1072:24)
          at file:///opt/xen-orchestra/packages/xen-api/index.mjs:1106:14
          at Array.forEach (<anonymous>)
          at Xapi._processEvents (file:///opt/xen-orchestra/packages/xen-api/index.mjs:1096:12)
          at Xapi._watchEvents (file:///opt/xen-orchestra/packages/xen-api/index.mjs:1269:14)"
      }
      

      The VM is a Windows-based VM; the latest tools were downloaded and installed on the same day as this post.

      It is moving from a Ryzen 5900X host to another Ryzen 5900X host.

      It is using shared storage (NFS Share).

      I really wish the error message would specify which feature is lacking, as it would make this much easier to troubleshoot.
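
      In case it helps anyone else hitting VM_LACKS_FEATURE, a couple of read-only checks worth starting with (the UUID is the VM from the task above; exact parameter names may vary slightly between releases):

      # Which operations xapi currently allows for the VM
      xe vm-param-get uuid=5270ed06-85dc-1050-dc26-789a39a6ca0a param-name=allowed-operations

      # Whether the guest tools are reporting in (live migration needs working PV drivers)
      xe vm-param-get uuid=5270ed06-85dc-1050-dc26-789a39a6ca0a param-name=PV-drivers-version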

      posted in Management
      jmannik
    • RE: My XCP-NG Homelab 2023 edition

      Ahh how the homelab changes over time...
      Hosts are still the same, but storage, network setup, and location have all changed.
      Will have to post a new updated thread on my current server setup.

      posted in Share your setup!
      jmannik
    • RE: First SMAPIv3 driver is available in preview

      So this can't be used for NFS SRs yet, then?

      posted in Development
      jmannik