XCP-ng
    Posts by dsiminiuk

    • RE: Rename Networks on XO Hosts

      And the secret to this was using the script correctly (--rename instead of --update).

      interface-rename --rename eth0=3c:a8:2a:15:4a:9c
      
      Name  MAC                PCI              ethN  Phys  SMBios                         Driver  Version  Firmware                 
      eth0  3c:a8:2a:15:4a:9c  0000:02:00.0[0]  eth0  em1   Embedded LOM 1 Port 1          tg3     3.137    5719-v1.46 NCSI v1.5.12.0
      eth1  3c:a8:2a:15:4a:9d  0000:02:00.1[0]  eth1  em2   Embedded LOM 1 Port 2          tg3     3.137    5719-v1.46 NCSI v1.5.12.0
      eth2  3c:a8:2a:15:4a:9e  0000:02:00.2[0]  eth2  em3   Embedded LOM 1 Port 3          tg3     3.137    5719-v1.46 NCSI v1.5.12.0
      eth3  3c:a8:2a:15:4a:9f  0000:02:00.3[0]  eth3  em4   Embedded LOM 1 Port 4          tg3     3.137    5719-v1.46 NCSI v1.5.12.0
      eth4  38:ea:a7:12:de:14  0000:04:00.0[0]  eth4  em49  Embedded FlexibleLOM 1 Port 1  ixgbe   5.9.4    0x800009e0, 1.3089.0     
      eth5  38:ea:a7:12:de:15  0000:04:00.1[0]  eth5  em50  Embedded FlexibleLOM 1 Port 2  ixgbe   5.9.4    0x800009e0, 1.3089.0 
      

      All done.
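
      For completeness, the whole sequence for my four LOM ports would be (a sketch: I am assuming interface-rename accepts several name=MAC pairs in one invocation; the MACs and host UUID are taken from my posts below):

      # Rename all four LOM ports by MAC in one pass (eth6-9 -> eth0-3).
      interface-rename --rename eth0=3c:a8:2a:15:4a:9c eth1=3c:a8:2a:15:4a:9d \
                       eth2=3c:a8:2a:15:4a:9e eth3=3c:a8:2a:15:4a:9f

      # Re-create the PIF objects under their new names.
      xe pif-scan host-uuid=7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e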

      posted in Xen Orchestra
    • RE: Rename Networks on XO Hosts

      For example, rename eth6 to eth0...

      ~]# ifconfig eth6 down
      
      ~]# xe pif-list host-uuid=7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e device=eth6
      uuid ( RO)                  : ab7303f0-accd-7458-2a1f-e03906a85418
                      device ( RO): eth6
          currently-attached ( RO): false
                        VLAN ( RO): -1
                network-uuid ( RO): 69435d48-6bf5-c76b-9df9-8e276e2b51fc
      
      ~]# xe pif-forget uuid=ab7303f0-accd-7458-2a1f-e03906a85418
      
      ~]# interface-rename --update eth0=3c:a8:2a:15:4a:9c
      INFO     [2025-11-07 09:56:27] Performing manual update of rules.  Not actually renaming interfaces
      INFO     [2025-11-07 09:56:27] All done
      
      ~]# ifconfig eth0 up
      eth0: ERROR while getting interface flags: No such device
      

      I can remove the NICs (the PIFs), but if I rescan, they appear as eth6-9 again.
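
      In hindsight, the INFO line explains it: --update only rewrites the rename rules and does not touch the live interfaces, so the freshly forgotten PIF had nothing to come back up as. A way to check that the rules were written (a sketch: I am assuming the rules live under /etc/sysconfig/network-scripts/interface-rename-data/, as on stock XenServer/XCP-ng):

      # The static rules should now map the MAC to eth0, even though the
      # kernel device keeps its old name until an actual rename or reboot.
      cat /etc/sysconfig/network-scripts/interface-rename-data/static-rules.conf

      # Show which name the kernel is still using for that MAC.
      ip link show | grep -B1 -i '3c:a8:2a:15:4a:9c'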

      posted in Xen Orchestra
    • RE: Rename Networks on XO Hosts

      I have 2 hosts.
      I have enabled the 4-port LOM NIC in the BIOS on my pool master. The ethN interfaces appear correctly, but the Name column does not match them.

      Name  MAC                PCI              ethN  Phys  SMBios                         Driver  Version  Firmware                 
      eth4  38:ea:a7:12:de:14  0000:04:00.0[0]  eth4  em49  Embedded FlexibleLOM 1 Port 1  ixgbe   5.9.4    0x800009e0, 1.3089.0     
      eth5  38:ea:a7:12:de:15  0000:04:00.1[0]  eth5  em50  Embedded FlexibleLOM 1 Port 2  ixgbe   5.9.4    0x800009e0, 1.3089.0     
      eth6  3c:a8:2a:15:4a:9c  0000:02:00.0[0]  eth0  em1   Embedded LOM 1 Port 1          tg3     3.137    5719-v1.46 NCSI v1.5.12.0
      eth7  3c:a8:2a:15:4a:9d  0000:02:00.1[0]  eth1  em2   Embedded LOM 1 Port 2          tg3     3.137    5719-v1.46 NCSI v1.5.12.0
      eth8  3c:a8:2a:15:4a:9e  0000:02:00.2[0]  eth2  em3   Embedded LOM 1 Port 3          tg3     3.137    5719-v1.46 NCSI v1.5.12.0
      eth9  3c:a8:2a:15:4a:9f  0000:02:00.3[0]  eth3  em4   Embedded LOM 1 Port 4          tg3     3.137    5719-v1.46 NCSI v1.5.12.0
      

      I would like to change the Name in the first column to match the ethN column, as it does on my second host (which had the NIC enabled previously).

      Name  MAC                PCI              ethN  Phys  SMBios                         Driver  Version  Firmware                 
      eth0  3c:a8:2a:1e:4e:e8  0000:02:00.0[0]  eth0  em1   Embedded LOM 1 Port 1          tg3     3.137    5719-v1.46 NCSI v1.5.33.0
      eth1  3c:a8:2a:1e:4e:e9  0000:02:00.1[0]  eth1  em2   Embedded LOM 1 Port 2          tg3     3.137    5719-v1.46 NCSI v1.5.33.0
      eth2  3c:a8:2a:1e:4e:ea  0000:02:00.2[0]  eth2  em3   Embedded LOM 1 Port 3          tg3     3.137    5719-v1.46 NCSI v1.5.33.0
      eth3  3c:a8:2a:1e:4e:eb  0000:02:00.3[0]  eth3  em4   Embedded LOM 1 Port 4          tg3     3.137    5719-v1.46 NCSI v1.5.33.0
      eth4  5c:b9:01:8a:c0:e0  0000:04:00.0[0]  eth4  em49  Embedded FlexibleLOM 1 Port 1  ixgbe   5.9.4    0x800009e0, 1.3089.0     
      eth5  5c:b9:01:8a:c0:e1  0000:04:00.1[0]  eth5  em50  Embedded FlexibleLOM 1 Port 2  ixgbe   5.9.4    0x800009e0, 1.3089.0 
      

      eth4 and eth5 match on both hosts, and I don't want to break those because they carry existing data and NFS.

      I just want to rename eth6-9 on the pool master to match eth0-3 on the second host.

      I have tried the interface-rename method, but I am not getting what I expect. All the examples I have seen swap one existing name for another existing name, not change a name to one that is not already in the list.

      Any guidance would be most helpful.

      posted in Xen Orchestra
    • RE: Rolling Pool Update - host took too long to restart

      @tuxpowered said in Rolling Pool Update - host took too long to restart:

      When the first node reboots (the master), I can see that the system is back up in 5-8 min. If I go into XO > Settings > Servers and click the Enable/Disable status button to reconnect, it pops right up. Again, it does not resume migrating the other nodes.

      That is what I am seeing also; logged here: https://xcp-ng.org/forum/topic/9683/rolling-pool-update-incomplete

      posted in Xen Orchestra
    • RE: Rolling Pool Update - host took too long to restart

      @tuxpowered I'm wondering if this is a network connectivity issue. 😎

      When the rolling pool update stops, what does your route table look like on the master (can it reach the other node)?

      Is your VPN layer 3 (routed), layer 2 (non-routed), or an IPsec tunnel?
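
      For reference, the checks I mean, run from dom0 on the master (the peer address is a placeholder):

      # Is there a route covering the other node's subnet?
      ip route show

      # Basic reachability over the management network.
      ping -c 3 <other-node-mgmt-ip>

      # The addresses XAPI is using for each host.
      xe host-list params=name-label,address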

      posted in Xen Orchestra
    • RE: Rolling Pool Update incomplete

      Perhaps this was because there were 3 pages of failed tasks?
      I have deleted them all with xo-cli and I'll see how the next patch cycle goes.
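
      For the record, the cleanup looked roughly like this (a sketch: I am assuming xo-cli's REST verbs work against the tasks collection; the task ID is a placeholder taken from the listing):

      # List tasks (including failed ones) to collect their IDs.
      xo-cli rest get tasks

      # Delete one failed task by ID.
      xo-cli rest del tasks/<task-id>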

      posted in Management
    • Rolling Pool Update incomplete

      Over the last two patch bundles, my Rolling Pool Update has failed to complete:
      VMs are evacuated from the pool master to the pool slave (a 2-node pool).
      Patches are applied to the pool master.
      The pool master reboots.
      Once I can see that the pool master console is up, I reconnect XOA to the master.
      I wait, wait, and wait some more, and nothing else happens after that. The VMs remain on the slave server.
      The "Rolling pool update" task is still in the "Started" state.
      There is an "API call: vm.stats" task, started just after it, in a failed state.

      {
        "id": "0m1ku3cog",
        "properties": {
          "method": "vm.stats",
          "params": {
            "id": "ad5850fb-8264-18e2-c974-9df9ccaa6ccc"
          },
          "name": "API call: vm.stats",
          "userId": "2844af20-1bee-43f8-9b91-e1ac3b49239f",
          "type": "api.call"
        },
        "start": 1727448260848,
        "status": "failure",
        "updatedAt": 1727448260937,
        "end": 1727448260937,
        "result": {
          "message": "unusable",
          "name": "TypeError",
          "stack": "TypeError: unusable\n    at /opt/xo/xo-builds/xen-orchestra-202409271433/node_modules/undici/lib/api/readable.js:224:34\n    at Promise._execute (/opt/xo/xo-builds/xen-orchestra-202409271433/node_modules/bluebird/js/release/debuggability.js:384:9)\n    at Promise._resolveFromExecutor (/opt/xo/xo-builds/xen-orchestra-202409271433/node_modules/bluebird/js/release/promise.js:518:18)\n    at new Promise (/opt/xo/xo-builds/xen-orchestra-202409271433/node_modules/bluebird/js/release/promise.js:103:10)\n    at consume (/opt/xo/xo-builds/xen-orchestra-202409271433/node_modules/undici/lib/api/readable.js:212:10)\n    at BodyReadable.text (/opt/xo/xo-builds/xen-orchestra-202409271433/node_modules/undici/lib/api/readable.js:111:12)\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi-stats.mjs:258:39\n    at XapiStats._getAndUpdateStats (file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi-stats.mjs:319:18)\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at Api.#callApiMethod (file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xo-mixins/api.mjs:402:20)"
        }
      }
      

      The "Rolling pool update" task finally timed out.

      {
        "id": "0m1ku1muz",
        "properties": {
          "poolId": "ac9585b6-39ac-016c-f864-8c75b00c082b",
          "poolName": "Gen9",
          "progress": 40,
          "name": "Rolling pool update",
          "userId": "2844af20-1bee-43f8-9b91-e1ac3b49239f"
        },
        "start": 1727448180731,
        "status": "failure",
        "updatedAt": 1727449839025,
        "tasks": [
          {
            "id": "y5cpp9ox6vg",
            "properties": {
              "name": "Listing missing patches",
              "total": 2,
              "progress": 100
            },
            "start": 1727448180736,
            "status": "success",
            "tasks": [
              {
                "id": "x7rtdxs48rl",
                "properties": {
                  "name": "Listing missing patches for host 0366c500-c154-4967-8f12-fc45cf9390a5",
                  "hostId": "0366c500-c154-4967-8f12-fc45cf9390a5",
                  "hostName": "xcpng02"
                },
                "start": 1727448180737,
                "status": "success",
                "end": 1727448180738
              },
              {
                "id": "11re7s9vjpkb",
                "properties": {
                  "name": "Listing missing patches for host 7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e",
                  "hostId": "7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e",
                  "hostName": "xcpng01"
                },
                "start": 1727448180737,
                "status": "success",
                "end": 1727448180738
              }
            ],
            "end": 1727448180738
          },
          {
            "id": "pq07ubwpo",
            "properties": {
              "name": "Updating and rebooting"
            },
            "start": 1727448180738,
            "status": "failure",
            "tasks": [
              {
                "id": "kut3i9gog4m",
                "properties": {
                  "name": "Restarting hosts",
                  "progress": 33
                },
                "start": 1727448180824,
                "status": "failure",
                "tasks": [
                  {
                    "id": "54axt2hik3c",
                    "properties": {
                      "name": "Restarting host 7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e",
                      "hostId": "7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e",
                      "hostName": "xcpng01"
                    },
                    "start": 1727448180824,
                    "status": "failure",
                    "tasks": [
                      {
                        "id": "ocube7c6kmf",
                        "properties": {
                          "name": "Evacuate",
                          "hostId": "7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e",
                          "hostName": "xcpng01"
                        },
                        "start": 1727448181014,
                        "status": "success",
                        "end": 1727448592236
                      },
                      {
                        "id": "millse8k12o",
                        "properties": {
                          "name": "Installing patches",
                          "hostId": "7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e",
                          "hostName": "xcpng01"
                        },
                        "start": 1727448592237,
                        "status": "success",
                        "end": 1727448638798
                      },
                      {
                        "id": "5aazbc573dg",
                        "properties": {
                          "name": "Restart",
                          "hostId": "7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e",
                          "hostName": "xcpng01"
                        },
                        "start": 1727448638799,
                        "status": "success",
                        "end": 1727448638986
                      },
                      {
                        "id": "1roala4dv69",
                        "properties": {
                          "name": "Waiting for host to be up",
                          "hostId": "7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e",
                          "hostName": "xcpng01"
                        },
                        "start": 1727448638986,
                        "status": "failure",
                        "end": 1727449839025,
                        "result": {
                          "message": "Host 7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e took too long to restart",
                          "name": "Error",
                          "stack": "Error: Host 7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e took too long to restart\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:152:17\n    at /opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:54:40\n    at AsyncLocalStorage.run (node:async_hooks:346:14)\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:41)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:31)\n    at Function.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:54:27)\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:142:24\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:112:11\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at Xapi.rollingPoolReboot (file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:102:5)\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/patching.mjs:524:7\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at XenServers.rollingPoolUpdate (file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xo-mixins/xen-servers.mjs:703:5)\n    at Xo.rollingUpdate (file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/api/pool.mjs:243:3)\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)"
                        }
                      }
                    ],
                    "end": 1727449839025,
                    "result": {
                      "message": "Host 7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e took too long to restart",
                      "name": "Error",
                      "stack": "Error: Host 7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e took too long to restart\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:152:17\n    at /opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:54:40\n    at AsyncLocalStorage.run (node:async_hooks:346:14)\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:41)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:31)\n    at Function.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:54:27)\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:142:24\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:112:11\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at Xapi.rollingPoolReboot (file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:102:5)\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/patching.mjs:524:7\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at XenServers.rollingPoolUpdate (file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xo-mixins/xen-servers.mjs:703:5)\n    at Xo.rollingUpdate (file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/api/pool.mjs:243:3)\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)"
                    }
                  }
                ],
                "end": 1727449839025,
                "result": {
                  "message": "Host 7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e took too long to restart",
                  "name": "Error",
                  "stack": "Error: Host 7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e took too long to restart\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:152:17\n    at /opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:54:40\n    at AsyncLocalStorage.run (node:async_hooks:346:14)\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:41)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:31)\n    at Function.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:54:27)\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:142:24\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:112:11\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at Xapi.rollingPoolReboot (file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:102:5)\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/patching.mjs:524:7\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at XenServers.rollingPoolUpdate (file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xo-mixins/xen-servers.mjs:703:5)\n    at Xo.rollingUpdate (file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/api/pool.mjs:243:3)\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)"
                }
              }
            ],
            "end": 1727449839025,
            "result": {
              "message": "Host 7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e took too long to restart",
              "name": "Error",
              "stack": "Error: Host 7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e took too long to restart\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:152:17\n    at /opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:54:40\n    at AsyncLocalStorage.run (node:async_hooks:346:14)\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:41)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:31)\n    at Function.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:54:27)\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:142:24\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:112:11\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at Xapi.rollingPoolReboot (file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:102:5)\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/patching.mjs:524:7\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at XenServers.rollingPoolUpdate (file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xo-mixins/xen-servers.mjs:703:5)\n    at Xo.rollingUpdate (file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/api/pool.mjs:243:3)\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)"
            }
          }
        ],
        "end": 1727449839025,
        "result": {
          "message": "Host 7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e took too long to restart",
          "name": "Error",
          "stack": "Error: Host 7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e took too long to restart\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:152:17\n    at /opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:54:40\n    at AsyncLocalStorage.run (node:async_hooks:346:14)\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:41)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:31)\n    at Function.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:54:27)\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:142:24\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:112:11\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at Xapi.rollingPoolReboot (file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:102:5)\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/patching.mjs:524:7\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at XenServers.rollingPoolUpdate (file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xo-mixins/xen-servers.mjs:703:5)\n    at Xo.rollingUpdate (file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/api/pool.mjs:243:3)\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)"
        }
      }
      

      The way I have mitigated this in the past was to evacuate the slave node and apply the patches to it manually.
      Then I have to go into the pool master and clean up the failed/stale tasks.

      It's a bloody mess.

      Any advice is welcome.
      Danny

      posted in Management
    • RE: Rolling Pool Update - host took too long to restart

      @olivierlambert I finally had a chance to apply patches to the two ProLiant servers with the 20-minute boot time, and everything worked as expected.

      posted in Xen Orchestra
    • RE: Unable to unblock a vm for reversion of snapshot

      @Danp I tried some of the xe commands listed in that post, like xe vm-param-clear and xe vm-param-remove, and wasn't successful.

      posted in XCP-ng
    • RE: Unable to unblock a vm for reversion of snapshot

      Additional observations...

      When a VM is set to prevent accidental deletion (only)...

      blocked-operations (MRW): destroy: true
      

      When set to prevent accidental shutdown (only)...

      blocked-operations (MRW): clean_shutdown: true; (unknown operation): true; pause: true; hard_reboot: true; suspend: true; hard_shutdown: true; clean_reboot: true
      

      And of course, with both options enabled, it is the union of both sets.

      blocked-operations (MRW): destroy: true; clean_shutdown: true; (unknown operation): true; pause: true; hard_reboot: true; suspend: true; hard_shutdown: true; clean_reboot: true
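
      For reference, removing a single named key from that map is straightforward (a sketch; the VM UUID is a placeholder). Only the "(unknown operation)" key resists, because you cannot type its name:

      # Remove one key from the blocked-operations map.
      xe vm-param-remove uuid=<vm-uuid> param-name=blocked-operations param-key=destroy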
      

      Danny

      posted in XCP-ng
    • RE: Unable to unblock a vm for reversion of snapshot

      I figured out another way around it. I imported a new XO VM built with the same script and connected it to my pool master just long enough to revert to the snapshot of the XO I wanted to keep.
      I was then able to log in and delete the temporary XO VM.
      I would still like to understand what
      blocked-operations (MRW): (unknown operation): true;
      means.

      Anyway, not urgent.
      Thanks
      Danny

      posted in XCP-ng
    • Unable to unblock a vm for reversion of snapshot

      I am using a pre-built XO VM running on my pool master, created from a script by ronivay, and it has been working well. I had the VM set to prevent accidental deletion and shutdown, and with autostart. No problem.

      I took a snapshot of it and upgraded the kernel to HWE (6.x), and upon reboot it never came back up on the network.

      I tried to revert to the snapshot via xe...

      xe snapshot-revert snapshot-uuid=1a91d725-65b0-7bb7-f70f-0be5903e8d44
      You attempted an operation that was explicitly blocked (see the blocked_operations field of the given object).
      ref: e8d5def9-e079-d7a6-e106-fe8d96f55cac (xo-ce)
      code: false
      

      And so I attempted to set all of the blocked-operations VM parameters to false, and I got all of them except one.

      xe vm-param-list uuid=e8d5def9-e079-d7a6-e106-fe8d96f55cac | fgrep blocked
      
      blocked-operations (MRW): destroy: false; pause: false; clean_reboot: false; suspend: false; hard_reboot: false; hard_shutdown: false; clean_shutdown: false; (unknown operation): true
      

      I can't revert to the snapshot, and I am still unable to set the "(unknown operation)" parameter to false.
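
      My working theory on "(unknown operation)": blocked-operations is a map keyed by the vm_operations enum, and xe prints that label when the key is an operation this build of xe has no name for, so there is no param-key you can pass to vm-param-remove. Replacing the whole map over the API would side-step that; a sketch to run in dom0, assuming the XenAPI Python bindings shipped with the host (run with python, or python3 on newer releases) and that VM.set_blocked_operations accepts an empty map:

      # /tmp/unblock.py -- clears the entire blocked-operations map.
      import XenAPI

      # Local session against this host's XAPI.
      s = XenAPI.xapi_local()
      s.xenapi.login_with_password('', '')
      try:
          vm = s.xenapi.VM.get_by_uuid('e8d5def9-e079-d7a6-e106-fe8d96f55cac')
          # Replace the whole map in one call, the unnameable key included.
          s.xenapi.VM.set_blocked_operations(vm, {})
      finally:
          s.xenapi.session.logout()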

      I have a backup of the XO config, so I could start over, but it would be nice not to have to do that.

      Any pointers would be most welcome.

      Thanks
      Danny

      posted in XCP-ng
    • RE: Rolling Pool Update - host took too long to restart

      @olivierlambert I've made the needed adjustment in the build script to override the default. Now I wait for another set of patches to test it.
      Thanks all.

      posted in Xen Orchestra
    • RE: Rolling Pool Update - host took too long to restart

      @DustinB Not a faulty disk. It appears to be memory testing at boot time, and it does the same thing at other times after init.

      The cluster is a pair of HPE ProLiant DL580 Gen9 servers, each with 2TB of RAM.

      Yes, I could turn off memory checking during startup, but I'd rather not.

      Danny

      posted in Xen Orchestra
    • Rolling Pool Update - host took too long to restart

      Rolling pool updates fail because the master takes too long to restart.
      What is considered too long?
      Perhaps this should be a setting in the server config to override the default.
      In this case, my servers take about 20 minutes to reboot.
      Is there a config item I can adjust? (A sketch of the override follows the error output below.)

      pool.rollingUpdate
      {
        "pool": "3cfffa75-69ea-7792-a320-92a7cb33f6f8"
      }
      {
        "message": "Host b725c95c-17af-41ae-a9c5-deeb1b7bfc50 took too long to restart",
        "name": "Error",
        "stack": "Error: Host b725c95c-17af-41ae-a9c5-deeb1b7bfc50 took too long to restart
          at Xapi.rollingPoolReboot (file:///opt/xo/xo-builds/xen-orchestra-202404111938/packages/xo-server/src/xapi/mixins/pool.mjs:127:9)
          at Xapi.rollingPoolUpdate (file:///opt/xo/xo-builds/xen-orchestra-202404111938/packages/xo-server/src/xapi/mixins/patching.mjs:506:5)
          at XenServers.rollingPoolUpdate (file:///opt/xo/xo-builds/xen-orchestra-202404111938/packages/xo-server/src/xo-mixins/xen-servers.mjs:689:5)
          at Xo.rollingUpdate (file:///opt/xo/xo-builds/xen-orchestra-202404111938/packages/xo-server/src/api/pool.mjs:231:3)
          at Api.#callApiMethod (file:///opt/xo/xo-builds/xen-orchestra-202404111938/packages/xo-server/src/xo-mixins/api.mjs:366:20)"
      }
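
      For reference, the override I later wired into my build script (a sketch: I am assuming xo-server's xapiOptions.restartHostTimeout setting and its human-readable duration strings):

      # In xo-server's config.toml (location depends on the install).
      [xapiOptions]
      restartHostTimeout = '40 minutes'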
      
      posted in Xen Orchestra
    • RE: Restoring from backup error: self-signed certificate

      @Danp Is it safe to say this is being worked on? I don't want to assume anything.

      posted in Backup
    • RE: Patching behind a corporate proxy server

      @Gheppy A follow-up... Patches appeared in XO right after updating the files (prior to an XCP-ng reboot). Just FYI.

      posted in Management
    • RE: Patching behind a corporate proxy server

      @Gheppy Thank you.

      posted in Management
    • RE: Patching behind a corporate proxy server

      @olivierlambert Not production yet. I'm setting up a proof of concept so that management can see there are alternatives to VMware/Broadcom.

      posted in Management
    • Patching behind a corporate proxy server

      I have 2 HPE servers and a separate VM with XO built from sources using ronivay's script.

      I have the web proxy set up in the XO VM for yarn, apt, and global use, and everything works there. I can do apt updates and XO updates.

      I am trying to understand what I need to set elsewhere in XCP-ng for patching to work, i.e. detection, download, and deployment. I have set the same proxy in the proxy setting on the master host, but I have no idea whether that is the correct thing to do, or whether I need to set up something else on the XO instance.
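
      If the usual advice applies here (and my follow-up above suggests it did), the missing piece is dom0's yum proxy on each host, since patch detection and download go through yum; a sketch, with a hypothetical proxy URL:

      # Append to /etc/yum.conf on every XCP-ng host.
      echo 'proxy=http://proxy.example.com:3128' >> /etc/yum.conf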

      I think this is the last issue before I can say the environment is ready for prime time.

      Thanks
      Danny

      posted in Management