XCP-ng
    dsiminiuk

    Posts

    • RE: Rolling Pool Update - host took too long to restart

      @tuxpowered said in Rolling Pool Update - host took too long to restart:

      When the first node reboots (the master), I can see that the system is back up in 5-8 min. If I go into XO > Settings > Servers and click the Enable/Disable status button to reconnect, it pops right up. Again, it does not resume migrating the other nodes.

      That is what I am seeing also, logged here https://xcp-ng.org/forum/topic/9683/rolling-pool-update-incomplete

      posted in Xen Orchestra
      dsiminiuk
    • RE: Rolling Pool Update - host took too long to restart

      @tuxpowered I'm wondering if this is a network connectivity issue. 😎

      When the rolling pool update stops, what does your route table look like on the master (can it reach the other node)?

      Is your VPN layer 3 (routed), layer 2 (non-routed), or an IPsec tunnel?

      posted in Xen Orchestra
      dsiminiuk
    • RE: Rolling Pool Update incomplete

      Perhaps this was because there were 3 pages of failed tasks?
      I have deleted them all with xo-cli and I'll see how the next patch cycle goes.
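      The cleanup, roughly, looked like this (a sketch only: the `failed_ids` helper and the sample input are illustrative, and the delete call in the comment assumes an xo-cli build that has the REST subcommands — verify against your version):

```shell
# Print the ids of failed tasks, given "id status" pairs on stdin.
failed_ids() {
  awk '$2 == "failure" { print $1 }'
}

# Sample input shaped like the task records in this thread:
printf '0m1ku3cog failure\n0m1ku1muz failure\nzzz999 success\n' | failed_ids
# Each printed id can then be deleted one at a time, e.g.
# (assumption, not verified syntax):
#   xo-cli rest del tasks/<id>
```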

      posted in Management
      dsiminiuk
    • Rolling Pool Update incomplete

      Over the last two patch bundles, my Rolling Pool Update has failed to complete:
      VMs are evacuated from the pool master to the pool slave (a 2-node pool).
      Patches are applied to the pool master.
      The pool master reboots.
      Once I can see that the pool master console is up, I reconnect XOA to the master.
      I wait, wait, and wait some more, and nothing else happens after that. The VMs remain on the slave server.
      The "Rolling pool update" task is still in the "Started" state.
      There is an "API call: vm.stats" task that started after that and is in a failed state.

      {
        "id": "0m1ku3cog",
        "properties": {
          "method": "vm.stats",
          "params": {
            "id": "ad5850fb-8264-18e2-c974-9df9ccaa6ccc"
          },
          "name": "API call: vm.stats",
          "userId": "2844af20-1bee-43f8-9b91-e1ac3b49239f",
          "type": "api.call"
        },
        "start": 1727448260848,
        "status": "failure",
        "updatedAt": 1727448260937,
        "end": 1727448260937,
        "result": {
          "message": "unusable",
          "name": "TypeError",
          "stack": "TypeError: unusable\n    at /opt/xo/xo-builds/xen-orchestra-202409271433/node_modules/undici/lib/api/readable.js:224:34\n    at Promise._execute (/opt/xo/xo-builds/xen-orchestra-202409271433/node_modules/bluebird/js/release/debuggability.js:384:9)\n    at Promise._resolveFromExecutor (/opt/xo/xo-builds/xen-orchestra-202409271433/node_modules/bluebird/js/release/promise.js:518:18)\n    at new Promise (/opt/xo/xo-builds/xen-orchestra-202409271433/node_modules/bluebird/js/release/promise.js:103:10)\n    at consume (/opt/xo/xo-builds/xen-orchestra-202409271433/node_modules/undici/lib/api/readable.js:212:10)\n    at BodyReadable.text (/opt/xo/xo-builds/xen-orchestra-202409271433/node_modules/undici/lib/api/readable.js:111:12)\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi-stats.mjs:258:39\n    at XapiStats._getAndUpdateStats (file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi-stats.mjs:319:18)\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at Api.#callApiMethod (file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xo-mixins/api.mjs:402:20)"
        }
      }
      

      The "Rolling pool update" task finally timed out.

      {
        "id": "0m1ku1muz",
        "properties": {
          "poolId": "ac9585b6-39ac-016c-f864-8c75b00c082b",
          "poolName": "Gen9",
          "progress": 40,
          "name": "Rolling pool update",
          "userId": "2844af20-1bee-43f8-9b91-e1ac3b49239f"
        },
        "start": 1727448180731,
        "status": "failure",
        "updatedAt": 1727449839025,
        "tasks": [
          {
            "id": "y5cpp9ox6vg",
            "properties": {
              "name": "Listing missing patches",
              "total": 2,
              "progress": 100
            },
            "start": 1727448180736,
            "status": "success",
            "tasks": [
              {
                "id": "x7rtdxs48rl",
                "properties": {
                  "name": "Listing missing patches for host 0366c500-c154-4967-8f12-fc45cf9390a5",
                  "hostId": "0366c500-c154-4967-8f12-fc45cf9390a5",
                  "hostName": "xcpng02"
                },
                "start": 1727448180737,
                "status": "success",
                "end": 1727448180738
              },
              {
                "id": "11re7s9vjpkb",
                "properties": {
                  "name": "Listing missing patches for host 7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e",
                  "hostId": "7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e",
                  "hostName": "xcpng01"
                },
                "start": 1727448180737,
                "status": "success",
                "end": 1727448180738
              }
            ],
            "end": 1727448180738
          },
          {
            "id": "pq07ubwpo",
            "properties": {
              "name": "Updating and rebooting"
            },
            "start": 1727448180738,
            "status": "failure",
            "tasks": [
              {
                "id": "kut3i9gog4m",
                "properties": {
                  "name": "Restarting hosts",
                  "progress": 33
                },
                "start": 1727448180824,
                "status": "failure",
                "tasks": [
                  {
                    "id": "54axt2hik3c",
                    "properties": {
                      "name": "Restarting host 7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e",
                      "hostId": "7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e",
                      "hostName": "xcpng01"
                    },
                    "start": 1727448180824,
                    "status": "failure",
                    "tasks": [
                      {
                        "id": "ocube7c6kmf",
                        "properties": {
                          "name": "Evacuate",
                          "hostId": "7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e",
                          "hostName": "xcpng01"
                        },
                        "start": 1727448181014,
                        "status": "success",
                        "end": 1727448592236
                      },
                      {
                        "id": "millse8k12o",
                        "properties": {
                          "name": "Installing patches",
                          "hostId": "7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e",
                          "hostName": "xcpng01"
                        },
                        "start": 1727448592237,
                        "status": "success",
                        "end": 1727448638798
                      },
                      {
                        "id": "5aazbc573dg",
                        "properties": {
                          "name": "Restart",
                          "hostId": "7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e",
                          "hostName": "xcpng01"
                        },
                        "start": 1727448638799,
                        "status": "success",
                        "end": 1727448638986
                      },
                      {
                        "id": "1roala4dv69",
                        "properties": {
                          "name": "Waiting for host to be up",
                          "hostId": "7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e",
                          "hostName": "xcpng01"
                        },
                        "start": 1727448638986,
                        "status": "failure",
                        "end": 1727449839025,
                        "result": {
                          "message": "Host 7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e took too long to restart",
                          "name": "Error",
                          "stack": "Error: Host 7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e took too long to restart\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:152:17\n    at /opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:54:40\n    at AsyncLocalStorage.run (node:async_hooks:346:14)\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:41)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:31)\n    at Function.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:54:27)\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:142:24\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:112:11\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at Xapi.rollingPoolReboot (file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:102:5)\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/patching.mjs:524:7\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at XenServers.rollingPoolUpdate 
(file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xo-mixins/xen-servers.mjs:703:5)\n    at Xo.rollingUpdate (file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/api/pool.mjs:243:3)\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)"
                        }
                      }
                    ],
                    "end": 1727449839025,
                    "result": {
                      "message": "Host 7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e took too long to restart",
                      "name": "Error",
                      "stack": "Error: Host 7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e took too long to restart\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:152:17\n    at /opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:54:40\n    at AsyncLocalStorage.run (node:async_hooks:346:14)\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:41)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:31)\n    at Function.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:54:27)\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:142:24\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:112:11\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at Xapi.rollingPoolReboot (file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:102:5)\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/patching.mjs:524:7\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at XenServers.rollingPoolUpdate 
(file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xo-mixins/xen-servers.mjs:703:5)\n    at Xo.rollingUpdate (file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/api/pool.mjs:243:3)\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)"
                    }
                  }
                ],
                "end": 1727449839025,
                "result": {
                  "message": "Host 7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e took too long to restart",
                  "name": "Error",
                  "stack": "Error: Host 7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e took too long to restart\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:152:17\n    at /opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:54:40\n    at AsyncLocalStorage.run (node:async_hooks:346:14)\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:41)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:31)\n    at Function.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:54:27)\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:142:24\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:112:11\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at Xapi.rollingPoolReboot (file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:102:5)\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/patching.mjs:524:7\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at XenServers.rollingPoolUpdate 
(file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xo-mixins/xen-servers.mjs:703:5)\n    at Xo.rollingUpdate (file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/api/pool.mjs:243:3)\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)"
                }
              }
            ],
            "end": 1727449839025,
            "result": {
              "message": "Host 7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e took too long to restart",
              "name": "Error",
              "stack": "Error: Host 7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e took too long to restart\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:152:17\n    at /opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:54:40\n    at AsyncLocalStorage.run (node:async_hooks:346:14)\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:41)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:31)\n    at Function.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:54:27)\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:142:24\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:112:11\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at Xapi.rollingPoolReboot (file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:102:5)\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/patching.mjs:524:7\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at XenServers.rollingPoolUpdate 
(file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xo-mixins/xen-servers.mjs:703:5)\n    at Xo.rollingUpdate (file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/api/pool.mjs:243:3)\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)"
            }
          }
        ],
        "end": 1727449839025,
        "result": {
          "message": "Host 7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e took too long to restart",
          "name": "Error",
          "stack": "Error: Host 7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e took too long to restart\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:152:17\n    at /opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:54:40\n    at AsyncLocalStorage.run (node:async_hooks:346:14)\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:41)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:31)\n    at Function.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:54:27)\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:142:24\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:112:11\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at Xapi.rollingPoolReboot (file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:102:5)\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/patching.mjs:524:7\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at XenServers.rollingPoolUpdate 
(file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xo-mixins/xen-servers.mjs:703:5)\n    at Xo.rollingUpdate (file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/api/pool.mjs:243:3)\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)"
        }
      }
      

      The way I have mitigated this in the past was to evacuate the slave node and apply the patches to it manually.
      Then I have to go into the pool master and clear out the failed/stale tasks.

      It's a bloody mess.

      Any advice is welcome.
      Danny

      posted in Management
      dsiminiuk
    • RE: Rolling Pool Update - host took too long to restart

      @olivierlambert I finally had a chance to apply patches to the two ProLiant servers with the 20-minute boot time, and everything worked as expected.

      posted in Xen Orchestra
      dsiminiuk
    • RE: Unable to unblock a vm for reversion of snapshot

      @Danp I tried some of the xe commands listed in that post like xe vm-param-clear and xe vm-param-remove and wasn't successful.

      posted in XCP-ng
      dsiminiuk
    • RE: Unable to unblock a vm for reversion of snapshot

      Additional observations...

      When a VM is set to prevent accidental deletion (only)...

      blocked-operations (MRW): destroy: true
      

      When set to prevent accidental shutdown (only)...

      blocked-operations (MRW): clean_shutdown: true; (unknown operation): true; pause: true; hard_reboot: true; suspend: true; hard_shutdown: true; clean_reboot: true
      

      And of course with both options enabled, it is the aggregation of both sets.

      blocked-operations (MRW): destroy: true; clean_shutdown: true; (unknown operation): true; pause: true; hard_reboot: true; suspend: true; hard_shutdown: true; clean_reboot: true
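      A note on undoing these: each key in the map can be cleared individually. Splitting the xe map value into its keys can be sketched like this (the `parse_blocked` helper is illustrative; the xe call in the comment assumes the standard `vm-param-remove` syntax and a placeholder UUID):

```shell
# Split an xe map value ("key: value; key: value; ...") into one key per line.
parse_blocked() {
  printf '%s\n' "$1" | tr ';' '\n' | sed 's/:.*$//; s/^ *//; /^$/d'
}

parse_blocked 'destroy: true; clean_shutdown: true; pause: true'
# Each key can then be removed from the VM, e.g. (placeholder uuid):
#   xe vm-param-remove uuid=<vm-uuid> param-name=blocked-operations \
#       param-key=destroy
```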
      

      Danny

      posted in XCP-ng
      dsiminiuk
    • RE: Unable to unblock a vm for reversion of snapshot

      I figured out another way around it. I imported a new XO VM built with the same script and connected it to my pool master just long enough to revert to the snapshot of the XO I wanted to keep.
      I was then able to log in and delete the temporary XO VM.
      I would still like to understand what
      blocked-operations (MRW): (unknown operation): true;
      means.

      Anyway, not urgent.
      Thanks
      Danny

      posted in XCP-ng
      dsiminiuk
    • Unable to unblock a vm for reversion of snapshot

      I am using a pre-built xo VM running on my pool master created from a script by Ronivay and it has been working well. I had the VM set to prevent accidental deletion and shutdown and with autostart. No problem.

      I took a snapshot of it and upgraded the kernel to hwe (6.x) and upon reboot it never came back up on the network.

      I tried to revert to the snapshot via xe...

      xe snapshot-revert snapshot-uuid=1a91d725-65b0-7bb7-f70f-0be5903e8d44
      You attempted an operation that was explicitly blocked (see the blocked_operations field of the given object).
      ref: e8d5def9-e079-d7a6-e106-fe8d96f55cac (xo-ce)
      code: false
      

      And so I attempted to set all the blocked-operations vm-parameters to false, and I got all of them except one.

      xe vm-param-list uuid=e8d5def9-e079-d7a6-e106-fe8d96f55cac | fgrep blocked
      
      blocked-operations (MRW): destroy: false; pause: false; clean_reboot: false; suspend: false; hard_reboot: false; hard_shutdown: false; clean_shutdown: false; (unknown operation): true
      

      I can't revert to the snapshot and I am still unable to set the "(unknown operation)" parameter to false.

      I have a backup of the XO config, so I could start over, but it would be nice not to have to do that.

      Any pointers would be most welcome.

      Thanks
      Danny

      posted in XCP-ng
      dsiminiuk
    • RE: Rolling Pool Update - host took too long to restart

      @olivierlambert I've made the needed adjustment in the build script to override the default. Now I wait for another set of patches to test it.
      Thanks all.

      posted in Xen Orchestra
      dsiminiuk
    • RE: Rolling Pool Update - host took too long to restart

      @DustinB Not a faulty disk. It appears to be memory testing at boot time, and at other times after init it does the same thing.

      The cluster is a pair of HPE ProLiant DL580 Gen9 servers, each with 2TB of RAM.

      Yes, I could turn off memory checking during startup, but I'd rather not.

      Danny

      posted in Xen Orchestra
      dsiminiuk
    • Rolling Pool Update - host took too long to restart

      Rolling pool updates fail because the master takes too long to restart.
      What is considered too long?
      Perhaps this should be a setting in the server config to override the default.
      In this case, my servers take about 20 minutes to reboot.
      Is there a configuration item I can adjust?

      pool.rollingUpdate
      {
        "pool": "3cfffa75-69ea-7792-a320-92a7cb33f6f8"
      }
      {
        "message": "Host b725c95c-17af-41ae-a9c5-deeb1b7bfc50 took too long to restart",
        "name": "Error",
        "stack": "Error: Host b725c95c-17af-41ae-a9c5-deeb1b7bfc50 took too long to restart
          at Xapi.rollingPoolReboot (file:///opt/xo/xo-builds/xen-orchestra-202404111938/packages/xo-server/src/xapi/mixins/pool.mjs:127:9)
          at Xapi.rollingPoolUpdate (file:///opt/xo/xo-builds/xen-orchestra-202404111938/packages/xo-server/src/xapi/mixins/patching.mjs:506:5)
          at XenServers.rollingPoolUpdate (file:///opt/xo/xo-builds/xen-orchestra-202404111938/packages/xo-server/src/xo-mixins/xen-servers.mjs:689:5)
          at Xo.rollingUpdate (file:///opt/xo/xo-builds/xen-orchestra-202404111938/packages/xo-server/src/api/pool.mjs:231:3)
          at Api.#callApiMethod (file:///opt/xo/xo-builds/xen-orchestra-202404111938/packages/xo-server/src/xo-mixins/api.mjs:366:20)"
      }
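      For reference, if that timeout is configurable it would live in xo-server's config.toml. A sketch only; the section and key name (`restartHostTimeout`) are my assumption and should be verified against the sample config shipped with your build:

```toml
# /etc/xo-server/config.toml (or the equivalent for a source build)
[xapiOptions]
# assumed key name; raise it above the hosts' ~20-minute boot time
restartHostTimeout = '40 minutes'
```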
      
      posted in Xen Orchestra
      dsiminiuk
    • RE: Restoring from backup error: self-signed certificate

      @Danp Is it safe to say this is being worked on? I don't want to assume anything.

      posted in Backup
      dsiminiuk
    • RE: Patching behind a corporate proxy server

      @Gheppy A follow up... Patches appeared in XO right after updating the files (prior to an XCP-ng reboot). Just FYI.

      posted in Management
      dsiminiuk
    • RE: Patching behind a corporate proxy server

      @Gheppy Thank you.

      posted in Management
      dsiminiuk
    • RE: Patching behind a corporate proxy server

      @olivierlambert Not production yet. I'm setting up a proof of concept so that management can see there are alternatives to VMware under Broadcom.

      posted in Management
      dsiminiuk
    • Patching behind a corporate proxy server

      I have 2 HPE servers and a separate VM with XO built from sources using ronivay's script.

      I have the web proxy set up in the XO VM for yarn, apt, and global use, and everything works there. I can do apt updates and XO updates.

      I am trying to understand what I need to set elsewhere in XCP-ng for patching to work, i.e. detection, download, and deployment. I have set the same proxy as the proxy setting on the master host, but I have no idea whether that is the correct thing to do, or whether I need to set up something else on the XO instance.
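      For what it's worth, patch detection and download on the hosts themselves go through yum in dom0, so the usual knob is a proxy line in each host's yum configuration (a sketch; the proxy URL is a placeholder, and the exact file XCP-ng honors should be double-checked):

```ini
# /etc/yum.conf on each XCP-ng host (placeholder URL)
[main]
proxy=http://proxy.example.com:3128
# if the proxy requires authentication (placeholders):
# proxy_username=user
# proxy_password=pass
```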

      I think this is the last issue before I can say the environment is ready for prime time.

      Thanks
      Danny

      posted in Management
      dsiminiuk
    • RE: How to add a static IP address to a PIF

      @olivierlambert Wow too easy. Thank you!

      posted in Management
      dsiminiuk
    • How to add a static IP address to a PIF

      XCP-ng 8.2.1 release/yangtze/master/58
      Xen Orchestra, commit 9cb94
      Master, commit 7cb2f

      I have set this up anew on a pair of DL580s (72 CPUs, 2 TB RAM). I have SAN HBAs and a 1 TB LUN attached with multipath. I have a VLAN for VM network access; all works fine.

      I have an additional 10GbE NIC on these machines connected through a switch on a private VLAN, and I would like to use this network as a migration network. The PIFs are up, but I don't see a way to add IP addresses so that I can use the network.

      How do you add a static IP to a PIF in XO?
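      From dom0, the equivalent would be `xe pif-reconfigure-ip` (a sketch: standard xe syntax assumed; the device name, UUID, and addresses are placeholders, and the live calls are comments because they need a host):

```shell
# Find the PIF and give it a static address (placeholders throughout):
#   xe pif-list device=eth2                    # note the PIF's uuid
#   xe pif-reconfigure-ip uuid=<pif-uuid> mode=static \
#       IP=192.168.50.11 netmask=255.255.255.0
# A small guard worth running first: sanity-check the address format.
valid_ipv4() {
  echo "$1" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}$'
}
valid_ipv4 '192.168.50.11' && echo 'address looks sane'
```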

      Thanks
      Danny

      posted in Management
      dsiminiuk
    • RE: Hosts have disappeared from XO Web UI

      @julien-f All I have done is let ronivay's script do the builds daily on schedule when there is a new commit on the master branch. His script has not changed. There have been no new patches to the hosts. I don't have an explanation.

      posted in Xen Orchestra
      dsiminiuk