XCP-ng

    dsiminiuk

    @dsiminiuk

    Reputation: 9
    Profile views: 367
    Posts: 88
    Followers: 0
    Following: 0


    Best posts made by dsiminiuk

    • RE: Rolling Pool Update - host took too long to restart

      olivierlambert I've made the needed adjustment in the build script to override the default. Now I wait for another set of patches to test it.
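
      In case it helps anyone else, the override looks roughly like this; I'm going from memory, so the config path and the restartHostTimeout key name are assumptions to verify against your own xo-server build:

      # extend xo-server's host-restart timeout (the stock default of
      # 20 minutes is what my slow-booting ProLiants kept exceeding)
      printf '\n[xapiOptions]\nrestartHostTimeout = "40 minutes"\n' >> /etc/xo-server/config.toml
      systemctl restart xo-server
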
      Thanks all.

      posted in Xen Orchestra
      dsiminiuk
    • RE: How to add a static IP address to a PIF

      olivierlambert Wow too easy. Thank you!
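
      For anyone landing here from a search, the standard CLI way to do this is along these lines (the host/device names, uuid placeholder, and addresses are just examples):

      # find the uuid of the PIF you want to configure
      xe pif-list host-name-label=xcpng01 device=eth0 params=uuid
      # assign it a static address
      xe pif-reconfigure-ip uuid=<pif-uuid> mode=static IP=192.0.2.10 netmask=255.255.255.0 gateway=192.0.2.1 DNS=192.0.2.1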

      posted in Management
      dsiminiuk
    • RE: Rolling Pool Update - host took too long to restart

      olivierlambert I finally had a chance to apply patches to the two ProLiant servers with the 20 minute boot time and everything worked as expected.

      posted in Xen Orchestra
      dsiminiuk

    Latest posts made by dsiminiuk

    • RE: Rolling Pool Update - host took too long to restart

      tuxpowered said in Rolling Pool Update - host took too long to restart:

      When the first node reboots (the master), I can see that the system is back up in 5-8 min. If I go into XO > Settings > Servers and click the Enable/Disable status button to reconnect, it pops right up. Again, it does not resume migrating the other nodes.

      That is what I am seeing also, logged here https://xcp-ng.org/forum/topic/9683/rolling-pool-update-incomplete

      posted in Xen Orchestra
      dsiminiuk
    • RE: Rolling Pool Update - host took too long to restart

      tuxpowered I'm wondering if this is a network connectivity issue. 😎

      When the rolling pool update stops, what does your route table look like on the master (can it reach the other node)?

      Is your VPN layer 3 (routed), layer 2 (non-routed), or an IPsec tunnel?
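
      Something like this on the master, right when the update stalls, would tell us a lot (replace the placeholder with the other node's management IP):

      # is there still a route to the other node, and is it reachable?
      ip route
      ip route get <slave-management-ip>
      ping -c 3 <slave-management-ip>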

      posted in Xen Orchestra
      dsiminiuk
    • RE: Rolling Pool Update incomplete

      Perhaps this was because there were 3 pages of failed tasks?
      I have deleted them all with xo-cli and I'll see how the next patch cycle goes.
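
      For reference, this is roughly what I ran; I'm quoting from memory, so check xo-cli's rest subcommand help for the exact syntax on your version:

      # list tasks through the XO REST API
      xo-cli rest get tasks
      # delete a stale/failed task by its id
      xo-cli rest del tasks/0m1ku3cog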

      posted in Management
      dsiminiuk
    • Rolling Pool Update incomplete

      Over the last two patch bundles my Rolling Pool Update has failed to complete:
      VMs are evacuated from the pool master to the pool slave (a 2 node pool).
      Patches are applied to the pool master.
      The pool master reboots.
      After I can see that the pool master console is up, I reconnect XOA to the master.
      I wait, wait, and wait some more and nothing else happens after that. The VMs remain on the slave server.
      The "Rolling pool update" task is still in "Started" state.
      There is an "API call: vm.stats" task that started after that and ended in a failed state.

      {
        "id": "0m1ku3cog",
        "properties": {
          "method": "vm.stats",
          "params": {
            "id": "ad5850fb-8264-18e2-c974-9df9ccaa6ccc"
          },
          "name": "API call: vm.stats",
          "userId": "2844af20-1bee-43f8-9b91-e1ac3b49239f",
          "type": "api.call"
        },
        "start": 1727448260848,
        "status": "failure",
        "updatedAt": 1727448260937,
        "end": 1727448260937,
        "result": {
          "message": "unusable",
          "name": "TypeError",
          "stack": "TypeError: unusable\n    at /opt/xo/xo-builds/xen-orchestra-202409271433/node_modules/undici/lib/api/readable.js:224:34\n    at Promise._execute (/opt/xo/xo-builds/xen-orchestra-202409271433/node_modules/bluebird/js/release/debuggability.js:384:9)\n    at Promise._resolveFromExecutor (/opt/xo/xo-builds/xen-orchestra-202409271433/node_modules/bluebird/js/release/promise.js:518:18)\n    at new Promise (/opt/xo/xo-builds/xen-orchestra-202409271433/node_modules/bluebird/js/release/promise.js:103:10)\n    at consume (/opt/xo/xo-builds/xen-orchestra-202409271433/node_modules/undici/lib/api/readable.js:212:10)\n    at BodyReadable.text (/opt/xo/xo-builds/xen-orchestra-202409271433/node_modules/undici/lib/api/readable.js:111:12)\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi-stats.mjs:258:39\n    at XapiStats._getAndUpdateStats (file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi-stats.mjs:319:18)\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at Api.#callApiMethod (file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xo-mixins/api.mjs:402:20)"
        }
      }
      

      The "Rolling pool update" task finally timed out.

      {
        "id": "0m1ku1muz",
        "properties": {
          "poolId": "ac9585b6-39ac-016c-f864-8c75b00c082b",
          "poolName": "Gen9",
          "progress": 40,
          "name": "Rolling pool update",
          "userId": "2844af20-1bee-43f8-9b91-e1ac3b49239f"
        },
        "start": 1727448180731,
        "status": "failure",
        "updatedAt": 1727449839025,
        "tasks": [
          {
            "id": "y5cpp9ox6vg",
            "properties": {
              "name": "Listing missing patches",
              "total": 2,
              "progress": 100
            },
            "start": 1727448180736,
            "status": "success",
            "tasks": [
              {
                "id": "x7rtdxs48rl",
                "properties": {
                  "name": "Listing missing patches for host 0366c500-c154-4967-8f12-fc45cf9390a5",
                  "hostId": "0366c500-c154-4967-8f12-fc45cf9390a5",
                  "hostName": "xcpng02"
                },
                "start": 1727448180737,
                "status": "success",
                "end": 1727448180738
              },
              {
                "id": "11re7s9vjpkb",
                "properties": {
                  "name": "Listing missing patches for host 7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e",
                  "hostId": "7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e",
                  "hostName": "xcpng01"
                },
                "start": 1727448180737,
                "status": "success",
                "end": 1727448180738
              }
            ],
            "end": 1727448180738
          },
          {
            "id": "pq07ubwpo",
            "properties": {
              "name": "Updating and rebooting"
            },
            "start": 1727448180738,
            "status": "failure",
            "tasks": [
              {
                "id": "kut3i9gog4m",
                "properties": {
                  "name": "Restarting hosts",
                  "progress": 33
                },
                "start": 1727448180824,
                "status": "failure",
                "tasks": [
                  {
                    "id": "54axt2hik3c",
                    "properties": {
                      "name": "Restarting host 7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e",
                      "hostId": "7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e",
                      "hostName": "xcpng01"
                    },
                    "start": 1727448180824,
                    "status": "failure",
                    "tasks": [
                      {
                        "id": "ocube7c6kmf",
                        "properties": {
                          "name": "Evacuate",
                          "hostId": "7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e",
                          "hostName": "xcpng01"
                        },
                        "start": 1727448181014,
                        "status": "success",
                        "end": 1727448592236
                      },
                      {
                        "id": "millse8k12o",
                        "properties": {
                          "name": "Installing patches",
                          "hostId": "7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e",
                          "hostName": "xcpng01"
                        },
                        "start": 1727448592237,
                        "status": "success",
                        "end": 1727448638798
                      },
                      {
                        "id": "5aazbc573dg",
                        "properties": {
                          "name": "Restart",
                          "hostId": "7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e",
                          "hostName": "xcpng01"
                        },
                        "start": 1727448638799,
                        "status": "success",
                        "end": 1727448638986
                      },
                      {
                        "id": "1roala4dv69",
                        "properties": {
                          "name": "Waiting for host to be up",
                          "hostId": "7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e",
                          "hostName": "xcpng01"
                        },
                        "start": 1727448638986,
                        "status": "failure",
                        "end": 1727449839025,
                        "result": {
                          "message": "Host 7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e took too long to restart",
                          "name": "Error",
                          "stack": "Error: Host 7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e took too long to restart\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:152:17\n    at /opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:54:40\n    at AsyncLocalStorage.run (node:async_hooks:346:14)\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:41)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:31)\n    at Function.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:54:27)\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:142:24\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:112:11\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at Xapi.rollingPoolReboot (file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:102:5)\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/patching.mjs:524:7\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at XenServers.rollingPoolUpdate (file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xo-mixins/xen-servers.mjs:703:5)\n    at Xo.rollingUpdate (file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/api/pool.mjs:243:3)\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)"
                        }
                      }
                    ],
                    "end": 1727449839025,
                    "result": {
                      "message": "Host 7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e took too long to restart",
                      "name": "Error",
                      "stack": "Error: Host 7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e took too long to restart\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:152:17\n    at /opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:54:40\n    at AsyncLocalStorage.run (node:async_hooks:346:14)\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:41)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:31)\n    at Function.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:54:27)\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:142:24\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:112:11\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at Xapi.rollingPoolReboot (file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:102:5)\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/patching.mjs:524:7\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at XenServers.rollingPoolUpdate (file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xo-mixins/xen-servers.mjs:703:5)\n    at Xo.rollingUpdate (file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/api/pool.mjs:243:3)\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)"
                    }
                  }
                ],
                "end": 1727449839025,
                "result": {
                  "message": "Host 7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e took too long to restart",
                  "name": "Error",
                  "stack": "Error: Host 7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e took too long to restart\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:152:17\n    at /opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:54:40\n    at AsyncLocalStorage.run (node:async_hooks:346:14)\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:41)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:31)\n    at Function.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:54:27)\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:142:24\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:112:11\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at Xapi.rollingPoolReboot (file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:102:5)\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/patching.mjs:524:7\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at XenServers.rollingPoolUpdate (file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xo-mixins/xen-servers.mjs:703:5)\n    at Xo.rollingUpdate (file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/api/pool.mjs:243:3)\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)"
                }
              }
            ],
            "end": 1727449839025,
            "result": {
              "message": "Host 7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e took too long to restart",
              "name": "Error",
              "stack": "Error: Host 7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e took too long to restart\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:152:17\n    at /opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:54:40\n    at AsyncLocalStorage.run (node:async_hooks:346:14)\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:41)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:31)\n    at Function.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:54:27)\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:142:24\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:112:11\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at Xapi.rollingPoolReboot (file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:102:5)\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/patching.mjs:524:7\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at XenServers.rollingPoolUpdate (file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xo-mixins/xen-servers.mjs:703:5)\n    at Xo.rollingUpdate (file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/api/pool.mjs:243:3)\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)"
            }
          }
        ],
        "end": 1727449839025,
        "result": {
          "message": "Host 7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e took too long to restart",
          "name": "Error",
          "stack": "Error: Host 7cbc09aa-6d32-44a9-b6a2-eb5b30c11e1e took too long to restart\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:152:17\n    at /opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:54:40\n    at AsyncLocalStorage.run (node:async_hooks:346:14)\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:41)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:31)\n    at Function.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:54:27)\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:142:24\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:112:11\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at Xapi.rollingPoolReboot (file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/pool.mjs:102:5)\n    at file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xapi/mixins/patching.mjs:524:7\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)\n    at XenServers.rollingPoolUpdate (file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/xo-mixins/xen-servers.mjs:703:5)\n    at Xo.rollingUpdate (file:///opt/xo/xo-builds/xen-orchestra-202409271433/packages/xo-server/src/api/pool.mjs:243:3)\n    at Task.runInside (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:169:22)\n    at Task.run (/opt/xo/xo-builds/xen-orchestra-202409271433/@vates/task/index.js:153:20)"
        }
      }
      

      The way I have mitigated this in the past was to evacuate the slave node and apply the patches to it manually.
      Then I have to go into the pool master and clear out the failed/stale tasks.

      It's a bloody mess.

      Any advice is welcome.
      Danny

      posted in Management
      dsiminiuk
    • RE: Rolling Pool Update - host took too long to restart

      olivierlambert I finally had a chance to apply patches to the two ProLiant servers with the 20 minute boot time and everything worked as expected.

      posted in Xen Orchestra
      dsiminiuk
    • RE: Unable to unblock a vm for reversion of snapshot

      Danp I tried some of the xe commands listed in that post, like xe vm-param-clear and xe vm-param-remove, but wasn't successful.
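
      For the record, the forms I tried looked roughly like this (the uuid is my XO VM's); neither of them got rid of the "(unknown operation)" entry:

      # clear the whole blocked-operations map
      xe vm-param-clear uuid=e8d5def9-e079-d7a6-e106-fe8d96f55cac param-name=blocked-operations
      # or remove a single key from it
      xe vm-param-remove uuid=e8d5def9-e079-d7a6-e106-fe8d96f55cac param-name=blocked-operations param-key=destroy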

      posted in XCP-ng
      dsiminiuk
    • RE: Unable to unblock a vm for reversion of snapshot

      Additional observations...

      When a VM is set to prevent accidental deletion (only)...

      blocked-operations (MRW): destroy: true
      

      When set to prevent accidental shutdown (only)...

      blocked-operations (MRW): clean_shutdown: true; (unknown operation): true; pause: true; hard_reboot: true; suspend: true; hard_shutdown: true; clean_reboot: true
      

      And of course with both options enabled, it is the aggregation of both sets.

      blocked-operations (MRW): destroy: true; clean_shutdown: true; (unknown operation): true; pause: true; hard_reboot: true; suspend: true; hard_shutdown: true; clean_reboot: true
      

      Danny

      posted in XCP-ng
      dsiminiuk
    • RE: Unable to unblock a vm for reversion of snapshot

      I figured out another way around it. I imported a new XO VM built with the same script and connected it to my pool master just long enough to revert to the snapshot of the XO I wanted to keep.
      I was able to log in and delete the temporary XO VM.
      I would still like to understand what the
      blocked-operations (MRW): (unknown operation): true;
      means.

      Anyway, not urgent.
      Thanks
      Danny

      posted in XCP-ng
      dsiminiuk
    • Unable to unblock a vm for reversion of snapshot

      I am using a pre-built XO VM running on my pool master, created from a script by Ronivay, and it has been working well. I had the VM set to prevent accidental deletion and shutdown, and with autostart. No problem.

      I took a snapshot of it and upgraded the kernel to HWE (6.x), and upon reboot it never came back up on the network.

      I tried to revert to the snapshot via xe...

      xe snapshot-revert snapshot-uuid=1a91d725-65b0-7bb7-f70f-0be5903e8d44
      You attempted an operation that was explicitly blocked (see the blocked_operations field of the given object).
      ref: e8d5def9-e079-d7a6-e106-fe8d96f55cac (xo-ce)
      code: false
      

      So I attempted to set all of the blocked-operations VM parameters to false, and succeeded with all of them except one.

      xe vm-param-list uuid=e8d5def9-e079-d7a6-e106-fe8d96f55cac | fgrep blocked
      
      blocked-operations (MRW): destroy: false; pause: false; clean_reboot: false; suspend: false; hard_reboot: false; hard_shutdown: false; clean_shutdown: false; (unknown operation): true
      

      I can't revert to the snapshot and I am still unable to set the "(unknown operation)" parameter to false.

      I have a backup of the XO config, so I could start over, but it would be nice not to have to do that.

      Any pointers would be most welcome.

      Thanks
      Danny

      posted in XCP-ng
      dsiminiuk
    • RE: Rolling Pool Update - host took too long to restart

      olivierlambert I've made the needed adjustment in the build script to override the default. Now I wait for another set of patches to test it.
      Thanks all.

      posted in Xen Orchestra
      dsiminiuk