<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Patching and trying to Pool Hosts after they&#x27;ve been in production]]></title><description><![CDATA[<p dir="auto">Hey all,</p>
<p dir="auto">Looking for some guidance on this. I have 3 hosts all running XCP-ng.</p>
<ul>
<li>XCP-ng 8.2.1 (GPLv2)</li>
<li>XCP-ng 8.0.0 (GPLv2)</li>
<li>XCP-ng 8.2.1 (GPLv2)</li>
</ul>
<p dir="auto">Respectively,</p>
<p dir="auto">One host was recently running 7.6.0 and has been updated. I want to introduce this system back into my NFS shares that are attached to the other two hosts.</p>
<p dir="auto">When I look to do that on Dom0 I get this warning message about potential data loss.</p>
<p dir="auto">The 4th line item highlighted is the repo that I need to attach.</p>
<p dir="auto"><img src="/forum/assets/uploads/files/1704998603057-27ff976f-4fd9-4dc4-a4fe-edde64b9b348-image.png" alt="27ff976f-4fd9-4dc4-a4fe-edde64b9b348-image.png" class=" img-fluid img-markdown" /></p>
<p dir="auto">How worried should I be about this, as this storage is provided over NFS from a NAS?</p>
<p dir="auto">What does this then do if I want to try and pool all of these hosts, which is the end goal?</p>
]]></description><link>https://xcp-ng.org/forum/topic/8198/patching-and-trying-to-pool-hosts-after-they-ve-been-in-production</link><generator>RSS for Node</generator><lastBuildDate>Wed, 22 Apr 2026 11:28:23 GMT</lastBuildDate><atom:link href="https://xcp-ng.org/forum/topic/8198.rss" rel="self" type="application/rss+xml"/><pubDate>Thu, 11 Jan 2024 18:44:24 GMT</pubDate><ttl>60</ttl><item><title><![CDATA[Reply to Patching and trying to Pool Hosts after they&#x27;ve been in production on Thu, 11 Jan 2024 23:12:33 GMT]]></title><description><![CDATA[<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/forum/user/danp" aria-label="Profile: Danp">@<bdi>Danp</bdi></a> said in <a href="/forum/post/69495">Patching and trying to Pool Hosts after they've been in production</a>:</p>
<blockquote>
<p dir="auto">Warm migration should work in this case because the VM is halted then restarted as part of the process. See <a href="https://xen-orchestra.com/blog/warm-migration-with-xen-orchestra/" target="_blank" rel="noopener noreferrer nofollow ugc">here</a> for more details.</p>
</blockquote>
<p dir="auto">Sweet, I'll setup something small on the old host for testing and use the Warm Migration process.</p>
]]></description><link>https://xcp-ng.org/forum/post/69496</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/69496</guid><dc:creator><![CDATA[DustinB]]></dc:creator><pubDate>Thu, 11 Jan 2024 23:12:33 GMT</pubDate></item><item><title><![CDATA[Reply to Patching and trying to Pool Hosts after they&#x27;ve been in production on Thu, 11 Jan 2024 22:55:02 GMT]]></title><description><![CDATA[<p dir="auto">Warm migration should work in this case because the VM is halted then restarted as part of the process. See <a href="https://xen-orchestra.com/blog/warm-migration-with-xen-orchestra/" target="_blank" rel="noopener noreferrer nofollow ugc">here</a> for more details.</p>
]]></description><link>https://xcp-ng.org/forum/post/69495</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/69495</guid><dc:creator><![CDATA[Danp]]></dc:creator><pubDate>Thu, 11 Jan 2024 22:55:02 GMT</pubDate></item><item><title><![CDATA[Reply to Patching and trying to Pool Hosts after they&#x27;ve been in production on Thu, 11 Jan 2024 21:58:01 GMT]]></title><description><![CDATA[<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/forum/user/olivierlambert" aria-label="Profile: olivierlambert">@<bdi>olivierlambert</bdi></a> said in <a href="/forum/post/69489">Patching and trying to Pool Hosts after they've been in production</a>:</p>
<blockquote>
<p dir="auto">As I said, you need to migrate the VMs to the target pool until the host is empty, and then you can add it to the target pool.</p>
</blockquote>
<p dir="auto">I think I got it sorted out and was able to add a new storage repo to the pool to use to migrate VM's.</p>
<p dir="auto">I tested with one VM as a migration from the old host to the updated one and it failed with something similar to the below - I was able to restore from backup, but would like to know if this should be expected.. (cause I can see the headache now lol)</p>
<pre><code>vm.migrate
{
  "vm": "b735466c-08f4-2d1a-3778-7e17401ec822",
  "mapVifsNetworks": {
    "490fbf16-4951-2856-a8d8-0bb8ae8f67a6": "e7a40a11-d541-ce1f-f246-c1c3d855820b"
  },
  "migrationNetwork": "e7a40a11-d541-ce1f-f246-c1c3d855820b",
  "sr": "3df813d4-0818-6b95-a15f-f704ae7858e0",
  "targetHost": "165ebeed-a67e-4b7c-8898-6a9656d105a2"
}
{
  "code": "VM_INCOMPATIBLE_WITH_THIS_HOST",
  "params": [
    "OpaqueRef:7cac0ba3-d626-42ff-b2d2-eb55b627dff9",
    "OpaqueRef:705ea907-9ac3-43df-93c7-87fb85f9ba61",
    "VM last booted on a CPU with features this host's CPU does not have."
  ],
  "task": {
    "uuid": "254495fa-8950-2f36-5997-a7500c8bd1f3",
    "name_label": "Async.VM.assert_can_migrate",
    "name_description": "",
    "allowed_operations": [],
    "current_operations": {},
    "created": "20240111T20:55:19Z",
    "finished": "20240111T20:55:19Z",
    "status": "failure",
    "resident_on": "OpaqueRef:c065fa00-9e2d-47de-a5f2-bc7426b945cd",
    "progress": 1,
    "type": "&lt;none/&gt;",
    "result": "",
    "error_info": [
      "VM_INCOMPATIBLE_WITH_THIS_HOST",
      "OpaqueRef:7cac0ba3-d626-42ff-b2d2-eb55b627dff9",
      "OpaqueRef:705ea907-9ac3-43df-93c7-87fb85f9ba61",
      "VM last booted on a CPU with features this host's CPU does not have."
    ],
    "other_config": {},
    "subtask_of": "OpaqueRef:NULL",
    "subtasks": [],
    "backtrace": "(((process xapi)(filename ocaml/xapi/rbac.ml)(line 231))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 103)))"
  },
  "message": "VM_INCOMPATIBLE_WITH_THIS_HOST(OpaqueRef:7cac0ba3-d626-42ff-b2d2-eb55b627dff9, OpaqueRef:705ea907-9ac3-43df-93c7-87fb85f9ba61, VM last booted on a CPU with features this host's CPU does not have.)",
  "name": "XapiError",
  "stack": "XapiError: VM_INCOMPATIBLE_WITH_THIS_HOST(OpaqueRef:7cac0ba3-d626-42ff-b2d2-eb55b627dff9, OpaqueRef:705ea907-9ac3-43df-93c7-87fb85f9ba61, VM last booted on a CPU with features this host's CPU does not have.)
    at Function.wrap (file:///opt/xen-orchestra/packages/xen-api/_XapiError.mjs:16:12)
    at default (file:///opt/xen-orchestra/packages/xen-api/_getTaskResult.mjs:11:29)
    at Xapi._addRecordToCache (file:///opt/xen-orchestra/packages/xen-api/index.mjs:1006:24)
    at file:///opt/xen-orchestra/packages/xen-api/index.mjs:1040:14
    at Array.forEach (&lt;anonymous&gt;)
    at Xapi._processEvents (file:///opt/xen-orchestra/packages/xen-api/index.mjs:1030:12)
    at Xapi._watchEvents (file:///opt/xen-orchestra/packages/xen-api/index.mjs:1203:14)
    at runNextTicks (node:internal/process/task_queues:60:5)
    at processImmediate (node:internal/timers:447:9)
    at process.callbackTrampoline (node:internal/async_hooks:128:17)"
}
</code></pre>
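<p dir="auto">For reference, one way to compare the CPU feature masks the check above is complaining about is to read each host's cpu_info from XAPI (a minimal sketch with placeholder UUIDs; the exact map keys may vary between versions, so double-check on yours):</p>
<pre><code># List the standalone hosts and their UUIDs
xe host-list params=uuid,name-label

# Dump the CPU feature masks XAPI uses for the migration compatibility check
xe host-param-get uuid=&lt;host-uuid&gt; param-name=cpu_info param-key=vendor
xe host-param-get uuid=&lt;host-uuid&gt; param-name=cpu_info param-key=features_hvm
xe host-param-get uuid=&lt;host-uuid&gt; param-name=cpu_info param-key=features_pv

# If the destination is missing bits the VM last booted with, a live move is
# refused; a VM that is halted and booted fresh on the new host re-reads the
# CPU features there, which is why a warm or cold migration avoids this error.
</code></pre>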
<p dir="auto">Which okay, I get it, I forced it anyways (should I cold migrate in this case?)</p>
<p dir="auto">When I did force migrate it anyways this came out and the VM wasn't operable (restored from backup just to get back to what I had - no biggie)</p>
<pre><code>vm.migrate
{
  "vm": "b735466c-08f4-2d1a-3778-7e17401ec822",
  "force": true,
  "mapVifsNetworks": {
    "490fbf16-4951-2856-a8d8-0bb8ae8f67a6": "e7a40a11-d541-ce1f-f246-c1c3d855820b"
  },
  "migrationNetwork": "e7a40a11-d541-ce1f-f246-c1c3d855820b",
  "sr": "3df813d4-0818-6b95-a15f-f704ae7858e0",
  "targetHost": "165ebeed-a67e-4b7c-8898-6a9656d105a2"
}
{
  "code": "INTERNAL_ERROR",
  "params": [
    "Xenops_interface.Xenopsd_error([S(Internal_error);S(Xenops_migrate.Remote_failed(\"unmarshalling error message from remote\"))])"
  ],
  "task": {
    "uuid": "5ed69757-c5da-cf5d-da8c-3197c7098262",
    "name_label": "Async.VM.migrate_send",
    "name_description": "",
    "allowed_operations": [],
    "current_operations": {},
    "created": "20240111T20:55:31Z",
    "finished": "20240111T21:07:37Z",
    "status": "failure",
    "resident_on": "OpaqueRef:c065fa00-9e2d-47de-a5f2-bc7426b945cd",
    "progress": 1,
    "type": "&lt;none/&gt;",
    "result": "",
    "error_info": [
      "INTERNAL_ERROR",
      "Xenops_interface.Xenopsd_error([S(Internal_error);S(Xenops_migrate.Remote_failed(\"unmarshalling error message from remote\"))])"
    ],
    "other_config": {},
    "subtask_of": "OpaqueRef:NULL",
    "subtasks": [],
    "backtrace": "(((process xenopsd-xc)(filename lib/xenops_migrate.ml)(line 65))((process xenopsd-xc)(filename lib/xenops_server.ml)(line 2365))((process xenopsd-xc)(filename lib/open_uri.ml)(line 20))((process xenopsd-xc)(filename lib/open_uri.ml)(line 20))((process xenopsd-xc)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xenopsd-xc)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 35))((process xenopsd-xc)(filename lib/xenops_server.ml)(line 2316))((process xenopsd-xc)(filename lib/xenops_server.ml)(line 2751))((process xenopsd-xc)(filename lib/xenops_server.ml)(line 2761))((process xenopsd-xc)(filename lib/xenops_server.ml)(line 2780))((process xenopsd-xc)(filename lib/task_server.ml)(line 162))((process xapi)(filename ocaml/xapi/xapi_xenops.ml)(line 3154))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 35))((process xapi)(filename ocaml/xapi/xapi_xenops.ml)(line 3319))((process xapi)(filename ocaml/xapi/xapi_vm_migrate.ml)(line 200))((process xapi)(filename ocaml/xapi/xapi_vm_migrate.ml)(line 206))((process xapi)(filename ocaml/xapi/xapi_vm_migrate.ml)(line 230))((process xapi)(filename ocaml/xapi/xapi_vm_migrate.ml)(line 1340))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/xapi/xapi_vm_migrate.ml)(line 1471))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 35))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 128))((process xapi)(filename lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/xapi/rbac.ml)(line 231))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 103)))"
  },
  "message": "INTERNAL_ERROR(Xenops_interface.Xenopsd_error([S(Internal_error);S(Xenops_migrate.Remote_failed(\"unmarshalling error message from remote\"))]))",
  "name": "XapiError",
  "stack": "XapiError: INTERNAL_ERROR(Xenops_interface.Xenopsd_error([S(Internal_error);S(Xenops_migrate.Remote_failed(\"unmarshalling error message from remote\"))]))
    at Function.wrap (file:///opt/xen-orchestra/packages/xen-api/_XapiError.mjs:16:12)
    at default (file:///opt/xen-orchestra/packages/xen-api/_getTaskResult.mjs:11:29)
    at Xapi._addRecordToCache (file:///opt/xen-orchestra/packages/xen-api/index.mjs:1006:24)
    at file:///opt/xen-orchestra/packages/xen-api/index.mjs:1040:14
    at Array.forEach (&lt;anonymous&gt;)
    at Xapi._processEvents (file:///opt/xen-orchestra/packages/xen-api/index.mjs:1030:12)
    at Xapi._watchEvents (file:///opt/xen-orchestra/packages/xen-api/index.mjs:1203:14)
    at runNextTicks (node:internal/process/task_queues:60:5)
    at processImmediate (node:internal/timers:447:9)
    at process.callbackTrampoline (node:internal/async_hooks:128:17)"
}
</code></pre>
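<p dir="auto">In case it helps anyone hitting the same wall: while the hosts are still in separate pools, a plain export/import of the halted VM is a fallback that sidesteps the live-migration checks above (sketch only, with placeholder paths and UUIDs; warm migration in XO gets to the same place with less downtime):</p>
<pre><code># On the old host: shut the VM down cleanly, then export it to an XVA file
xe vm-shutdown uuid=&lt;vm-uuid&gt;
xe vm-export uuid=&lt;vm-uuid&gt; filename=/mnt/scratch/myvm.xva

# Copy the XVA to the new host (or write it to a share both hosts can reach),
# then import it into the target SR
xe vm-import filename=/mnt/scratch/myvm.xva sr-uuid=&lt;target-sr-uuid&gt;

# The imported VM boots fresh on the new host's CPU, so the
# VM_INCOMPATIBLE_WITH_THIS_HOST feature check no longer applies
</code></pre>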
]]></description><link>https://xcp-ng.org/forum/post/69494</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/69494</guid><dc:creator><![CDATA[DustinB]]></dc:creator><pubDate>Thu, 11 Jan 2024 21:58:01 GMT</pubDate></item><item><title><![CDATA[Reply to Patching and trying to Pool Hosts after they&#x27;ve been in production on Thu, 11 Jan 2024 19:06:05 GMT]]></title><description><![CDATA[<p dir="auto">As I said, you need to migrate the VMs to the target pool until the host is empty, and then you can add it to the target pool.</p>
]]></description><link>https://xcp-ng.org/forum/post/69489</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/69489</guid><dc:creator><![CDATA[olivierlambert]]></dc:creator><pubDate>Thu, 11 Jan 2024 19:06:05 GMT</pubDate></item><item><title><![CDATA[Reply to Patching and trying to Pool Hosts after they&#x27;ve been in production on Thu, 11 Jan 2024 19:04:31 GMT]]></title><description><![CDATA[<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/forum/user/olivierlambert" aria-label="Profile: olivierlambert">@<bdi>olivierlambert</bdi></a> said in <a href="/forum/post/69487">Patching and trying to Pool Hosts after they've been in production</a>:</p>
<blockquote>
<p dir="auto">You <strong>can</strong> connected to the same NAS, but each pool will have a dedicated folder named after the SR UUID. So you can't actually <strong>share</strong> a VM disk between 2 pools, by design (the pool is the only way to know which host get the lock on the disk).</p>
</blockquote>
<p dir="auto">Yeah and that is the challenge that I'm trying to remedy, ideally I want to vacate 1 more of the hosts, so that I have two up to date XCP instances, and then pool those two systems.</p>
<p dir="auto">My challenge is that these hosts were all setup separate of each other over a year or so, but do have the same architecture.</p>
]]></description><link>https://xcp-ng.org/forum/post/69488</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/69488</guid><dc:creator><![CDATA[DustinB]]></dc:creator><pubDate>Thu, 11 Jan 2024 19:04:31 GMT</pubDate></item><item><title><![CDATA[Reply to Patching and trying to Pool Hosts after they&#x27;ve been in production on Thu, 11 Jan 2024 19:01:19 GMT]]></title><description><![CDATA[<p dir="auto">You <strong>can</strong> connect to the same NAS, but each pool will have a dedicated folder named after the SR UUID. So you can't actually <strong>share</strong> a VM disk between 2 pools, by design (the pool is the only way to know which host gets the lock on the disk).</p>
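<p dir="auto">To make that concrete, here is roughly what it looks like from dom0 and on the NAS export (placeholder path and UUIDs, just for illustration):</p>
<pre><code># From dom0: the NFS SRs this pool knows about
xe sr-list type=nfs params=uuid,name-label

# On the NAS export, each SR writes into a directory named after its UUID,
# so two pools pointing at the same export still use separate directories:
#   /export/vm-storage/&lt;sr-uuid-of-pool-A&gt;/
#   /export/vm-storage/&lt;sr-uuid-of-pool-B&gt;/
ls /export/vm-storage/
</code></pre>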
]]></description><link>https://xcp-ng.org/forum/post/69487</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/69487</guid><dc:creator><![CDATA[olivierlambert]]></dc:creator><pubDate>Thu, 11 Jan 2024 19:01:19 GMT</pubDate></item><item><title><![CDATA[Reply to Patching and trying to Pool Hosts after they&#x27;ve been in production on Thu, 11 Jan 2024 18:59:56 GMT]]></title><description><![CDATA[<p dir="auto">Each host has its own Pool (of itself), and then has VM storage on a NAS that it accesses over NFS.</p>
]]></description><link>https://xcp-ng.org/forum/post/69486</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/69486</guid><dc:creator><![CDATA[DustinB]]></dc:creator><pubDate>Thu, 11 Jan 2024 18:59:56 GMT</pubDate></item><item><title><![CDATA[Reply to Patching and trying to Pool Hosts after they&#x27;ve been in production on Thu, 11 Jan 2024 18:59:12 GMT]]></title><description><![CDATA[<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/forum/user/olivierlambert" aria-label="Profile: olivierlambert">@<bdi>olivierlambert</bdi></a> said in <a href="/forum/post/69484">Patching and trying to Pool Hosts after they've been in production</a>:</p>
<blockquote>
<p dir="auto">Hi,</p>
<p dir="auto">You can't share a storage between different pools. A shared storage is only shared with hosts inside the same pool. So first, you'll need to migrate (or warm migration, or CR) VMs from a single host to the destination pool with the "final" shared SR configured there.</p>
<p dir="auto">Then, you'll remove the previous SR and join the destination pool.</p>
</blockquote>
<p dir="auto">That is extremely odd, because it at least was setup like that before my time with this org..</p>
<p dir="auto"><img src="/forum/assets/uploads/files/1704999549808-93b18fed-43a7-4aaf-aebf-b7828ccb5650-image.png" alt="93b18fed-43a7-4aaf-aebf-b7828ccb5650-image.png" class=" img-fluid img-markdown" /></p>
]]></description><link>https://xcp-ng.org/forum/post/69485</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/69485</guid><dc:creator><![CDATA[DustinB]]></dc:creator><pubDate>Thu, 11 Jan 2024 18:59:12 GMT</pubDate></item><item><title><![CDATA[Reply to Patching and trying to Pool Hosts after they&#x27;ve been in production on Thu, 11 Jan 2024 18:56:49 GMT]]></title><description><![CDATA[<p dir="auto">Hi,</p>
<p dir="auto">You can't share a storage between different pools. A shared storage is only shared with hosts inside the same pool. So first, you'll need to migrate (or warm migration, or CR) VMs from a single host to the destination pool with the "final" shared SR configured there.</p>
<p dir="auto">Then, you'll remove the previous SR and join the destination pool.</p>
]]></description><link>https://xcp-ng.org/forum/post/69484</link><guid isPermaLink="true">https://xcp-ng.org/forum/post/69484</guid><dc:creator><![CDATA[olivierlambert]]></dc:creator><pubDate>Thu, 11 Jan 2024 18:56:49 GMT</pubDate></item></channel></rss>