Move VM to a host containing a CR replica
-
I have come across a few occasions where a VM needs to migrate to a different host that already holds a CR (Continuous Replication) replica of it, but the VM cannot be moved until the one or more CR replicas have been removed. The replica is given a different name as part of the CR process (i.e. suffixed with the date and the name of the CR job), yet the move always fails with a 'DUPLICATE_VM' error.
I presume this is due to the UUIDs matching, hence the failure, except that the conflict of redundant VMs is not universal: I can have two or more replicas of the same VM on the same secondary server.
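If the UUID theory holds, it should be checkable directly by listing the VMs on the secondary with their UUIDs and seeing whether anything collides with the VM being migrated. A minimal sketch with the standard `xe` CLI (run on the secondary pool; "myvm" is a placeholder for the VM's name, and the loose grep is because replicas carry a suffixed name):

```
# List all VMs with their UUIDs; show the uuid line above each name match.
xe vm-list params=uuid,name-label | grep -B1 myvm
```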
A common use case would be two operational hosts, one primary and one secondary, where the primary continuously replicates its VMs to the secondary in case of outage or corruption. When the user needs to update (patch) the primary, the normal approach would be to migrate the live VMs to the secondary host, upgrade the primary, then move the VMs back.
However, in the current scenario this fails with the above error, so instead one has to delete the replicas on the secondary before the VM migration can be undertaken, removing the safety net and the redundancy until the process is complete and the restarted CR job has run successfully again.
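For reference, the workaround as it stands looks roughly like this from `xo-cli` (a sketch only: the UUIDs are placeholders, and I'm assuming `vm.delete` takes an `id` parameter and `vm.migrate` takes `vm` and `targetHost`):

```
# 1. Remove each CR replica on the secondary (the step I'd like to avoid);
#    <replica-uuid> is a placeholder for each replica's UUID.
xo-cli vm.delete id=<replica-uuid>

# 2. Only then does the migration of the production VM succeed.
xo-cli vm.migrate vm=<vm-uuid> targetHost=<secondary-host-uuid>

# 3. Patch the primary, migrate the VM back, and re-run the CR job
#    so the replicas are recreated.
```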
Why can't I suspend the CR process and migrate my production VM across to the secondary host without deleting all the VM's replicas first?
(If I have misunderstood something or made some stupid error, I beg forgiveness in advance.)
-
@shorian I suppose you don't have a pool with shared storage?
-
@jedimarcus That is correct; the issue ceases to be a problem with shared storage.
-
@shorian Does the error get logged under Settings > Logs? If so, then you may want to post the complete error log here.
-
@danp Sorry, since it seems to be a design decision rather than a bug, I didn't think to include the logs. Here you go:
```
vm.migrate
{
  "vm": "24103ce1-e47b-fe12-4029-d643e0382f08",
  "mapVifsNetworks": {
    "7457d175-8d01-613e-7b47-fb1714693074": "b62c7a9a-222a-e8e9-754e-982839e00d0e"
  },
  "migrationNetwork": "ef24440c-fda5-d88b-ce4a-fd12b7ad1d4d",
  "sr": "cf2dbaa3-21f3-903b-0fd1-fbe68539f897",
  "targetHost": "98da99c3-4ec2-4db8-ab1b-a1cb6ffd329a"
}
{
  "code": 21,
  "data": {
    "objectId": "24103ce1-e47b-fe12-4029-d643e0382f08",
    "code": "DUPLICATE_VM"
  },
  "message": "operation failed",
  "name": "XoError",
  "stack": "XoError: operation failed
    at factory (/opt/xo/xo-builds/xen-orchestra-202102171611/packages/xo-common/src/api-errors.js:21:32)
    at /opt/xo/xo-builds/xen-orchestra-202102171611/packages/xo-server/src/api/vm.js:487:15
    at runMicrotasks (<anonymous>)
    at processTicksAndRejections (internal/process/task_queues.js:97:5)
    at runNextTicks (internal/process/task_queues.js:66:3)
    at processImmediate (internal/timers.js:434:9)
    at Object.migrate (/opt/xo/xo-builds/xen-orchestra-202102171611/packages/xo-server/src/api/vm.js:474:3)
    at Api.callApiMethod (/opt/xo/xo-builds/xen-orchestra-202102171611/packages/xo-server/src/xo-mixins/api.js:304:20)"
}
```
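In case it helps reproduce the failure, the same call can presumably be replayed from `xo-cli` with the exact parameters from the log above (a sketch; I'm assuming the `json:` prefix is how xo-cli parses an object-valued parameter):

```
# Replays the failing migration with the UUIDs captured in the log;
# it should fail the same way with DUPLICATE_VM.
xo-cli vm.migrate \
  vm=24103ce1-e47b-fe12-4029-d643e0382f08 \
  targetHost=98da99c3-4ec2-4db8-ab1b-a1cb6ffd329a \
  sr=cf2dbaa3-21f3-903b-0fd1-fbe68539f897 \
  migrationNetwork=ef24440c-fda5-d88b-ce4a-fd12b7ad1d4d \
  mapVifsNetworks='json:{"7457d175-8d01-613e-7b47-fb1714693074":"b62c7a9a-222a-e8e9-754e-982839e00d0e"}'
```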
-
@shorian No worries. I just figured the full error log would be beneficial to @olivierlambert and team if they review this thread.
Edit: This thread seems related