Migrating VM fails with DUPLICATE_VM error
-
@olivierlambert Thanks for the reply.
Sorry for my noobishness. I visually checked each VM on the destination host in XO's GUI, and none of them has that UUID.
I also have another VM that cannot migrate, which has a different UUID. Am I only interested in the UUID of the VM itself, or are there additional layers of UUIDs I also need to check (e.g. snapshots, VDIs, etc.)?
Is there a shell command I can run to list all UUIDs on the host?
-
If there's a duplicate UUID due to an interrupted previous migration, you sadly won't be able to see it in XO (UUIDs should be truly unique, but there are some edge cases where they're not).
SSH to the destination host, and do a
xe vm-list | grep bb75397d-980e-9fd8-aa63-c8b13c38bd3e
Destroy this VM first.
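If the duplicate is hiding in another object class (your snapshot/VDI question is a good instinct), you can sweep the main ones with something like this (rough sketch, untested; swap in the UUID you're chasing):

# look for a given UUID across VMs, snapshots, templates and VDIs
UUID=bb75397d-980e-9fd8-aa63-c8b13c38bd3e
for cls in vm snapshot template vdi; do
    echo "== $cls =="
    xe ${cls}-list | grep "$UUID"
done

If it shows up as a VM, xe vm-uninstall uuid=$UUID force=true removes it along with its disks - double-check the name-label before destroying anything.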
-
@olivierlambert Ok, so I ran the command without the grep on both destination and source (grep doesn't find anything).
I couldn't see the UUIDs of the VMs I want to migrate anywhere on the destination list.
The two VMs I am trying to migrate on the source:
[17:54 xcp-ks ~]# xe vm-list
uuid ( RO)           : bb75397d-980e-9fd8-aa63-c8b13c38bd3e
     name-label ( RW): Win10 Print Server
    power-state ( RO): halted

uuid ( RO)           : 5c27c06b-0995-73c6-bb2d-d442db947ee8
     name-label ( RW): WIN10 Print Release Terminal
    power-state ( RO): halted
My VM list on the destination:
[17:50 xcp-city ~]# xe vm-list
uuid ( RO)           : 3b6f09b1-8d47-7478-b72e-2dd53788a57e
     name-label ( RW): MLS-VM-Ubuntu_2020-06-29T04:01:46.552Z
    power-state ( RO): halted

uuid ( RO)           : 280cf6a8-7c56-9323-0148-ce048a4081d4
     name-label ( RW): WServer2019-PublicFacing_2020-05-17T16:51:13.772Z
    power-state ( RO): halted

uuid ( RO)           : 9eff99d0-a16b-d91d-5f41-d36e24357ec8
     name-label ( RW): WServer2019-HiSec_2020-06-15T07:59:15.309Z
    power-state ( RO): halted

uuid ( RO)           : d5896b2d-bb75-262f-9d55-668315419306
     name-label ( RW): MLS-VM-SMTP
    power-state ( RO): running

uuid ( RO)           : 826f34bd-aaaa-2bdd-61fe-2511f843d713
     name-label ( RW): WS-PubNet_2020-06-29T04:57:59.732Z
    power-state ( RO): halted

uuid ( RO)           : a55b24bc-02b9-4dcd-a7fc-05f8f107660e
     name-label ( RW): Control domain on host: xcp-city
    power-state ( RO): running

uuid ( RO)           : 2617b8ed-a087-e257-4ca2-0e44de5b8ee9
     name-label ( RW): WS-MSAdmin_2020-06-26T03:03:27.154Z
    power-state ( RO): halted

uuid ( RO)           : 08d020c9-6314-9c6b-3792-dba8f1d2f15d
     name-label ( RW): WServer2019-HiSec_2020-06-14T06:48:02.042Z
    power-state ( RO): halted

uuid ( RO)           : 6d98ba66-09ad-3754-ef14-ee4a8035717a
     name-label ( RW): WS-PubNet_2020-06-26T05:16:37.397Z
    power-state ( RO): halted

uuid ( RO)           : d513e984-5ef6-ad7f-8ae0-18c483718f23
     name-label ( RW): WServer2019-PublicFacing_2020-05-22T12:57:51.293Z
    power-state ( RO): halted

uuid ( RO)           : 522e6da7-25e7-22bb-dbda-1584a5c851cf
     name-label ( RW): XO CE Kubuntu
    power-state ( RO): running

uuid ( RO)           : fe1f5255-4779-7510-e657-5e259032bf6a
     name-label ( RW): MLS-VM-Ubuntu
    power-state ( RO): running

uuid ( RO)           : 84876754-324b-aedd-fea4-7e358d0dd74c
     name-label ( RW): WS-MSAdmin
    power-state ( RO): running

uuid ( RO)           : a0921e9a-bb0e-6555-e1ea-8546afa3dd55
     name-label ( RW): WServer2019-HiSec_2020-06-15T03:32:08.343Z
    power-state ( RO): halted

uuid ( RO)           : 6a1d28d8-f30a-74e8-3ad9-9a1fe6b37e13
     name-label ( RW): WS-PubNet
    power-state ( RO): running

uuid ( RO)           : 2662852b-8a39-07f9-7a2c-de7e85c1ab76
     name-label ( RW): XO CE Kubuntu_2020-03-16T00:16:43.655Z
    power-state ( RO): halted

uuid ( RO)           : b1390286-0569-e935-b922-89aa7aaf670c
     name-label ( RW): [XO Backup Backup XO CE Kubuntu] XO CE Kubuntu
    power-state ( RO): halted

uuid ( RO)           : 0da8353c-7ce9-543c-1448-ecaa3df96573
     name-label ( RW): WS-HiSec
    power-state ( RO): running
-
@zevgeny @olivierlambert For anyone that comes across this under a different guise - we found exactly the same issue and couldn't work out why some VMs were migrating just fine and others were not.
It turned out that we are running a Continuous Replication task, which means there is a UUID conflict when moving the relevant VM across.
So we have Primary Host A & Secondary Host B. The VM is running on the Primary, and we use CR to keep a copy on the Secondary. However, when we want to upgrade the Primary without causing downtime on the VM, we attempt to migrate the VM to the Secondary and it fails, as we already have the CR entity on that host.
I can see why this occurs - we are trying to create two versions of the same VM on the same host. However, I'd have thought the use case was fairly common: a Primary Server running, replicating to a Secondary Server, where you want to move the running VM to avoid downtime without having to delete the replicas in case there are any issues. With a copied VM the UUID conflict does not occur, but with a migrate it does.
(In our case, on each host we have a large SATA array for backups that we replicate to, with live VMs running off the SSD array. Hence there is still value in keeping the replica - it protects against disk failure or corruption despite being on the same host for a short period whilst we upgrade the Primary).
Discussion - should one be permitted to update the UUID for replicas (perhaps under advanced settings in Backups?) to avoid these conflicts, or would that cause more widespread issues? Or is our use case unique?
Thanks!
-
CR isn't duplicating any VM UUID (it's a new VM created on destination).
-
That's strange. When I checked the UUID with 'xe vm-list | grep UUID' on the target host, I got the 'importing...CR...' item. When I deleted it, the same error persisted, although the offending item was no longer present in vm-list. Only when I deleted all CR replicas could I get the migration to complete. It has been reproducible - I will investigate further and revert over the next week.
-
It might also be related to another duplicated UUID somewhere else in the VM's objects.
Feel free to post the result of
xe vm-param-list
between both VMs (the replicated one and the original one). Maybe we can spot what's going on.
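For example, something like this (sketch only; the file names are placeholders, and if the two VMs live on different hosts you'll need to copy one file over, e.g. with scp, before diffing):

# on the pool with the original VM
xe vm-param-list uuid=<original-vm-uuid> > /tmp/original.txt
# on the pool with the replica
xe vm-param-list uuid=<replica-vm-uuid> > /tmp/replica.txt
# compare them once both files are on the same machine
diff /tmp/original.txt /tmp/replica.txt
-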
@olivierlambert I know I'm chiming in here quite late, but I'm also facing the same issue, and the machine UUID is not on the destination server.
I have 3 XCP-ng servers and want to migrate from Server C to Server B; all 3 are independent pools on the same XCP-ng version.
da010bca-fa81-629c-5b25-ea16f03e9510 is the UUID of the VM I'm looking to move
Server C:
uuid ( RO)           : 4104416d-2539-411d-8d73-3590865eca4d
     name-label ( RW): Control domain on host: localhost
    power-state ( RO): running

uuid ( RO)           : da010bca-fa81-629c-5b25-ea16f03e9510
     name-label ( RW): AZUREDIRSYNC
    power-state ( RO): running
Server B:
uuid ( RO)           : 907bf603-49df-dc36-fc9a-9bb706f12cc8
     name-label ( RW): F2VMSBODOOIT001
    power-state ( RO): halted

uuid ( RO)           : 62d721e2-0ef0-4d37-5858-d754369825f7
     name-label ( RW): F2VMSBSSOXIT002
    power-state ( RO): running

uuid ( RO)           : 7af2c68c-7fde-c3c3-38a3-3e107412537e
     name-label ( RW): xenorchestrav2
    power-state ( RO): running

uuid ( RO)           : 695a14bf-37f0-05b5-7141-ed1be479b0de
     name-label ( RW): Spiceworks-Fourman-Household
    power-state ( RO): running

uuid ( RO)           : 9c13dfc9-0e57-4da0-93dd-8bbfbc850cb4
     name-label ( RW): Control domain on host: localhost
    power-state ( RO): running

uuid ( RO)           : 6cd8e824-a5c0-c521-9f1d-bf768e3b9999
     name-label ( RW): F2VMSBODOOIT001_2021-04-01T21:40:16.406Z
    power-state ( RO): halted

uuid ( RO)           : 82c805a7-e79a-6d85-8ad6-1e8a08efce23
     name-label ( RW): F2VMSBWADMIT001
    power-state ( RO): halted

uuid ( RO)           : f0d98192-bb27-129a-4cec-7ef9d4dac880
     name-label ( RW): F2VMSBDBNOIT004
    power-state ( RO): halted

uuid ( RO)           : 35357e77-8a48-f523-5abd-dbb1da560b00
     name-label ( RW): F2VMSBINVXIT001
    power-state ( RO): running

uuid ( RO)           : 350e0434-2cab-c3eb-0d22-48d4f38d7997
     name-label ( RW): F2VMSBWWWXIT001
    power-state ( RO): running

uuid ( RO)           : 4db452a8-a06d-ff74-3ddb-7df3fad3e1c5
     name-label ( RW): F2VMSBSSOXIT001
    power-state ( RO): running

uuid ( RO)           : 5d229ce2-9a31-aabe-d250-7f1ff377c551
     name-label ( RW): F2VMSBODOOIT001_2021-04-01T18:53:52.769Z
    power-state ( RO): halted
Getting the following error in both XCP-ng Center and XO:
vm.migrate
{
  "vm": "da010bca-fa81-629c-5b25-ea16f03e9510",
  "mapVdisSrs": {
    "dd58e003-a906-420f-8f41-a96d1d42e087": "41587f9b-a755-03f8-1fa1-e80d18c2e8b1"
  },
  "mapVifsNetworks": {
    "b37022e9-031d-04aa-f574-92102095b6ad": "89375484-4323-e6c1-542b-cb993b4272c4"
  },
  "migrationNetwork": "93b5130a-6c13-9218-9588-582a9379bef4",
  "sr": "41587f9b-a755-03f8-1fa1-e80d18c2e8b1",
  "targetHost": "126c62b2-aacb-4f5a-b6bc-12865592ed20"
}
{
  "code": 21,
  "data": {
    "objectId": "da010bca-fa81-629c-5b25-ea16f03e9510",
    "code": "DUPLICATE_VM"
  },
  "message": "operation failed",
  "name": "XoError",
  "stack": "XoError: operation failed
    at factory (/opt/xen-orchestra/packages/xo-common/src/api-errors.js:21:32)
    at /opt/xen-orchestra/packages/xo-server/src/api/vm.js:487:15
    at runMicrotasks (<anonymous>)
    at processTicksAndRejections (internal/process/task_queues.js:93:5)
    at runNextTicks (internal/process/task_queues.js:62:3)
    at processImmediate (internal/timers.js:434:9)
    at process.topLevelDomainCallback (domain.js:144:15)
    at process.callbackTrampoline (internal/async_hooks.js:129:14)
    at Object.migrate (/opt/xen-orchestra/packages/xo-server/src/api/vm.js:474:3)
    at Api.callApiMethod (/opt/xen-orchestra/packages/xo-server/src/xo-mixins/api.js:304:20)"
}
Thanks,
Dan -
Can you try with the
xe
command to see if you have the same error? -
@olivierlambert Having exactly the same issue as wcom939 reported.
What did you mean by "Can you try with xe command to see if you have the same error?"? Migrate using the CLI? There are too many options :)
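I'm guessing you mean the cross-pool form of vm-migrate? My best reading of the docs is something like this (untested on my side; placeholders for my own values):

xe vm-migrate uuid=<vm-uuid> \
    remote-master=<destination-pool-master-ip> \
    remote-username=root remote-password=<password> \
    destination-sr-uuid=<destination-sr-uuid> live=true

Is that the one?
-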