@Cyrille Thanks! Looking forward to using this, or something like it.
It would be awesome if there were a YouTube getting-started demo video for YAML or TypeScript!
@Jonathon thanks for sharing this, it's really useful, as we are looking to migrate from the RKE cluster we've deployed on bare-metal Xen to an RKE2 cluster running on XCP-ng VMs.
Will review this and probably have a bunch of questions!
@florent Thanks. If there is something I need to do to coalesce the VDIs and avoid disaster, that would be good to understand. I had been thinking it was just an XO issue that would not affect running VMs and their VDIs.
We have been quietly suffering without the time to try and resolve it for the past couple of months.
I have now spent the day trying to resolve it in our environment, as we have one SR exhibiting this problem with several hundred VDIs.
I am having the same problem whether I run the Docker container version of XO CE or the local install on a VM we've been using for ages.
It seems that the API call made from the VDIs tab in v6 (Disks in v5) may be hitting the wrong URL, without the /rest/v0 prefix:
sudo journalctl -u xo-server -n 300 --no-pager
2026-02-26T06:01:57.169Z xo:rest-api:error-handler INFO [GET] /vms/[[UUID]]/vdis (404)
A couple of months ago I had the same experience as some others: the VDIs would not show up in the v5 UI but were showing in v6, though very soon after that they stopped showing in either.
I know that these are VDIs with a snapshot in the chain, for example a parent VDI with two snapshots taken from it.
I had thought the issue might have something to do with https://github.com/vatesfr/xen-orchestra/pull/9381, as it was around that time that I first saw the problem in v6. But this topic was started just before Christmas, so there must have been something else too; perhaps that is when the issue emerged in v5, and this later patch then surfaced it in v6.
If I run curl against the same URL with the /rest/v0 prefix, I don't get the 404.
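For reference, this is roughly the comparison I ran (host, token, and UUID are placeholders; the cookie-based auth follows the XO REST API docs, adjust for your setup):

```shell
# Placeholders: set XO_HOST, XO_TOKEN and VM_UUID for your environment.
# Without the /rest/v0 prefix: returns 404, matching the xo-server log above.
curl -s -o /dev/null -w '%{http_code}\n' \
  -b "authenticationToken=$XO_TOKEN" \
  "https://$XO_HOST/vms/$VM_UUID/vdis"

# With the /rest/v0 prefix: returns 200 and the VDI list.
curl -s -o /dev/null -w '%{http_code}\n' \
  -b "authenticationToken=$XO_TOKEN" \
  "https://$XO_HOST/rest/v0/vms/$VM_UUID/vdis"
```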
I hope this helps to track it down!
@Forza There are various topics here touching on how SMAPIv1 is a bottleneck, e.g. https://xcp-ng.org/forum/topic/9389/backup-migration-performance/8 (probably not the best example, as it is more about backups than migration!).
@pdonias nice - so I gather we can delete, from the 'Servers' settings, the secondary host we tried adding there that is showing an error.
From what you say, I gather 'pool connections' are for the master servers of pools, so there should be just one entry in the settings for each pool?
@ZaphodB I have the same experience some 2.5 years later. I don't know why XO isn't clearer about what you should add as a "Server" under Settings, and when. When I installed XCP-ng on my second host, it joined the pool but wasn't added in XO as a "Server".
@Meth0d thanks for sharing your experience. Fortunately I was able to just hit "Convert to HVM" after my imported PV guest failed to reboot following the upgrade from 20.04 to 22.04 - I was all set to load a live CD etc. too!
@stevewest15 that is a shame! Thanks for sharing!
Were you able to work it out? I have a domU with 32 GB of RAM that unfortunately keeps failing migration between nodes!