Categories

  • All news regarding Xen and XCP-ng ecosystem

    142 Topics
    4k Posts
    rzr
    Thank you to every visitor, and for those who did not have the chance to visit FOSDEM, check out this report: https://xcp-ng.org/blog/2026/02/19/fosdem-2026-follow-up/ One question I promised to forward to the forum: how many VMs are you running on XCP-ng? Dozens, hundreds, thousands, or more?
  • Everything related to the virtualization platform

    1k Topics
    14k Posts
    @nikade said in "Every virtual machine I restart doesn't boot.": @DustinB yeah I'm guessing the VDI isn't attached to the VM for some reason, based on the screenshot. I'm also wondering if he ever rebooted the VMs after installing them with PXE.

    Right, it's a likely answer... but even then I would've expected his PXE server to just restart the installation process all over again, assuming that the disk is attached to the VM etc. and that PXE boot isn't disabled automatically like it is with an ISO after first boot. haha
  • 3k Topics
    27k Posts
    andrewperry
    We have been quietly suffering without the time to try and resolve it for the past couple of months. I have now spent the day trying to resolve it in our environment, as we have one SR with hundreds of VDIs affected by this problem. I am having the same problem whether running the Docker container version of XO CE or the local install on a VM we've been using for ages.

    It seems that the API call from the VDIs tab in v6 (Disks in v5) may be triggering a call to the wrong URL, without the /rest/v0 prefix:

    sudo journalctl -u xo-server -n 300 --no-pager
    2026-02-26T06:01:57.169Z xo:rest-api:error-handler INFO [GET] /vms/[[UUID]]/vdis (404)

    I had the same experience as some others for a while, a couple of months ago, where it would not show up in the v5 UI but was showing in v6, but very quickly after that it stopped working in either. I know that these are VDIs with a snapshot in the chain, for example a parent VDI that may have two snapshots from it.

    I had thought the issue might have had something to do with https://github.com/vatesfr/xen-orchestra/pull/9381, as it was around this time that I saw the problem in v5 before it also trickled through to v6 - but now I see that this topic was started just before Christmas, so there must have been something else too. If I run curl with the /rest/v0 prefix on the URL, I don't get the 404. I hope this helps to track it down!

    MathieuRA opened this pull request in vatesfr/xen-orchestra (closed): fix(rest-api): fix getVmVdis and enhance the type #9381
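    To reproduce the check described above, a quick comparison of the two routes against an XO server. The hostname, token, and VM UUID below are placeholders, not values from the report; the authenticationToken cookie is XO's REST API auth mechanism:

    ```shell
    # Hypothetical values -- substitute your own XO host, auth token, and VM UUID.
    XO=https://xo.example.com
    VM=01234567-89ab-cdef-0123-456789abcdef

    # Without the /rest/v0 prefix (the route seen in the journal) -- returns 404:
    curl -sk -o /dev/null -w '%{http_code}\n' \
         -b "authenticationToken=YOUR_TOKEN" "$XO/vms/$VM/vdis"

    # With the /rest/v0 prefix, the same resource resolves:
    curl -sk -o /dev/null -w '%{http_code}\n' \
         -b "authenticationToken=YOUR_TOKEN" "$XO/rest/v0/vms/$VM/vdis"
    ```

    Comparing the two status codes directly like this is a quick way to confirm whether the 404 is a missing-prefix issue rather than a missing VDI.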
  • Our hyperconverged storage solution

    41 Topics
    717 Posts
    Danp
    @tmnguyen You can open a support ticket and request that we reactivate your XOSTOR trial licenses to match your existing XOA trial.
  • 32 Topics
    94 Posts
    olivierlambert
    It's hard to talk about "reality" with benchmarks. Also check the iodepth (which depends on the type of hardware you have; on flash/NVMe you can go up to 128 or 256), and latency comes into play too, of course. The main bottleneck is tapdisk being single-threaded: if you test across several different VMs, the aggregate will climb fairly steadily.
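    As a concrete sketch of the iodepth point above, a single-VM random-read job might look like this. The device path and runtime are placeholders, not from the post; run it only against a disposable test disk:

    ```shell
    # Hypothetical fio run -- /dev/xvdb is a placeholder for a disposable test disk.
    # iodepth=256 matches the upper figure suggested for flash/NVMe above.
    fio --name=randread --filename=/dev/xvdb --direct=1 --rw=randread \
        --bs=4k --ioengine=libaio --iodepth=256 --numjobs=1 \
        --runtime=30 --time_based --group_reporting
    ```

    Launching the same job from several VMs at once and summing the results should show the aggregate throughput growing, consistent with tapdisk being single-threaded per virtual disk.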