Categories

  • All news regarding the Xen and XCP-ng ecosystem

    142 Topics
    4k Posts
    rzr
    Thank you to every visitor, and for those who did not get the chance to visit FOSDEM, check out this report: https://xcp-ng.org/blog/2026/02/19/fosdem-2026-follow-up/ [image: vates-xcp-ng-at-fosdem-2026-1.webp] One question I promised to forward to the forum: how many VMs are you running on XCP-ng? Dozens, hundreds, thousands, or more?
  • Everything related to the virtualization platform

    1k Topics
    14k Posts
    nikade
    I'm doing a bit of labbing this Friday evening and I ran into a scenario I haven't encountered before, and I just wanted to see if this is a bug or if I'm just being unlucky. Seven virtual XCP-ng 8.3 VMs are running on a physical XCP-ng 8.3 host, all with nested virtualization enabled. I fired up some VMs in the pool to try the load balancer out, running some "benchmark" scripts inside the VMs to create load. After a while, 3 of the 7 hosts failed because of not enough RAM (only 4 GB per VM), which isn't really that strange, but after failing they're not able to "connect" to the pool again: [screenshot]

    I then went to the sto-xcp7 VM and checked pool.conf, only to see that it actually listed sto-xcp8 (which is the master after the fencing):

        [17:48 sto-xcp7 ~]# cat /etc/xensource/pool.conf
        slave:10.200.0.98
        [17:49 sto-xcp7 ~]#

    I can also go to the host in XO and see that it's "halted", and yet it displays the console: [screenshot]

    Just for a sanity check, I checked XCP-ng Center as well, and it agrees with XO that the hosts are offline: [screenshot]

    Is this a bug, or what's actually going on? I tried rebooting the failed hosts, without any luck. Any pointers on where to look?
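    For reference, a minimal recovery sketch for this kind of situation, assuming the standard xe CLI on each host (the 10.200.0.98 address comes from the pool.conf output above; substitute whichever host is the current master):

        # On a stuck member: confirm which master this host points at
        cat /etc/xensource/pool.conf

        # If it points at the wrong or unreachable master, repoint it
        # at the current master and restart the toolstack:
        xe pool-emergency-reset-master master-address=10.200.0.98
        xe-toolstack-restart

        # Or, from the current master, ask it to reconnect members
        # that still reference it:
        xe pool-recover-slaves

        # Then verify the members come back as enabled:
        xe host-list params=name-label,enabled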
  • 3k Topics
    27k Posts
    J
    @vates-11940382 Hi all at Vates, I just wanted to say a big thank you for the MCP support that’s now landing in Xen Orchestra. This is a genuinely forward-thinking move, and it’s going to have a huge impact on how IaC tooling interacts with XCP-ng going forward. MCP gives XO a clean, structured, read-only interface that modern AI-assisted tools (e.g. Pulumi Neo, Copilot, Claude, Cursor, etc.) can understand natively. That’s a massive step toward making XCP-ng an AI-visible, AI-navigable platform - something no other virtualisation stack is doing yet. What’s even more exciting is the long-term implication: this kind of openness and clarity is exactly what hyperscalers have been struggling with internally. If Vates continues down this path, it’s not unrealistic that MCP-native infrastructure could start attracting interest from much larger players - whether as customers, collaborators, or contributors. The combination of IaC, MCP, AI-assisted operations, and XCP-ng’s open architecture puts Vates in a very strong position for the future. Thanks again for pushing this forward. It’s a big deal, and it’s going to unlock a lot of possibilities for the community.
  • Our hyperconverged storage solution

    40 Topics
    715 Posts
    I
    @ronan-a Thanks
  • 32 Topics
    94 Posts
    olivierlambert
    It's hard to talk about "reality" with benchmarks. Also check the iodepth (which depends on the kind of hardware you have; on flash/NVMe you can go up to 128 or 256), and latency comes into play too, of course. The main bottleneck is tapdisk's single-threading: if you test across several different VMs, the aggregate will climb fairly steadily.
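    A minimal sketch of that kind of test with fio, under assumptions not in the post (the /dev/xvdb device name and the job parameters are illustrative): run the same job inside several VMs in parallel and compare the per-VM results with the sum.

        # Hypothetical 4k random-read job at high queue depth against a
        # dedicated test disk; /dev/xvdb is an assumed device name.
        fio --name=randread --filename=/dev/xvdb --ioengine=libaio \
            --direct=1 --rw=randread --bs=4k --iodepth=128 \
            --runtime=60 --time_based --group_reporting

    Because each VM's disk goes through its own tapdisk process, a single VM tops out on one thread, while the aggregate across several VMs keeps scaling.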