Categories

  • All news regarding the Xen and XCP-ng ecosystem

    142 Topics
    4k Posts
    rzr
    Thank you to every visitor, and for those who did not have the chance to visit FOSDEM, check out this report: https://xcp-ng.org/blog/2026/02/19/fosdem-2026-follow-up/ [image: vates-xcp-ng-at-fosdem-2026-1.webp] One question I promised to forward to the forum: how many VMs are you running on XCP-ng? Dozens, hundreds, thousands, or more?
  • Everything related to the virtualization platform

    1k Topics
    14k Posts
    nikade
    I'm doing a bit of labbing this Friday evening and ran into a scenario I haven't encountered before, and I just wanted to see if this is a bug or if I'm just being unlucky. I have 7 virtual XCP-ng 8.3 VMs running on a physical XCP-ng 8.3 host, all with nested virtualization enabled. I fired up some VMs in the pool to try the load balancer out, running some "benchmark" scripts inside the VMs to create load. After a while 3 of the 7 hosts failed because they ran out of RAM (only 4 GB per VM), which isn't really that strange, but after they failed they're not able to "connect" to the pool again: [image: 1771606169069-e0b56a17-937f-488a-a11c-63a46f6b7491-bild.png] I then went to the sto-xcp7 VM and checked pool.conf, only to see that it actually listed sto-xcp8 (which is the master after the fencing): [17:48 sto-xcp7 ~]# cat /etc/xensource/pool.conf slave:10.200.0.98 [17:49 sto-xcp7 ~]# I can also go to the host in XO and see that it's "halted", yet it still displays the console: [image: 1771606266855-479624f6-fe9c-4205-b303-8f1f1c772907-bild.png] Just as a sanity check, I looked in XCP-ng Center as well, and it agrees with XO that the hosts are offline: [image: 1771606298601-933c545f-65f7-4b48-a2a1-eb8681c9ff29-bild.png] Is this a bug, or what's actually going on? I tried rebooting the failed hosts without any luck. Any pointers on where to look? (See the diagnostic sketch at the end of this page.)
  • 3k Topics
    27k Posts
    @Danp I don't have that option in the Advanced Tab.
  • Our hyperconverged storage solution

    40 Topics
    715 Posts
    @ronan-a Thanks
  • 32 Topics
    94 Posts
    olivierlambert
    It's hard to talk about "reality" when it comes to benchmarks. Also check the iodepth (which depends on the type of hardware you have; on flash/NVMe you can go up to 128 or 256), and latency comes into play as well, of course. The main bottleneck is tapdisk's single-threading: if you test across several different VMs, the aggregate will climb fairly steadily (see the fio sketch below).
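
To illustrate the iodepth point above, here is a minimal fio sketch run inside a guest. This is a hedged example, not something taken from the post: fio must be installed in the VM, and the scratch file path and job sizes are assumptions chosen for illustration.

    # Random-read job with a deep queue; tune --iodepth to the hardware
    # (128 or 256 is plausible for flash/NVMe, far lower for spinning disks).
    # /mnt/test/fio.dat is an assumed scratch file, not a value from the post.
    fio --name=randread-test --filename=/mnt/test/fio.dat --size=4G \
        --ioengine=libaio --direct=1 --rw=randread --bs=4k \
        --iodepth=128 --runtime=60 --time_based --group_reporting

Because tapdisk is single-threaded per virtual disk, a single VM's result will plateau; repeating the same job in several VMs, each with its own virtual disk, and summing the reported throughput is how the aggregate scaling described above would show up.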
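
For the nested-pool scenario reported higher up on this page (pool members stuck offline after fencing, with pool.conf already pointing at the new master), here is a hedged diagnostic sketch from the dom0 console. The xe commands are standard XCP-ng/XenServer tooling; whether they resolve that exact failure is an assumption, not something confirmed in the post.

    # On a stuck member: confirm which master address it has recorded
    cat /etc/xensource/pool.conf

    # On the same member: restart the toolstack so it retries contacting the master
    xe-toolstack-restart

    # On the current master: check which hosts are enabled and reachable
    xe host-list params=uuid,name-label,enabled

    # On the current master: ask it to recover members stuck in emergency mode
    xe pool-recover-slaves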