Thank you to all our visitors! And for those who didn't have the chance to visit FOSDEM, check out this report:
https://xcp-ng.org/blog/2026/02/19/fosdem-2026-follow-up/
[image: vates-xcp-ng-at-fosdem-2026-1.webp]
One question I promised to forward to the forum: how many VMs are you running on XCP-ng?
Dozens, hundreds, thousands, or more?
@comdirect
Use this command (replace sda in the command below with the relevant device):
cat /sys/block/sda/queue/scheduler
The active scheduler is enclosed in brackets, e.g. noop deadline [cfq]
For multiple drives use:
grep "" /sys/block/*/queue/scheduler
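If you want to pull the active scheduler out programmatically (for a monitoring script, say), a small sketch like this works on the bracketed format shown above; `active_scheduler` is a hypothetical helper name, not part of any tool:

```shell
# Extract the active I/O scheduler (the bracketed entry) from a line in the
# format of /sys/block/<dev>/queue/scheduler, e.g. "noop deadline [cfq]".
active_scheduler() {
  # sed captures the token between [ and ] and prints only that
  echo "$1" | sed -n 's/.*\[\(.*\)\].*/\1/p'
}

active_scheduler "$(cat /sys/block/sda/queue/scheduler)"
```

To switch schedulers at runtime, you can write one of the listed names back to the same file, e.g. `echo mq-deadline | sudo tee /sys/block/sda/queue/scheduler` (the change does not persist across reboots).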
I have been having the same issue and have been watching it for the last couple of weeks. Initially my XOA only had 8 GB of RAM assigned; I have bumped it up to 16 GB to try to alleviate the issue. It seems to be some sort of memory leak. This is the official XO Appliance too, not XO CE.
I changed the systemd unit file to make use of the extra memory, as per the docs:
ExecStart=/usr/local/bin/node --max-old-space-size=12288 /usr/local/bin/xo-server
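Rather than editing the shipped unit file directly, the same change can be made with a systemd drop-in override so it survives package updates. This is a sketch assuming the unit is named `xo-server.service` (as on XOA); adjust the name and heap size to your setup:

```ini
# Created via: systemctl edit xo-server.service
# (lands in /etc/systemd/system/xo-server.service.d/override.conf)
[Service]
# An empty ExecStart= first clears the original command, then the
# replacement adds the larger V8 old-space heap limit (in MiB).
ExecStart=
ExecStart=/usr/local/bin/node --max-old-space-size=12288 /usr/local/bin/xo-server
```

After saving, run `systemctl daemon-reload && systemctl restart xo-server` for it to take effect.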
It seems that over time it will just consume all of its memory until it crashes and restarts, no matter how much I assign.
[image: 1771812894717-d7c736e8-475a-401a-acce-e22d4c8688d7-image.png]
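To put numbers on the growth between crashes, a minimal sketch that reads a process's resident set size from /proc (Linux only); `rss_of` is a hypothetical helper, and you would pass it the xo-server PID, e.g. `rss_of "$(pgrep -f xo-server)"`, from a cron job or a loop:

```shell
# Print the resident set size (VmRSS) of a given PID, e.g. "123456 kB".
# Run periodically and append to a log to chart the leak over time.
rss_of() {
  awk '/^VmRSS:/ {print $2, $3}' "/proc/$1/status"
}

rss_of $$   # prints something like "3456 kB" for the current shell
```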
It's hard to talk about "reality" with benchmarks. Also check the iodepth (which depends on the kind of hardware you have; on flash/NVMe you can go up to 128 or 256), and of course latency comes into play as well.
The main bottleneck is tapdisk's single-threading: if you test across several different VMs, the aggregate throughput will scale fairly steadily.
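To make the iodepth point concrete, here is a sketch of a fio job for a deep-queue random-read test; the device path `/dev/xvdb` and the depth of 128 are placeholders to adapt to your VM and hardware:

```ini
; Hypothetical fio job: 4k random reads at queue depth 128 for 30 s.
; Point filename at a test disk, NOT one holding data you care about.
[randread-qd128]
ioengine=libaio
direct=1
rw=randread
bs=4k
iodepth=128
runtime=30
time_based=1
filename=/dev/xvdb
```

Running the same job simultaneously in several VMs (each backed by its own VDI, hence its own tapdisk process) is what lets the aggregate scale past the single-tapdisk ceiling.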