Thank you to every visitor, and for those who didn't have the chance to visit FOSDEM, check out this report:
https://xcp-ng.org/blog/2026/02/19/fosdem-2026-follow-up/
[image: vates-xcp-ng-at-fosdem-2026-1.webp]
One question I promised to forward to the forum: how many VMs are you running on XCP-ng?
Dozens, hundreds, thousands, or more?
@comdirect
Use this command (replace sda below with the relevant device):
cat /sys/block/sda/queue/scheduler
The active scheduler will be enclosed in brackets, e.g. noop deadline [cfq]
For multiple drives use:
grep "" /sys/block/*/queue/scheduler
@acebmxer as per your screenshots, we can guess you have daily backup jobs operated by the XOA, and the memory graph is indicative of a regular OOM kill (out of memory).
There is some kind of memory leak happening, and the devs are investigating it in another thread here.
Upgrading your XOA memory to 8GB or 16GB will buy you a few more days before the OOM happens, but it will still happen.
Check this thread: https://xcp-ng.org/forum/topic/11721/backup-mail-report-says-interrupted-but-it-s-not/48
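To confirm it really is the OOM killer, you can check the kernel log from inside the XOA VM; a quick sketch (assuming journalctl is available on your appliance):
dmesg | grep -i "out of memory"
journalctl -k | grep -i oom
If the killed process is node (xo-server), you're hitting the same leak.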
It's hard to talk about "reality" with benchmarks. Also check the iodepth (which depends on the type of hardware you have; on flash/NVMe you can go up to 128 or 256). Latency comes into play too, of course.
The main bottleneck is tapdisk being single-threaded: if you test across several different VMs, the aggregate will climb fairly steadily.
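For what it's worth, a sketch of a fio run at a higher queue depth to see what a single tapdisk can deliver; all parameters here are illustrative, so adjust the test file, size, and iodepth to your setup:
fio --name=iodepth-test --filename=/root/fio.test --size=4G --rw=randread --bs=4k --ioengine=libaio --iodepth=128 --direct=1 --runtime=60 --time_based --group_reporting
Running the same job concurrently in several VMs should show the total climbing, since each VM disk gets its own tapdisk process.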