Categories

  • All news regarding Xen and XCP-ng ecosystem

    142 Topics
    4k Posts
    rzr
    Thank you to every visitor, and for those who did not have the chance to visit FOSDEM, check this report: https://xcp-ng.org/blog/2026/02/19/fosdem-2026-follow-up/ [image: vates-xcp-ng-at-fosdem-2026-1.webp] One question I promised to forward to the forum: how many VMs are you running on XCP-ng? Dozens, hundreds, thousands, or more?
  • Everything related to the virtualization platform

    1k Topics
    14k Posts
    @comdirect Use this command (replace sda below with the relevant device): cat /sys/block/sda/queue/scheduler The active scheduler is the one enclosed in brackets, e.g. noop deadline [cfq]. For multiple drives use: grep "" /sys/block/*/queue/scheduler
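    As a quick illustration of the tip above, a minimal shell sketch (assuming a Linux host, where the bracketed entry marks the scheduler in use):

        # Print only the active scheduler for every block device.
        for dev in /sys/block/*/queue/scheduler; do
            printf '%s: %s\n' "$dev" "$(grep -o '\[[^]]*\]' "$dev")"
        done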
  • 3k Topics
    27k Posts
    @acebmxer As per your screenshots, we can guess you have daily backup jobs run by the XOA, and the memory graph is characteristic of a regular OOM (out of memory) kill. There is some kind of memory leak happening, and the devs are investigating it in another thread here. Upgrading your XOA memory to 8GB or 16GB will buy a few more days before the OOM happens, but it will happen. Check this thread: https://xcp-ng.org/forum/topic/11721/backup-mail-report-says-interrupted-but-it-s-not/48
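    To confirm the interruptions really are OOM kills, a minimal sketch (assuming shell access to the XOA VM; the exact log wording varies by kernel version):

        # Search the kernel log for OOM-killer activity, with readable timestamps.
        dmesg -T | grep -iE 'out of memory|oom-killer'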
  • Our hyperconverged storage solution

    40 Topics
    715 Posts
    @ronan-a Thanks
  • 32 Topics
    94 Posts
    olivierlambert
    It is hard to talk about "reality" when it comes to benchmarks. Also check the iodepth (which depends on the kind of hardware you have; on flash/NVMe you can go up to 128 or 256), and latency comes into play too, of course. The main bottleneck is tapdisk being single-threaded: if you test across several different VMs, the aggregate will scale up fairly steadily.
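    By way of illustration, a minimal fio sketch for testing with a deeper queue (fio itself and the target device /dev/xvdb are assumptions, not from the post; point it at a disposable disk):

        # Random 4k reads at iodepth 128, as suggested for flash/NVMe.
        fio --name=randread --filename=/dev/xvdb --direct=1 --rw=randread \
            --bs=4k --ioengine=libaio --iodepth=128 --runtime=60 \
            --time_based --group_reporting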