Categories

  • All news regarding Xen and XCP-ng ecosystem

    142 Topics
    4k Posts
    rzr
    Thank you to every visitor, and for those who did not have the chance to visit FOSDEM, check out this report: https://xcp-ng.org/blog/2026/02/19/fosdem-2026-follow-up/ [image: vates-xcp-ng-at-fosdem-2026-1.webp] One question I promised to forward to the forum: how many VMs are you running on XCP-ng? Dozens, hundreds, thousands or more?
  • Everything related to the virtualization platform

    1k Topics
    14k Posts
    @comdirect Use this command: (replace sda in the command below with the relevant device) cat /sys/block/sda/queue/scheduler The active scheduler will be enclosed in brackets. e.g. noop deadline [cfq] For multiple drives use: grep "" /sys/block/*/queue/scheduler
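    As a minimal sketch of that check on a Linux system with /sys mounted, where the device name sda and the example outputs are only illustrative (the scheduler names vary by kernel):

    ```sh
    # Show the I/O scheduler for one device; the active one appears in brackets
    # (replace sda with the relevant device)
    cat /sys/block/sda/queue/scheduler
    # example output: [mq-deadline] kyber bfq none

    # Show the scheduler for every block device at once
    grep "" /sys/block/*/queue/scheduler
    # example output:
    # /sys/block/sda/queue/scheduler:[mq-deadline] kyber bfq none
    # /sys/block/sdb/queue/scheduler:[none] mq-deadline kyber bfq
    ```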
  • 3k Topics
    27k Posts
    @acebmxer Yeah, I definitely believe this is not the latest version, but I was told it was kept like that for a reason. I can't quite remember what the reasoning was, as this is for another company that uses xoa. But would running this command, 'xe vm-reset-powerstate uuid=<VM_UUID> --force', reboot the VM?
  • Our hyperconverged storage solution

    40 Topics
    715 Posts
    @ronan-a Thanks
  • 32 Topics
    94 Posts
    olivierlambert
    It's hard to talk about "reality" with benchmarks. Also check the iodepth (which depends on the type of hardware you have; on flash/NVMe you can go up to 128 or 256), and latency of course comes into play as well. The main bottleneck is tapdisk's single-threading: if you test across several different VMs, the aggregate will climb fairly steadily.
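    As a hedged sketch of what such a run could look like with fio inside a guest, where the target file /mnt/test/fio.dat, the size, and the runtime are arbitrary illustrative choices rather than values from the post:

    ```sh
    # Random-read benchmark with a deep queue, suited to flash/NVMe-backed storage
    # (iodepth=128 as suggested above; try 256 to probe deeper queues)
    fio --name=randread \
        --filename=/mnt/test/fio.dat --size=4G \
        --rw=randread --bs=4k \
        --ioengine=libaio --direct=1 \
        --iodepth=128 --numjobs=1 \
        --runtime=60 --time_based --group_reporting
    ```

    Because tapdisk is single-threaded per virtual disk, a single VM will plateau; repeating the same run in several VMs at once and summing the results is what makes the aggregate climb fairly steadily, as described above.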