Categories

  • All news regarding the Xen and XCP-ng ecosystem

    142 Topics
    4k Posts
    rzr
    Thank you to every visitor, and for those who did not have the chance to visit FOSDEM, check out this report: https://xcp-ng.org/blog/2026/02/19/fosdem-2026-follow-up/ [image: vates-xcp-ng-at-fosdem-2026-1.webp] One question I promised to forward to the forum: how many VMs are you running on XCP-ng? Dozens, hundreds, thousands, or more?
  • Everything related to the virtualization platform

    1k Topics
    14k Posts
    @comdirect Use this command (replace sda below with the relevant device):

        cat /sys/block/sda/queue/scheduler

    The active scheduler will be enclosed in brackets, e.g. noop deadline [cfq]. For multiple drives use:

        grep "" /sys/block/*/queue/scheduler
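
    A minimal follow-on sketch, not from the post itself: switching the active scheduler at runtime, assuming a device named sda and a kernel that exposes mq-deadline. Note the change does not persist across reboots.

        # write the desired scheduler name into the same sysfs file (needs root)
        echo mq-deadline | sudo tee /sys/block/sda/queue/scheduler
        # confirm: the newly active scheduler is now the one shown in brackets
        cat /sys/block/sda/queue/scheduler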
  • 3k Topics
    27k Posts
    I've spotted various similar issues on Reddit about Windows updates breaking SMB mounts over the last 12-18 months. It's impossible to track down the specifics, so I'm not even going to bother. Then I found this issue too: https://xcp-ng.org/forum/topic/10545/long-delays-at-46-when-creating-or-starting-a-new-vm which was solved by moving from an SMB mount to an NFS mount. That won't work with my infrastructure, as the Windows machine runs a Home edition, and NFS is only supported on Business or Enterprise editions.

    As it stands, given that nothing I've tried has got the SMB mount working again, it feels like I'm at a dead end on fixing this. It looks like I'll need to rethink the architecture for storing the ISOs and one of the backup routes (thankfully there are many types of backups in place for redundancy). Not the end of the world, but a tad annoying; it was probably a Windows update that broke this during one of the almost daily automated update/shutdown/restart cycles. It only started happening yesterday; all previous backups via the SR were working fine until then. That's the only thing I can put it down to, really.
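
    For reference, and not part of the post itself: the fix in the linked topic (moving the ISO library from SMB to NFS) can be sketched with the xe CLI roughly as follows, where the server name and export path are placeholders:

        # create an ISO SR backed by a hypothetical NFS export
        xe sr-create name-label="ISO library (NFS)" type=iso content-type=iso \
            device-config:location=nfs-server:/export/isos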
  • Our hyperconverged storage solution

    40 Topics
    715 Posts
    @ronan-a Thanks
  • 32 Topics
    94 Posts
    olivierlambert
    It's hard to talk about "reality" when it comes to benchmarks. Also check the iodepth (which depends on the kind of hardware you have; on flash/NVMe you can go up to 128 or 256), and latency comes into play too, of course. The main bottleneck is tapdisk being single-threaded: if you test across several different VMs, the aggregate will scale fairly steadily.
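
    As an illustration, not part of the original reply: a typical fio invocation for probing a deep queue on NVMe-backed storage inside a VM; the target file, size, and runtime are placeholder values:

        # random 4k reads with 128 I/Os kept in flight via libaio
        fio --name=qd128 --ioengine=libaio --direct=1 --rw=randread --bs=4k \
            --iodepth=128 --size=4G --runtime=60 --time_based --filename=/root/fio.test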