Categories

  • All news regarding Xen and XCP-ng ecosystem

    142 Topics
    4k Posts
    rzr
    Thank you to every visitor, and for those who did not have the chance to visit FOSDEM, check this report: https://xcp-ng.org/blog/2026/02/19/fosdem-2026-follow-up/ [image: vates-xcp-ng-at-fosdem-2026-1.webp] One question I promised to forward to the forum: how many VMs are you running on XCP-ng? Dozens, hundreds, thousands, or more?
  • Everything related to the virtualization platform

    1k Topics
    14k Posts
    @comdirect Use this command (replace sda below with the relevant device):
        cat /sys/block/sda/queue/scheduler
    The active scheduler is enclosed in brackets, e.g. noop deadline [cfq]. For multiple drives use:
        grep "" /sys/block/*/queue/scheduler
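The bracketed-active-scheduler convention described above can also be parsed programmatically. A minimal Python sketch, assuming sysfs output of the shape shown in the post (the sample strings here are illustrative, not read from a real host):

```python
import re

def active_scheduler(sysfs_line: str) -> str:
    """Extract the scheduler marked in brackets, e.g. 'noop deadline [cfq]' -> 'cfq'."""
    match = re.search(r"\[([\w-]+)\]", sysfs_line)
    if match is None:
        raise ValueError(f"no active scheduler marked in: {sysfs_line!r}")
    return match.group(1)

# Illustrative sysfs contents:
print(active_scheduler("noop deadline [cfq]"))       # -> cfq
print(active_scheduler("[mq-deadline] kyber none"))  # -> mq-deadline
```

In a script you would feed it the contents of /sys/block/<device>/queue/scheduler for each device of interest.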
  • 3k Topics
    27k Posts
    Bit baffled by this one... I've spun up a lot of VMs historically with no problems at all. Yet when I tried recently on the XOA from Sources (installed on a VM on the bare-metal host), it failed on 3 attempted VM creations, and their subsequent deletions have also failed to kick in. Initially I thought this could be a network issue, as backups were taking place at the same time, but even after those finished (well, 99% finished according to the Tasks), all of these basic tasks were just getting stuck in progress:
    VM.start: 54% (stuck)
    Async.VM.hard_shutdown (on HOST): 33% (stuck)
    VBD.unplug (on HOST): 0% (stuck)
    Async.VM.destroy (on HOST): 0% (stuck)
    As I mentioned, a handful of backups are stuck at 99%, so I'm not sure if that is somehow related and/or blocking things, but I can't see how it would be, as the backups are only for currently running VMs. It's all a bit odd... And while odd is normal working in tech, I'm getting zero information from XOA about what the issue is and how to solve it, and the usual "turn the host off and back on again" isn't really an option in the real world. Any ideas? Regards, Michael
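Stuck XAPI tasks like the ones listed above can at least be enumerated on the host with `xe task-list` and cancelled with `xe task-cancel uuid=<uuid>`. A small Python sketch that filters the (hypothetical) `xe task-list` output shown below for tasks that have not reached completion:

```python
def parse_task_list(output: str) -> list[dict]:
    """Parse `xe task-list` style output: blank-line-separated blocks of
    'key ( RO): value' lines, one block per task."""
    tasks = []
    for block in output.strip().split("\n\n"):
        task = {}
        for line in block.splitlines():
            key, _, value = line.partition(":")
            task[key.split("(")[0].strip()] = value.strip()
        if task:
            tasks.append(task)
    return tasks

def incomplete(tasks: list[dict]) -> list[dict]:
    """XAPI reports progress as 0.0-1.0; anything below 1.0 is still pending."""
    return [t for t in tasks if float(t.get("progress", 1)) < 1.0]

# Hypothetical output for illustration (UUIDs are placeholders):
SAMPLE = """\
uuid ( RO)       : 11111111-2222-3333-4444-555555555555
name-label ( RO) : Async.VM.hard_shutdown
progress ( RO)   : 0.330

uuid ( RO)       : 66666666-7777-8888-9999-000000000000
name-label ( RO) : Async.VM.destroy
progress ( RO)   : 0.000"""

for t in incomplete(parse_task_list(SAMPLE)):
    # each candidate could then be cancelled with: xe task-cancel uuid=<uuid>
    print(t["name-label"], t["progress"])
```

Note that `xe task-cancel` does not always unstick a wedged task; on XCP-ng, restarting the toolstack with `xe-toolstack-restart` (which does not touch running VMs) is a common next step before resorting to a host reboot.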
  • Our hyperconverged storage solution

    40 Topics
    715 Posts
    @ronan-a Thanks
  • 32 Topics
    94 Posts
    olivierlambert
    It's hard to talk about "reality" with benchmarks. Also check the iodepth (which depends on the type of hardware you have; on flash/NVMe you can go up to 128 or 256), and latency comes into play too, of course. The main bottleneck is tapdisk being single-threaded: if you test across several different VMs, the aggregate will scale fairly steadily.
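A benchmark along those lines can be expressed as a fio job file; this is a sketch under the assumptions above (queue depth sized for flash/NVMe, one job per VM, the same job run inside several VMs to see the aggregate). The filename is a placeholder, not a real path from the thread:

```ini
; hypothetical per-VM random-read job at high queue depth
[randread-per-vm]
ioengine=libaio
direct=1
rw=randread
bs=4k
iodepth=128          ; as suggested for flash/NVMe; try 256 as well
numjobs=1            ; one job per VM, since tapdisk is single-threaded per disk
runtime=60
time_based
size=4G
filename=/tmp/fio.test   ; placeholder: point at a file on the SR-backed disk
```

Running this job simultaneously in several VMs and summing the per-VM results is what surfaces the scaling behaviour described above, since each VM gets its own tapdisk process.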