Categories

  • All news regarding Xen and XCP-ng ecosystem

    142 Topics
    4k Posts
rzr
Thank you to every visitor, and for those who did not have the chance to attend FOSDEM, check this report: https://xcp-ng.org/blog/2026/02/19/fosdem-2026-follow-up/ [image: vates-xcp-ng-at-fosdem-2026-1.webp] One question I promised to forward to the forum: how many VMs are you running on XCP-ng? Dozens, hundreds, thousands, or more?
  • Everything related to the virtualization platform

    1k Topics
    14k Posts
@comdirect Use this command (replace sda in the command below with the relevant device):

        cat /sys/block/sda/queue/scheduler

    The active scheduler will be enclosed in brackets, e.g. noop deadline [cfq]. For multiple drives use:

        grep "" /sys/block/*/queue/scheduler
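    The bracketed-entry convention above can be parsed in a script. A minimal sketch, run here against a sample string rather than a live /sys path so it works anywhere; on a real host you would substitute the output of `cat /sys/block/<dev>/queue/scheduler`:

    ```shell
    #!/bin/sh
    # Sample scheduler line as read from /sys/block/<dev>/queue/scheduler.
    line="noop deadline [cfq]"

    # The active scheduler is the entry enclosed in brackets; strip
    # everything outside them with sed.
    active=$(printf '%s\n' "$line" | sed 's/.*\[\(.*\)\].*/\1/')

    echo "$active"   # prints "cfq" for the sample line above
    ```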
  • 3k Topics
    27k Posts
@Greg_E Looks like you were writing a reply as I was... But yeah, TrueNAS is on the roadmap at some point in the future, once I've had the time to dig into the details and understand where it fits in the overall system architecture. Out of interest though, where would you see TrueNAS fitting in with the setup outlined below? My initial thoughts are that TrueNAS in this context would be either:

    - a glorified USB stick storing ISOs,
    - a glorified (RAID-redundant) external HDD running as the backup platform,
    - or both of the above.

    Have I understood that context correctly? The current setup is essentially:

    - Bare metal
    - HBA RAID controllers for storage: primary RAID array, backup RAID array
    - Dom0: XCP-ng server
    - DomU: XOA VM
    - Backups: primary RAID array -> backup RAID array
    - Backups: primary RAID array -> remote PC
    - Backups: primary RAID array -> Dom0
    - Remote PC: ISOs via SMB share

    TrueNAS in the above context would simply be an additional backup target, either:

    - a TrueNAS VM on the same bare metal, or
    - a physically separate bare-metal machine running TrueNAS (i.e. a glorified RAID'ed external USB disk).

    Is that where you would see things fitting into the overall system architecture? Curious to get your thoughts.
  • Our hyperconverged storage solution

    40 Topics
    715 Posts
    @ronan-a Thanks
  • 32 Topics
    94 Posts
olivierlambert
    It's hard to talk about "reality" with benchmarks. Also check the iodepth (which depends on the type of hardware you have; on flash/NVMe you can go up to 128 or 256), and latency comes into play too, of course. The main bottleneck is the single-threading of tapdisk: if you test across several different VMs, the aggregate will scale up fairly steadily.
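    To make the iodepth point concrete, here is a hypothetical fio job file sketch; the device path /dev/xvdb is an assumption (a dedicated test disk inside the VM), and the other parameters are standard fio options:

    ```ini
    ; Random-read benchmark with a deep queue: on flash/NVMe, raising
    ; iodepth (e.g. to 128 or 256) is usually needed to saturate the device.
    [nvme-randread]
    ioengine=libaio
    direct=1
    rw=randread
    bs=4k
    iodepth=128
    runtime=60
    time_based=1
    filename=/dev/xvdb
    ```

    Since tapdisk is single-threaded, a more representative aggregate number comes from running the same job in several VMs at once and summing the results, rather than raising numjobs inside one VM.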