Categories

All news regarding the Xen and XCP-ng ecosystem

    143 Topics
    4k Posts
    bogikornel
    @stormi XCP-ng QCOW2 vs. VHD Performance Feedback on NVMe

    First of all, I would like to thank the team for all the hard work in bringing QCOW2 support to a production-ready state. It is a very welcome feature. I have performed some quick I/O benchmarks comparing the new QCOW2 format against the traditional VHD. In my tests, QCOW2 appears significantly slower than VHD on my hardware.

    Test Environment:
        Hypervisor: Dell PowerEdge R420
        CPU: Intel Xeon E5-2470 v2
        Storage: Intel SSDPELKX010T8 NVMe
        VM OS: Debian 13
        VM specs: 2 vCPUs, 1GB RAM
        Setup: one 10GB VHD and one 10GB QCOW2 disk, both pre-filled from /dev/random

    Methodology: I used a custom test suite available here: https://vm01.unsoft.hu/~ventura/fio/fio_test_20250408.tar.gz

    [image: 1778009249525-vhd_bandwidth_summary.png]
    [image: 1778009256705-vhd_latency_summary.png]
    [image: 1778009281033-qcow2_bandwidth_summary.png]
    [image: 1778009286521-qcow2_latency_summary.png]

    I also ran a simple fio loop, with the following results.

    VHD:

        root@Debian-13-CloudInit-20250810:/mnt/vhd# for mode in read write; do
        >   for jobs in 1 16; do for bs in 4 64; do for t in "" rand; do
        >     printf "%2i qd %2ik % 4s " $jobs $bs $t
        >     fio --name=random-write --rw=$t$mode --bs=${bs}k --numjobs=1 \
        >         --size=1g --iodepth=$jobs --runtime=10 --time_based \
        >         --direct=1 --ioengine=libaio | grep -e BW -e runt
        >   done; done; done; done
         1 qd  4k      read: IOPS=9625, BW=37.6MiB/s (39.4MB/s)(376MiB/10001msec)
         1 qd  4k rand read: IOPS=5414, BW=21.2MiB/s (22.2MB/s)(212MiB/10001msec)
         1 qd 64k      read: IOPS=2657, BW=166MiB/s (174MB/s)(1661MiB/10001msec)
         1 qd 64k rand read: IOPS=2575, BW=161MiB/s (169MB/s)(1610MiB/10001msec)
        16 qd  4k      read: IOPS=45.7k, BW=178MiB/s (187MB/s)(1785MiB/10001msec)
        16 qd  4k rand read: IOPS=45.9k, BW=179MiB/s (188MB/s)(1794MiB/10001msec)
        16 qd 64k      read: IOPS=16.7k, BW=1041MiB/s (1092MB/s)(10.2GiB/10001msec)
        16 qd 64k rand read: IOPS=16.7k, BW=1042MiB/s (1093MB/s)(10.2GiB/10001msec)
         1 qd  4k      write: IOPS=8842, BW=34.5MiB/s (36.2MB/s)(345MiB/10001msec); 0 zone resets
         1 qd  4k rand write: IOPS=8880, BW=34.7MiB/s (36.4MB/s)(347MiB/10001msec); 0 zone resets
         1 qd 64k      write: IOPS=6095, BW=381MiB/s (399MB/s)(3810MiB/10001msec); 0 zone resets
         1 qd 64k rand write: IOPS=6006, BW=375MiB/s (394MB/s)(3755MiB/10001msec); 0 zone resets
        16 qd  4k      write: IOPS=49.3k, BW=193MiB/s (202MB/s)(1928MiB/10001msec); 0 zone resets
        16 qd  4k rand write: IOPS=47.3k, BW=185MiB/s (194MB/s)(1848MiB/10001msec); 0 zone resets
        16 qd 64k      write: IOPS=14.3k, BW=891MiB/s (934MB/s)(8910MiB/10001msec); 0 zone resets
        16 qd 64k rand write: IOPS=15.5k, BW=966MiB/s (1013MB/s)(9663MiB/10001msec); 0 zone resets

    QCOW2:

        root@Debian-13-CloudInit-20250810:/mnt/qcow2# for mode in read write; do
        >   for jobs in 1 16; do for bs in 4 64; do for t in "" rand; do
        >     printf "%2i qd %2ik % 4s " $jobs $bs $t
        >     fio --name=random-write --rw=$t$mode --bs=${bs}k --numjobs=1 \
        >         --size=1g --iodepth=$jobs --runtime=10 --time_based \
        >         --direct=1 --ioengine=libaio | grep -e BW -e runt
        >   done; done; done; done
         1 qd  4k      read: IOPS=5866, BW=22.9MiB/s (24.0MB/s)(229MiB/10001msec)
         1 qd  4k rand read: IOPS=4000, BW=15.6MiB/s (16.4MB/s)(156MiB/10001msec)
         1 qd 64k      read: IOPS=2229, BW=139MiB/s (146MB/s)(1394MiB/10001msec)
         1 qd 64k rand read: IOPS=2161, BW=135MiB/s (142MB/s)(1351MiB/10001msec)
        16 qd  4k      read: IOPS=16.9k, BW=66.2MiB/s (69.4MB/s)(662MiB/10001msec)
        16 qd  4k rand read: IOPS=17.6k, BW=68.8MiB/s (72.1MB/s)(688MiB/10001msec)
        16 qd 64k      read: IOPS=7244, BW=453MiB/s (475MB/s)(4529MiB/10002msec)
        16 qd 64k rand read: IOPS=6994, BW=437MiB/s (458MB/s)(4372MiB/10002msec)
         1 qd  4k      write: IOPS=5551, BW=21.7MiB/s (22.7MB/s)(217MiB/10001msec); 0 zone resets
         1 qd  4k rand write: IOPS=5159, BW=20.2MiB/s (21.1MB/s)(202MiB/10001msec); 0 zone resets
         1 qd 64k      write: IOPS=4024, BW=252MiB/s (264MB/s)(2515MiB/10001msec); 0 zone resets
         1 qd 64k rand write: IOPS=4027, BW=252MiB/s (264MB/s)(2517MiB/10001msec); 0 zone resets
        16 qd  4k      write: IOPS=14.5k, BW=56.8MiB/s (59.6MB/s)(568MiB/10002msec); 0 zone resets
        16 qd  4k rand write: IOPS=14.0k, BW=54.7MiB/s (57.4MB/s)(547MiB/10001msec); 0 zone resets
        16 qd 64k      write: IOPS=6360, BW=398MiB/s (417MB/s)(3976MiB/10002msec); 0 zone resets
        16 qd 64k rand write: IOPS=6090, BW=381MiB/s (399MB/s)(3807MiB/10002msec); 0 zone resets

    I would be interested to know whether I'm overlooking something, or whether the QCOW2 format simply provides lower performance than VHD for the time being.
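    For follow-up runs, the same matrix can be driven against both disks from a single script, so transient host load affects both formats equally; a minimal sketch, assuming the two test disks are still mounted at /mnt/vhd and /mnt/qcow2 as above (the job name "bench" is illustrative):

        #!/bin/sh
        # Run the identical fio matrix against both disks back to back,
        # printing one summary line per combination under each format.
        for dir in /mnt/vhd /mnt/qcow2; do
          echo "=== $dir ==="
          for depth in 1 16; do
            for bs in 4 64; do
              for t in "" rand; do
                for mode in read write; do
                  printf "%2i qd %2ik %4s " "$depth" "$bs" "$t"
                  fio --name=bench --directory="$dir" --rw=$t$mode --bs=${bs}k \
                      --numjobs=1 --size=1g --iodepth="$depth" --runtime=10 \
                      --time_based --direct=1 --ioengine=libaio \
                      | grep -e BW -e runt
                done
              done
            done
          done
        done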
  • Everything related to the virtualization platform

    1k Topics
    15k Posts
    @vlamincktr said:

        @acebmxer I may just need to re-evaluate our backup strategy and adjust it so there is more time for the backups. I could also just run the daily deltas; the main issue is the weekly fulls that I run as a precaution. I'm always paranoid about something happening with the daily delta chain and ending up with an unusable backup, so I also pull dedicated weekly full backups, which take a lot of time to run. I've also considered running the full backups on different days to spread them out more. It sounds like one of those is my best option, rather than adding more cost/complexity.

    I would absolutely change this backup plan to monthly full backups; weekly full backups are overkill for most. The backup mechanism in XO has improved a ton since launch. Without more detail (types of VMs, workloads, etc.) it's really difficult for anyone to offer a perfect answer, but most people here would likely agree that weekly fulls aren't a benefit here. Changing the window on your backups is also an option, as you mentioned, but that only shifts when the work is performed, not the type of work performed. If you have a 1TB server and you're backing it up daily with deltas and weekly with full backups, you're backing up something like 1300 GB every week (of course this depends on your delta data change).
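    A back-of-envelope check of that weekly figure, assuming a hypothetical 5% daily change rate on the 1TB server:

        # One weekly full plus six daily deltas; the 5% daily change
        # rate is an assumption, adjust it to your workload.
        FULL_GB=1000
        DELTA_GB=$(( FULL_GB * 5 / 100 ))    # ~50 GB per daily delta
        echo $(( FULL_GB + 6 * DELTA_GB ))   # -> 1300 GB moved per week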
  • 3k Topics
    28k Posts
    @florent Hi. Yes, but manually: directly from the packages and with manual configuration.
  • Our hyperconverged storage solution

    45 Topics
    732 Posts
    DAYELA
    Hello, I'm experiencing an issue on an XCP-ng cluster using XOSTOR.

    Environment:
        3-node XCP-ng cluster
        XOSTOR distributed storage (2× 2TB NVMe on each host)
        XOA for management
        Management network: 1Gb/s
        Storage network: 10Gb/s
        MTU 1500 everywhere (no jumbo frames)

    During VM migrations, creations, and destroys, XOA loses the connection to my host pool. The VMs keep running normally, the hosts remain reachable (SSH/HTTPS/ping OK), and the connection comes back after 30s to 1min.

    Observations:
        No significant CPU or RAM saturation
        No obvious disk latency issues (iostat looks normal)
        No errors reported on the NICs
        The xapi process remains active (no crash or freeze)
        The problem is intermittent and seems random

    I've monitored the NICs with iftop: I see no bandwidth bottleneck, and I can see that XOSTOR is using the 10Gb network only.

    Has anyone experienced similar behavior with XOSTOR, and how can I fix it? Thanks in advance for your help.
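    Not a fix, but a first-pass way to narrow this down is to watch the LINSTOR/DRBD layer and the xapi log on each host while reproducing a migration; a sketch using standard LINSTOR, DRBD, and XCP-ng tools (the linstor client is normally present on XOSTOR hosts, but verify on yours):

        # All LINSTOR satellites should report Online.
        linstor node list
        # Look for volumes stuck in SyncTarget/Inconsistent during migrations.
        linstor resource list
        # DRBD's own view of the replication state.
        drbdadm status
        # Watch xapi for stalls around the time XOA loses the connection.
        tail -f /var/log/xensource.log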
  • 34 Topics
    102 Posts
    B
    The remark has been incorporated into the article: https://www.myprivatelab.tech/xcp_lab_v2_ha#perte-master Thanks again for the feedback.