QCOW2 is now GA in XCP-ng
Starting with the latest updates for XCP-ng 8.3, QCOW2 is now production-ready and supports disks up to 16 TiB. This release introduces the QCOW2 open format (from QEMU) for virtual disks (VDIs), moving beyond the VHD format to overcome its inherent size limitations. The primary constraint of VHD is its 2 TiB per disk ceiling.
With QCOW2, we’re raising the maximum disk size to 16 TiB (technically 16,381 GiB to account for metadata overhead). We’re using this upper limit primarily because it’s one we can regularly test for regressions, but also to maintain consistency between SRs and enable live migration, while accounting for constraints tied to the ext SRs. In the future, we’ll decouple the maximum size from the SR type, which will allow us to scale to significantly larger capacities.
Prioritizing stability
Adding QCOW2 support to XCP-ng 8.3 has been an 18-month journey, involving significant contributions from the entire XCP-ng development team. This presented a dual challenge:
- Deliver the feature to users who need large disks immediately, rather than waiting for the next major XCP-ng release.
- Preserve the proven reliability of VHD support, which remains the standard in your production environments.
Integrating QCOW2 required substantial changes to the codebase (more than we typically introduce in an LTS release). To ensure this new feature doesn’t compromise XCP-ng’s stability, we invested considerable time and resources into rigorous QA. Thanks to extensive internal testing and valuable feedback from our user community (a huge thank you!), we’ve confirmed that VHD support remains fully stable with no regressions.
A new image format, VHD still the default
We’ve successfully added support for QCOW2 without requiring you to destroy and recreate existing storage repositories (SRs). Most existing SRs will now automatically manage QCOW2 disks alongside VHD. The exceptions are the linstor (XOSTOR) and smb SR types: the former due to technical constraints, as larger disks will only be possible once the storage driver is ported to SMAPIv3. We'll implement QCOW2 support in the latter if there's a real-world need, but SMB is not a recommended storage type for VDIs, especially large disks.
VHD remains the default. Xen Orchestra will automatically create QCOW2 disks for any new virtual disk exceeding 2040 GiB (to stay safely under the 2 TiB VHD limit). Smaller disks will default to VHD.
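The selection rule described above can be sketched as follows. This is a simplified illustration of the behavior this post describes, not Xen Orchestra's actual code; the function name and structure are hypothetical:

```python
GIB = 1024**3                # bytes in one GiB
QCOW2_THRESHOLD_GIB = 2040   # above this, XO defaults to QCOW2
                             # (2040 GiB stays safely under VHD's 2 TiB = 2048 GiB ceiling)

def pick_image_format(size_bytes: int) -> str:
    """Illustrative helper: VHD for disks up to 2040 GiB, QCOW2 beyond."""
    if size_bytes > QCOW2_THRESHOLD_GIB * GIB:
        return "qcow2"
    return "vhd"

print(pick_image_format(100 * GIB))    # vhd
print(pick_image_format(4096 * GIB))   # qcow2
```

The 8 GiB margin below the hard VHD limit avoids creating a disk right at the format's ceiling.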

QCOW2 and performance
Storage performance is a key focus for XCP-ng’s storage team. At present, QCOW2 performance is comparable to VHD, though we’re actively addressing a few known edge cases.
In parallel, we’ve been exploring further optimizations to storage performance. We’re still finalizing these improvements, but we plan to share more updates in the coming months.
Choosing the right format for your environment
Our team has 1.5 years of hands-on experience with QCOW2, compared to over 8 years with VHD. QCOW2 is now production-ready (we’ll address the remaining known issues in future updates; see below), but VHD has a longer track record. We recommend a gradual transition, starting with one or a few VMs.
VM size also impacts performance regardless of the disk format. Larger VMs will see longer execution times for snapshot, copy, migrate, and backup operations. We strongly recommend testing these workflows in your specific environment before scaling.
FAQ
Our team has created a FAQ: https://docs.xcp-ng.org/storage/qcow2_faq/.
We hope it helps clarify the new image format management, a significant change after years of supporting only the VHD (and RAW) formats.
We will continue refining it as new questions arise.
Known issues
- Deleting an older snapshot on an LVM SR can cause a tapdisk crash during coalescing if the VM is running. While the VM is on, only the last two snapshots can be safely deleted; to delete an older snapshot, power the VM off first (any snapshot can be deleted while the VM is off). A fix for this issue will arrive in a future update.
- Coalesce can take a long time if the VM is writing heavily: Currently, coalescing mimics the VHD approach. It takes a snapshot, pauses VDI access, and coalesces before resuming. Heavy write activity can cause the parent coalesce to stall, as storage I/O is split between guest writes and coalescing, leading to high resource usage. We’re modifying the behavior to coalesce the leaf VDI directly without snapshots, which will arrive in a future update.
- Slow VDI migration from QCOW2 to QCOW2: In several investigated cases, migration transfers every bit of the disk rather than just the allocated written blocks. This can cause large QCOW2 disk migrations to take excessively long or fail.
- Migration performance and disk usage when migrating from QCOW2 to VHD: Cross-format migrations currently read the entire QCOW2 disk rather than only the allocated blocks. This causes the disk to expand to its maximum allocated size during migration and at the destination, impacting performance and storage usage.
- tapdisk crash with large QCOW2 VDIs on ZFS: We’ve observed a tapdisk process crash when using large QCOW2 VDIs on top of ZFS. This issue is currently under investigation.
What’s next?
This release marks a major step forward for large disk support, but it’s just the beginning. Over the coming months, we’ll be shipping performance optimizations, resolving the known issues outlined above, and advancing our storage roadmap, which will eventually unlock significantly higher disk capacities. As always, your real-world testing and feedback are invaluable.
Thank you for trusting XCP-ng, and we look forward to building a more scalable storage stack together!