QCOW2 beta announcement
It’s finally happening: after years of using the legacy VHD format, we’re opening the public beta for QCOW2 support in XCP-ng!
This marks a major milestone in our storage evolution, unlocking bigger disks, modern features, and a new foundation for future improvements.
We thank all the people who tested the alpha and provided valuable feedback.
🧱 Why QCOW2?
VHD has served us well for many years, but it also comes with limits, notably the 2 TiB virtual disk cap. By moving to QCOW2, we’re preparing XCP-ng for the next decade of virtualization:
- Support for larger disks, up to 16 TiB for now
- Support for online coalesce, which we plan to improve further in the future
- Better metadata management
- Built-in compression and snapshots at the format level
- Extensible design for future features
This also gave us the chance to examine parts of the storage stack that we had previously overlooked, setting the stage for future improvements. Like we sometimes say, more to come 😄
QCOW2 will become the default virtual disk format when you install this beta.
🤸 Stability and performance
We’ve completely rewritten a large part of the storage stack to support QCOW2. This was a deep change touching several layers of XCP-ng, meaning that even existing code paths for VHD have been indirectly affected.
Of course, we’ve run our full suite of automated tests on the VHD code, and all tests are green.
On the QCOW2 side, things are looking great too:
- Every SR type (except XOSTOR, for now) should work as expected.
- The tapdisk QCOW2 driver has seen significant performance improvements compared to the earlier prototypes from the alpha phases.
- We fixed a migration corruption bug (affecting VDIs when the receiving SR was QCOW2) before this beta. This was caused by how XCP-ng handled VDI transfers internally and we’ve now reworked that logic for good.
- We have worked on performance compared to the VHD driver: QCOW2 should be on par most of the time, though a few edge cases are still being worked on. We won't stop at just being as good as VHD.
💾 Backup and size notes
Backups, replication and restoration of disks larger than 2 TiB in Xen Orchestra are also in beta/preview. For now, please keep that in mind when testing.
We’ve also taken great care to ensure data safety and stability, but as this is a public beta, issues can still arise. Make sure to back up any important data before using QCOW2 in production environments.
As we've said a bit earlier, the current QCOW2 limit is fixed at 16 TiB. This is not a hard ceiling, and we will raise it as we test and develop further.
(For what it’s worth, I’ve been running QCOW2 exclusively on my own homelab for months without problems!)
🛠️ Configuring the default format
Once the beta is installed, QCOW2 becomes the default for all newly created VDIs.
If you prefer to use VHD (or even raw) by default, you can set this during SR creation using a CLI option on dom0.
Changing the default afterward is possible but more involved — and not yet configurable in Xen Orchestra (we’ll add this later).
The parameter to add is device-config:preferred-image-formats=<image format> on the call to sr-create. These commands are documented in the XCP-ng documentation.
For example, to create an ext SR that creates raw VDIs by default on the device /dev/nvme1n1:
xe sr-create type=ext name-label="SR RAW" device-config:device=/dev/nvme1n1 device-config:preferred-image-formats=raw
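You can check that the option was recorded by listing the device-config of the SR's PBD, a quick sanity check using standard xe commands (<SR uuid> is the UUID printed by sr-create):
xe pbd-list sr-uuid=<SR uuid> params=device-config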
To modify the preferred formats of an existing SR, you have to recreate the PBD to add preferred-image-formats to its configuration.
To recreate a PBD, you first have to unplug it, which means disconnecting every VDI on it from its VMs, or shutting all these VMs down.
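Here is a rough sketch of that procedure, reusing the ext example above (copy the actual device-config values reported for your own SR; a pooled SR has one PBD per host, so repeat for each of them):
# Find the PBD(s) of the SR and note their current device-config
xe pbd-list sr-uuid=<SR uuid>
# Unplug and destroy the existing PBD
xe pbd-unplug uuid=<PBD uuid>
xe pbd-destroy uuid=<PBD uuid>
# Recreate it with the same device-config plus the preferred formats, then plug it
xe pbd-create host-uuid=<host uuid> sr-uuid=<SR uuid> device-config:device=/dev/nvme1n1 device-config:preferred-image-formats=qcow2
xe pbd-plug uuid=<new PBD uuid>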
You can create a VDI of any format on an SR by adding sm-config:image-format=<format> to the vdi-create command.
xe vdi-create sr-uuid=<SR> virtual-size=150GiB name-label="New Disk" sm-config:image-format=qcow2
This results in a 150 GiB QCOW2 VDI being created. You can then copy the resulting VDI UUID and attach it to a VM, for example from the disk view of a VM in XO.
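If you prefer to stay on the CLI, a minimal sketch of attaching it looks like this (device=1 assumes that slot is free on the VM):
# Create a VBD linking the VDI to the VM, then plug it (plugging requires a running VM)
xe vbd-create vm-uuid=<VM uuid> vdi-uuid=<VDI uuid> device=1 type=Disk mode=RW
xe vbd-plug uuid=<VBD uuid>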
The formats available are: vhd, qcow2 and raw.
These configuration parameters will be added to Xen Orchestra.
A parameter to choose the format at migration is being added to XAPI, but at the moment the migration will choose the preferred format of the destination SR, or QCOW2 if none is set.
You can use this to transform a QCOW2 VDI into a VHD one and vice versa.
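For example, since the destination SR's preferred format wins, migrating a VDI to an SR configured with preferred-image-formats=vhd converts it back to VHD. A sketch with xe (note that vdi-pool-migrate requires the VDI to be attached to a running VM):
xe vdi-pool-migrate uuid=<VDI uuid> sr-uuid=<destination SR uuid>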
🧪 How to test the beta
To test the beta, you need to add a repo file containing the information for the QCOW2 feature repo.
By using this command directly on the host(s):
wget https://repo.vates.tech/xcp-ng/8/8.3/xcp-ng-qcow2.repo -O /etc/yum.repos.d/xcp-ng-qcow2.repo
Or by creating it manually at /etc/yum.repos.d/xcp-ng-qcow2.repo, with this content:
[xcp-ng-qcow2]
name=XCP-ng QCow2 Testing Repository
baseurl=http://repo.vates.tech/xcp-ng/8/8.3/qcow2/x86_64/
enabled=0
gpgcheck=1
repo_gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-xcpng
Following this, you can update and restart the toolstack with the following commands:
yum update --enablerepo=xcp-ng-testing,xcp-ng-qcow2
xe-toolstack-restart
This should update the blktap, sm and sm-fairlock packages to a .qcow2 version.
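To confirm the beta packages are installed, you can query their versions (the exact version strings will differ on your host):
rpm -q blktap sm sm-fairlock
# Each package should report a release containing .qcow2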
The XAPI version from the testing repository is also needed; it will be part of the normal updates soon.
To go back to a non-QCOW2 version, you can do:
yum downgrade -y blktap sm sm-fairlock
Then scan the SRs that contain QCOW2 images again so that those images are dropped from the XAPI database. They won't be removed from the underlying storage unless you delete them before uninstalling the beta, which also means that if you re-install the beta, an SR scan will make them show up again.
They will, however, reappear without information such as name and description, and they will no longer be linked to VMs, so you will have to attach them again.
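A scan can be triggered per SR from the CLI:
xe sr-scan uuid=<SR uuid>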
💬 Join the test and share feedback
We’re really excited to open this new chapter for XCP-ng storage.
This is the first public step toward a modern, high-performance, and scalable storage stack — and we’d love your feedback to make it even better.
We’re still actively optimizing the storage layer, and you can expect further improvements unrelated to QCOW2 in the coming releases.