Second (and final) Release Candidate for QCOW2 image format support
(co-written with @dthenot)
QCOW2 RC2 release notes
Hello everyone,
We're happy to publish the second and last release candidate for QCOW2 support in XCP-ng 8.3, before general availability.
It allows using the QCOW2 open format (from QEMU) for virtual disks (VDIs) instead of the VHD format, and overcomes the limitations imposed by the latter, the most important one being a size limit of 2 TiB per virtual disk.
Adding support for QCOW2 to XCP-ng 8.3 has been a 1.5 year journey, mobilizing many developers of the XCP-ng team.
There was a double challenge:
- Add the new feature, for those among you who really need large disks.
- Maintain the strong reliability of the current VHD support, which is what you are using in production today.
We managed to offer support for this new format without requiring you to destroy and re-create your existing storage repositories. However, adding QCOW2 support had a large impact on the codebase, more than you would usually want on an LTS product.
That's why we invested a lot of energy, time, and resources in QA, so that we can offer this feature without affecting XCP-ng's stability.
We now need the final touch: feedback from the community.
The most important test is also the simplest: update your labs (and/or less important pools) from the testing repository, and verify that it all works as expected. VMs, snapshots, live migration, making backups, restoring backups… You don't need to actually use the new QCOW2 format for your tests to be extremely useful.
Then, you can also start using the new disk format if you wish (see below).
The new package versions are:

For those of you who took part in the previous betas and RCs
First, a big thank you! Then, important information about this RC2.
End of the dedicated repository
You can now remove the dedicated repository file /etc/yum.repos.d/xcp-ng-qcow2.repo. Future updates will follow the normal pattern.

VHD by default
You will notice one major difference: the default for image format has gone back to being VHD, rather than QCOW2.
Your QCOW2 VDIs will still work after the update but, by default, migrating a VDI to an SR will try to create a VHD instead of a QCOW2 (and will cause the migration to fail if the VDI is bigger than 2 TiB). You can change this behaviour by defining QCOW2 as the preferred image format for the target SR. See below.
Image format management (VHD, QCOW2)
This update introduces the concept of image formats. The two possible image formats are now vhd (our historical format) and qcow2.
- Each storage repository (except LINSTOR and SMB) supports both image formats.
- SRs will create any new disk as VHD by default, in order to retain the same behaviour as before the update. Exception: Xen Orchestra will automatically try to create a QCOW2 disk if the virtual size is bigger than 2 TiB.
- During a storage migration from one SR to another, the destination format is chosen by the destination SR, following the same rules as for the creation of a new disk (this can be used to convert from one format to another).
If you want all new VDIs to be created as QCOW2 disks on an SR, you can set preferred-image-formats on that SR.

Configuring an SR's preferred-image-formats

New SRs
Configuring the preferred image format for new SRs can be done in Xen Orchestraâs SR creation form.
You can also still add the parameter at SR creation on the command line with xe: device-config:preferred-image-formats=qcow2

Example:

xe sr-create name-label="test-lvmsr" type=lvm device-config:device=/dev/nvme1n1 device-config:preferred-image-formats=qcow2

Existing SRs
To tell an existing SR that it must prefer the qcow2 image format for new disks, it is necessary to unplug, destroy, recreate and re-plug its PBD with the added parameter in the device-config: https://docs.xcp-ng.org/storage/#-how-to-modify-an-existing-sr-connection

In order to unplug the PBD, any VMs with a VDI on the SR will have to be stopped, or their VDIs temporarily moved to another SR. This operation will not affect the contents of the SR: the PBD object only represents the connection to the SR.
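As a rough sketch of the procedure above (the UUIDs are placeholders and the device-config keys depend on your SR type, so adapt them to your pool):

```shell
# Find the PBD of the SR on the pool master and note its current device-config
xe pbd-list sr-uuid=<SR UUID> params=uuid,host-uuid,device-config

# Unplug and destroy the PBD (VMs using VDIs on this SR must be stopped first)
xe pbd-unplug uuid=<PBD UUID>
xe pbd-destroy uuid=<PBD UUID>

# Recreate it with the original device-config keys plus the new parameter,
# then plug it back in
xe pbd-create sr-uuid=<SR UUID> host-uuid=<host UUID> \
  device-config:device=/dev/nvme1n1 \
  device-config:preferred-image-formats=qcow2
xe pbd-plug uuid=<new PBD UUID>
```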
Why is it plural?
Don't mind the s at the end of preferred-image-formats for now. At the moment, only the first element of the list is used in most cases.

One exception: if you define the preferred image formats as vhd, qcow2 and attempt to create a new disk with size > 2 TiB, Xen Orchestra will automatically select QCOW2 as the format. This is also the default behaviour for SRs without a configured preferred image format.

Creating a QCOW2 Virtual Disk (VDI) directly
Without changing the preferred image format of the whole SR, you can also directly create a QCOW2 VDI. This is not exposed in Xen Orchestra yet.
Use the xe vdi-create command with sm-config:image-format=qcow2.

Example:

xe vdi-create sr-uuid=<SR UUID> virtual-size=5TiB name-label="My QCOW2 VDI" sm-config:image-format=qcow2

What's interesting to know
What is notable:
- The current maximum limit is 16 TiB per VDI. We could technically go beyond, but weâre only testing up to 16 TiB at the moment.
- We are using the default cluster size from QEMU, which is 64 KiB. It is possible to create a VDI with a bigger cluster size; see the FAQ for details.
- Our implementation of QCOW2 support also adds native support for drives with block sizes higher than 512 B. This will progressively make the largeblock SR driver obsolete, as any SR type which supports the QCOW2 image format can handle such drives when configured with qcow2 as its preferred-image-formats.
- For users of the largeblock SR: QCOW2 works on drives with block size > 512 B; the limitation for devices with a 4 KiB block size only applies to VHD. As such, it is possible to use a normal SR instead of largeblock if you configure preferred-image-formats to be QCOW2.
- For backup jobs containing QCOW2 images, NBD needs to be active on the backup job in XO.
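To check whether an SR already prefers QCOW2, you can inspect the device-config of its PBDs (a sketch; the UUID is a placeholder):

```shell
# Shows the device-config of the SR's PBDs,
# including preferred-image-formats if it has been set
xe pbd-list sr-uuid=<SR UUID> params=device-config
```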
What is not supported:
- QCOW2 image format for LINSTOR SRs (XOSTOR)
- QCOW2 image format for SMB SRs
What is coming soon:
- A way to select the destination format for a migration is being added in XAPI. Currently, the format is only decided by the preferred format of the destination SR.
- We have ongoing work to improve the performance of the storage stack in general (not just for QCOW2).
Known issues
- Snapshots of RAW VDIs (a snapshot would create a VHD or QCOW2 image with the RAW VDI as its parent)
- Migrating a > 2 TiB VDI towards an SR whose preferred image format is not qcow2 (it will attempt to create a VHD and fail)
- We have identified a problem with BIOS VMs when the boot disk is almost exactly 2 TiB, 4 TiB, 8 TiB or 16 TiB big. Having the disk be 1 MiB bigger or smaller will allow the VM to boot. If you encounter this issue, resizing the disk again by a bit more (minimum 1 MiB) should make it bootable again.
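For the BIOS boot-disk issue above, one workaround is to grow the VDI by at least 1 MiB past the problematic boundary; a sketch with a placeholder UUID (the size is in bytes):

```shell
# 2 TiB + 1 MiB, in bytes: 2*1024^4 + 1024^2 = 2199024304128
xe vdi-resize uuid=<VDI UUID> disk-size=2199024304128
```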
How to install
The update is provided as a regular update candidate in the testing repository. There are other update candidates being published at the same time. You can see what's in the update by looking at the announcement: https://xcp-ng.org/forum/topic/9964/xcp-ng-8-3-updates-announcements-and-testing/432
You can update from the testing repository:
yum clean metadata --enablerepo=xcp-ng-testing,xcp-ng-candidates
yum update --enablerepo=xcp-ng-testing,xcp-ng-candidates

A reboot is necessary after the update.
Time window for the tests
We're aiming for a general release in about two weeks, maybe three. Given the tight timeline, your feedback will be especially valued!
Here's a work in progress version of the FAQ that will go with the release.
QCOW2 FAQ
How much storage space do I need on my SR for large QCOW2 disks to support snapshots?

Whether the SR type is thin or thick provisioned, the answer is the same as for VHD. On a thin-provisioned SR, a snapshot is almost free: just a bit of metadata for a few new VDIs. On a thick-provisioned SR, you need the space for the base copy, the snapshot and the active disk.
Must I create new SRs to create large disks?
No. Most existing SRs support QCOW2. LinstorSR and SMBSR do not support QCOW2 for VDIs.
Can we have multiple different types of VDIs (VHD and QCOW2) on the same SR?

Yes, it's supported: any existing SR (unless unsupported, e.g. LINSTOR) will be able to create QCOW2 beside VHD after installing the new sm package.

What happens in live migration scenarios?
preferred-image-formats on the PBD of the master of an SR will choose the destination format in case of a migration.

| Source VDI | Destination prefers vhd (or no format specified) | Destination prefers qcow2 |
| --- | --- | --- |
| qcow2, > 2 TiB | X (fails) | qcow2 |
| qcow2, < 2 TiB | vhd | qcow2 |
| vhd | vhd | qcow2 |

Can we create QCOW2 VDIs from XO?
XO hasn't yet added the possibility to choose the image format at VDI creation. But if you try to create a VDI bigger than 2 TiB on an SR without any preferred image format configured, or if the preferred image formats contain QCOW2, it will create a QCOW2 VDI.

Can we change the cluster size?
Yes, on file-based SRs, you can create a QCOW2 VDI with a different cluster size with the commands:

qemu-img create -f qcow2 -o cluster_size=2M $(uuidgen).qcow2 10G
xe sr-scan uuid=<SR UUID> # to introduce it in XAPI

The qemu-img command will print the file name; the VDI is the <VDI UUID>.qcow2 from the output. We have not yet exposed the cluster size in any API call, which would allow you to create these VDIs more easily.
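To verify the cluster size of an existing QCOW2 file, qemu-img info reports it; a sketch assuming the file SR is mounted under /run/sr-mount (both UUIDs are placeholders):

```shell
qemu-img info /run/sr-mount/<SR UUID>/<VDI UUID>.qcow2
# the output includes a "cluster_size" line (in bytes, e.g. 2097152 for 2M)
```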
Can you create a SR which only ever manages QCOW2 disks? How?
Yes, you can, by setting the preferred-image-formats parameter to only qcow2.

Can you convert an existing SR so that it only manages QCOW2 disks? If so, and it had VHDs, what happens to them?
You can modify an SR to prefer QCOW2 by modifying the preferred-image-formats parameter of the PBD's device-config. Modifying the PBD requires deleting it and recreating it with the new parameter. This implies stopping access to all VDIs of the SR on the master (for a shared SR, you can migrate all VMs with VDIs to other hosts in the pool and temporarily stop the PBD of the master to recreate it; the parameter only needs to be set on the PBD of the master).
If the SR had VHDs, they will continue to exist and be usable, but won't be automatically converted to QCOW2.
Can I resize my VDI above 2 TiB?
A disk in VHD format can't be resized above 2 TiB; no automatic format change is implemented. It is, however, technically possible to end up above 2 TiB following a migration that transferred the VDI to QCOW2.

Is there anything to do to enable the new feature?
Installing the updated packages that support QCOW2 is enough to enable the new feature (packages: xapi, sm, blktap). Creating a VDI bigger than 2 TiB in XO will then create a QCOW2 VDI instead of failing.
Can I create QCOW2 disks smaller than 2 TiB?

Yes, but you need to create them manually while setting sm-config:image-format=qcow2, or configure the preferred image formats on the SR.

Is QCOW2 the default format now? Is it the best practice?
We kept VHD as the default format in order to limit the impact on production. In the future, QCOW2 will become the default image format for new disks, and VHD will be progressively deprecated.
What's the maximum disk size?

The current limit is set to 16 TiB. It's not a technical limit; it corresponds to what we have tested. We will raise it progressively in the future.
We'll be able to go up to 64 TiB before meeting a new technical limit related to live migration support, which we will address at that point.
The theoretical maximum is even higher: we're not limited by the image format anymore.
Can I import my KVM QCOW2 disks into XCP-ng without modification?

No. You can import them, but the VMs need to be configured to boot with the right drivers, as described in this documentation: https://docs.xcp-ng.org/installation/migrate-to-xcp-ng/#-from-kvm-libvirt

You can simply skip the conversion to VHD step. So it should work, depending on your configuration.