XCP-ng
Popular topics
    • Question about Continuous Replication / Backups always doing Full Backups
      Backup · 0 votes · 13 posts · 260 views
      Latest reply: @tsukraw The type delta at the bottom is a known bug... Do you have NBD Connection enabled on a network interface (check the pool network)?
    • VM backup fails with INVALID_VALUE
      Backup · 0 votes · 8 posts · 94 views
      Latest reply (burbilog):
        main.xxx (azazel.xxx)
          Snapshot: start 2026-04-10 00:03, end 2026-04-10 00:03
          Local storage (137.41 GiB free - thin) → legion.xxx transfer:
            start 2026-04-10 00:03, end 2026-04-10 00:09, duration 6 minutes
            size 17.08 GiB, speed 47.42 MiB/s
          Type: full
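As a quick sanity check, the size, duration and speed in the report above are mutually consistent; a few lines of Python (numbers taken straight from the log) confirm it:

```python
# Check that the reported transfer speed matches size / duration:
# 17.08 GiB at 47.42 MiB/s should take about 6 minutes.
size_mib = 17.08 * 1024      # 17.08 GiB expressed in MiB
speed_mib_s = 47.42          # reported transfer speed in MiB/s
duration_min = size_mib / speed_mib_s / 60
print(f"{duration_min:.1f} min")  # → 6.1 min, matching the reported duration
```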
    • Too many snapshots
      Backup · 0 votes · 14 posts · 139 views
      Latest reply: @McHenry please screenshot the GENERAL tab of the "Disaster Recovery" SR, just to see how many VDIs it hosts... at least 288. It was announced here: https://xen-orchestra.com/blog/xen-orchestra-6-3/#-backup but it is not explained in detail; I gathered information in another topic in this forum from @florent. You also have the changelog of 6.3.0, https://github.com/vatesfr/xen-orchestra/blob/master/CHANGELOG.md#630-2026-03-31, which points to PR #9524: "[Replication] Reuse the same VM as an incremental replication target".
    • XCP-ng 8.3 updates announcements and testing
      News · 1 vote · 433 posts · 178k views
      Latest reply (dthenot): @rzr Host updated
    • Second (and final) Release Candidate for QCOW2 image format support
      News · 2 votes · 2 posts · 18 views
      Latest reply (stormi):
Here's a work-in-progress version of the FAQ that will go with the release.

QCOW2 FAQ

Q: How much storage space do I need on my SR for large QCOW2 disks to support snapshots?
A: Whether the SR is thin or thick allocated, the answer is the same as for VHD. On a thin-allocated SR it is almost free: just a bit of data for the metadata of a few new VDIs. On a thick-allocated SR, you need the space for the base copy, the snapshot and the active disk.

Q: Must I create new SRs to create large disks?
A: No. Most existing SRs will support QCOW2. LinstorSR and SMBSR (for VDI) do not support QCOW2.

Q: Can we have multiple different types of VDIs (VHD and QCOW2) on the same SR?
A: Yes, it's supported. Any existing SR (unless unsupported, e.g. linstor) will be able to create QCOW2 beside VHD after installing the new sm package.

Q: What happens in live migration scenarios?
A: The preferred-image-formats setting on the PBD of the master of an SR chooses the destination format in case of a migration:

    source         | preferred-image-format VHD or none | preferred-image-format qcow2
    qcow2 > 2 TiB  | X                                  | qcow2
    qcow2 < 2 TiB  | vhd                                | qcow2
    vhd            | vhd                                | qcow2

Q: Can we create QCOW2 VDIs from XO?
A: XO hasn't yet added the possibility to choose the image format at VDI creation. But if you try to create a VDI bigger than 2 TiB on an SR without any preferred-image-formats configuration, or if preferred-image-formats contains QCOW2, it will create a QCOW2.

Q: Can we change the cluster size?
A: Yes. On a file-based SR, you can create a QCOW2 with a different cluster size with the commands:

    qemu-img create -f qcow2 -o cluster_size=2M $(uuidgen).qcow2 10G
    xe sr-scan uuid=<SR UUID>   # to introduce it in the XAPI

The qemu-img command will print the name; the VDI is <VDI UUID>.qcow2 from the output. We have not exposed the cluster size in any API call, which would allow you to create these VDIs more easily.

Q: Can you create an SR which only ever manages QCOW2 disks? How?
A: Yes, you can, by setting the preferred-image-formats parameter to only qcow2.
Q: Can you convert an existing SR so that it only manages QCOW2 disks? If so, and it had VHDs, what happens to them?
A: You can modify an SR to manage QCOW2 by modifying the preferred-image-formats parameter of the PBD's device-config. Modifying the PBD requires deleting it and recreating it with the new parameter. This implies stopping access to all VDIs of the SR on the master (for a shared SR, you can migrate all VMs with VDIs to other hosts in the pool and temporarily stop the PBD of the master to recreate it; the parameter only needs to be set on the PBD of the master). If the SR had VHDs, they will continue to exist and be usable, but won't be automatically transformed into QCOW2.

Q: Can I resize my VDI above 2 TiB?
A: A disk in VHD format can't be resized above 2 TiB; no automatic format change is implemented. It is technically possible to resize above 2 TiB following a migration that transferred the VDI to QCOW2.

Q: Is there anything to do to enable the new feature?
A: Installing the updated packages that support QCOW2 is enough (packages: xapi, sm, blktap). Creating a VDI bigger than 2 TiB in XO will then create a QCOW2 VDI instead of failing.

Q: Can I create QCOW2 disks smaller than 2 TiB?
A: Yes, but you need to create them manually while setting sm-config:image-format=qcow2, or configure preferred image formats on the SR.

Q: Is QCOW2 the default format now? Is it the best practice?
A: We kept VHD as the default format in order to limit the impact on production. In the future, QCOW2 will become the default image format for new disks, and VHD will progressively be deprecated.

Q: What's the maximum disk size?
A: The current limit is set to 16 TiB. It's not a technical limit; it corresponds to what we have tested, and we will raise it progressively in the future. We'll be able to go up to 64 TiB before meeting a new technical limit related to live migration support, which we will address at that point. The theoretical maximum is even higher.
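The PBD recreation and manual VDI creation described above could look roughly like this with the xe CLI. This is a sketch, not a verified procedure: all UUIDs are placeholders, the existing device-config keys depend on your SR type and must be read from your own pool first, and the preferred-image-formats key name is taken from the FAQ text above.

```shell
# Sketch only: recreate the master's PBD with a preferred image format.
# Read the current device-config first; you must pass it back on re-creation.
xe pbd-param-list uuid=<PBD UUID>
xe pbd-unplug uuid=<PBD UUID>
xe pbd-destroy uuid=<PBD UUID>
xe pbd-create host-uuid=<MASTER HOST UUID> sr-uuid=<SR UUID> \
    device-config:<existing keys>=<existing values> \
    device-config:preferred-image-formats=qcow2
xe pbd-plug uuid=<NEW PBD UUID>

# Creating a QCOW2 VDI smaller than 2 TiB manually, per the FAQ above:
xe vdi-create sr-uuid=<SR UUID> name-label=my-qcow2-disk \
    virtual-size=100GiB sm-config:image-format=qcow2
```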
We're not limited by the image format anymore.

Q: Can I import my KVM QCOW2 disks into XCP-ng without modification?
A: No. You can import them, but they need to be configured to boot with the right drivers, as in this documentation: https://docs.xcp-ng.org/installation/migrate-to-xcp-ng/#-from-kvm-libvirt (you can just skip the conversion to VHD). So it should work, depending on your configuration.