• question about a master node crash in a pool.

    XO Lite
    3
    0 Votes
    3 Posts
    23 Views
    Ok, thank you very much for this information. So, regarding cross-configuration: if we have two separate pools (Pool A and Pool B), is it advisable to host XOA-A on Pool B to manage Pool A, and vice versa? Or is this impractical (or is there another option)?
  • Xen Orchestra 6.3.2 Random Replication Failure

    Backup
    6
    1
    0 Votes
    6 Posts
    128 Views
    florent
    @flakpyro We are working on a fix here: https://github.com/vatesfr/xen-orchestra/pull/9702 . Are you using NBD? We think Pierre's diagnosis is right, so this branch will not open the NBD server when needed, and will wait for the disk to be disconnected before trying to write into it. fbeauchamp opened pull request #9702 in vatesfr/xen-orchestra: fix(backups): Fix "Storage_error ([S(Illegal_transition);[[S(Activated);S(RO)];[S(Activated);S(RW)]]])"
  • clean-vm (end) is stalling ?

    Backup
    13
    2
    0 Votes
    13 Posts
    167 Views
    simonp
    Hi, thanks for the heads-up. We will run some comparisons against the backup refactoring on our dev environment to check whether we lost some speed, and try to fix it if so. Very happy to hear that the issue is mostly resolved; we will patch this ASAP.
  • Warnings with Backups?

    Backup backup backup failure
    5
    2
    0 Votes
    5 Posts
    167 Views
    @TechGrips Sorry, there is no quick test to be sure the VM is not corrupted. The usual way would be to run a health check. We cannot be sure everything is OK, as it concerns multiple tars linked to each other. If it keeps warning you about the same backups, it may be due to a faulty parent; in that case you would need to create a new snapshot chain.
  • Too many snapshots

    Backup
    33
    2
    0 Votes
    33 Posts
    338 Views
    @McHenry 19 VMs is 19 chains of 16 VDIs. At each hourly run, a new snapshot is created (a few minutes) and the oldest one is merged/garbage-collected into the first snapshot (time undetermined). I guess 19 merges plus chain garbage collection cannot be completed in the one-hour window before the next CR runs, so you possibly have a growing chain. Can you check the unhealthy VDIs section in Dashboard/Health at 11 am?
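    As a rough back-of-envelope sketch of why hourly CR can outrun coalescing (the VM count is from the post; the one-merge-per-chain-per-run assumption is mine):

    ```shell
    # Assumption: each hourly CR run triggers one coalesce/merge per VM chain.
    vms=19
    seconds_per_hour=3600
    budget_per_merge=$(( seconds_per_hour / vms ))
    echo "${budget_per_merge}s available per chain merge"
    ```

    With only about three minutes of budget per merge, any merge slower than that makes the chains grow from run to run.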
  • GPU share to more Windows VMs on same XCP-NG node

    Hardware
    2
    0 Votes
    2 Posts
    31 Views
    @Aleksander You would need a GPU that supports SR-IOV; from NVIDIA, that means a non-consumer GPU. Edit: the text below is AI output.

    GPU SR-IOV support varies significantly by vendor and architecture: Intel offers the most extensive hardware-based SR-IOV for consumer and data center graphics, while NVIDIA and AMD rely heavily on proprietary drivers or specific enterprise hardware.

    Intel graphics support is the most widespread for virtualization, with 12th Gen (Alder Lake), 13th Gen (Raptor Lake) and 14th Gen (Raptor Lake Refresh) Core processors supporting SR-IOV, as do the Intel Data Center GPU Flex Series and Intel Arc Pro B-Series (requires driver version 32.0.101.8306 or newer). Older generations (6th through 10th Gen) primarily support GVT-g (software-based mediation) rather than hardware SR-IOV, while the Intel Core Ultra Series 1 (Meteor Lake) and Series 3 (Panther Lake) do not support SR-IOV.

    NVIDIA supports SR-IOV primarily through its proprietary vGPU and MIG (Multi-Instance GPU) features on enterprise-grade hardware, including the A100, A40, A30, RTX A-series and Tesla lines. While open-source drivers like Nouveau exist, NVIDIA's proprietary mdev driver is the standard method for enabling SR-IOV and mediation, often managed via tools like sriov-manage in environments such as Harvester or OpenStack.

    AMD SR-IOV support is limited to older FirePro and Radeon Pro cards (e.g. W7100, S7150, V520) using the deprecated GIM or MxGPU drivers. Support for modern Navi and RDNA architecture consumer GPUs is currently unclear or non-existent in open-source ecosystems, with AMD reportedly focusing SR-IOV capabilities on exclusive enterprise contracts rather than consumer hardware.
    Vendor   Architecture              SR-IOV Support Status   Key Hardware / Notes
    Intel    12th-14th Gen Core        Yes (hardware)          Iris Xe, Data Center Flex, Arc Pro B-Series
    Intel    6th-10th Gen Core         No (software/GVT-g)     HD Graphics 5500-630, UHD 620/630
    Intel    Core Ultra (Series 1/3)   No                      Meteor Lake, Panther Lake
    NVIDIA   Ampere/Hopper/Ada         Yes (proprietary)       A100, A40, RTX A6000, Tesla T4 (via vGPU/MIG)
    NVIDIA   Maxwell/Pascal/Turing     Yes (proprietary)       Tesla P100, T4, Quadro RTX (via vGPU)
    AMD      Tonga/Vega/Navi           Limited/deprecated      FirePro S7150, W7100 (GIM driver); modern support unclear
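    A quick way to check whether a given board actually exposes the SR-IOV capability is to look for it in the PCI capability dump. This sketch embeds a hypothetical one-line excerpt of `lspci -vvv` output so it is self-contained; on a real host you would pipe the live output instead:

    ```shell
    # On a real host: lspci -vvv | grep -c "SR-IOV"
    # The excerpt below is a hypothetical capability line, embedded for illustration.
    lspci_output='Capabilities: [160 v1] Single Root I/O Virtualization (SR-IOV)'
    printf '%s\n' "$lspci_output" | grep -c "SR-IOV"
    ```

    A count greater than zero means at least one device advertises the SR-IOV extended capability (which is necessary, but not sufficient: the driver must also support it).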
  • Just FYI: current update seems to break NUT dependencies

    XCP-ng
    28
    0 Votes
    28 Posts
    946 Views
    rzr
    @cobordism said:

        yum update --disablerepo=* --enablerepo=xcp-ng-base,xcp-ng-updates

    It's currently in testing and will move to updates if everything (not only nut) is OK:

        yum install --disablerepo=* \
            --enablerepo=xcp-ng-base,xcp-ng-updates,xcp-ng-testing nut
  • Veeam & XCP NG webinar incoming (FR speaking)

    Backup
    3
    1
    2 Votes
    3 Posts
    156 Views
    New mail received from Laurent Nguyen today: "Hello, faced with rising VMware prices, we invite you to fill in a questionnaire so we can better understand your expectations and improve our services: https://www.surveymonkey.com/r/288HCLL Thanks in advance for your time. And if you haven't already, don't forget to register for the Veeam Technical Cloud Club taking place on Thursday 16 April 2026: https://go.veeam.com/webinar-technical-cloud-club-france" Go give some love to the XCP-NG + VEEAM collaboration in the survey!
  • Auth LDAP "unable to get local issuer certificate"?

    Xen Orchestra
    3
    0 Votes
    3 Posts
    601 Views
    @omatsei said: "I figured out the problem. There appears to be a bug in XO that requires you to check 'Check Certificate' and/or 'Start TLS', save the configuration, then uncheck them and save again. Then it should work. The bug is that they're unchecked by default, but apparently they're enabled in the background." Thanks for that, I was stumped on this one trying to reconfigure a new instance. This is definitely still the case even with the latest auth-ldap plugin (v0.10.11).
  • 0 Votes
    15 Posts
    306 Views
    @tsukraw It's not your fault, it's a bug (@florent, two bugs) in XO. First, backup should fall back to non-NBD and warn that there is a network issue; second, it should not say "delta" at the bottom when it's doing a full backup. Known issues; I hit both last month without warnings... I do hourly CR and have the full interval set to 168 (once a week). You could set it longer if you have a slow link, but backups are reliable. The full backup in CR is more of a data-integrity measure: deltas are just updates to the existing data, and if there is a problem with a delta, the copy is no longer 100% correct. The full backup (assuming it makes it over to the other site) ensures a fresh, complete copy without relying on past deltas being correct. If you do CR nightly, then setting full to 30 would be once a month. It would be nice to have a skew option so full backups are automatically staggered across VMs rather than all running at the same time.
  • XCP-ng 8.3 updates announcements and testing

    Pinned News
    437
    1 Votes
    437 Posts
    180k Views
    @rzr Always a reboot after big updates, as instructed/required.
  • 3 Votes
    2 Posts
    132 Views
    stormi
    Here's a work-in-progress version of the FAQ that will go with the release.

    QCOW2 FAQ

    How much storage space do I need on my SR for large QCOW2 disks to support snapshots?
    Depending on whether the SR type is thin or thick provisioned, the answer is the same as for VHD. On a thin-provisioned SR it is almost free: just a bit of data for the metadata of a few new VDIs. On a thick-provisioned SR, you need space for the base copy, the snapshot and the active disk.

    Must I create new SRs to create large disks?
    No. Most existing SRs will support QCOW2. LinstorSR and SMBSR (for VDIs) do not support QCOW2.

    Can we have different types of VDIs (VHD and QCOW2) on the same SR?
    Yes, it's supported. Any existing SR (unless unsupported, e.g. LINSTOR) will be able to create QCOW2 alongside VHD after installing the new sm package.

    What happens in live migration scenarios?
    The preferred-image-formats setting on the PBD of the SR's master chooses the destination format in case of a migration:

        Source VDI       preferred-image-formats: VHD or none   preferred-image-formats: qcow2
        qcow2 > 2 TiB    X (not possible)                       qcow2
        qcow2 < 2 TiB    vhd                                    qcow2
        vhd              vhd                                    qcow2

    Can we create QCOW2 VDIs from XO?
    XO doesn't yet offer a choice of image format at VDI creation. But if you try to create a VDI bigger than 2 TiB on an SR with no preferred image format configured, or whose preferred image formats contain QCOW2, it will create a QCOW2.

    Can we change the cluster size?
    Yes. On file-based SRs, you can create a QCOW2 with a different cluster size:

        qemu-img create -f qcow2 -o cluster_size=2M $(uuidgen).qcow2 10G
        xe sr-scan uuid=<SR UUID>  # to introduce it in the XAPI

    The qemu-img command prints the name; the VDI is <VDI UUID>.qcow2 from the output. We have not exposed the cluster size in any API call, which would allow you to create these VDIs more easily.

    Can you create an SR which only ever manages QCOW2 disks? How?
    Yes, by setting the preferred-image-formats parameter to qcow2 only.

    Can you convert an existing SR so that it only manages QCOW2 disks? If so, and it had VHDs, what happens to them?
    You can make an SR prefer QCOW2 by modifying the preferred-image-formats parameter in the PBD's device-config. Modifying the PBD requires deleting and recreating it with the new parameter. This implies stopping access to all VDIs of the SR on the master (for a shared SR, you can migrate all VMs with VDIs there to other hosts in the pool and temporarily unplug the master's PBD to recreate it; the parameter only needs to be set on the master's PBD). If the SR had VHDs, they will continue to exist and be usable, but won't be automatically converted to QCOW2.

    Can I resize my VDI above 2 TiB?
    A disk in VHD format can't be resized above 2 TiB; no automatic format change is implemented. It is technically possible to resize above 2 TiB after a migration that transferred the VDI to QCOW2.

    Is there anything to do to enable the new feature?
    Installing the updated packages that support QCOW2 (xapi, sm, blktap) is enough. Creating a VDI bigger than 2 TiB in XO will then create a QCOW2 VDI instead of failing.

    Can I create QCOW2 disks smaller than 2 TiB?
    Yes, but you need to create them manually while setting sm-config:image-format=qcow2, or configure preferred image formats on the SR.

    Is QCOW2 the default format now? Is it best practice?
    We kept VHD as the default format in order to limit the impact on production. In the future, QCOW2 will become the default image format for new disks, and VHD will be progressively deprecated.

    What's the maximum disk size?
    The current limit is set to 16 TiB. It's not a technical limit; it corresponds to what we have tested, and we will raise it progressively. We'll be able to go up to 64 TiB before hitting a new technical limit related to live migration support, which we will address at that point. The theoretical maximum is even higher: we're not limited by the image format anymore.

    Can I import my KVM QCOW2 disks into XCP-ng without modification?
    No. You can import them, but they need to be configured to boot with the right drivers, as in this documentation: https://docs.xcp-ng.org/installation/migrate-to-xcp-ng/#-from-kvm-libvirt (you can skip the conversion-to-VHD step).
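    As a rough illustration of what the cluster size changes: per the qcow2 format layout (not from the FAQ itself), data is mapped through L2 tables of 8-byte entries, each table occupying one cluster, so a single L2 table covers cluster_size²/8 bytes. With the 2 MiB clusters from the qemu-img example above:

    ```shell
    # qcow2 metadata coverage for a given cluster size (format-spec math).
    cluster_size=$(( 2 * 1024 * 1024 ))            # 2 MiB, as in the qemu-img example
    l2_entries=$(( cluster_size / 8 ))             # one L2 table = one cluster of 8-byte entries
    bytes_per_l2=$(( l2_entries * cluster_size ))  # data mapped by a single L2 table
    echo "$(( bytes_per_l2 / 1024 / 1024 / 1024 )) GiB mapped per L2 table"
    ```

    Larger clusters mean fewer metadata tables to walk for very large disks, at the cost of coarser allocation granularity.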
  • VM backup fails with INVALID_VALUE

    Backup
    8
    0 Votes
    8 Posts
    110 Views
    burbilog
    main.xxx (azazel.xxx)
    Snapshot
        Start: 2026-04-10 00:03
        End: 2026-04-10 00:03
    Local storage (137.41 GiB free - thin) - legion.xxx
        transfer
            Start: 2026-04-10 00:03
            End: 2026-04-10 00:09
            Duration: 6 minutes
            Size: 17.08 GiB
            Speed: 47.42 MiB/s
    Type: full
  • 🛰️ XO 6: dedicated thread for all your feedback!

    Pinned Xen Orchestra
    174
    7 Votes
    174 Posts
    20k Views
    olivierlambert
    Let me ping @Team-XO-Frontend
  • load-balancer : Affinity to Host groups

    Xen Orchestra
    5
    0 Votes
    5 Posts
    119 Views
    olivierlambert
    Why not use anti-affinity with the load balancer? This way, you can have multiple groups of VMs that won't run in the same place.
  • 9 Votes
    37 Posts
    5k Views
    bvitnik
    @Tristis-Oris maybe a few comparative screenshots would help
  • xo-disk-cli on latest XOA node.js problem

    Management
    10
    1
    0 Votes
    10 Posts
    111 Views
    @Andrew Yeah, but XOA is still using it, hence my interest in aligning my XO-CE instance with XOA as closely as possible.
  • 2 Votes
    4 Posts
    185 Views
    Hi, someone from the Hypervisor & Kernel team will have a look shortly, and we'll get back to you with our findings. Thanks a lot for the very detailed report! Yann
  • Restore only showing 1 VM

    Backup
    21
    1
    0 Votes
    21 Posts
    496 Views
    @Bastien-Nollet I'm running c1e5f btw
  • HCL - GPUs

    Hardware gpu passthrough hcl xcp-ng 8.3
    1
    0 Votes
    1 Posts
    46 Views
    No one has replied