XCP-ng

    • T

      How to unbind trial license

      Management
      0 Votes
      3 Posts
      102 Views
      G
      Hello @tsukraw, normally your client should be able to bind the new license on their XOA. BTW, you talk about binding the license to a pool, but an XOA license is bound to an XOA. I suggest your customer open a support ticket so we can help them with that issue.
    • BrantleyHobbs

      How do I apply host patches in XO-6?

      Xen Orchestra
      0 Votes
      3 Posts
      80 Views
      BrantleyHobbs
      @DustinB thank you! I suspected as much, but as my wife points out all the time, I am terrible at finding things, so I figured I would ask.
    • jerry1333

      XOA Unable to connect xo server every 30s

      Xen Orchestra
      0 Votes
      3 Posts
      99 Views
      jerry1333
      The support tunnel is active, but I'm having trouble accessing support: "Oops, something went wrong! Don't worry, our team is already on it". I'll keep trying, thanks.
    • V

      iso modification and gpg key check

      Solved Development
      0 Votes
      3 Posts
      99 Views
      olivierlambert
      Nice catch @vagrantin ! Feel free to open a new thread if you have other problems.
    • D

      XCP-NG upgrade 8.2 to 8.3

      XCP-ng
      0 Votes
      3 Posts
      169 Views
      R
      To add a bit more detail on the upgrade path: strictly speaking, you do not need to apply outstanding 8.2 patches before upgrading. When you upgrade to 8.3, you replace the entire base system with the 8.3 release, which already incorporates everything from the 8.2 patch stream; any 8.2 patches you hadn't yet applied are simply superseded. That said, applying them first is still a reasonable approach if you want a clean upgrade history and a fully patched 8.2 baseline before jumping to 8.3.

      A few things worth checking before you start on a production pool:

      Check VM compatibility. Run a quick review of your VMs for any that might have specific OS or toolstack dependencies tied to 8.2. Most guests upgrade cleanly, but it is worth knowing your environment.

      Use a rolling pool upgrade if you have more than one host. XCP-ng supports rolling upgrades: you migrate VMs off each host, upgrade it, rejoin the pool, then proceed to the next. This maintains VM availability throughout the process, and the XO interface handles this workflow if you have XOA.

      Back up before the jump. Export critical VM configurations or snapshots beforehand. If you use Xen Orchestra for backups, trigger a manual full backup job before starting.

      The upgrade itself via yum is straightforward: add the 8.3 repo, yum update, reboot. The toolstack and XAPI will handle pool registration after the host comes back up. After upgrading all hosts, run the post-upgrade checks from the docs (pool metadata sync, storage rescans) and verify HA is healthy if you use it.
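The yum-based jump described above can be sketched as a short command sequence, run on each host after evacuating its VMs. This is a hedged sketch, not the official procedure: the repo file path and the sed-based version switch are assumptions, so check the XCP-ng upgrade guide for the exact steps for your release.

```shell
# Hedged sketch of the 8.2 -> 8.3 yum upgrade on one host.
# ASSUMPTION: the repo definitions live in /etc/yum.repos.d/xcp-ng.repo;
# verify against the official upgrade documentation before running.
sed -i 's/8\.2/8.3/g' /etc/yum.repos.d/xcp-ng.repo  # point the repos at 8.3
yum clean metadata                                   # drop cached 8.2 metadata
yum update                                           # pull in the 8.3 base system
reboot                                               # boot into the upgraded host
```

Run this host by host during a rolling pool upgrade, starting with the pool master.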
    • F

      Rolling Pool Update and Affinity Host

      Management
      0 Votes
      3 Posts
      120 Views
      olivierlambert
      Yes indeed, that's expected. XAPI affinity applies at VM boot, not throughout the VM's life. After that, it's the load balancer's job.
    • K

      VM Unable to Attach ISOs After Reverting Snapshot

      Unsolved Management
      0 Votes
      3 Posts
      119 Views
      K
      @dinhngtu Yeah, I suspected that as well. So I inspected the SR and it showed as connected to both hosts (at least in the XO UI; I didn't drop to the CLI to really confirm). By altering my workflow a bit and slowing down, I seem to have found the right "sweet spot" of delay, and the issue hasn't resurfaced. Here's what I'm doing now when I need to revert the snapshots of all three VMs:

      1. In the XO VM list, I select the three VMs and power them off at the same time.
      2. I eject the ISO from VM1, then VM2, then VM3. By the time I circle back to VM1 for the next step, about 10-15 secs have elapsed.
      3. I revert the snapshot on VM1, then repeat on VM2 and VM3. By the time I circle back to VM1 for the next step, another 10-15 secs have elapsed.
      4. I re-attach the ISO to all three VMs in sequence. Another 10-15 secs elapse.
      5. I power on all three VMs sequentially.

      The entire workflow takes about 30-45 secs, and I'm finding that by doing this, the issue hasn't resurfaced.
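For anyone who prefers dom0's CLI over clicking through XO, the same sequence can be sketched with xe commands, using explicit delays in place of the manual "circle back" pauses. This is a hedged sketch: the UUIDs, ISO name, and 15-second delay are placeholders, and each VM needs its own snapshot UUID.

```shell
# Hedged xe CLI sketch of the shutdown / eject / revert / reattach / start
# workflow above. All UUIDs and names are placeholders.
VMS="<vm1-uuid> <vm2-uuid> <vm3-uuid>"

for vm in $VMS; do xe vm-shutdown uuid="$vm"; done   # power off all three
for vm in $VMS; do xe vm-cd-eject uuid="$vm"; done   # eject the ISOs
sleep 15                                             # let the SR settle

# Revert each VM's own snapshot (one shown; repeat per VM)
xe snapshot-revert snapshot-uuid="<snapshot-uuid>"
sleep 15

for vm in $VMS; do xe vm-cd-insert uuid="$vm" cd-name="<iso-name>"; done
sleep 15
for vm in $VMS; do xe vm-start uuid="$vm"; done      # power them back on
```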
    • P

      active volcano eruption going on here =)

      Off topic
      1 Votes
      3 Posts
      127 Views
      P
      @nikade More beautiful than scary. It flows in its usual place where no one lives, as you can see on the satellite view. It's an effusive eruption, not an explosive one; I was barely 100 m away when taking photos, and you can even poke the lava with a stick if you want (but you need appropriate masks because of harmful gases like sulfur). It's not the first time it has reached the sea; the island expands slowly. No casualties, except for some fish.
    • B

      "Guest tools status"

      Migrate to XCP-ng
      0 Votes
      4 Posts
      180 Views
      kruess
      @blueh2o Oops, sorry - I read your message the wrong way around. You're right, that sounds really weird in your case...
    • stormi

      Second (and final) Release Candidate for QCOW2 image format support

      News
      5 Votes
      2 Posts
      261 Views
      stormi
      Here's a work-in-progress version of the FAQ that will go with the release.

      QCOW2 FAQ

      How much storage space do I need on my SR for large QCOW2 disks to support snapshots?
      Depending on whether the SR is thin or thick provisioned, the answer is the same as for VHD. On a thin-provisioned SR it is almost free, just a bit of data for the metadata of a few new VDIs. On a thick-provisioned SR, you need space for the base copy, the snapshot and the active disk.

      Must I create new SRs to create large disks?
      No. Most existing SRs will support QCOW2. LinstorSR and SMBSR (for VDIs) do not support QCOW2.

      Can we have multiple different types of VDI (VHD and QCOW2) on the same SR?
      Yes, it's supported. Any existing SR (unless unsupported, e.g. LINSTOR) will be able to create QCOW2 beside VHD after installing the new sm package.

      What happens in live migration scenarios?
      The preferred-image-formats setting on the PBD of the SR's master determines the destination format of a migration:

      Source VDI      | Destination prefers VHD (or unset) | Destination prefers qcow2
      qcow2 > 2 TiB   | not possible                       | qcow2
      qcow2 < 2 TiB   | vhd                                | qcow2
      vhd             | vhd                                | qcow2

      Can we create QCOW2 VDIs from XO?
      XO doesn't yet let you choose the image format at VDI creation. But if you try to create a VDI bigger than 2 TiB on an SR with no preferred image formats configured, or whose preferred image formats include QCOW2, it will create a QCOW2 VDI.

      Can we change the cluster size?
      Yes. On file-based SRs, you can create a QCOW2 with a different cluster size with the commands:

      qemu-img create -f qcow2 -o cluster_size=2M $(uuidgen).qcow2 10G
      xe sr-scan uuid=<SR UUID>   # to introduce it into XAPI

      The qemu-img command prints the file name; the VDI is <VDI UUID>.qcow2 from that output. We have not exposed the cluster size in any API call, which would allow you to create these VDIs more easily.

      Can you create an SR which only ever manages QCOW2 disks? How?
      Yes, by setting the preferred-image-formats parameter to qcow2 only.

      Can you convert an existing SR so that it only manages QCOW2 disks? If so, and it had VHDs, what happens to them?
      You can make an SR prefer QCOW2 by modifying the preferred-image-formats parameter in the PBD's device-config. Modifying the PBD requires deleting and recreating it with the new parameter, which implies stopping access to all of the SR's VDIs on the master (for a shared SR, you can migrate all VMs with VDIs to other hosts in the pool and temporarily unplug the master's PBD to recreate it; the parameter only needs to be set on the master's PBD). If the SR had VHDs, they will continue to exist and remain usable, but they won't be automatically converted to QCOW2.

      Can I resize my VDI above 2 TiB?
      A disk in VHD format can't be resized above 2 TiB, and no automatic format change is implemented. It is technically possible to resize above 2 TiB following a migration that transferred the VDI to QCOW2.

      Is there anything to do to enable the new feature?
      Installing the updated packages that support QCOW2 (xapi, sm, blktap) is enough. Creating a VDI bigger than 2 TiB in XO will then create a QCOW2 VDI instead of failing.

      Can I create QCOW2 disks smaller than 2 TiB?
      Yes, but you need to create them manually while setting sm-config:image-format=qcow2, or configure preferred image formats on the SR.

      Is QCOW2 the default format now? Is it the best practice?
      We kept VHD as the default format in order to limit the impact on production. In the future, QCOW2 will become the default image format for new disks, and VHD will be progressively deprecated.

      What's the maximum disk size?
      The current limit is set to 16 TiB. It's not a technical limit; it corresponds to what we have tested, and we will raise it progressively in the future. We'll be able to go up to 64 TiB before meeting a new technical limit related to live migration support, which we will address at that point. The theoretical maximum is even higher; we're not limited by the image format anymore.

      Can I import my KVM QCOW2 disks into XCP-ng without modification?
      No. You can import them, but they need to be configured to boot with the right drivers, as in this documentation: https://docs.xcp-ng.org/installation/migrate-to-xcp-ng/#-from-kvm-libvirt (you can just skip the conversion to VHD). So it should work, depending on your configuration.
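As a concrete illustration of the "smaller than 2 TiB" answer above, the QCOW2 format can be requested explicitly at VDI creation time via sm-config. This is a hedged sketch based on the parameter name given in the FAQ; the UUID, name, and size are placeholders.

```shell
# Hedged sketch: force the QCOW2 format on a new VDI below 2 TiB,
# per the sm-config:image-format=qcow2 hint in the FAQ above.
xe vdi-create sr-uuid=<SR-UUID> \
   name-label="qcow2-disk" \
   virtual-size=100GiB \
   sm-config:image-format=qcow2
```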
    • P

      found reproductible BUG with FLR

      Backup
      0 Votes
      2 Posts
      89 Views
      Bastien Nollet
      Hi @Pilow, thanks for the report. We are aware that there are many problems with FLR. We would like to fix them, but they are not easy to fix, and we can't give an estimated date for a fix. I've linked this topic to our investigation ticket. For the moment, when FLR fails, we recommend manually restoring your files by following this documentation: https://github.com/vatesfr/xen-orchestra/blob/master/%40vates/fuse-vhd/README.md#restore-a-file-from-a-vhd-using-fuse-vhd-cli
    • F

      [dedicated thread] Dell Open Manage Appliance (OME)

      Solved Compute
      0 Votes
      94 Posts
      37k Views
      TheNorthernLight
      Nevermind. I forgot your previous post about it: you'll need to modify two files (via SSH), /opt/dell/omc/utilities/tui/bin/ome_disk_config.sh and /opt/dell/mcsi/appliance/scripts/appliance_ressource.sh, replacing /dev/sd with /dev/xvd in both. I went and made these changes, and now the Data drive loads after the drive scan. I should mention that doing this BEFORE first logging into the OME web interface and completing the initialization seems to be best (no further errors appear once logged in).
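The edit itself is a one-line substitution. Here it is run on a throwaway temp file rather than the real OME scripts, so it is safe to try first; the sample file content is illustrative only.

```shell
# Minimal illustration of the /dev/sd -> /dev/xvd replacement described
# above, run on a temp file instead of the two Dell OME scripts.
f=$(mktemp)
printf 'DISK=/dev/sda\n' > "$f"        # sample line using a /dev/sd name
sed -i 's|/dev/sd|/dev/xvd|g' "$f"     # same substitution as in the scripts
cat "$f"                               # now reads DISK=/dev/xvda
rm -f "$f"
```

Against the real appliance you would run the sed line on the two script paths listed above (after taking backups).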
    • C

      VM Migration | PIF is not attached

      Compute
      0 Votes
      2 Posts
      59 Views
      P
      Had the same on one pool that needed 97 updates. The RPU was going great, but we couldn't migrate VMs post-upgrade. It's a pool of 2 hosts, and the second host still showed 97 patches pending even though the RPU had finished. We waited 10 minutes, the 97 patches disappeared, and then we could migrate the VM.
    • U

      Comparison with Ceph

      XOSTOR
      0 Votes
      2 Posts
      139 Views
      olivierlambert
      Different use cases: Ceph is better with more hosts (at least 6 or 7), while XOSTOR is better for 3 to 7/8 hosts. We might have better Ceph support in the future for large clusters.
    • C

      Cannot shutdown VM or migrate

      XCP-ng
      0 Votes
      2 Posts
      99 Views
      C
      Not sure what the issue was, but I rebooted the host and everything is fine again. I had definitely rebooted the host twice already after applying the patches, though, and had physically unplugged and moved the host.
    • M

      Memory Ballooning (DMC) broken since XCP-ng 8.3 January 2026 patches

      Compute
      0 Votes
      13 Posts
      532 Views
      M
      I can confirm that memory ballooning / DMC works fine when using Citrix/XenServer guest utilities version 8.4 (https://github.com/xenserver/xe-guest-utilities/releases/tag/v8.4.0). After live migration, the RAM of the Linux guest is expanded to dynamic_max again, so this issue was in fact caused by the Rust-based xen-guest-agent. For now I'll keep using the Citrix/XenServer guest utilities on my Linux guests until the feature is implemented in Vates' Rust-based guest utilities. Best regards
    • S

      cleanVm: incorrect backup size in metadata

      Xen Orchestra
      0 Votes
      18 Posts
      4k Views
      M
      @hoh This one is SO long in the tooth... always was annoying. Thanks for finding a fix, looking forward to it.
    • Z

      Removed VM - Now have unhealthy VDI

      Unsolved Xen Orchestra
      0 Votes
      2 Posts
      102 Views
      Z
      UPDATE: it's been 2 days and, as I suspected, the VDI is still waiting to coalesce at a length of 1; the GC does not seem to be cleaning it up. Does anyone have suggestions for how to remove this orphaned VDI and base copy? I'm not interested in saving the VM, just in cleaning up the remnants of the botched migration and removal.
    • olivierlambert

      New Rust Xen guest tools

      Development
      4 Votes
      167 Posts
      135k Views
      A
      @yann Item opened on GitLab.
    • henri9813

      Storage domain server & Rolling pool upgrade

      XCP-ng
      0 Votes
      1 Posts
      37 Views
      No one has replied