XCP-ng
    • planedrop

      VMware to XCP-ng Migrate Only Specific Disks

      Migrate to XCP-ng
      0 Votes · 6 Posts · 197 Views
      planedrop
      @MajorP93 This was also my thought; the disconnection method you mentioned is exactly what I was wanting feedback on. I do have to be a bit careful, because the migration must happen and then the VM on VMware must go back online while I work with the vendor to migrate the rest of the data. As for the bigger-than-2TB issue, that won't be a problem for me. This VM is HUGE, but only because it's thick provisioned and no longer needs to be; the new VM will have about 8 disks, but none of them will be over 2TB (most will be less than 100GB). So I am thinking of using V2V: power off the VM on VMware, disconnect the unneeded VDIs, migrate, then reconnect those VDIs and power the VM back on.
    • JSylvia007

      Backup Suddenly Failing

      Backup
      0 Votes · 28 Posts · 494 Views
      tjkreidl
      @JSylvia007 Sorry, I'm really late to this thread, but note that backups can become problematic if the SR is something like 90% or more full; there needs to be some buffer storage as part of the process. The fact that you could copy/clone VMs means your SR is working OK, but backups are a different situation. If need be, you can always migrate VMs to other storage, which is evidently what you ended up doing, and which frees up extra disk space. Also, backups are pretty intensive, so make sure you have both enough CPU capacity and memory to handle the load. Finally, a defective SR will definitely cause issues if there are I/O errors, so watch your /var/log/SMlog for any such entries.
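      As a hedged illustration of that last tip, the sketch below greps SMlog-style lines for I/O errors. The sample log lines are made up for the example; on a real XCP-ng host you would point the same grep at /var/log/SMlog instead.

      ```shell
      # Illustrative only: sample SMlog-style lines in a temp file, then the same
      # grep you would run against /var/log/SMlog on a real XCP-ng host.
      log="$(mktemp)"
      cat > "$log" <<'EOF'
      Jan 10 12:00:01 SM: [1234] vdi_attach succeeded
      Jan 10 12:00:05 SM: [1234] FAILED: I/O error on /dev/sdb
      Jan 10 12:00:09 SM: [1234] gc complete
      EOF
      grep -i 'i/o error' "$log"   # prints only the FAILED line
      rm -f "$log"
      ```

      Case-insensitive matching (`-i`) helps because drivers log the phrase with varying capitalization.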
    • A

      XOA - Memory Usage

      Xen Orchestra
      0 Votes · 7 Posts · 268 Views
      P
      @flakpyro Yup, still aggressive on memory consumption. I have a task that reboots XOA & the XO proxies every two days to mitigate it. [image: 1775146146414-8e3474d6-3c71-4b21-bf1c-293fe13c61b8-image.jpeg] Since 6.3 / 6.3.1 it seems more aggressive on the ramp-up (I updated just yesterday... was still on 6.1.2).
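      For anyone wanting the same workaround, a scheduled reboot like that can be a plain cron entry driven by the xe CLI. This is a sketch only; the schedule and the UUID placeholder are assumptions, not details from the post.

      ```shell
      # /etc/cron.d/xoa-reboot -- hypothetical config fragment; replace the
      # placeholder with your XOA VM's UUID (see `xe vm-list`).
      # Cleanly reboots the XOA VM at 03:00 every second day.
      0 3 */2 * * root xe vm-reboot uuid=REPLACE_WITH_XOA_VM_UUID
      ```

      `xe vm-reboot` performs a clean guest reboot, so the XOA services come back on their own.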
    • W

      VDI not showing in XO 5 from Source.

      Unsolved · Management
      0 Votes · 39 Posts · 3k Views
      P
      @wgomes Yup, thanks for bumping this topic. Still having the problem too.
    • C

      VM Migration | PIF is not attached

      Compute
      0 Votes · 2 Posts · 36 Views
      P
      Had the same on one pool that needed 97 updates. The RPU was going great, but we couldn't migrate VMs post-upgrade. It is a pool of 2 hosts, and the second host still showed 97 patches to apply even though the RPU was finished. We waited 10 minutes, the 97 patches disappeared, and then we could migrate the VM.
    • U

      Comparison with Ceph

      XOSTOR
      0 Votes · 2 Posts · 60 Views
      olivierlambert
      Different use cases: Ceph is better with more hosts (at least 6 or 7), while XOSTOR is better from 3 to 7/8 hosts. We might have better Ceph support in the future for large clusters.
    • P

      Timestamp lost in Continuous Replication

      Backup
      0 Votes · 27 Posts · 781 Views
      florent
      @joeymorin said: I observed similar behaviour. Two pools: pool A composed of two hosts, pool B single-host. B runs a VM with XO from source. Two VMs on host A1 (on local SR), one VM on host A2 (on local SR). Host A2 has a second local SR (separate physical disc) used as the target for a CR job. The CR job would back up all four VMs to the second local SR on host A2. The behaviour observed was that, although the VM on B would be backed up (as expected) as a single VM with multiple snapshots (up to the 'replication retention'), the three other VMs on the same pool as the target SR would see a new full VM created for each run of the CR job. That rather quickly filled up the target SR. I noticed the situation was corrected by a commit on or about the same date reported by @ph7. Incidentally, whatever broke this, and subsequently corrected it, appears to have corrected another issue I reported here. I never got a satisfactory answer to that question; questions were raised about the stability of my test environment, even though I could easily reproduce it with a completely fresh install. Thanks for the work! (edit: corrected B1 to A2)

      Sometimes it's hard to find a complete explanation without connecting to the hosts and XO and going through a lot of logs, which is out of the scope of community support. I am glad the continuous improvement of the code base fixed the issue. We will release a new patch today, because migrating from 6.2.2 to 6.3 forces a full replication (source users who updated to the intermediate version are not affected).
    • S

      How to Setup IPMI in XO

      Management
      0 Votes · 30 Posts · 2k Views
      A
      Come on everyone! Click Here to vote to support HP IPMI info in XO!
    • C

      Cannot shutdown VM or migrate

      XCP-ng
      0 Votes · 2 Posts · 43 Views
      C
      Not sure what the issue was, but after rebooting the host everything is fine again. I had definitely rebooted the host twice after applying the patches, though, and had physically unplugged and moved the host.
    • S

      Intel Flex GPU with SR-IOV for GPU accelerated VDIs

      Hardware
      0 Votes · 52 Posts · 13k Views
      S
      @TeddyAstie I hadn't seen that anything special was needed in a VM other than a kernel new enough for support, which 6.17 has. Is there more to the story with adding things to Ubuntu?
    • C

      Is v8.3 NUMA aware?

      Hardware
      0 Votes · 3 Posts · 103 Views
      C
      If anyone can point me to documentation on this, I would greatly appreciate it. I have looked in the docs and searched for an answer and come up empty.
    • R

      Boot device: Hard Disk - Success

      XCP-ng
      0 Votes · 3 Posts · 268 Views
      O
      @DustinB If it hangs right after detecting the disk, it could be a bootloader or filesystem inconsistency from the snapshot state. Can you try booting the VM with a recovery ISO to check disk integrity or rebuild the bootloader?
    • J

      Every VM in a CR backup job creates an "Unhealthy VDI"

      Backup
      0 Votes · 20 Posts · 1k Views
      J
      This issue appears to have been resolved by a recent change.
    • P

      fell back to full and cannot delete snapshot

      Backup
      0 Votes · 1 Post · 28 Views
      No one has replied
    • Tristis Oris

      Unable to copy template

      Management
      0 Votes · 4 Posts · 166 Views
      Tristis Oris
      While I was updating all the pools, a micro-update for 5 packages was released again; they come out every week now. Same issue with basic migration. Now it is impossible.
    • MathieuRA

      ACL V2 is coming soon and we need your feedback!

      Xen Orchestra
      4 Votes · 1 Post · 93 Views
      No one has replied
    • kruess

      XenServer 7.1.2 to XCP-ng 8.3: INTERNAL_ERROR(Not_found)

      Migrate to XCP-ng
      0 Votes · 6 Posts · 202 Views
      kruess
      Good moaning... The solution was pretty simple: a toolstack restart on the master (xcp83) got everything back on track, and it now allows me to move the systems with a simple shutdown/start.
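      For readers hitting the same symptom: the restart referred to is the standard XCP-ng toolstack restart, which restarts XAPI on the host without touching running VMs. A minimal sketch, to be run on the pool master (not runnable outside an XCP-ng host):

      ```shell
      # On the pool master: restarts only the XAPI toolstack; running VMs are
      # unaffected, but XO/XenCenter connections drop briefly while XAPI restarts.
      xe-toolstack-restart
      ```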
    • M

      Memory Ballooning (DMC) broken since XCP-ng 8.3 January 2026 patches

      Compute
      0 Votes · 13 Posts · 462 Views
      M
      I can confirm that when using Citrix/XenServer guest utilities version 8.4 (https://github.com/xenserver/xe-guest-utilities/releases/tag/v8.4.0), memory ballooning / DMC is working fine. After live migration, the RAM of the Linux guest is expanded to dynamic_max again. So this issue was in fact caused by the Rust-based xen-guest-agent. For now I'll keep using the Citrix/XenServer guest utilities on my Linux guests until the feature is implemented in Vates' Rust-based guest utilities. Best regards
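      A quick way to sanity-check ballooning after a live migration is to compare the guest's dynamic memory range with its actual allocation via the xe CLI. This is a sketch under the assumption that it runs on an XCP-ng host; the UUID value is a placeholder, not from the post.

      ```shell
      # Hypothetical check on an XCP-ng host; substitute a real VM UUID.
      VM="replace-with-vm-uuid"
      xe vm-param-get uuid="$VM" param-name=memory-dynamic-min
      xe vm-param-get uuid="$VM" param-name=memory-dynamic-max
      # With working DMC, memory-actual should climb back toward dynamic-max:
      xe vm-param-get uuid="$VM" param-name=memory-actual
      ```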
    • stormi

      XCP-ng 8.3 updates announcements and testing

      News
      1 Vote · 431 Posts · 170k Views
      A
      @gduperrey Installed on the home lab via rolling pool update; both hosts updated with no issues, and VMs migrated back to the 2nd host as expected this time. Fingers crossed the work servers have the same luck. I do have an open support ticket from the last round of updates for the work servers, and I'm waiting for a response before installing these patches.