XCP-ng
    • Memory Ballooning (DMC) broken since XCP-ng 8.3 January 2026 patches
      Compute | 0 Votes | 13 Posts | 404 Views
      I can confirm that with the Citrix/XenServer guest utilities version 8.4 (https://github.com/xenserver/xe-guest-utilities/releases/tag/v8.4.0), memory ballooning / DMC is working fine. After a live migration, the RAM of the Linux guest is expanded back to dynamic_max. So this issue was in fact caused by the Rust-based xen-guest-agent. For now I'll keep using the Citrix/XenServer guest utilities on my Linux guests until the feature is implemented in Vates' Rust-based guest utilities. Best regards
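      For anyone hitting the same thing, one way to check from the host whether the balloon driver actually responded after migration is via the xe CLI. This is a sketch only; the UUID and the byte values are placeholders to adapt:

      ```
      # Inspect the VM's dynamic memory range and what it actually holds right now
      xe vm-param-get uuid=<vm-uuid> param-name=memory-dynamic-min
      xe vm-param-get uuid=<vm-uuid> param-name=memory-dynamic-max
      xe vm-param-get uuid=<vm-uuid> param-name=memory-actual

      # Re-apply the dynamic range (values in bytes; here 2 GiB / 8 GiB).
      # With a working guest agent, the balloon driver should move the
      # guest's memory toward the new target.
      xe vm-memory-dynamic-range-set uuid=<vm-uuid> min=2147483648 max=8589934592
      ```

      If memory-actual stays pinned well below memory-dynamic-max after a migration, that matches the broken-agent symptom described above.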
    • cleanVm: incorrect backup size in metadata
      Xen Orchestra | 0 Votes | 18 Posts | 4k Views
      @hoh This one is SO long in the tooth... it was always annoying. Thanks for finding a fix, looking forward to it.
    • Install XO from sources.
      Xen Orchestra | 3 Votes | 15 Posts | 1k Views
      @acebmxer I haven't tried this yet, but I like the menu you just showed!
    • Removed VM - Now have unhealthy VDI
      Xen Orchestra | 0 Votes | 2 Posts | 70 Views
      UPDATE: It's been 2 days and, as I suspected, the VDI is still waiting to coalesce at a chain length of 1; the GC does not seem to be cleaning it up. Does anyone have suggestions for how to remove this orphaned VDI and base copy? I'm not interested in saving the VM, just in cleaning up the remnants of the botched migration and removal.
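      A common first step for a stuck coalesce is to inspect the chain and nudge the garbage collector from the pool master. Sketch only; the SR UUID is a placeholder:

      ```
      # List the VDIs on the SR to spot the orphaned disk and its base copy
      xe vdi-list sr-uuid=<sr-uuid> params=uuid,name-label,is-a-snapshot,managed

      # A rescan kicks off the garbage collector, which performs the coalesce
      xe sr-scan uuid=<sr-uuid>

      # Follow the storage manager log to watch GC / coalesce activity
      tail -f /var/log/SMlog
      ```

      If SMlog shows the GC repeatedly skipping or aborting on that chain, the log lines usually say why (e.g. not enough free space on the SR to coalesce).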
    • Timestamp lost in Continuous Replication
      Backup | 0 Votes | 25 Posts | 630 Views
      florent: @ph7 That is good news. Thank you for your patience and help.
    • Veeam & XCP-ng webinar incoming (FR speaking)
      Backup | 2 Votes | 2 Posts | 101 Views
      dfrizon: Great news!!
    • Is v8.3 NUMA aware?
      Hardware | 0 Votes | 2 Posts | 73 Views
      olivierlambert: Ping @Team-Hypervisor-Kernel
    • active volcano eruption going on here =)
      Off topic | 1 Vote | 3 Posts | 100 Views
      @nikade More beautiful than scary. It flows to a usual place where no one lives, as you can see on the satellite view. It's an effusive eruption, not an explosive one; I was barely 100 m away when taking the photos, and you can even poke the lava with a stick if you want (but you need appropriate masks because of harmful gases like sulfur). It's not the first time it has reached the sea; the island expands slowly. No casualties, except for some fish.
    • Can't create a private cross-pool network
      Xen Orchestra | 0 Votes | 4 Posts | 457 Views
      I am trying to set up the same VxLAN private network between two pools. I have been running XO from sources (commit 813514821) for a while now and encountered the same error, "No PIF found in center". I therefore updated to the latest commit 60ba5070c (still using XO v5), and now I was able to create a VxLAN private network spanning two pools without an error. Checking the networks for both pools, I noticed that two networks had been created per pool, so the older XO version had actually created them despite the error.

      However, using either VxLAN to connect two VMs in different pools doesn't work. Pinging a VM over that VxLAN within the same pool and host works, but pinging a VM in another pool doesn't. The same happens with GRE. Looking at the ARP table, the VMs from the other pool do show up, but as incomplete: [image: 1774432891674-screenshot-from-2026-03-25-10-59-34.png] 172.30.0.10 is the VM in the same pool as the source VM (172.30.0.30); 172.30.0.20 is the VM from a different pool.

      I manually set the MTU to 1450 in each VM just to be sure. I have two pools with one server each (due to a CPU feature mismatch). Each host is connected via a bond of two SFP+ ports (default MTU 1500) to a switch that allows all VLANs. Normal VLAN networks do work across multiple pools.
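      One thing worth double-checking in a setup like this is the MTU arithmetic: VXLAN over IPv4 adds about 50 bytes of encapsulation inside the outer MTU (outer IP 20 + UDP 8 + VXLAN header 8 + inner Ethernet 14), so with a physical MTU of 1500 a guest MTU of 1450 is indeed the ceiling. A quick sanity check:

      ```shell
      # VXLAN-over-IPv4 overhead carried inside the outer path MTU
      outer_ip=20; udp=8; vxlan=8; inner_eth=14
      overhead=$((outer_ip + udp + vxlan + inner_eth))
      physical_mtu=1500
      echo "max guest MTU: $((physical_mtu - overhead))"
      ```

      GRE overhead is slightly smaller (roughly 38-42 bytes), so 1450 stays safe there too. That said, incomplete ARP entries usually mean broadcast/ARP traffic is not crossing the tunnel at all, which points at the tunnel transport between the hosts rather than at the MTU.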
    • IPMI/iDRAC (XAPI)
      REST API (Solved) | 0 Votes | 8 Posts | 355 Views
      @gduperrey Worked swell, thanks
    • XCP-ng Windows PV tools announcements
      News | 0 Votes | 83 Posts | 10k Views
      @dinhngtu You rock - thank you!