XCP-ng
Popular topics

    • Just FYI: current update seems to break NUT dependencies

      Category: XCP-ng · 0 Votes · 14 Posts · 273 Views
      okynnor: I second this request.
    • Timestamp lost in Continuous Replication

      Category: Backup · 0 Votes · 13 Posts · 151 Views
      @florent oh nice, we used to have as many "VMs with a timestamp in the name" as the number of REPLICAs, plus multiple snapshots on the source VM; now we have "one replica VM with multiple snapshots"? Veeam-replica-style... Do the multiple snapshots persist on the source VM too? If so, that's nice in concept, but when your replica sits on lvmoiscsi, not so nice. PS: I haven't upgraded to the latest XOA/XCP patches yet.
    • REQUEST: Add PATCH /vms/{id} for updating VM properties (name_description, name_label)

      Category: REST API · 0 Votes · 4 Posts · 45 Views
      @MathieuRA Thanks for the quick response and for flagging it, really appreciate it! No rush at all. Thank you for all the hard work being put into the REST API, it's been fantastic.
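The request above asks for an endpoint that does not exist yet, so any client code is necessarily speculative. As a sketch of what a caller might look like once such an endpoint ships, here is how the proposed `PATCH /vms/{id}` could be invoked from Python; the base URL, VM UUID, auth header, and payload field names are all assumptions drawn from the request title, not a documented API.

```python
import json
import urllib.request

# Hypothetical endpoint from the feature request: PATCH /vms/{id}.
# Base URL, UUID, and auth cookie below are placeholders.
XO_URL = "https://xo.example.com/rest/v0"
VM_ID = "00000000-0000-0000-0000-000000000000"

# The two properties the request proposes making patchable.
payload = {
    "name_label": "web-01",
    "name_description": "Frontend web server",
}

req = urllib.request.Request(
    f"{XO_URL}/vms/{VM_ID}",
    data=json.dumps(payload).encode(),
    method="PATCH",
    headers={
        "Content-Type": "application/json",
        # Authentication details are deployment-specific; shown for shape only.
        "Cookie": "authenticationToken=<token>",
    },
)
# urllib.request.urlopen(req) would send it once such an endpoint exists.
```

This only builds the request object; actually sending it would 404 today, which is exactly what the feature request is about.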
    • xcp-ng patches install fail

      Category: XCP-ng · 0 Votes · 3 Posts · 51 Views
      https://xcp-ng.org/forum/topic/11951/just-fyi-current-update-seams-to-break-nut-dependancies
    • Failed backup jobs since updating

      Category: Backup · 0 Votes · 7 Posts · 88 Views
      olivierlambert: ping @florent
    • IPMI/ IDRAC (XAPI)

      Category: REST API · 0 Votes · 3 Posts · 110 Views
      yann: @kawreh said: "After updating to XCP-ng 8.3 (March 2026), IPMI / iDRAC information fails in both XO5 stable and XO6 stable (built from sources). A XENAPI_PLUGIN_FAILURE log line is thrown for a failure of ipmitool lan print, which works fine on the (DELL) node(s)." Thanks for this report! Can you please check whether yum downgrade xcp-ng-xapi-plugins-0:1.12.0-1.xcpng8.3.noarch makes the plugin work again, and if it does, can you please provide the output of ipmitool lan print for both package versions? Actually, don't bother with the plugins: the regression comes from the ipmitool package. You can downgrade that one to get the functionality back; we're on it.
    • VDI not showing in XO 5 from Source.

      Category: Management (Unsolved) · 0 Votes · 34 Posts · 3k Views
      bogikornel: @florent The following error occurred: Mar 16 18:38:10 XOA xo-server[4001]: 2026-03-16T17:38:10.246Z xo:rest-api:error-handler INFO [GET] /vms/0691be81-7ce9-7dba-9387-5620f8e0c52f/vdis (404). XO version: Master, commit 15917; xcp-ng version: 8.3 with the latest updates. What's interesting is that there are two xcp-ng servers, and the problem only occurs on one of them.
    • Nested Virtualization in xcp-ng

      Category: Compute · 0 Votes · 13 Posts · 199 Views
      @abudef Some quotes from the documentation to clarify the situation: https://docs.xcp-ng.org/compute/#-nested-virtualization
    • Migrations after updates

      Category: Xen Orchestra · 0 Votes · 13 Posts · 315 Views
      Bastien Nollet: Hi @acebmxer, I've run some tests on a small infrastructure, which helped me understand the behaviour you encounter. With the performance plan, the load balancer can trigger migrations in the following cases:
      - to better satisfy affinity or anti-affinity constraints
      - if a host's memory or CPU usage exceeds a threshold (85% of the CPU critical threshold, or 1.2 times the free memory critical threshold)
      - with the vCPU balancing behaviour, if the vCPU/CPU ratio differs too much from one host to another AND at least one host has more vCPUs than CPUs
      - with the preventive behaviour, if CPU usage differs too much from one host to another AND at least one host has more than 25% CPU usage
      After a host restart, your VMs will be unevenly distributed, but this will not trigger a migration if there are no anti-affinity constraints to satisfy, no memory or CPU usage thresholds are exceeded, and no host has more vCPUs than CPUs. If you want migrations to happen after a host restart, you should probably try the "preventive" behaviour, which can trigger migrations even when thresholds are not reached. However, it is based on CPU usage, so if your VMs use a lot of memory but little CPU, it might not be ideal either. We've received very little feedback about the "preventive" behaviour, so we'd be happy to have yours. As we said before, lowering the critical thresholds might also be a solution, but I think it would make the load balancer less effective if you hit heavy load at some point.
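The trigger conditions in that reply can be restated as a few predicates. The sketch below is purely illustrative of the rules as described in the post: the concrete threshold values, function names, and the reading of "1.2 times the free memory critical threshold" as a lower bound on free memory are all assumptions, not Xen Orchestra's actual load-balancer code.

```python
# Illustrative sketch of the migration-trigger rules described above.
# Threshold values and the free-memory interpretation are assumptions.

CPU_CRITICAL = 90.0   # hypothetical critical CPU threshold (%)
MEM_CRITICAL = 0.05   # hypothetical critical free-memory fraction

def host_overloaded(cpu_pct: float, free_mem_frac: float) -> bool:
    """Trigger if CPU usage exceeds 85% of the CPU critical threshold,
    or free memory drops below 1.2x the free-memory critical threshold."""
    return cpu_pct > 0.85 * CPU_CRITICAL or free_mem_frac < 1.2 * MEM_CRITICAL

def vcpu_balancing_trigger(ratios: list[float], vcpus: list[int],
                           cpus: list[int], max_spread: float = 0.5) -> bool:
    """vCPU balancing: per-host vCPU/CPU ratios differ too much AND at
    least one host has more vCPUs than physical CPUs."""
    spread = max(ratios) - min(ratios)
    oversubscribed = any(v > c for v, c in zip(vcpus, cpus))
    return spread > max_spread and oversubscribed

def preventive_trigger(cpu_usages: list[float],
                       max_spread: float = 30.0) -> bool:
    """Preventive: CPU usage differs too much between hosts AND at least
    one host is above 25% CPU usage."""
    spread = max(cpu_usages) - min(cpu_usages)
    return spread > max_spread and any(u > 25.0 for u in cpu_usages)
```

For example, `host_overloaded(80.0, 0.5)` fires because 80% exceeds 85% of the 90% critical threshold, while an evenly loaded pool after a host restart (similar ratios, no oversubscription, moderate CPU) triggers none of the predicates, matching the behaviour described in the reply.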