• 6 Votes · 145 Posts · 15k Views
    @benapetr Is your XO fully up to date? I remember there was a bug that caused this spinning wheel / infinite loading in XO 6, and it has already been fixed; maybe the issue you hit is related to that. I don't know about compatibility with 8.2, but I can imagine it is not supported: 8.2 is EOL, so Vates probably guarantees no compatibility.
  • Best practice: Add a dedicated host for CR or DR.

    Backup
    0 Votes · 2 Posts · 18 Views
    AtaxyaNetwork
    @Dezerd Hi! Just add the third host via New -> Server; it will appear as an independent host in its own pool (assuming the third host was just installed).
  • Hot-adding a resource

    XCP-ng
    0 Votes · 5 Posts · 26 Views
    Thank you all for your answers—now it's clear.
  • New Rust Xen guest tools

    Development
    3 Votes · 165 Posts · 132k Views
    olivierlambert
    Ping @teddyastie
  • 0 Votes · 4 Posts · 62 Views
    @MathieuRA Thanks for the quick response and for flagging it, really appreciate it! No rush at all. Thank you for all the hard work being put into the REST API, it's been fantastic.
  • Just FYI: current update seems to break NUT dependencies

    XCP-ng
    0 Votes · 14 Posts · 282 Views
    okynnor
    I second this request.
  • Timestamp lost in Continuous Replication

    Backup
    0 Votes · 13 Posts · 178 Views
    @florent Oh nice. We used to have as many "VMs with a timestamp in the name" as there were replicas, plus multiple snapshots on the source VM; now we get one replica VM with multiple snapshots? Veeam-replica style... Do the multiple snapshots persist on the source VM too? If so, that's nice in concept, but not so nice when your replica sits on lvmoiscsi. PS: I haven't upgraded to the latest XOA/XCP-ng patches yet.
  • VDI not showing in XO 5 from Source.

    Unsolved Management
    0 Votes · 34 Posts · 3k Views
    bogikornel
    @florent The following error occurred: Mar 16 18:38:10 XOA xo-server[4001]: 2026-03-16T17:38:10.246Z xo:rest-api:error-handler INFO [GET] /vms/0691be81-7ce9-7dba-9387-5620f8e0c52f/vdis (404). XO version: master, commit 15917. xcp-ng version: 8.3 with the latest updates. What's interesting is that there are two xcp-ng servers, and the problem only occurs on one of them.
  • IPMI / iDRAC (XAPI)

    REST API
    0 Votes · 3 Posts · 115 Views
    yann
    yann said: @kawreh said: After updating to XCP-ng 8.3 (March 2026), IPMI / iDRAC information fails in both XO5 stable and XO6 stable (built from sources). A XENAPI_PLUGIN_FAILURE log line is thrown for a failure of "ipmitool lan print", which works fine on the (Dell) node(s). Thanks for this report! Can you please check whether just yum downgrade xcp-ng-xapi-plugins-0:1.12.0-1.xcpng8.3.noarch makes the plugin work again, and if it does, can you please provide the output of ipmitool lan print for both package versions? Actually, don't bother with the plugins: the regression comes from the ipmitool package. You can downgrade that one to get the functionality back; we're on it.
  • Nested Virtualization in xcp-ng

    Compute
    0 Votes · 13 Posts · 204 Views
    @abudef Some quotes from the documentation to clarify the situation: https://docs.xcp-ng.org/compute/#-nested-virtualization
  • Migrations after updates

    Xen Orchestra
    0 Votes · 13 Posts · 325 Views
    Bastien Nollet
    Hi @acebmxer, I've made some tests with a small infrastructure, which helped me understand the behaviour you encounter. With the performance plan, the load balancer can trigger migrations in the following cases:
    - to better satisfy affinity or anti-affinity constraints
    - if a host's memory or CPU usage exceeds a threshold (85% of the CPU critical threshold, or 1.2 times the free-memory critical threshold)
    - with the vCPU balancing behaviour, if the vCPU/CPU ratio differs too much from one host to another AND at least one host has more vCPUs than CPUs
    - with the preventive behaviour, if CPU usage differs too much from one host to another AND at least one host has more than 25% CPU usage
    After a host restart, your VMs will be unevenly distributed, but this will not trigger a migration if there are no anti-affinity constraints to satisfy, no memory or CPU usage thresholds are exceeded, and no host has more vCPUs than CPUs. If you want migrations to happen after a host restart, you should probably try the "preventive" behaviour, which can trigger migrations even when thresholds are not reached. However, it is based on CPU usage, so if your VMs use a lot of memory but little CPU, it might not be ideal either. We've received very little feedback about the "preventive" behaviour, so we'd be happy to have yours. As we said before, lowering the critical thresholds might also be a solution, but I think it would make the load balancer less effective if you encounter heavy load at some point.
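    The two usage triggers described above can be sketched numerically. The 85% and 1.2x factors come from the post; the critical-threshold settings and host figures below are made-up illustration values, not XO defaults:

```shell
#!/bin/sh
# Illustrative numbers only: cpu_critical and free_mem_critical_mb stand in
# for the plan's configurable critical thresholds.
cpu_critical=90              # CPU critical threshold (%)
free_mem_critical_mb=1024    # free-memory critical threshold (MiB)

host_cpu=80                  # hypothetical current host CPU usage (%)
host_free_mem_mb=4096        # hypothetical current host free memory (MiB)

cpu_trigger=$((cpu_critical * 85 / 100))             # 85% of the CPU critical threshold
mem_trigger_mb=$((free_mem_critical_mb * 12 / 10))   # 1.2x the free-memory critical threshold

migrate=no
if [ "$host_cpu" -gt "$cpu_trigger" ]; then migrate=yes; fi
if [ "$host_free_mem_mb" -lt "$mem_trigger_mb" ]; then migrate=yes; fi
echo "cpu trigger above ${cpu_trigger}%, memory trigger below ${mem_trigger_mb} MiB: migrate=$migrate"
```

    With these numbers the host sits at 80% CPU, above the 76% trigger, so a migration may be considered even though free memory is comfortable.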
  • xcp-ng patches install fail

    XCP-ng
    0 Votes · 3 Posts · 64 Views
    https://xcp-ng.org/forum/topic/11951/just-fyi-current-update-seams-to-break-nut-dependancies
  • Failed backup jobs since updating

    Backup
    0 Votes · 7 Posts · 90 Views
    olivierlambert
    Ping @florent
  • Potential bug with Windows VM backup: "Body Timeout Error"

    Backup
    2 Votes · 58 Posts · 7k Views
    @nikade Hey Nikade, did you try creating a new job that starts a new chain? Just as a test.
  • AMD 'Barcelo' passthrough issues - any success stories?

    Hardware
    0 Votes · 12 Posts · 449 Views
    @DustyArmstrong Thanks for responding to the GitHub issue. It's great that more people want this working; it's difficult to gain traction otherwise. Regarding your list, it's correct; the reboot should be the second step. You need to reboot only to detach your PCI device (video card) from its driver and assign it to the pciback driver on the next boot. This effectively creates a reservation for the device and allows you to dynamically assign it to VMs. Once your card is free from other kernel drivers, the rest doesn't require a reboot.
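    As a sketch, the reservation described above boils down to the commands documented for XCP-ng PCI passthrough plus the single reboot. PCI_ADDR and VM_UUID are placeholders, and the run wrapper only prints the commands (dry run) rather than executing them:

```shell
#!/bin/sh
# Dry-run sketch: flip DRY_RUN to 0 only on a host you can afford to reboot.
DRY_RUN=1
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

PCI_ADDR="0000:01:00.0"         # placeholder: find your card's address with lspci
VM_UUID="placeholder-vm-uuid"   # placeholder: see xe vm-list

# Hide the card from dom0 drivers so xen-pciback claims it on the next boot:
run /opt/xensource/libexec/xen-cmdline --set-dom0 "xen-pciback.hide=($PCI_ADDR)"

# The single required reboot, so the driver change takes effect:
run reboot

# After the reboot, attach the reserved device to a VM; no further reboots:
run xe vm-param-set uuid="$VM_UUID" other-config:pci="0/$PCI_ADDR"
```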
  • "NOT_SUPPORTED_DURING_UPGRADE()" error after yesterday's update

    Backup
    0 Votes · 24 Posts · 1k Views
    @MajorP93 said:
    - disable HA on pool level
    - disable load balancer plugin
    - upgrade master
    - upgrade all other nodes
    - restart toolstack on master
    - restart toolstack on all other nodes
    - live migrate all VMs running on master to other node(s)
    - reboot master
    - reboot next node (live migrate all VMs running on that particular node away before doing so)
    - repeat until all nodes have been rebooted (one node at a time)
    - re-enable HA on pool level
    - re-enable load balancer plugin
    Never had any issues with that. No downtime for any of the VMs.
    Update time again, and same issue. I followed these steps:
    - upgrade master
    - upgrade all other nodes
    - restart toolstack on master
    - restart toolstack on all other nodes
    - live migrate all VMs running on master to other node(s)
    - reboot master
    Now I can't migrate anything else.
    - Live migration: NOT_SUPPORTED_DURING_UPGRADE
    - Warm migration: fails, and the VM shuts down immediately and needs to be forced back to life
    - CR backup to another server: NOT_SUPPORTED_DURING_UPGRADE
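    For reference, one host's turn in the sequence above maps onto the standard CLI roughly as follows. This is a dry-run sketch, the host uuid is a placeholder, and XO's Rolling Pool Update automates this same order:

```shell
#!/bin/sh
# Dry-run sketch of one host's turn in a rolling update; set DRY_RUN=0 to
# actually execute on a host. Start with the pool master, then repeat on
# each other host, one at a time.
DRY_RUN=1
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

HOST_UUID="placeholder-host-uuid"   # placeholder: see xe host-list

run yum update                          # install the updates on this host
run xe-toolstack-restart                # restart the toolstack
run xe host-evacuate uuid="$HOST_UUID"  # live-migrate its VMs elsewhere
run reboot                              # then reboot it
```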
  • XO-Lite no longer starts

    French (Français)
    1 Vote · 4 Posts · 66 Views
    @yann This is not production. If my understanding is correct, after some digging: the storage path to the NetApp LUN (iSCSI) was lost while HA mode was active (the HA volumes were no longer accessible).
    [10:05 xcp-ng-poc-1 ~]# xe vm-list
    The server could not join the liveset because the HA daemon could not access the heartbeat disk.
    [10:06 xcp-ng-poc-1 ~]# xe host-emergency-ha-disable
    Error: This operation is dangerous and may cause data loss. This operation must be forced (use --force).
    [10:06 xcp-ng-poc-1 ~]# xe host-emergency-ha-disable --force
    [10:06 xcp-ng-poc-1 ~]# xe-toolstack-restart
    Executing xe-toolstack-restart
    done.
    [10:07 xcp-ng-poc-1 ~]#
    On the storage side:
    [10:09 xcp-ng-poc-1 ~]# xe pbd-list sr-uuid=16ec6b11-6110-7a27-4d94-dfcc09f34d15
    uuid ( RO)              : be5ac5cc-bc70-4eef-8b01-a9ed98f83e23
    host-uuid ( RO)         : 0219cb2e-46b8-4657-bfa4-c924b59e373a
    sr-uuid ( RO)           : 16ec6b11-6110-7a27-4d94-dfcc09f34d15
    device-config (MRO)     : SCSIid: 3600a098038323566622b5a5977776557; targetIQN: iqn.1992-08.com.netapp:sn.89de0ec3fba011f0be0bd039eae42297:vs.8; targetport: 3260; target: 172.17.10.1; multihomelist: 172.17.10.1:3260,172.17.11.1:3260,172.17.1.1:3260,172.17.0.1:3260
    currently-attached ( RO): false
    uuid ( RO)              : a2dd4324-ce32-5a5e-768f-cc0df10dc49a
    host-uuid ( RO)         : cb9a2dc3-cc1d-4467-99eb-6896503b4e11
    sr-uuid ( RO)           : 16ec6b11-6110-7a27-4d94-dfcc09f34d15
    device-config (MRO)     : multiSession: 172.17.1.1,3260,iqn.1992-08.com.netapp:sn.89de0ec3fba011f0be0bd039eae42297:vs.8|172.17.0.1,3260,iqn.1992-08.com.netapp:sn.89de0ec3fba011f0be0bd039eae42297:vs.8|; target: 172.17.0.1; targetIQN: *; SCSIid: 3600a098038323566622b5a5977776557; multihomelist: 172.17.1.1:3260,172.17.10.1:3260,172.17.11.1:3260,172.17.0.1:3260
    currently-attached ( RO): false
    [10:09 xcp-ng-poc-1 ~]# xe pbd-plug uuid=be5ac5cc-bc70-4eef-8b01-a9ed98f83e23
    Error code: SR_BACKEND_FAILURE_47
    Error parameters: , The SR is not available [opterr=no such volume group: VG_XenStorage-16ec6b11-6110-7a27-4d94-dfcc09f34d15],
    [10:10 xcp-ng-poc-1 ~]#
    After that, XO-Lite restarted correctly.
  • 0 Votes · 3 Posts · 76 Views
    @olivierlambert done.
  • "Guest tools status"

    Migrate to XCP-ng
    0 Votes · 1 Post · 49 Views
    No one has replied
  • XCP-ng 8.3 updates announcements and testing

    Pinned News
    1 Vote · 413 Posts · 160k Views
    @gduperrey Of course, the order matters. Now everything seems to be clear.