Categories

  • All news regarding the Xen and XCP-ng ecosystem

    142 Topics
    4k Posts
    A
    @gduperrey Of course, the order matters. Now everything seems to be clear.
  • Everything related to the virtualization platform

    1k Topics
    15k Posts
    T
    @DustyArmstrong Thanks for responding to the GitHub issue. It’s great that more people want this working; it’s difficult to gain traction otherwise. Regarding your list, it’s correct. A reboot should be in second place. You need to reboot only to detach your PCI device (video card) from its driver and assign it to the pciback driver instead on the next boot. This effectively creates a reservation for the device and allows you to dynamically assign it to VMs. Once your card is free from other kernel drivers, the rest doesn’t require a reboot.
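    On XCP-ng, the reservation described above is usually done by hiding the device from dom0 so that xen-pciback claims it at the next boot. A minimal sketch, assuming the GPU sits at PCI address 0000:01:00.0 (a placeholder — substitute your own address and VM UUID):

    ```shell
    # Find the PCI address of the GPU (example output might show 01:00.0)
    lspci | grep -i vga

    # Hide the device from dom0 so xen-pciback claims it on the next boot
    # (xen-cmdline is XCP-ng's helper for editing dom0 boot parameters)
    /opt/xensource/libexec/xen-cmdline --set-dom0 "xen-pciback.hide=(0000:01:00.0)"

    # Reboot the host; afterwards the device is free of other kernel drivers
    reboot

    # After the reboot, assign the device to a VM (UUID is a placeholder)
    xe vm-param-set uuid=<vm-uuid> other-config:pci=0/0000:01:00.0
    ```

    From this point on, re-assigning the device between halted VMs only needs the `xe vm-param-set` step, not another host reboot.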
  • 3k Topics
    27k Posts
    P
    I thought I had solved it, but now the problem is back again. Yesterday, 14 March, I did the following:
    - deleted the snapshots on the running Deb12-XO machine, so it was clean
    - deleted the clones of the machine
    - ran the backup job manually, with success (transferred about 30G to the other host) (still on the "old" XO version)
    - checked the destination host: no clone (backup) of the VM found
    - manually created two (full, not quick) clones of the VM so it won't be lost if anything goes sideways
    - updated XO to the latest (2aff8)
    - let it do the replication according to the schedule; the 21:00 replication succeeded

    Next morning (15 Mar), the 09:00 replication also ran without any issues, at least according to the log, but there was no additional copy of the VM on the destination host (retention is set to 2, so I should have one from 21:00 and room for the 09:00 one too). The evening replication at 21:00 failed, with the same error message as before:

    VM Backup report
    Global status: failure
    Job ID: 883e2ee8-00c8-43f8-9ecd-9f9aa7aa01d1
    Run ID: 1773604800005
    Mode: delta
    Start time: Sunday, March 15th 2026, 9:00:00 pm
    End time: Sunday, March 15th 2026, 9:00:13 pm
    Duration: a few seconds
    Successes: 0 / 1
    Transfer size: 126 MiB

    1 Failure
    Deb12-XO Debian 12 XO self-install
    Pool id: 4cc74549-71c3-31d5-f204-7106e90acd1e
    UUID: 30829107-2a1b-6b20-a08a-f2c1e612b2ee
    Start time: Sunday, March 15th 2026, 9:00:02 pm
    End time: Sunday, March 15th 2026, 9:00:13 pm
    Duration: a few seconds
    Error: _removeUnusedSnapshots don't handle vdi related to multiple VMs Deb12-XO - Deb12-XO - (20260315T080004Z) and [XO Backup Deb12-XO] Deb12-XO

    I notice the name of the snapshot seems odd: [XO Backup Deb12-XO] Deb12-XO - Deb12-XO. Maybe it's just a new naming convention, but I found the old one better (for my Admin VM):
    Admin Ubuntu 24 - Admin Ubuntu 24 - (20260201T125508Z)
    Admin Ubuntu 24 - Admin Ubuntu 24 - (20260208T125508Z)
    (I assume the double "Admin Ubuntu 24" is because the backup job name is the same as the machine name.)
  • Our hyperconverged storage solution

    43 Topics
    729 Posts
    SuperDuckGuyS
    @alcoralcor Thanks for the info. I thought maybe I was using too many disks, so I tried creating disk groups of 3-4 drives, with the same issue.
  • 33 Topics
    98 Posts
    J
    @yann This is not PROD. If my understanding is correct, after investigating, storage access to the NetApp LUN (iSCSI) was lost while HA mode was active (the HA volumes were no longer accessible).

    [10:05 xcp-ng-poc-1 ~]# xe vm-list
    The server could not join the liveset because the HA daemon could not access the heartbeat disk.
    [10:06 xcp-ng-poc-1 ~]# xe host-emergency-ha-disable
    Error: This operation is dangerous and may cause data loss. This operation must be forced (use --force).
    [10:06 xcp-ng-poc-1 ~]# xe host-emergency-ha-disable --force
    [10:06 xcp-ng-poc-1 ~]# xe-toolstack-restart
    Executing xe-toolstack-restart
    done.
    [10:07 xcp-ng-poc-1 ~]#

    On the storage side:

    [10:09 xcp-ng-poc-1 ~]# xe pbd-list sr-uuid=16ec6b11-6110-7a27-4d94-dfcc09f34d15
    uuid ( RO)              : be5ac5cc-bc70-4eef-8b01-a9ed98f83e23
         host-uuid ( RO)    : 0219cb2e-46b8-4657-bfa4-c924b59e373a
         sr-uuid ( RO)      : 16ec6b11-6110-7a27-4d94-dfcc09f34d15
         device-config (MRO): SCSIid: 3600a098038323566622b5a5977776557; targetIQN: iqn.1992-08.com.netapp:sn.89de0ec3fba011f0be0bd039eae42297:vs.8; targetport: 3260; target: 172.17.10.1; multihomelist: 172.17.10.1:3260,172.17.11.1:3260,172.17.1.1:3260,172.17.0.1:3260
         currently-attached ( RO): false

    uuid ( RO)              : a2dd4324-ce32-5a5e-768f-cc0df10dc49a
         host-uuid ( RO)    : cb9a2dc3-cc1d-4467-99eb-6896503b4e11
         sr-uuid ( RO)      : 16ec6b11-6110-7a27-4d94-dfcc09f34d15
         device-config (MRO): multiSession: 172.17.1.1,3260,iqn.1992-08.com.netapp:sn.89de0ec3fba011f0be0bd039eae42297:vs.8|172.17.0.1,3260,iqn.1992-08.com.netapp:sn.89de0ec3fba011f0be0bd039eae42297:vs.8|; target: 172.17.0.1; targetIQN: *; SCSIid: 3600a098038323566622b5a5977776557; multihomelist: 172.17.1.1:3260,172.17.10.1:3260,172.17.11.1:3260,172.17.0.1:3260
         currently-attached ( RO): false

    [10:09 xcp-ng-poc-1 ~]# xe pbd-plug uuid=be5ac5cc-bc70-4eef-8b01-a9ed98f83e23
    Error code: SR_BACKEND_FAILURE_47
    Error parameters: , The SR is not available [opterr=no such volume group: VG_XenStorage-16ec6b11-6110-7a27-4d94-dfcc09f34d15],
    [10:10 xcp-ng-poc-1 ~]#

    After that, XO Lite restarted correctly.
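    The `pbd-plug` failure above (`no such volume group`) suggests the iSCSI path to the LUN was still down when the plug was attempted. Once the target is reachable again, a plausible recovery sequence is to rescan the iSCSI sessions and then re-plug the PBDs — a sketch using the UUIDs from the session above (verify connectivity before each step on a real host):

    ```shell
    # Check whether the iSCSI sessions to the NetApp target are back up
    iscsiadm -m session

    # Rescan the existing sessions so the LUN's block device reappears
    iscsiadm -m session --rescan

    # Re-plug the PBD on each host (UUIDs from the pbd-list output above)
    xe pbd-plug uuid=be5ac5cc-bc70-4eef-8b01-a9ed98f83e23
    xe pbd-plug uuid=a2dd4324-ce32-5a5e-768f-cc0df10dc49a

    # Confirm the SR is attached again
    xe pbd-list sr-uuid=16ec6b11-6110-7a27-4d94-dfcc09f34d15 params=currently-attached
    ```

    Only re-enable HA (`xe pool-ha-enable`) once the heartbeat SR is confirmed attached on every host, since HA was emergency-disabled precisely because the heartbeat disk was unreachable.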