Categories

  • All news regarding Xen and XCP-ng ecosystem

    142 Topics
    4k Posts
    @dinhngtu You rock - thank you!
  • Everything related to the virtualization platform

    1k Topics
    15k Posts
    kruess
    @Danp Thanks for looking into this...
    Restart: Yes, the Pool Master was restarted after applying the XCP-ng patches.
    VM start: Yes, I'm trying to start the VMs on the master (it also would not let me start VMs on the slaves still running XenServer during the migration phase).
    Upgrade path: Yes, from XS 7.1.2 to XCP-ng 8.3.
    Regarding VM_HOST_INCOMPATIBLE_VERSION: I saw those too, but cannot think of anything obvious, as the VMs I was able to start on the master were of the same kind (WinSrv2019, Ubuntu Server, Win10). Of course, there might be slight differences in the metadata, but I've compared two WinSrv2019 VMs and they are pretty close. The only "RW/MRW" differences found:
    HVM-boot-params (MRW): order: dc; firmware: bios
    HVM-boot-params (MRW): order: dc
    platform (MRW): ...; secureboot: false
    platform (MRW): ...
    I've removed those two param keys from the failing VM, but it did not change the behaviour.
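    Eyeballing two VMs' metadata for RW/MRW differences is error-prone; the comparison can be scripted. A minimal sketch, assuming you have captured the plain `xe vm-param-list uuid=<uuid>` output of each VM to text (the parsing regex is an assumption about that output's `name ( FLAGS): value` line shape, not an official format guarantee):

```python
import re

# Matches lines like "HVM-boot-params ( MRW): order: dc" from xe vm-param-list
PARAM_RE = re.compile(r'^\s*([\w.-]+)\s*\(\s*(\w+)\s*\)\s*:\s*(.*)$')

def parse_params(text):
    """Return {param-name: (flags, value)} parsed from xe vm-param-list output."""
    params = {}
    for line in text.splitlines():
        m = PARAM_RE.match(line)
        if m:
            name, flags, value = m.groups()
            params[name] = (flags, value.strip())
    return params

def diff_params(a_text, b_text, writable_only=True):
    """List (name, value_a, value_b) for params that differ between two VMs.

    With writable_only=True, only RW/MRW params are reported, since those
    are the ones you could actually change on the failing VM.
    """
    a, b = parse_params(a_text), parse_params(b_text)
    diffs = []
    for name in sorted(set(a) | set(b)):
        fa, va = a.get(name, ('', '<absent>'))
        fb, vb = b.get(name, ('', '<absent>'))
        if va != vb and (not writable_only or 'RW' in fa or 'RW' in fb):
            diffs.append((name, va, vb))
    return diffs
```

    Running `diff_params(open('good.txt').read(), open('bad.txt').read())` would surface exactly the HVM-boot-params and platform deltas quoted above, plus any others that are easy to miss by hand.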
  • 3k Topics
    27k Posts
    Hello folks, I need help understanding why a VM restored from a snapshot randomly cannot see any ISOs from any SRs.
    My environment:
    Hosts: XCP-ng 2-node pool at v8.3.0 on HP ProLiant DL360p Gen8 - all patches applied
    XO: Community Edition at commit d1736
    Networking: 1 Gbps management only (no dedicated storage LAN yet - coming soon)
    Storage: shared NFS Storage Repository hosted on a separate TrueNAS server (the dataset for the ISO SR shows 13 TiB capacity, ~90 GiB used)
    Background: I'm developing an automated solution that leverages both SSH and the API to either provision VMs or SSH directly into a running VM and run my script. Both paths require a VM template, which I've created and used successfully to build the cloned VMs. As part of my development work I've been testing functionality incrementally, which sometimes means starting with a fresh VM: I revert a snapshot and start over.
    I started noticing that, at random times, one of the VMs simply stops seeing the ISO SR. No amount of rescanning the SR resolves it (see screenshot below). My only resolution to date has been to remove the VM and create a new one from the template. At first I thought this was isolated to a specific VM, but it happened on a second VM a few moments ago, so I no longer think it is. I can confirm that the ISO is present on the ISO SR and that I can see it in XO. That same ISO is also mounted on other VMs, so I don't suspect the ISO itself is the problem - it could be, but I don't think so. Has anybody run into this before? If so, what did you do to resolve it? [image: 1774183284600-screenshot-2026-03-22-081722.png]
    QUICK UPDATE (shortly after posting this): I was about to delete the VM and rebuild it from the template when a thought occurred to me: "Kismet, why not try the snapshot reversion one last time?" So I did, and wouldn't you know it, the VM can now see the ISO SR and attach the ISO. Strange. I don't understand what just happened; perhaps some kind of delayed processing in the background? Or perhaps I'm moving too fast and need to slow down? Here's a screenshot. [image: 1774184284789-screenshot-2026-03-22-084950.png]
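    If the missing ISO SR really is a transient condition that resolves itself after a delay, the automation can retry with backoff instead of rebuilding the VM. A minimal sketch; `check` is a hypothetical callable you would supply (e.g. a wrapper that triggers an SR rescan via the XO API or `xe sr-scan` and reports whether the expected ISO is visible):

```python
import time

def retry(check, attempts=5, delay=2.0, backoff=2.0):
    """Call check() until it returns a truthy value or attempts run out.

    Sleeps `delay` seconds between tries, multiplying by `backoff` each
    time, to give slow background processing a chance to catch up.
    Returns the truthy result, or None if every attempt failed.
    """
    for i in range(attempts):
        result = check()
        if result:
            return result
        if i < attempts - 1:
            time.sleep(delay)
            delay *= backoff
    return None
```

    This is a generic pattern, not an XO feature: whether the ISO SR issue is actually retry-recoverable is exactly the open question in the post above.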
  • Our hyperconverged storage solution

    43 Topics
    729 Posts
    SuperDuckGuy
    @alcoralcor Thanks for the info. I thought maybe I was using too many disks, so I tried creating disk groups of 3-4 drives, but hit the same issue.
  • 33 Topics
    98 Posts
    @yann This is not PROD. If my understanding is correct, after some investigation, storage to the NetApp LUN (iSCSI) was lost while HA mode was active (the HA volumes were no longer accessible).
    [10:05 xcp-ng-poc-1 ~]# xe vm-list
    The server could not join the liveset because the HA daemon could not access the heartbeat disk.
    [10:06 xcp-ng-poc-1 ~]# xe host-emergency-ha-disable
    Error: This operation is dangerous and may cause data loss. This operation must be forced (use --force).
    [10:06 xcp-ng-poc-1 ~]# xe host-emergency-ha-disable --force
    [10:06 xcp-ng-poc-1 ~]# xe-toolstack-restart
    Executing xe-toolstack-restart
    done.
    [10:07 xcp-ng-poc-1 ~]#
    On the storage side:
    [10:09 xcp-ng-poc-1 ~]# xe pbd-list sr-uuid=16ec6b11-6110-7a27-4d94-dfcc09f34d15
    uuid ( RO) : be5ac5cc-bc70-4eef-8b01-a9ed98f83e23
    host-uuid ( RO): 0219cb2e-46b8-4657-bfa4-c924b59e373a
    sr-uuid ( RO): 16ec6b11-6110-7a27-4d94-dfcc09f34d15
    device-config (MRO): SCSIid: 3600a098038323566622b5a5977776557; targetIQN: iqn.1992-08.com.netapp:sn.89de0ec3fba011f0be0bd039eae42297:vs.8; targetport: 3260; target: 172.17.10.1; multihomelist: 172.17.10.1:3260,172.17.11.1:3260,172.17.1.1:3260,172.17.0.1:3260
    currently-attached ( RO): false
    uuid ( RO) : a2dd4324-ce32-5a5e-768f-cc0df10dc49a
    host-uuid ( RO): cb9a2dc3-cc1d-4467-99eb-6896503b4e11
    sr-uuid ( RO): 16ec6b11-6110-7a27-4d94-dfcc09f34d15
    device-config (MRO): multiSession: 172.17.1.1,3260,iqn.1992-08.com.netapp:sn.89de0ec3fba011f0be0bd039eae42297:vs.8|172.17.0.1,3260,iqn.1992-08.com.netapp:sn.89de0ec3fba011f0be0bd039eae42297:vs.8|; target: 172.17.0.1; targetIQN: *; SCSIid: 3600a098038323566622b5a5977776557; multihomelist: 172.17.1.1:3260,172.17.10.1:3260,172.17.11.1:3260,172.17.0.1:3260
    currently-attached ( RO): false
    [10:09 xcp-ng-poc-1 ~]# xe pbd-plug uuid=be5ac5cc-bc70-4eef-8b01-a9ed98f83e23
    Error code: SR_BACKEND_FAILURE_47
    Error parameters: , The SR is not available [opterr=no such volume group: VG_XenStorage-16ec6b11-6110-7a27-4d94-dfcc09f34d15],
    [10:10 xcp-ng-poc-1 ~]#
    After that, XO Lite restarted correctly.
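    When a pool has several PBDs per SR, as in the transcript above, it helps to turn the `xe pbd-list` dump into structured records before deciding which ones to re-plug. A minimal sketch, assuming the usual `key ( RO): value` field lines with each record starting at a top-level `uuid` line (an observed convention of xe output, not a documented format):

```python
def parse_pbd_list(text):
    """Split `xe pbd-list` output into one dict per PBD record.

    A new record starts at each line whose field name is exactly 'uuid'
    (host-uuid and sr-uuid lines do not start a record).
    """
    records, current = [], None
    for line in text.splitlines():
        if ':' not in line:
            continue
        key_part, value = line.split(':', 1)
        key = key_part.split('(')[0].strip()  # drop the "( RO)" flag suffix
        if key == 'uuid':
            current = {}
            records.append(current)
        if current is not None:
            current[key] = value.strip()
    return records

def detached_pbds(text):
    """Return the uuids of PBDs whose currently-attached flag is false."""
    return [r['uuid'] for r in parse_pbd_list(text)
            if r.get('currently-attached') == 'false']
```

    Against the output above this would report both PBDs as detached, which matches the situation: the `pbd-plug` then fails with SR_BACKEND_FAILURE_47 because the LVM volume group behind the iSCSI LUN is still unreachable, not because of the HA state that was already cleared.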