Categories

  • All news regarding the Xen and XCP-ng ecosystem

    142 Topics
    4k Posts
    @dinhngtu You rock - thank you!
  • Everything related to the virtualization platform

    1k Topics
    15k Posts
    I was rebooting my servers due to the patches released last week and noticed that the BIOS in my server was not set to interleave the memory. Since I have symmetric memory sticks, it seems like it should be set to interleave. However, the BIOS does say that for NUMA-aware OSes, interleaving should be left to the OS. I don't recall now, and my notes don't say, whether I intentionally left interleaving off because v8.2, which I was installing at the time, was NUMA-aware. Memory interleaving on a Dell R630: on or off? During the last patch round I turned on interleaving on my R730xd (without reading the help text that says not to do this with NUMA-aware OSes) and it feels faster to me, but I didn't do any real testing, so that could be wishful-thinking bias. It certainly doesn't seem to have hurt performance. (A quick way to check what the hypervisor actually sees is sketched after this list.)
  • 3k Topics
    27k Posts
    JSylvia007
    Howdy all! I have a backup configuration that's been working fine for years. It's for some non-critical VMs, but recently one of the VMs mysteriously started failing. Since it's not critical, I just said 'screw it', deleted the backup, and recreated the configuration. This completely new backup is still failing, and only on that one problem VM.

    Error: stream has ended with not enough data (actual: 397, expected: 2097152)
    Error: ENOENT: no such file or directory, stat '/opt/xo/mounts/9f2e49f9-4e87-444a-aa68-4cbf73f28e6d/xo-vm-backups/afe4bee2-745d-da4a-0016-c74751856556/vdis/7fc5396a-5383-4dab-91fe-6758eb8b7474/530abab7-9ea9-43d4-be6e-acb3fbf67065/20260323T124158Z.alias.vhd'

    I re-ran it and got some more info, but that could just be because the initial backup failed in this new backup configuration...

    ADMIN-VM02 (xcpng01)
    Clean VM directory
    VHD check error
      path "/xo-vm-backups/afe4bee2-745d-da4a-0016-c74751856556/vdis/e57785aa-f99d-4f67-b951-5c6ac5fef518/530abab7-9ea9-43d4-be6e-acb3fbf67065/20260313T020007Z.alias.vhd"
      error {"generatedMessage":false,"code":"ERR_ASSERTION","actual":false,"expected":true,"operator":"==","diff":"simple"}
    orphan merge state
      mergeStatePath "/xo-vm-backups/afe4bee2-745d-da4a-0016-c74751856556/vdis/e57785aa-f99d-4f67-b951-5c6ac5fef518/530abab7-9ea9-43d4-be6e-acb3fbf67065/.20260313T020007Z.alias.vhd.merge.json"
      missingVhdPath "/xo-vm-backups/afe4bee2-745d-da4a-0016-c74751856556/vdis/e57785aa-f99d-4f67-b951-5c6ac5fef518/530abab7-9ea9-43d4-be6e-acb3fbf67065/20260313T020007Z.alias.vhd"
    missing target of alias
      alias "/xo-vm-backups/afe4bee2-745d-da4a-0016-c74751856556/vdis/e57785aa-f99d-4f67-b951-5c6ac5fef518/530abab7-9ea9-43d4-be6e-acb3fbf67065/20260313T020007Z.alias.vhd"
    Start: 2026-03-23 17:26
    End: 2026-03-23 17:26

    Any idea what's going on here? (There is a short sketch for inspecting the alias file after this list.)
  • Our hyperconverged storage solution

    43 Topics
    729 Posts
    SuperDuckGuy
    @alcoralcor Thanks for the info. I thought maybe I was using too many disks, so I tried creating disk groups of 3-4 drives, but hit the same issue.
  • 33 Topics
    98 Posts
    @yann This is not PROD. If my understanding is correct, after some digging, storage access to the NetApp LUN (iSCSI) was lost while HA mode was active (the HA volumes were no longer accessible).

    [10:05 xcp-ng-poc-1 ~]# xe vm-list
    The server could not join the liveset because the HA daemon could not access the heartbeat disk.
    [10:06 xcp-ng-poc-1 ~]# xe host-emergency-ha-disable
    Error: This operation is dangerous and may cause data loss. This operation must be forced (use --force).
    [10:06 xcp-ng-poc-1 ~]# xe host-emergency-ha-disable --force
    [10:06 xcp-ng-poc-1 ~]# xe-toolstack-restart
    Executing xe-toolstack-restart
    done.
    [10:07 xcp-ng-poc-1 ~]#

    On the storage side:

    [10:09 xcp-ng-poc-1 ~]# xe pbd-list sr-uuid=16ec6b11-6110-7a27-4d94-dfcc09f34d15
    uuid ( RO)              : be5ac5cc-bc70-4eef-8b01-a9ed98f83e23
         host-uuid ( RO)    : 0219cb2e-46b8-4657-bfa4-c924b59e373a
         sr-uuid ( RO)      : 16ec6b11-6110-7a27-4d94-dfcc09f34d15
         device-config (MRO): SCSIid: 3600a098038323566622b5a5977776557; targetIQN: iqn.1992-08.com.netapp:sn.89de0ec3fba011f0be0bd039eae42297:vs.8; targetport: 3260; target: 172.17.10.1; multihomelist: 172.17.10.1:3260,172.17.11.1:3260,172.17.1.1:3260,172.17.0.1:3260
         currently-attached ( RO): false

    uuid ( RO)              : a2dd4324-ce32-5a5e-768f-cc0df10dc49a
         host-uuid ( RO)    : cb9a2dc3-cc1d-4467-99eb-6896503b4e11
         sr-uuid ( RO)      : 16ec6b11-6110-7a27-4d94-dfcc09f34d15
         device-config (MRO): multiSession: 172.17.1.1,3260,iqn.1992-08.com.netapp:sn.89de0ec3fba011f0be0bd039eae42297:vs.8|172.17.0.1,3260,iqn.1992-08.com.netapp:sn.89de0ec3fba011f0be0bd039eae42297:vs.8|; target: 172.17.0.1; targetIQN: *; SCSIid: 3600a098038323566622b5a5977776557; multihomelist: 172.17.1.1:3260,172.17.10.1:3260,172.17.11.1:3260,172.17.0.1:3260
         currently-attached ( RO): false

    [10:09 xcp-ng-poc-1 ~]# xe pbd-plug uuid=be5ac5cc-bc70-4eef-8b01-a9ed98f83e23
    Error code: SR_BACKEND_FAILURE_47
    Error parameters: , The SR is not available [opterr=no such volume group: VG_XenStorage-16ec6b11-6110-7a27-4d94-dfcc09f34d15],
    [10:10 xcp-ng-poc-1 ~]#

    After that, XO-Lite restarted correctly. (A quick check sequence before replugging the PBD is sketched after this list.)
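For the memory-interleaving question above: a minimal sketch, run from dom0 on an XCP-ng host, using xl info -n to print the NUMA topology Xen actually sees. The interpretation comments describe typical dual-socket behaviour and are assumptions, not a statement about any specific R630/R730xd BIOS.

    # Run in dom0 on the XCP-ng host.
    # Prints the CPU-to-node mapping and per-node memory as seen by Xen.
    xl info -n
    # Rough expectation (assumption): with BIOS node interleaving enabled, a
    # dual-socket box tends to report a single node owning all memory; with
    # interleaving disabled (NUMA exposed), you would expect one node per
    # socket, each with its own memory range.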
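For the failing XO backup above: both the ENOENT and the "missing target of alias" errors point at a .alias.vhd whose data VHD is gone or unreadable on the remote. A minimal sketch for inspecting it, assuming the alias file is a small text file containing the path of the real data VHD (how current Xen Orchestra versions store aliases, but treat that as an assumption) and that the /opt/xo/mounts/... path from the error is the mounted remote:

    # On the XO host (or wherever the backup remote is mounted).
    ALIAS='/opt/xo/mounts/9f2e49f9-4e87-444a-aa68-4cbf73f28e6d/xo-vm-backups/afe4bee2-745d-da4a-0016-c74751856556/vdis/7fc5396a-5383-4dab-91fe-6758eb8b7474/530abab7-9ea9-43d4-be6e-acb3fbf67065/20260323T124158Z.alias.vhd'
    ls -l "$ALIAS"   # does the alias file itself exist?
    cat "$ALIAS"     # if so, which data VHD does it point at?
    # Then check whether that target path still exists on the remote; if it
    # does not, the chain is broken and that VM will need a fresh full backup.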
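For the HA/iSCSI incident above: since SR_BACKEND_FAILURE_47 complains about a missing volume group, it can help to confirm that the iSCSI sessions to the NetApp and the backing LVM volume group are visible again before retrying xe pbd-plug. A minimal sketch reusing the UUIDs from the transcript; adjust to your own pool, and treat it as a sanity check rather than an official recovery procedure.

    # In dom0 on the affected host.
    iscsiadm -m session    # are sessions to the 172.17.x.x portals re-established?
    vgs | grep 16ec6b11    # is VG_XenStorage-16ec6b11-... visible to LVM again?
    # Only once both look healthy, retry attaching the SR on this host:
    xe pbd-plug uuid=be5ac5cc-bc70-4eef-8b01-a9ed98f83e23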