Categories

  • All news regarding Xen and XCP-ng ecosystem

    142 Topics
    4k Posts
    @gduperrey Installed on my home lab via rolling pool update: both hosts updated with no issues, and the VMs migrated back to the 2nd host as expected this time. Fingers crossed the work servers have the same luck. I still have an open support ticket from the last round of updates for the work servers, and I'm waiting for a response before installing the patches.
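    For reference, a minimal sketch of what a manual host update looks like from dom0 on XCP-ng (the Rolling Pool Update in Xen Orchestra automates these steps per host, plus evacuating and migrating the VMs back); the exact maintenance steps around it depend on your pool:

      # on each host, from dom0
      yum update        # install pending XCP-ng updates
      reboot            # reboot if the update requires it
      # verify the host is back and enabled in the pool
      xe host-list params=name-label,enabled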
  • Everything related to the virtualization platform

    1k Topics
    15k Posts
    I can confirm that when using the Citrix/XenServer guest utilities version 8.4 (https://github.com/xenserver/xe-guest-utilities/releases/tag/v8.4.0), memory ballooning / DMC works fine. After a live migration, the RAM of the Linux guest is expanded back to dynamic_max, so this issue was in fact caused by the Rust-based xen-guest-agent. For now I'll keep using the Citrix/XenServer guest utilities on my Linux guests until the feature is implemented in Vates' Rust-based guest utilities. Best regards
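    A quick way to observe this behaviour from dom0; $VM is a placeholder VM UUID, not taken from the post, and the values below are just examples:

      # set a dynamic memory (DMC) range for the guest: 1 GiB min, 4 GiB max
      xe vm-memory-dynamic-range-set uuid=$VM min=1073741824 max=4294967296
      # after a live migration, compare the current allocation to dynamic_max
      xe vm-param-get uuid=$VM param-name=memory-dynamic-max
      xe vm-param-get uuid=$VM param-name=memory-actual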
  • 3k Topics
    28k Posts
    @MathieuRA Think this is what you were looking for? I tried it with the ndjson boolean set to both false and true; same output...
    [image: 1774629796944-screenshot-2026-03-27-124309.png]
    [image: 1774629830992-79100bc0-ad51-48df-9e44-f6bbd912f44d-image.jpeg]

      {
        "quickInfo": {
          "id": "613f541c-4bed-fc77-7ca8-2db6b68f079c",
          "power_state": "Halted",
          "uuid": "613f541c-4bed-fc77-7ca8-2db6b68f079c",
          "name_description": "some-random-description",
          "CPUs": { "number": 1 },
          "mainIpAddress": "10.1.6.166",
          "os_version": { "name": "Alpine Linux v3.21" },
          "memory": { "size": 536870912 },
          "creation": {
            "date": "2025-10-23T14:12:05.689Z",
            "user": "e531b8c9-3876-4ed9-8fd2-0476d5f825c9"
          },
          "$pool": "b7569d99-30f8-178a-7d94-801de3e29b5b",
          "virtualizationMode": "hvm",
          "tags": [],
          "host": "b61a5c92-700e-4966-a13b-00633f03eea8",
          "pvDriversDetected": false,
          "startTime": null
        },
        "alarms": [],
        "backupsInfo": {
          "lastRun": [
            { "backupJobId": "399f368a-a550-4cdf-9c5b-84b68912b748", "timestamp": 1762124447136, "status": "success" },
            { "backupJobId": "399f368a-a550-4cdf-9c5b-84b68912b748", "timestamp": 1762038039074, "status": "success" },
            { "backupJobId": "399f368a-a550-4cdf-9c5b-84b68912b748", "timestamp": 1761951645862, "status": "success" }
          ],
          "vmProtected": true,
          "replication": {
            "id": "8c2b7a25-70b9-4a1c-d6e0-9cce86d3171a",
            "timestamp": 1761302770000,
            "sr": "4cb0d74e-a7c1-0b7d-46e3-09382c012abb"
          },
          "backupArchives": [
            { "id": "1af95910-01b4-4e87-9c2f-d895cafe0776//xo-vm-backups/613f541c-4bed-fc77-7ca8-2db6b68f079c/20251102T230026Z.json", "timestamp": 1762124426346, "backupRepository": "1af95910-01b4-4e87-9c2f-d895cafe0776", "size": 0 },
            { "id": "1af95910-01b4-4e87-9c2f-d895cafe0776//xo-vm-backups/613f541c-4bed-fc77-7ca8-2db6b68f079c/20251101T230026Z.json", "timestamp": 1762038026319, "backupRepository": "1af95910-01b4-4e87-9c2f-d895cafe0776", "size": 0 },
            { "id": "1af95910-01b4-4e87-9c2f-d895cafe0776//xo-vm-backups/613f541c-4bed-fc77-7ca8-2db6b68f079c/20251031T230025Z.json", "timestamp": 1761951625256, "backupRepository": "1af95910-01b4-4e87-9c2f-d895cafe0776", "size": 0 }
          ]
        }
      }

    No links. 400 Bad request.
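    For context, a rough sketch of how VM data like this can be pulled from the Xen Orchestra REST API; the hostname, token variable and field selection below are assumptions for illustration, not taken from the post:

      # list VMs as NDJSON (one JSON object per line) with a few selected fields
      curl -b "authenticationToken=$XO_TOKEN" \
        "https://xo.example.org/rest/v0/vms?fields=name_label,power_state&ndjson=true"
      # fetch a single VM object by UUID
      curl -b "authenticationToken=$XO_TOKEN" \
        "https://xo.example.org/rest/v0/vms/613f541c-4bed-fc77-7ca8-2db6b68f079c"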
  • Our hyperconverged storage solution

    43 Topics
    729 Posts
    SuperDuckGuyS
    @alcoralcor Thanks for the info. I thought maybe I was using too many disks, so I tried creating disk groups of 3-4 drives, but I hit the same issue.
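    As a rough illustration of what a disk group of a few drives looks like at the LVM layer XOSTOR builds on; the device names and group name are placeholders, not from the thread:

      # create a volume group from 3 local disks (device names are examples)
      vgcreate linstor_group /dev/sdb /dev/sdc /dev/sdd
      # check the group and its physical volumes
      vgs linstor_group
      pvs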
  • 33 Topics
    98 Posts
    @yann This is not PROD. If my understanding is correct, after some investigation, the storage path to the NetApp LUN (iSCSI) was lost while HA mode was active (the HA volumes were no longer accessible).

      [10:05 xcp-ng-poc-1 ~]# xe vm-list
      The server could not join the liveset because the HA daemon could not access the heartbeat disk.
      [10:06 xcp-ng-poc-1 ~]# xe host-emergency-ha-disable
      Error: This operation is dangerous and may cause data loss. This operation must be forced (use --force).
      [10:06 xcp-ng-poc-1 ~]# xe host-emergency-ha-disable --force
      [10:06 xcp-ng-poc-1 ~]# xe-toolstack-restart
      Executing xe-toolstack-restart
      done.
      [10:07 xcp-ng-poc-1 ~]#

    On the storage side:

      [10:09 xcp-ng-poc-1 ~]# xe pbd-list sr-uuid=16ec6b11-6110-7a27-4d94-dfcc09f34d15
      uuid ( RO)                  : be5ac5cc-bc70-4eef-8b01-a9ed98f83e23
          host-uuid ( RO)         : 0219cb2e-46b8-4657-bfa4-c924b59e373a
          sr-uuid ( RO)           : 16ec6b11-6110-7a27-4d94-dfcc09f34d15
          device-config (MRO)     : SCSIid: 3600a098038323566622b5a5977776557; targetIQN: iqn.1992-08.com.netapp:sn.89de0ec3fba011f0be0bd039eae42297:vs.8; targetport: 3260; target: 172.17.10.1; multihomelist: 172.17.10.1:3260,172.17.11.1:3260,172.17.1.1:3260,172.17.0.1:3260
          currently-attached ( RO): false

      uuid ( RO)                  : a2dd4324-ce32-5a5e-768f-cc0df10dc49a
          host-uuid ( RO)         : cb9a2dc3-cc1d-4467-99eb-6896503b4e11
          sr-uuid ( RO)           : 16ec6b11-6110-7a27-4d94-dfcc09f34d15
          device-config (MRO)     : multiSession: 172.17.1.1,3260,iqn.1992-08.com.netapp:sn.89de0ec3fba011f0be0bd039eae42297:vs.8|172.17.0.1,3260,iqn.1992-08.com.netapp:sn.89de0ec3fba011f0be0bd039eae42297:vs.8|; target: 172.17.0.1; targetIQN: *; SCSIid: 3600a098038323566622b5a5977776557; multihomelist: 172.17.1.1:3260,172.17.10.1:3260,172.17.11.1:3260,172.17.0.1:3260
          currently-attached ( RO): false

      [10:09 xcp-ng-poc-1 ~]# xe pbd-plug uuid=be5ac5cc-bc70-4eef-8b01-a9ed98f83e23
      Error code: SR_BACKEND_FAILURE_47
      Error parameters: , The SR is not available [opterr=no such volume group: VG_XenStorage-16ec6b11-6110-7a27-4d94-dfcc09f34d15],
      [10:10 xcp-ng-poc-1 ~]#

    After that, XO-Lite came back up correctly.
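    For reference, once the NetApp LUN is reachable again, reattaching the SR usually amounts to re-establishing the iSCSI sessions and re-plugging the PBDs. A minimal sketch, assuming the UUIDs above and that the volume group reappears once the device is back:

      # re-login to the configured iSCSI targets from dom0
      iscsiadm -m node --loginall=all
      # re-plug the PBD on each host, then rescan the SR
      xe pbd-plug uuid=be5ac5cc-bc70-4eef-8b01-a9ed98f83e23
      xe sr-scan uuid=16ec6b11-6110-7a27-4d94-dfcc09f34d15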