Categories

  • All news regarding the Xen and XCP-ng ecosystem

    142 Topics
    4k Posts
    @gduperrey Of course, the order matters. Now everything seems to be clear.
  • Everything related to the virtualization platform

    1k Topics
    15k Posts
    AlexanderKA
    @abudef exactly
  • 3k Topics
    27k Posts
    {
      "data": { "mode": "delta", "reportWhen": "failure" },
      "id": "1773474108084",
      "jobId": "3dcd13a8-de8b-47b6-945b-12dbad9c6234",
      "jobName": "ContRep",
      "message": "backup",
      "scheduleId": "522d611e-7cd9-4fe0-a9e1-b409927cd8c8",
      "start": 1773474108084,
      "status": "success",
      "infos": [
        {
          "data": {
            "vms": [
              "b1940325-7c09-7342-5a90-be2185c6d5b9",
              "86ab334a-92dc-324c-0c42-43aad3ae3bc2",
              "0f5c4931-a468-e75d-fa54-e1f9da0227a1"
            ]
          },
          "message": "vms"
        }
      ],
      "tasks": [
        {
          "data": { "type": "VM", "id": "b1940325-7c09-7342-5a90-be2185c6d5b9", "name_label": "PiHole wifi" },
          "id": "1773474110343",
          "message": "backup VM",
          "start": 1773474110343,
          "status": "success",
          "tasks": [
            { "id": "1773474111649", "message": "snapshot", "start": 1773474111649, "status": "success", "end": 1773474113141, "result": "d4b0607f-0837-c7ae-5c2d-6426995470bd" },
            {
              "data": { "id": "4f2f7ae2-024a-9ac7-add4-ffe7d569cae7", "isFull": true, "name_label": "Q1-ContRep", "type": "SR" },
              "id": "1773474113142",
              "message": "export",
              "start": 1773474113142,
              "status": "success",
              "tasks": [
                {
                  "id": "1773474114002", "message": "transfer", "start": 1773474114002, "status": "success",
                  "tasks": [
                    { "id": "1773474189159", "message": "target snapshot", "start": 1773474189159, "status": "success", "end": 1773474190075, "result": "OpaqueRef:fa356b5f-4b25-d3e6-5507-6ed81c32b1d8" }
                  ],
                  "end": 1773474190075,
                  "result": { "size": 4299161600 }
                }
              ],
              "end": 1773474190682
            }
          ],
          "end": 1773474191523
        },
        {
          "data": { "type": "VM", "id": "86ab334a-92dc-324c-0c42-43aad3ae3bc2", "name_label": "Home Assistant" },
          "id": "1773474191534",
          "message": "backup VM",
          "start": 1773474191534,
          "status": "success",
          "tasks": [
            { "id": "1773474191707", "message": "snapshot", "start": 1773474191707, "status": "success", "end": 1773474193196, "result": "c3f038c3-7ca9-cbbb-9f84-61e1fd30c9d5" },
            {
              "data": { "id": "4f2f7ae2-024a-9ac7-add4-ffe7d569cae7", "isFull": true, "name_label": "Q1-ContRep", "type": "SR" },
              "id": "1773474193196:0",
              "message": "export",
              "start": 1773474193196,
              "status": "success",
              "tasks": [
                {
                  "id": "1773474194123", "message": "transfer", "start": 1773474194123, "status": "success",
                  "tasks": [
                    { "id": "1773474462529", "message": "target snapshot", "start": 1773474462529, "status": "success", "end": 1773474463434, "result": "OpaqueRef:c13f3cab-29c8-4ef0-253f-de5998580cd9" }
                  ],
                  "end": 1773474463434,
                  "result": { "size": 15548284928 }
                }
              ],
              "end": 1773474464311
            }
          ],
          "end": 1773474466186
        },
        {
          "data": { "type": "VM", "id": "0f5c4931-a468-e75d-fa54-e1f9da0227a1", "name_label": "Sync Mate" },
          "id": "1773474466193",
          "message": "backup VM",
          "start": 1773474466193,
          "status": "success",
          "tasks": [
            { "id": "1773474466371", "message": "snapshot", "start": 1773474466371, "status": "success", "end": 1773474470399, "result": "36a17271-1c2f-4b26-0d86-dc0faf27fa17" },
            {
              "data": { "id": "4f2f7ae2-024a-9ac7-add4-ffe7d569cae7", "isFull": true, "name_label": "Q1-ContRep", "type": "SR" },
              "id": "1773474470399:0",
              "message": "export",
              "start": 1773474470399,
              "status": "success",
              "tasks": [
                {
                  "id": "1773474471561", "message": "transfer", "start": 1773474471561, "status": "success",
                  "tasks": [
                    { "id": "1773476263789", "message": "target snapshot", "start": 1773476263789, "status": "success", "end": 1773476264925, "result": "OpaqueRef:88bd55f4-64ea-fb16-f389-4b9f42ac459f" }
                  ],
                  "end": 1773476264925,
                  "result": { "size": 105526591488 }
                }
              ],
              "end": 1773476267354
            }
          ],
          "end": 1773476268187
        }
      ],
      "end": 1773476268187
    }
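    The report above is a tree of tasks, each with millisecond start/end timestamps and, on the "transfer" step, a result.size in bytes. A minimal sketch of summarizing such a report, based only on the fields visible above (the summarize helper itself is hypothetical, not part of Xen Orchestra):

    ```python
    # Sketch: summarize a backup report shaped like the one above.
    # Field names (tasks, message, start, end, result.size) come from the
    # report itself; the helper and its return shape are assumptions.

    def summarize(report):
        """Return (vm_name, duration_seconds, bytes_transferred) per 'backup VM' task."""
        rows = []
        for vm_task in report.get("tasks", []):
            if vm_task.get("message") != "backup VM":
                continue
            name = vm_task["data"]["name_label"]
            seconds = (vm_task["end"] - vm_task["start"]) / 1000  # timestamps are in ms
            transferred = 0
            stack = list(vm_task.get("tasks", []))
            while stack:  # walk the nested task tree
                task = stack.pop()
                if task.get("message") == "transfer":
                    transferred += task.get("result", {}).get("size", 0)
                stack.extend(task.get("tasks", []))
            rows.append((name, seconds, transferred))
        return rows
    ```

    Fed the report above, this would yield one row per VM; for example, the "PiHole wifi" task spans roughly 81 s and transfers about 4.3 GB.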
  • Our hyperconverged storage solution

    43 Topics
    729 Posts
    SuperDuckGuyS
    @alcoralcor Thanks for the info. I thought maybe I was using too many disks, so I've tried creating disk groups of 3-4 drives with the same issue.
  • 33 Topics
    98 Posts
    @yann This is not PROD. If my understanding is correct, after investigation, the storage path to the NetApp LUN (iSCSI) was lost while HA mode was active (the HA volumes were no longer accessible).

        [10:05 xcp-ng-poc-1 ~]# xe vm-list
        The server could not join the liveset because the HA daemon could not access the heartbeat disk.
        [10:06 xcp-ng-poc-1 ~]# xe host-emergency-ha-disable
        Error: This operation is dangerous and may cause data loss. This operation must be forced (use --force).
        [10:06 xcp-ng-poc-1 ~]# xe host-emergency-ha-disable --force
        [10:06 xcp-ng-poc-1 ~]# xe-toolstack-restart
        Executing xe-toolstack-restart
        done.
        [10:07 xcp-ng-poc-1 ~]#

    On the storage side:

        [10:09 xcp-ng-poc-1 ~]# xe pbd-list sr-uuid=16ec6b11-6110-7a27-4d94-dfcc09f34d15
        uuid ( RO)               : be5ac5cc-bc70-4eef-8b01-a9ed98f83e23
             host-uuid ( RO)     : 0219cb2e-46b8-4657-bfa4-c924b59e373a
             sr-uuid ( RO)       : 16ec6b11-6110-7a27-4d94-dfcc09f34d15
             device-config (MRO) : SCSIid: 3600a098038323566622b5a5977776557; targetIQN: iqn.1992-08.com.netapp:sn.89de0ec3fba011f0be0bd039eae42297:vs.8; targetport: 3260; target: 172.17.10.1; multihomelist: 172.17.10.1:3260,172.17.11.1:3260,172.17.1.1:3260,172.17.0.1:3260
             currently-attached ( RO): false

        uuid ( RO)               : a2dd4324-ce32-5a5e-768f-cc0df10dc49a
             host-uuid ( RO)     : cb9a2dc3-cc1d-4467-99eb-6896503b4e11
             sr-uuid ( RO)       : 16ec6b11-6110-7a27-4d94-dfcc09f34d15
             device-config (MRO) : multiSession: 172.17.1.1,3260,iqn.1992-08.com.netapp:sn.89de0ec3fba011f0be0bd039eae42297:vs.8|172.17.0.1,3260,iqn.1992-08.com.netapp:sn.89de0ec3fba011f0be0bd039eae42297:vs.8|; target: 172.17.0.1; targetIQN: *; SCSIid: 3600a098038323566622b5a5977776557; multihomelist: 172.17.1.1:3260,172.17.10.1:3260,172.17.11.1:3260,172.17.0.1:3260
             currently-attached ( RO): false

        [10:09 xcp-ng-poc-1 ~]# xe pbd-plug uuid=be5ac5cc-bc70-4eef-8b01-a9ed98f83e23
        Error code: SR_BACKEND_FAILURE_47
        Error parameters: , The SR is not available [opterr=no such volume group: VG_XenStorage-16ec6b11-6110-7a27-4d94-dfcc09f34d15],
        [10:10 xcp-ng-poc-1 ~]#

    After that, XO Lite restarted correctly.