Categories

  • All news regarding the Xen and XCP-ng ecosystem

    139 Topics
    4k Posts
    @olivierlambert said in LargeBlockSR for 4KiB blocksize disks:

        I don't see any reason it couldn't, but I prefer @dthenot to answer

    I'm able to create and use two local SRs using type largeblock, but upon restarting the host, one of them is always disconnected and I'm not sure how to reconnect it. I'm guessing it has something to do with the loop setup. Thoughts?

    pbd.connect

        {
          "id": "8d8b8a97-314f-8e07-56a5-170e9ac37973"
        }
        {
          "code": "SR_BACKEND_FAILURE_202",
          "params": [
            "",
            "General backend error [opterr=Command ['vgs', '--noheadings', '-o', 'vg_name,devices', 'XSLocalLargeBlock-83636851-e064-9d16-8356-0aca60f70553', '--config', 'devices{scan=[\"/dev/\"]}'] failed (Volume group \"XSLocalLargeBlock-83636851-e064-9d16-8356-0aca60f70553\" not found Cannot process volume group XSLocalLargeBlock-83636851-e064-9d16-8356-0aca60f70553): Input/output error]",
            ""
          ],
          "task": {
            "uuid": "bd8f1c89-e04c-adbb-381c-d8cc8434ac51",
            "name_label": "Async.PBD.plug",
            "name_description": "",
            "allowed_operations": [],
            "current_operations": {},
            "created": "20251124T14:58:07Z",
            "finished": "20251124T14:58:07Z",
            "status": "failure",
            "resident_on": "OpaqueRef:644ca431-7524-53a1-a152-33704b6cbbaf",
            "progress": 1,
            "type": "<none/>",
            "result": "",
            "error_info": [
              "SR_BACKEND_FAILURE_202",
              "",
              "General backend error [opterr=Command ['vgs', '--noheadings', '-o', 'vg_name,devices', 'XSLocalLargeBlock-83636851-e064-9d16-8356-0aca60f70553', '--config', 'devices{scan=[\"/dev/\"]}'] failed (Volume group \"XSLocalLargeBlock-83636851-e064-9d16-8356-0aca60f70553\" not found Cannot process volume group XSLocalLargeBlock-83636851-e064-9d16-8356-0aca60f70553): Input/output error]",
              ""
            ],
            "other_config": {},
            "subtask_of": "OpaqueRef:NULL",
            "subtasks": [],
            "backtrace": "(((process xapi)(filename lib/backtrace.ml)(line 210))((process xapi)(filename ocaml/xapi/storage_utils.ml)(line 145))((process xapi)(filename ocaml/xapi/xapi_pbd.ml)(line 191))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 141))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/message_forwarding.ml)(line 5776))((process xapi)(filename ocaml/xapi/rbac.ml)(line 229))((process xapi)(filename ocaml/xapi/rbac.ml)(line 239))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 78)))"
          },
          "message": "SR_BACKEND_FAILURE_202(, General backend error [opterr=Command ['vgs', '--noheadings', '-o', 'vg_name,devices', 'XSLocalLargeBlock-83636851-e064-9d16-8356-0aca60f70553', '--config', 'devices{scan=[\"/dev/\"]}'] failed (Volume group \"XSLocalLargeBlock-83636851-e064-9d16-8356-0aca60f70553\" not found Cannot process volume group XSLocalLargeBlock-83636851-e064-9d16-8356-0aca60f70553): Input/output error], )",
          "name": "XapiError",
          "stack": "XapiError: SR_BACKEND_FAILURE_202(, General backend error [opterr=Command ['vgs', '--noheadings', '-o', 'vg_name,devices', 'XSLocalLargeBlock-83636851-e064-9d16-8356-0aca60f70553', '--config', 'devices{scan=[\"/dev/\"]}'] failed (Volume group \"XSLocalLargeBlock-83636851-e064-9d16-8356-0aca60f70553\" not found Cannot process volume group XSLocalLargeBlock-83636851-e064-9d16-8356-0aca60f70553): Input/output error], )
              at Function.wrap (file:///opt/xo/xo-builds/xen-orchestra-202510021218/packages/xen-api/_XapiError.mjs:16:12)
              at default (file:///opt/xo/xo-builds/xen-orchestra-202510021218/packages/xen-api/_getTaskResult.mjs:13:29)
              at Xapi._addRecordToCache (file:///opt/xo/xo-builds/xen-orchestra-202510021218/packages/xen-api/index.mjs:1073:24)
              at file:///opt/xo/xo-builds/xen-orchestra-202510021218/packages/xen-api/index.mjs:1107:14
              at Array.forEach (<anonymous>)
              at Xapi._processEvents (file:///opt/xo/xo-builds/xen-orchestra-202510021218/packages/xen-api/index.mjs:1097:12)
              at Xapi._watchEvents (file:///opt/xo/xo-builds/xen-orchestra-202510021218/packages/xen-api/index.mjs:1270:14)"
        }
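    A minimal diagnostic sketch for this situation, under the assumption (matching the poster's own guess) that the largeblock driver exposes the 4KiB-sector disk through a loop device that was not recreated at boot, leaving LVM unable to see the XSLocalLargeBlock volume group. The commands below only inspect state and retry the plug; the PBD uuid is the one from the pbd.connect call above, and this is a guess at the failure mode, not a confirmed fix:

        # Check which loop devices exist after the reboot
        losetup -a

        # Check whether LVM currently sees the SR's volume group at all
        vgs --noheadings -o vg_name,devices

        # Once the backing device is present again, retry plugging the PBD
        xe pbd-plug uuid=8d8b8a97-314f-8e07-56a5-170e9ac37973

    If losetup shows no loop device for the SR's disk, that would support the loop-setup theory and point at the driver's attach path rather than at LVM itself.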
  • Everything related to the virtualization platform

    1k Topics
    14k Posts
    @florent Ticket#7747444, and I just opened another ticket, Ticket#7748053, for the other client that is having the same issue. Support tunnels should be open for both clients. Thanks!
  • 3k Topics
    26k Posts
    There are no other jobs. I've now spun up a completely separate, fresh install of XCP-ng 8.3 to test the symptoms mentioned in the OP.

    Steps taken:

    • Installed XCP-ng 8.3
    • Text console over SSH
    • xe host-disable
    • xe host-evacuate (not needed yet, of course, since it's a brand-new install)
    • yum update
    • Reboot
    • Text console over SSH again
    • Created local ISO SR:
      xe sr-create name-label="Local ISO" type=iso device-config:location=/opt/var/iso_repository device-config:legacy_mode=true content-type=iso
    • cd /opt/var/iso_repository
    • wget # ISO for Ubuntu Server
    • xe sr-scan uuid=07dcbf24-761d-1332-9cd3-d7d67de1aa22
    • XO Lite: new VM, booted from ISO, installed Server
    • Text console to VM over SSH
    • apt update/upgrade, installed xe-guest-utilities
    • Installed XO from source (ronivay script)
    • XO: imported ISO for Ubuntu MATE
    • New VM: booted from ISO, installed MATE
    • apt update/upgrade, xe-guest-utilities
    • New CR backup job: nightly; VMs: 1 (MATE); retention: 15; full: every 7

    Exact same behaviour: after the first (full) CR job run, each additional (incremental) CR job run results in one more 'unhealthy VDI'. I've engaged in no other shenanigans. Plain vanilla XCP-ng and XO. There are only two VMs on this host: the XO-from-source VM, and a desktop OS VM which is the only target of the CR job. There are zero exceptions in SMlog. What do you need to see?
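    For anyone wanting to cross-check the 'unhealthy VDI' count outside of XO, a minimal sketch, assuming a file-based (EXT) local SR mounted under /var/run/sr-mount; <data-sr-uuid> is a placeholder for the VM's data SR, which is not the ISO SR uuid quoted above:

        # List the VDIs and snapshots the CR job has left on the SR
        xe vdi-list sr-uuid=<data-sr-uuid> params=uuid,name-label,is-a-snapshot

        # Print the VHD parent chains; XO flags a VDI as unhealthy when its
        # chain is longer than expected, i.e. coalesce has not yet caught up
        vhd-util scan -f -p -m '/var/run/sr-mount/<data-sr-uuid>/*.vhd'

        # Follow the storage manager's garbage-collector / coalesce activity
        grep -i coalesce /var/log/SMlog

    Comparing the chain depth before and after each incremental run would show whether the snapshots are accumulating because coalesce never triggers or because it starts and aborts.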
  • Our hyperconverged storage solution

    37 Topics
    690 Posts
    ronan-a
    @TestForEcho No ETA for now. Even before supporting QCOW2 on LINSTOR, we have several points to robustify (HA performance, potential race conditions, etc.). Regarding other important points:

    • Supporting volumes larger than 2TB has significant impacts on synchronization, RAM usage, coalesce, etc. We need to find a way to cope with these changes.
    • The coalesce algorithm should be changed to no longer depend on the write speed to the SR, in order to prevent potential coalesce interruptions; this is even more crucial for LINSTOR.
    • The coalesce behavior is not exactly the same for QCOW2, and we believe this could currently have a negative impact on the API implementation in the case of LINSTOR.

    In short: QCOW2 has led to changes that require a long-term investment in several topics before we can even consider supporting this format on XOSTOR.
  • 30 Topics
    85 Posts
    Glitch
    @Davidj-0 Thanks for the feedback, I was also using a Debian for my test ^^