Categories

  • All news regarding Xen and XCP-ng ecosystem

    142 Topics
    4k Posts
@gduperrey Installed on my home lab via rolling pool update: both hosts updated with no issues, and VMs migrated back to the 2nd host as expected this time. Fingers crossed the work servers have the same luck. I still have an open support ticket from the last round of updates for the work servers, and I'm waiting for a response before installing the patches.
  • Everything related to the virtualization platform

    1k Topics
    15k Posts
Hi everyone, I'm currently testing XCP-ng ISO modification, and I'm consistently hitting a GPG check failure: [image]

I understand why this happens, since I'm modifying one of the RPMs and the ISO itself. What puzzles me is why the ignore-GPG-check options I'm passing to the installer are ignored. First I tried passing --no-gpg-check and --no-repo-gpgcheck at boot: [image] But that seems to have no effect. I have also tried putting this in an answerfile: [image] and calling it at boot: [image]

Unfortunately, in both cases the GPG check still seems to be triggered and, as expected, fails. Does anyone have a hint about where my issue could be? How can I disable the GPG check? I'm guessing this is a syntax or config issue, but I can't find what it is. Thanks in advance for any help!
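An alternative to disabling the check, assuming the failure comes from the modified repository metadata on the remastered ISO, is to re-sign the repo with your own key and make the installer trust that key instead. The file names and key identity below are placeholders; this only sketches the gpg sign/verify round-trip, not the exact XCP-ng installer hook:

```shell
# Throwaway signing key with no passphrase -- name/email are placeholders.
export GNUPGHOME="$(mktemp -d)"
gpg --batch --passphrase '' --quick-generate-key \
    'ISO Rebuild <rebuild@example.com>' default default never

# Stand-in for the repo metadata file of the remastered ISO
# (on a real ISO this would be something like repodata/repomd.xml).
echo '<repomd/>' > repomd.xml

# Replace the original detached signature with one from our own key...
gpg --batch --yes --detach-sign --armor repomd.xml

# ...and verify it, as the installer's RPM/yum stack would do after
# importing the matching public key.
gpg --verify repomd.xml.asc repomd.xml && echo "signature OK"
```

On the ISO side you would export the public key (`gpg --export --armor`) and substitute it for the stock XCP-ng key the installer checks against; where exactly that key lives depends on the installer version, so it is worth checking the host-installer source for your release.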
  • 3k Topics
    28k Posts
Since updating to XOA 6.1.3, I have a VM with Secure Boot enabled that fails to replicate with the error "VTPM_MAX_AMOUNT_REACHED(1)". Retrying the backup allows it to complete.

{
  "data": {
    "id": "69d826f4-383c-f163-b59a-8f3ea5132fd1",
    "isFull": false,
    "name_label": "C50-DR-Win-NFS3-SR1",
    "type": "SR"
  },
  "id": "1775101037617",
  "message": "export",
  "start": 1775101037617,
  "status": "failure",
  "tasks": [
    {
      "id": "1775101039730",
      "message": "transfer",
      "start": 1775101039730,
      "status": "failure",
      "end": 1775101063330,
      "result": {
        "code": "VTPM_MAX_AMOUNT_REACHED",
        "params": [ "1" ],
        "call": {
          "duration": 3,
          "method": "VTPM.create",
          "params": [ "* session id *", "OpaqueRef:5263e3da-0772-f8c5-5344-32e81c08c37a", false ]
        },
        "message": "VTPM_MAX_AMOUNT_REACHED(1)",
        "name": "XapiError",
        "stack": "XapiError: VTPM_MAX_AMOUNT_REACHED(1)\n at XapiError.wrap (file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/_XapiError.mjs:16:12)\n at file:///usr/local/lib/node_modules/xo-server/node_modules/xen-api/transports/json-rpc.mjs:38:21\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)"
      }
    }
  ],
  "end": 1775101063749
}

It appears to be trying to create a new vTPM on the replica when one already exists? I'm not sure why it fails during the job run but completes on retry, but the behaviour is consistent.
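For digging into failures like this one, the task log can be inspected programmatically to pull out which XAPI call failed and with what error. A small sketch, using a trimmed excerpt of the log from this post (the structure is taken from the snippet above, not from a stable XO schema):

```python
import json

# Trimmed excerpt of the backup task log shown above; a real log comes
# from XO's backup records and carries more fields.
log = json.loads("""
{
  "message": "export",
  "status": "failure",
  "tasks": [
    {
      "message": "transfer",
      "status": "failure",
      "result": {
        "code": "VTPM_MAX_AMOUNT_REACHED",
        "params": ["1"],
        "call": {"method": "VTPM.create"},
        "message": "VTPM_MAX_AMOUNT_REACHED(1)"
      }
    }
  ]
}
""")

def failing_calls(entry):
    """Yield (error_code, xapi_method) for every failed sub-task."""
    for task in entry.get("tasks", []):
        if task.get("status") == "failure" and "result" in task:
            res = task["result"]
            yield res.get("code"), res.get("call", {}).get("method")

for code, method in failing_calls(log):
    print(f"{method} failed with {code}")
# -> VTPM.create failed with VTPM_MAX_AMOUNT_REACHED
```

Here the failing call is `VTPM.create` on the replica, which matches the suspicion that the job tries to create a vTPM when the replica already has its maximum of one.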
  • Our hyperconverged storage solution

    44 Topics
    731 Posts
@olivierlambert
Different use cases: Ceph works better with more hosts (at least 6 or 7 as a minimum), while XOSTOR is better from 3 up to 7 or 8 hosts. We might have better Ceph support for large clusters in the future.
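The rule of thumb above can be written down as a tiny helper; the thresholds are the ones from the answer (XOSTOR for roughly 3 to 7/8 hosts, Ceph from about 6-7 hosts upward), and the 6-8 host range is the overlap where either option is reasonable:

```python
def storage_suggestion(hosts: int) -> str:
    """Map a pool's host count to the sizing rule of thumb above."""
    if hosts < 3:
        return "neither (too few hosts for either solution)"
    if hosts <= 5:
        return "XOSTOR"
    if hosts <= 8:
        return "XOSTOR or Ceph (overlap zone)"
    return "Ceph"

print(storage_suggestion(3))   # XOSTOR
print(storage_suggestion(7))   # XOSTOR or Ceph (overlap zone)
print(storage_suggestion(12))  # Ceph
```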
  • 34 Topics
    101 Posts
@AtaxyaNetwork Thanks a lot for this feedback. I'll expand the article along those lines.