Categories

  • All news regarding Xen and XCP-ng ecosystem

    143 Topics
    4k Posts
    @olivierlambert Seeing some failures/errors on CR jobs. It leaves VDIs attached to the Control Domain... On the next run it normally works. I had not seen this error until after the current sm update. Running XO (commit 7e144). (One way to spot the leftovers is sketched after this item.)
    "message": "INTERNAL_ERROR(Storage_error ([S(Illegal_transition); [[S(Activated);S(RO)];[S(Activated);S(RW)]]]))",
    "name": "XapiError",
    "stack": "XapiError: INTERNAL_ERROR(Storage_error ([S(Illegal_transition);[[S(Activated);S(RO)];[S(Activated);S(RW)]]]))
        at XapiError.wrap (file:///opt/xo/xo-builds/xen-orchestra-202605050700/packages/xen-api/_XapiError.mjs:16:12)
        at default (file:///opt/xo/xo-builds/xen-orchestra-202605050700/packages/xen-api/_getTaskResult.mjs:13:29)
        at Xapi._addRecordToCache (file:///opt/xo/xo-builds/xen-orchestra-202605050700/packages/xen-api/index.mjs:1078:24)
        at file:///opt/xo/xo-builds/xen-orchestra-202605050700/packages/xen-api/index.mjs:1112:14
        at Array.forEach (<anonymous>)
        at Xapi._processEvents (file:///opt/xo/xo-builds/xen-orchestra-202605050700/packages/xen-api/index.mjs:1102:12)
        at Xapi._watchEvents (file:///opt/xo/xo-builds/xen-orchestra-202605050700/packages/xen-api/index.mjs:1275:14)
        at process.processTicksAndRejections (node:internal/process/task_queues:104:5)"
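    Not the confirmed fix here, but as a hedged sketch of how to find the VDIs the post describes as left attached to the Control Domain, one can list the dom0 VBDs with the xe CLI (on a pool, each host has its own control domain, hence the loop). The unplug/destroy step is only for VBDs you have verified are stale CR leftovers:

        # List VBDs currently plugged into each control domain (dom0)
        for DOM0 in $(xe vm-list is-control-domain=true --minimal | tr ',' ' '); do
            xe vbd-list vm-uuid="$DOM0" currently-attached=true params=uuid,vdi-uuid
        done
        # For a VBD confirmed as a stale leftover, detach it (the VDI data itself is untouched):
        # xe vbd-unplug uuid=<vbd-uuid>
        # xe vbd-destroy uuid=<vbd-uuid>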
  • Everything related to the virtualization platform

    1k Topics
    15k Posts
    bleader
    Copy Fail is documented in VSA-2026-013; we don't have an advisory for Dirty Frag yet, as we're still investigating it on the XCP-ng side. For XOA, unattended updates should have installed the patched Debian kernel; you just need to reboot it (see the check sketched below). The Debian security tracker states they are both fixed: https://security-tracker.debian.org/tracker/CVE-2026-31431 https://security-tracker.debian.org/tracker/CVE-2026-43284
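    As a hedged sanity check (not part of the advisory), you can confirm the XOA VM actually boots into the patched Debian kernel after the reboot by comparing the running kernel with the newest one installed by unattended upgrades:

        # Kernel version currently running
        uname -r
        # Installed linux-image packages and their versions
        dpkg -l 'linux-image-*' | awk '/^ii/ {print $2, $3}'
        # If the running version is older than the newest installed image, reboot again.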
  • 3k Topics
    28k Posts
    Hello there, still on XOA 6.3.3 here. Since the last big XO backup & CR rework, we have an annoying bug [image: 1778328477250-6db625da-2a61-4860-bc50-5c6adae0ae33-image.jpeg] This VM is 2.17 TB [image: 1778328523117-bc28b5ef-f37d-4bf9-b016-921309c17353-image.jpeg] How can the KEY be 9.1 GB? In fact the KEY size is correct, but once the retention of 10 points is reached, merging the last incremental into the first KEY point of the chain clobbers its recorded size. So in our restore view the data sizes are all wrong, since they are based on the sum of this key + 9 incrementals [image: 1778328703332-2e0a1587-365a-404f-9186-2b909605b3fa-image.jpeg] @florent could you do something about that? (A cross-check is sketched below.)
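    A hedged way to cross-check whether only the displayed metadata is wrong: sum the on-disk size of the key backup plus its incrementals on the backup remote and compare it with what the restore view reports. The path below is an assumption (adjust to your remote's mount point and VM UUID), and it assumes VHD-format delta backups:

        # Hypothetical mount point; xo-vm-backups/<vm-uuid> is the usual layout on an XO remote
        cd /mnt/backup-remote/xo-vm-backups/<vm-uuid>
        # Total on-disk size of the chain (key + incrementals)
        find . -name '*.vhd' -print0 | du -ch --files0-from=- | tail -n 1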
  • Our hyperconverged storage solution

    46 Topics
    740 Posts
    @dthenot said: @ccooke Hello, you should be able to make the XOSTOR SR work again if you update sm and sm-fairlock on the other hosts: yum update sm sm-fairlock Then you should be able to re-plug the SR on the master and proceed with the RPU. — Hello, I had the same problem, and that command resolved the issue. It needs to be run on every host (see the sketch below for doing it pool-wide). Everything is working fine again; however, I had to complete the pool update manually.
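    A minimal sketch of running that update on every pool member from the master, assuming root SSH access from the master to each host (host addresses come from xe; the yum command is the one from the post):

        # Apply the sm/sm-fairlock update on every host in the pool
        for h in $(xe host-list params=address --minimal | tr ',' ' '); do
            echo "== $h =="
            ssh root@"$h" 'yum update -y sm sm-fairlock'
        done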
  • 34 Topics
    102 Posts
    The remark has been incorporated into the article: https://www.myprivatelab.tech/xcp_lab_v2_ha#perte-master Thanks again for the feedback.