Categories

  • All news regarding the Xen and XCP-ng ecosystem

    143 Topics
    4k Posts
    @olivierlambert Seeing some failures/errors on CR jobs. It leaves VDIs attached to the Control Domain... On the next run it normally works. I had not seen this error until after the current sm update. Running XO (commit 7e144).
    "message": "INTERNAL_ERROR(Storage_error ([S(Illegal_transition); [[S(Activated);S(RO)];[S(Activated);S(RW)]]]))",
    "name": "XapiError",
    "stack": "XapiError: INTERNAL_ERROR(Storage_error ([S(Illegal_transition);[[S(Activated);S(RO)];[S(Activated);S(RW)]]]))
        at XapiError.wrap (file:///opt/xo/xo-builds/xen-orchestra-202605050700/packages/xen-api/_XapiError.mjs:16:12)
        at default (file:///opt/xo/xo-builds/xen-orchestra-202605050700/packages/xen-api/_getTaskResult.mjs:13:29)
        at Xapi._addRecordToCache (file:///opt/xo/xo-builds/xen-orchestra-202605050700/packages/xen-api/index.mjs:1078:24)
        at file:///opt/xo/xo-builds/xen-orchestra-202605050700/packages/xen-api/index.mjs:1112:14
        at Array.forEach (<anonymous>)
        at Xapi._processEvents (file:///opt/xo/xo-builds/xen-orchestra-202605050700/packages/xen-api/index.mjs:1102:12)
        at Xapi._watchEvents (file:///opt/xo/xo-builds/xen-orchestra-202605050700/packages/xen-api/index.mjs:1275:14)
        at process.processTicksAndRejections (node:internal/process/task_queues:104:5)"
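    As a troubleshooting aid (not from the thread itself), VDIs left attached to the control domain can be inspected from a host shell with standard `xe` commands. The sketch below only collects and prints the commands as a dry run; run them directly on an XCP-ng host to actually inspect dom0's VBDs.

    ```shell
    # Dry-run sketch: commands to list VBDs still plugged into the
    # control domain (dom0), where a failed CR job may have left VDIs.
    # The commands are stored and printed here rather than executed.
    CMDS='DOM0=$(xe vm-list is-control-domain=true params=uuid --minimal)
    xe vbd-list vm-uuid="$DOM0" params=uuid,vdi-uuid,currently-attached'
    printf '%s\n' "$CMDS"
    ```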
  • Everything related to the virtualization platform

    1k Topics
    15k Posts
    acebmxer
    @Chuckz See my answer regarding Nested Virtualization: https://xcp-ng.org/forum/post/105019 https://docs.xcp-ng.org/compute/#-nested-virtualization
  • 3k Topics
    28k Posts
    @poddingue There is also a notion of an appliance (a group of VMs): https://docs.xcp-ng.org/appendix/cli_reference/#appliance-commands, which lets you start/stop a group of VMs. I've never tried it, and it doesn't seem to support a boot order within the vApp either.
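    The appliance (vApp) grouping mentioned above can be sketched with the `xe` appliance commands from the linked CLI reference. The name and VM UUIDs below are hypothetical placeholders, and the commands are assembled as a printed dry-run plan rather than executed (on a real host, `xe appliance-create` returns the appliance UUID to use in the later steps):

    ```shell
    # Hypothetical VM UUIDs to group into one appliance (vApp).
    VM_UUIDS="vm-uuid-1 vm-uuid-2"

    # Build a dry-run plan of xe commands instead of running them.
    PLAN="xe appliance-create name-label=my-vapp"
    for vm in $VM_UUIDS; do
      # Assign each VM to the appliance created above.
      PLAN="$PLAN
    xe vm-param-set uuid=$vm appliance=<appliance-uuid>"
    done
    # Starting the appliance starts the whole group of VMs.
    PLAN="$PLAN
    xe appliance-start uuid=<appliance-uuid>"
    printf '%s\n' "$PLAN"
    ```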
  • Our hyperconverged storage solution

    46 Topics
    740 Posts
    @dthenot said: "@ccooke Hello, you should be able to make the XOSTOR SR work again if you update sm and sm-fairlock on the other hosts: yum update sm sm-fairlock. Then you should be able to re-plug the SR on the master and proceed with the RPU." Hello, I had the same problem, and the command resolved the issue. It needs to be run on every host. Everything is working fine again; however, I had to complete the pool update manually.
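    Since the fix has to be run on every host, the per-host step can be sketched as a loop. The host names are hypothetical, and the `ssh` invocations are echoed as a dry run; remove the leading `echo` to actually run the update across the pool:

    ```shell
    # Hypothetical pool member hostnames; replace with your own.
    HOSTS="xcp-host1 xcp-host2 xcp-host3"

    for h in $HOSTS; do
      # Dry run: prints the command for each host.
      # Remove 'echo' to execute the update over ssh.
      echo ssh "root@$h" "yum update -y sm sm-fairlock"
    done
    ```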
  • 34 Topics
    102 Posts
    The remark has been incorporated into the article: https://www.myprivatelab.tech/xcp_lab_v2_ha#perte-master Thanks again for the feedback.