
Top contributor

  • RE: VM, missing disk

    @MRisberg That's normal behavior when the VM isn't running.

    posted in Xen Orchestra
  • RE: S3 backup not retrying after error

    @florent No, I did not see that in the logs. What I did see is that this problem is bigger than I thought.

    It does more than just cause a VM backup failure. It also happens during the merge or other checks, which causes the backup process to destroy (remove) parts of other VM backups:

     Clean VM directory 
     parent VHD is missing
     parent VHD is missing
     parent VHD is missing
     some VHDs linked to the backup are missing
     some VHDs linked to the backup are missing
     some VHDs linked to the backup are missing
     some VHDs linked to the backup are missing
    

    and

     Clean VM directory 
     VHD check error
     some VHDs linked to the backup are missing
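
    The kind of application-level retry I am hoping for looks roughly like this. It is only a minimal sketch with the AWS SDK v3 (the getObjectWithRetry helper, the bucket/key/region values and the retry numbers are made up for illustration; this is not XO's actual code), but it shows the idea of retrying transient 5xx errors instead of treating them as a corrupted backup:

     // Hypothetical sketch, not XO's code: add an outer retry layer around a
     // flaky S3 read, on top of the SDK's own built-in attempts.
     import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";

     const s3 = new S3Client({ region: "us-east-1" }); // assumed region/credentials

     async function getObjectWithRetry(bucket: string, key: string, attempts = 5) {
       for (let i = 1; ; i++) {
         try {
           return await s3.send(new GetObjectCommand({ Bucket: bucket, Key: key }));
         } catch (error: any) {
           const status = error?.$metadata?.httpStatusCode;
           // Only retry transient server-side failures (HTTP 5xx); rethrow anything else.
           if (i >= attempts || status === undefined || status < 500) throw error;
           await new Promise((resolve) => setTimeout(resolve, 1000 * 2 ** i)); // exponential backoff
         }
       }
     }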
    
    posted in Xen Orchestra
  • RE: S3 backup not retrying after error

    @florent Last night's failure (commit 81ae8)...

        {
          "data": {
            "type": "VM",
            "id": "f80fdf51-65e5-132d-bb2a-936bbd2814fc"
          },
          "id": "1663912365483:2",
          "message": "backup VM",
          "start": 1663912365483,
          "status": "failure",
          "tasks": [
            {
              "id": "1663912365570",
              "message": "clean-vm",
              "start": 1663912365570,
              "status": "failure",
              "end": 1663912403372,
              "result": {
                "name": "InternalError",
                "$fault": "client",
                "$metadata": {
                  "httpStatusCode": 500,
                  "extendedRequestId": "jOYV90/W5XHJFnOq1mlfpaMT/T9EV4/EnSluEni+p9TJQykrtI0cJMntJqFThy/PvX/LN0XX4xXS",
                  "attempts": 3,
                  "totalRetryDelay": 369
                },
                "Code": "InternalError",
                "Detail": "None:UnexpectedError",
                "RequestId": "85780FD1B7DFCB7C",
                "HostId": "jOYV90/W5XHJFnOq1mlfpaMT/T9EV4/EnSluEni+p9TJQykrtI0cJMntJqFThy/PvX/LN0XX4xXS",
                "message": "We encountered an internal error.  Please retry the operation again later.",
                "stack": "InternalError: We encountered an internal error.  Please retry the operation again later.\n    at throwDefaultError (/opt/xo/xo-builds/xen-orchestra-202209221033/node_modules/@aws-sdk/smithy-client/dist-cjs/default-error-handler.js:8:22)\n    at deserializeAws_restXmlGetObjectCommandError (/opt/xo/xo-builds/xen-orchestra-202209221033/node_modules/@aws-sdk/client-s3/dist-cjs/protocols/Aws_restXml.js:4356:51)\n    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n    at async /opt/xo/xo-builds/xen-orchestra-202209221033/node_modules/@aws-sdk/middleware-serde/dist-cjs/deserializerMiddleware.js:7:24\n    at async /opt/xo/xo-builds/xen-orchestra-202209221033/node_modules/@aws-sdk/middleware-signing/dist-cjs/middleware.js:11:20\n    at async StandardRetryStrategy.retry (/opt/xo/xo-builds/xen-orchestra-202209221033/node_modules/@aws-sdk/middleware-retry/dist-cjs/StandardRetryStrategy.js:51:46)\n    at async /opt/xo/xo-builds/xen-orchestra-202209221033/node_modules/@aws-sdk/middleware-flexible-checksums/dist-cjs/flexibleChecksumsMiddleware.js:56:20\n    at async /opt/xo/xo-builds/xen-orchestra-202209221033/node_modules/@aws-sdk/middleware-logger/dist-cjs/loggerMiddleware.js:6:22\n    at async S3Handler._createReadStream (/opt/xo/xo-builds/xen-orchestra-202209221033/@xen-orchestra/fs/dist/s3.js:261:15)\n    at async S3Handler.readFile (/opt/xo/xo-builds/xen-orchestra-202209221033/@xen-orchestra/fs/dist/abstract.js:326:18)"
              }
            },
            {
              "id": "1663912517635",
              "message": "snapshot",
              "start": 1663912517635,
              "status": "success",
              "end": 1663912520335,
              "result": "85b00101-5704-c847-8c91-8806195154b4"
            },
            {
              "data": {
                "id": "db9ad0a8-bce6-4a2b-b9fd-5c4cecf059c4",
                "isFull": false,
                "type": "remote"
              },
              "id": "1663912520336",
              "message": "export",
              "start": 1663912520336,
              "status": "success",
              "tasks": [
                {
                  "id": "1663912520634",
                  "message": "transfer",
                  "start": 1663912520634,
                  "status": "success",
                  "end": 1663912549741,
                  "result": {
                    "size": 251742720
                  }
                },
                {
                  "id": "1663912551469",
                  "message": "clean-vm",
                  "start": 1663912551469,
                  "status": "success",
                  "end": 1663912629752,
                  "result": {
                    "merge": false
                  }
                }
              ],
              "end": 1663912629752
            }
          ],
          "end": 1663912629752
        },
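
    For what it's worth, the "attempts": 3 and StandardRetryStrategy frames above show the AWS SDK exhausting its default three attempts before the clean-vm task gives up. A hedged sketch of what a larger SDK-level retry budget would look like (the region and the numbers are placeholders, and I am not claiming this is where XO constructs its S3 client):

     // Hypothetical illustration, not XO's configuration: give the S3 client a
     // larger retry budget than the default 3 attempts seen in the log above.
     import { S3Client } from "@aws-sdk/client-s3";

     const s3 = new S3Client({
       region: "us-east-1",   // placeholder; use the remote's real region/endpoint
       maxAttempts: 10,       // default is 3, which is what the failing task exhausted
       retryMode: "adaptive", // standard backoff plus client-side rate limiting
     });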
    
    posted in Xen Orchestra
  • RE: S3 backup not retrying after error

    @florent Thanks. I'm running it. I'll report after a few days.

    posted in Xen Orchestra
  • RE: Xen-Orchestra broken after rollback

    @maverick said in Xen-Orchestra broken after rollback:

    c5d2726faa0d373ee58371d05395fec2affaa7a5

    Why did you restore to this particular commit? Looking at it on GitHub, it is related to Redis, as are many of the other commits around it.

    Perhaps you should fall back to an earlier commit, such as d87db05b2bee125305bf84d537246ee16342a198.

    posted in Xen Orchestra
  • RE: Xen-Orchestra broken after rollback

    You could review this thread to see if you have the same issue with a missing config file.

    Other options that come to mind --

    • Restore from snapshot
    • Restore from backup
    • Rebuild from scratch

    posted in Xen Orchestra
  • RE: Xen-Orchestra broken after rollback

    Have you tried rm -rf node_modules and then rebuilding with yarn; yarn build?

    posted in Xen Orchestra
  • RE: Orphan VDI snapshot after CR backup

    @olivierlambert It's still an ongoing issue (XO community commit f1ab6).

    Here is the error from XO when it fails to remove the old snapshot:

    Sep 21 16:00:59 xo1 xo-server[613294]: 2022-09-21T20:00:59.229Z xo:xapi:vm WARN VM_destroy: failed to destroy VDI {
    Sep 21 16:00:59 xo1 xo-server[613294]:   error: XapiError: HANDLE_INVALID(VBD, OpaqueRef:6b28b472-e82e-4117-a0c0-b61ee894e3b5)
    Sep 21 16:00:59 xo1 xo-server[613294]:       at XapiError.wrap (/opt/xo/xo-builds/xen-orchestra-202209211219/packages/xen-api/dist/_XapiError.js:26:12)
    Sep 21 16:00:59 xo1 xo-server[613294]:       at /opt/xo/xo-builds/xen-orchestra-202209211219/packages/xen-api/dist/transports/json-rpc.js:46:30
    Sep 21 16:00:59 xo1 xo-server[613294]:       at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
    Sep 21 16:00:59 xo1 xo-server[613294]:     code: 'HANDLE_INVALID',
    Sep 21 16:00:59 xo1 xo-server[613294]:     params: [ 'VBD', 'OpaqueRef:6b28b472-e82e-4117-a0c0-b61ee894e3b5' ],
    Sep 21 16:00:59 xo1 xo-server[613294]:     call: { method: 'VBD.get_VM', params: [Array] },
    Sep 21 16:00:59 xo1 xo-server[613294]:     url: undefined,
    Sep 21 16:00:59 xo1 xo-server[613294]:     task: undefined
    Sep 21 16:00:59 xo1 xo-server[613294]:   },
    Sep 21 16:00:59 xo1 xo-server[613294]:   vdiRef: 'OpaqueRef:56e6071e-eb67-4e02-b6d1-b814ea43eeeb',
    Sep 21 16:00:59 xo1 xo-server[613294]:   vmRef: 'OpaqueRef:31957bf1-2f2b-474d-a496-e2a2460f533f'
    Sep 21 16:00:59 xo1 xo-server[613294]: }
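
    To spell out what "fails to remove the old snapshot" looks like: the HANDLE_INVALID above is raised while resolving a VBD that no longer exists, and that aborts the cleanup and leaves the snapshot VDI orphaned. A rough sketch of the more tolerant behavior I would expect (the XapiClient interface and the helper below are made up for illustration; this is not xo-server's actual code):

     // Hypothetical sketch, not xo-server's code: during snapshot cleanup, treat
     // a stale reference (HANDLE_INVALID) as "already gone" instead of aborting
     // and leaving the snapshot VDI behind.
     interface XapiClient {
       // minimal assumed shape; the real client exposes much more
       call(method: string, ...args: unknown[]): Promise<unknown>;
     }

     async function callIgnoringStaleRefs(xapi: XapiClient, method: string, ...args: unknown[]) {
       try {
         return await xapi.call(method, ...args);
       } catch (error: any) {
         // HANDLE_INVALID means the reference no longer resolves, e.g. the VBD
         // was already destroyed by another task; rethrow anything else.
         if (error?.code !== "HANDLE_INVALID") throw error;
       }
     }

     // e.g. await callIgnoringStaleRefs(xapi, "VDI.destroy", vdiRef)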
    
    posted in Xen Orchestra
  • RE: VM, missing disk

    @MRisberg I will defer to others on how to fix your situation. If the VM's contents are important, I would make multiple backups in case of a catastrophic event. You should be able to export the VM to an XVA so that it can be reimported if needed.

    posted in Xen Orchestra