XCP-ng

    Backup Suddenly Failing

    • Pilow @JSylvia007

      @JSylvia007 did you try creating a new job, with just this VM?
      Is it still failing?

      A new job would have a separate UUID and create new folders with clean metadata on your NAS.

    • JSylvia007 @Pilow

        @Pilow - I did just that. Fails in the exact same way.

        • Pilow @JSylvia007

          @JSylvia007 it's a long-shot test, but if you have some space on the SR where the VM resides:
          shut the VM down and full-clone it.

          Try a backup of this clone.

          Report back?
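
          For anyone following along, the full-clone test can also be kicked off from the host CLI. A minimal sketch, assuming the VM and destination SR UUIDs from the job log below (swap in your own via `xe vm-list` / `xe sr-list`, and note the shutdown step assumes you can afford the downtime):

          ```shell
          # Example UUIDs from this thread -- replace with your own.
          VM_UUID="afe4bee2-745d-da4a-0016-c74751856556"
          SR_UUID="247ef8a6-9c10-e100-acd3-c9193f34ddc3"

          if command -v xe >/dev/null 2>&1; then
            # vm-copy is a full copy (every block of the source VDI is read),
            # unlike vm-clone, so a bad region on disk should surface here.
            xe vm-shutdown uuid="$VM_UUID"
            xe vm-copy vm="$VM_UUID" sr-uuid="$SR_UUID" new-name-label="ADMIN-VM02_COPY"
          else
            echo "xe CLI not found -- run this on an XCP-ng host" >&2
          fi
          ```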

          • JSylvia007 @Pilow

            @Pilow - I can try this, but not until a bit later.

            • JSylvia007

              @pilow & @florent - The plot thickens. I'm unable to full-clone the VM...

              {
                "id": "0mn6ds7ih",
                "properties": {
                  "method": "vm.copy",
                  "params": {
                    "vm": "afe4bee2-745d-da4a-0016-c74751856556",
                    "sr": "247ef8a6-9c10-e100-acd3-c9193f34ddc3",
                    "name": "ADMIN-VM02_COPY"
                  },
                  "name": "API call: vm.copy",
                  "userId": "b06e5d9f-a602-4b76-a7bb-b1c915712ca3",
                  "type": "api.call"
                },
                "start": 1774463552009,
                "status": "failure",
                "updatedAt": 1774464678345,
                "end": 1774464678344,
                "result": {
                  "code": "VDI_COPY_FAILED",
                  "params": [
                    "Fatal error: exception Unix.Unix_error(Unix.EIO, \"read\", \"\")\n"
                  ],
                  "task": {
                    "uuid": "555f90cc-12b7-7c2c-a2df-0f29a16a007e",
                    "name_label": "Async.VM.copy",
                    "name_description": "",
                    "allowed_operations": [],
                    "current_operations": {},
                    "created": "20260325T18:32:32Z",
                    "finished": "20260325T18:51:18Z",
                    "status": "failure",
                    "resident_on": "OpaqueRef:22c5ddea-00c6-f412-4439-536c4bbdca63",
                    "progress": 1,
                    "type": "<none/>",
                    "result": "",
                    "error_info": [
                      "VDI_COPY_FAILED",
                      "Fatal error: exception Unix.Unix_error(Unix.EIO, \"read\", \"\")\n"
                    ],
                    "other_config": {},
                    "subtask_of": "OpaqueRef:NULL",
                    "subtasks": [
                      "OpaqueRef:655cc4e3-0205-ba7d-5831-4b191ecfba9e"
                    ],
                    "backtrace": "(((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 77))((process xapi)(filename list.ml)(line 110))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 120))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 128))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 171))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 210))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 221))((process xapi)(filename list.ml)(line 121))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 223))((process xapi)(filename ocaml/xapi/xapi_vm_clone.ml)(line 461))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/xapi_vm.ml)(line 791))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 24))((process xapi)(filename ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml)(line 39))((process xapi)(filename ocaml/xapi/rbac.ml)(line 228))((process xapi)(filename ocaml/xapi/rbac.ml)(line 238))((process xapi)(filename ocaml/xapi/server_helpers.ml)(line 78)))"
                  },
                  "message": "VDI_COPY_FAILED(Fatal error: exception Unix.Unix_error(Unix.EIO, \"read\", \"\")\n)",
                  "name": "XapiError",
                  "stack": "XapiError: VDI_COPY_FAILED(Fatal error: exception Unix.Unix_error(Unix.EIO, \"read\", \"\")\n)\n    at XapiError.wrap (file:///opt/xo/xo-builds/xen-orchestra-202603241416/packages/xen-api/_XapiError.mjs:16:12)\n    at default (file:///opt/xo/xo-builds/xen-orchestra-202603241416/packages/xen-api/_getTaskResult.mjs:13:29)\n    at Xapi._addRecordToCache (file:///opt/xo/xo-builds/xen-orchestra-202603241416/packages/xen-api/index.mjs:1078:24)\n    at file:///opt/xo/xo-builds/xen-orchestra-202603241416/packages/xen-api/index.mjs:1112:14\n    at Array.forEach (<anonymous>)\n    at Xapi._processEvents (file:///opt/xo/xo-builds/xen-orchestra-202603241416/packages/xen-api/index.mjs:1102:12)\n    at Xapi._watchEvents (file:///opt/xo/xo-builds/xen-orchestra-202603241416/packages/xen-api/index.mjs:1275:14)"
                }
              }
              
              • Pilow @JSylvia007

                @JSylvia007

                      "Fatal error: exception Unix.Unix_error(Unix.EIO, \"read\", \"\")\n"
                

                Hmmm, is the SR failing?
                Can you restore the last known good state of this VM (in parallel with the one in production) and try to back up this restored version?
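
                Before restoring anything, it may be worth checking whether the host itself is logging read errors behind that `Unix.EIO`. A rough sketch to run in the XCP-ng dom0 — the log path and plain `/dev/sd?` device names are assumptions; disks behind a hardware RAID controller usually need smartctl's `-d` controller option instead:

                ```shell
                # Kernel-level I/O errors usually accompany a userspace EIO
                # like the one xapi reported; this matches the common log lines.
                ERR_PATTERN='I/O error|blk_update_request|critical medium error'

                grep -iE "$ERR_PATTERN" /var/log/kern.log 2>/dev/null | tail -n 20

                # Quick SMART health verdict per visible disk (skips if none match).
                for dev in /dev/sd?; do
                  [ -e "$dev" ] || continue
                  smartctl -H "$dev" 2>/dev/null | grep -i 'overall-health' || true
                done
                ```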

                • JSylvia007 @Pilow

                  @Pilow - Well... Didn't you ruin my day yesterday... LOL.

                  Long story short... the local SR is on a RAID 5 array in the server. The damn thing wasn't alerting that a drive had failed (hard failed; it's not even showing up in the RAID controller anymore). The array reports degraded but otherwise fine... so it's weird to me that this could be related. I have a drive on order and plan to rebuild the array as soon as it arrives.

                  Once that happens, I will start investigating this again... Maybe it magically clears up.

                  • Pilow @JSylvia007

                    @JSylvia007 hoho... beware of rebuilding an array on old disks

                    • JSylvia007 @Pilow

                      @Pilow - This is why I'm nervous... BUT... everything else has a good backup. I plan to shut down all the VMs and back them up once more while I rebuild the array.
                      Fingers crossed.

                      • ph7

                        Living on the edge in my home lab

                        [Screenshot attachment: Screenshot 2026-03-26 at 19-08-50 scrutiny.png]

                        In my other NAS, the oldest drive is closing in on 11 years 😨

