XCP-ng

    FILE RESTORE / overlapping loop device exists

    • Pilow
      last edited by Pilow

      Hi, on the latest channel of XOA, we get this error:

      {
        "id": "0miuqao5o",
        "properties": {
          "method": "backupNg.listFiles",
          "params": {
            "remote": "92ed64a4-e073-4fe9-8db9-11770b7ea2da",
            "disk": "/xo-vm-backups/ec9e8a54-a78e-8ca8-596e-20ebeaaa4308/vdis/70dec2db-a660-4bf4-b8f9-7c90e7e45156/7fe5a104-e9a3-4e16-951c-f88ce78e3b2a/20251206T161309Z.alias.vhd",
            "path": "/",
            "partition": "6f2859cc-5df3-4c47-bd05-37d3b066f11e"
          },
          "name": "API call: backupNg.listFiles",
          "userId": "22728371-56fa-4767-b8a5-0fd59d4b9fd8",
          "type": "api.call"
        },
        "start": 1765051845324,
        "status": "failure",
        "updatedAt": 1765051845346,
        "end": 1765051845346,
        "result": {
          "code": -32000,
          "data": {
            "code": 32,
            "killed": false,
            "signal": null,
            "cmd": "mount --options=loop,ro,norecovery,sizelimit=209715200,offset=1048576 --source=/tmp/g82oettc5oh/vhd0 --target=/tmp/k96ucr0xyd",
            "stack": "Error: Command failed: mount --options=loop,ro,norecovery,sizelimit=209715200,offset=1048576 --source=/tmp/g82oettc5oh/vhd0 --target=/tmp/k96ucr0xyd\nmount: /tmp/k96ucr0xyd: overlapping loop device exists for /tmp/g82oettc5oh/vhd0.\n\n    at genericNodeError (node:internal/errors:984:15)\n    at wrappedFn (node:internal/errors:538:14)\n    at ChildProcess.exithandler (node:child_process:422:12)\n    at ChildProcess.emit (node:events:518:28)\n    at ChildProcess.patchedEmit [as emit] (/usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/@xen-orchestra/log/configure.js:52:17)\n    at maybeClose (node:internal/child_process:1104:16)\n    at Process.ChildProcess._handle.onexit (node:internal/child_process:304:5)\n    at Process.callbackTrampoline (node:internal/async_hooks:130:17)"
          },
          "message": "Command failed: mount --options=loop,ro,norecovery,sizelimit=209715200,offset=1048576 --source=/tmp/g82oettc5oh/vhd0 --target=/tmp/k96ucr0xyd\nmount: /tmp/k96ucr0xyd: overlapping loop device exists for /tmp/g82oettc5oh/vhd0.\n"
        }
      }
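
      In case it helps debugging, this is roughly how a leftover loop device could be checked and cleared on the proxy (a sketch, assumed to be run as root on the XO Proxy VM; the loop device name below is only an example):

      # list loop devices attached to the extracted VHD from the failed mount above
      losetup -j /tmp/g82oettc5oh/vhd0

      # list all loop devices and their backing files
      losetup -a

      # detach a stale loop device found above (example device name)
      losetup -d /dev/loop1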
      

      Sometimes we can get through the volume/partition selection, but then the restoration never ends...

      The remote is working, tested OK.
      The remote is accessed by an XO Proxy that has been rebooted.

      Backups TO this remote are OK.
      Restoring the FULL VM (same VM, same remote) is also OK.

      Only granular file restore is not working...

      Any ideas?

      • Pilow @Pilow
        last edited by Pilow

        Another log, from listPartitions:

        {
          "id": "0miuq9mt5",
          "properties": {
            "method": "backupNg.listPartitions",
            "params": {
              "remote": "92ed64a4-e073-4fe9-8db9-11770b7ea2da",
              "disk": "/xo-vm-backups/b1eef06b-52c1-e02a-4f59-1692194e2376/vdis/87966399-d428-431d-a067-bb99a8fdd67a/5f28aed0-a08e-42ff-8e88-5c6c01d78122/20251206T161106Z.alias.vhd"
            },
            "name": "API call: backupNg.listPartitions",
            "userId": "22728371-56fa-4767-b8a5-0fd59d4b9fd8",
            "type": "api.call"
          },
          "start": 1765051796921,
          "status": "failure",
          "updatedAt": 1765051856924,
          "end": 1765051856924,
          "result": {
            "url": "https://10.xxx.xxx.61/api/v1",
            "originalUrl": "https://10.xxx.xxx.61/api/v1",
            "message": "HTTP connection has timed out",
            "name": "Error",
            "stack": "Error: HTTP connection has timed out\n    at ClientRequest.<anonymous> (/usr/local/lib/node_modules/xo-server/node_modules/http-request-plus/index.js:61:25)\n    at ClientRequest.emit (node:events:518:28)\n    at ClientRequest.patchedEmit [as emit] (/usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/log/configure.js:52:17)\n    at TLSSocket.emitRequestTimeout (node:_http_client:849:9)\n    at Object.onceWrapper (node:events:632:28)\n    at TLSSocket.emit (node:events:530:35)\n    at TLSSocket.patchedEmit [as emit] (/usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/log/configure.js:52:17)\n    at TLSSocket.Socket._onTimeout (node:net:595:8)\n    at listOnTimeout (node:internal/timers:581:17)\n    at processTimers (node:internal/timers:519:7)"
          }
        }
        
        {
          "id": "0miunp2s1",
          "properties": {
            "method": "backupNg.listPartitions",
            "params": {
              "remote": "92ed64a4-e073-4fe9-8db9-11770b7ea2da",
              "disk": "/xo-vm-backups/b1eef06b-52c1-e02a-4f59-1692194e2376/vdis/87966399-d428-431d-a067-bb99a8fdd67a/5f28aed0-a08e-42ff-8e88-5c6c01d78122/20251203T161431Z.alias.vhd"
            },
            "name": "API call: backupNg.listPartitions",
            "userId": "22728371-56fa-4767-b8a5-0fd59d4b9fd8",
            "type": "api.call"
          },
          "start": 1765047478609,
          "status": "failure",
          "updatedAt": 1765047530203,
          "end": 1765047530203,
          "result": {
            "code": -32000,
            "data": {
              "code": 5,
              "killed": false,
              "signal": null,
              "cmd": "vgchange -an cl",
              "stack": "Error: Command failed: vgchange -an cl\nFile descriptor 27 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 28 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 35 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 38 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 39 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\n  WARNING: Not using device /dev/loop3 for PV 3C2k4y-QHLd-KfQH-HACR-NVP5-HK3u-qQdEbl.\n  WARNING: PV 3C2k4y-QHLd-KfQH-HACR-NVP5-HK3u-qQdEbl prefers device /dev/loop1 because device is used by LV.\n  Logical volume cl/root in use.\n  Can't deactivate volume group \"cl\" with 1 open logical volume(s)\n\n    at genericNodeError (node:internal/errors:984:15)\n    at wrappedFn (node:internal/errors:538:14)\n    at ChildProcess.exithandler (node:child_process:422:12)\n    at ChildProcess.emit (node:events:518:28)\n    at ChildProcess.patchedEmit [as emit] (/usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/@xen-orchestra/log/configure.js:52:17)\n    at maybeClose (node:internal/child_process:1104:16)\n    at Process.ChildProcess._handle.onexit (node:internal/child_process:304:5)\n    at Process.callbackTrampoline (node:internal/async_hooks:130:17)"
            },
            "message": "Command failed: vgchange -an cl\nFile descriptor 27 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 28 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 35 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 38 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 39 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\n  WARNING: Not using device /dev/loop3 for PV 3C2k4y-QHLd-KfQH-HACR-NVP5-HK3u-qQdEbl.\n  WARNING: PV 3C2k4y-QHLd-KfQH-HACR-NVP5-HK3u-qQdEbl prefers device /dev/loop1 because device is used by LV.\n  Logical volume cl/root in use.\n  Can't deactivate volume group \"cl\" with 1 open logical volume(s)\n"
          }
        }
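
        If it helps, this is roughly how the stale LVM and loop state could be inspected and cleared on the proxy (a sketch; the device-mapper name cl-root is assumed from cl/root and may differ):

        # loop devices and LVM PVs (the duplicate-PV warnings mention /dev/loop1 and /dev/loop3)
        losetup -a
        pvs

        # check what is keeping cl/root open
        dmsetup info cl-root

        # once nothing holds the LV, deactivation and detaching should succeed
        umount /dev/mapper/cl-root
        vgchange -an cl
        losetup -d /dev/loop1 /dev/loop3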
        
        • Pilow
          last edited by Pilow

          On another, simpler install (one host, one XOA, no proxy, an SMB remote on the same LAN rather than an S3 remote), XOA 5.112.1:

          Same problem!

          I think something has been broken along the way. @bastien-nollet @florent

          Granular file restore is important for us; otherwise we will have to use Veeam Agent backups instead of XO Backup.

          • Pilow @Pilow

            Any ideas, anyone?

            Help needed 😃

            • ph7 @Pilow

              @Pilow

              granular file restore

              I'm not sure what granular file restore means, but when I read your first post I had to test backup/file restore, and I was able to restore files.
              XO CE 1640a from Dec 03, if I remember correctly.

              • Pilow @ph7
                last edited by Pilow

                @ph7 That's it. I can't; see the failed task logs I provided earlier.

                I can restore a full VM, but not its files. Whether Windows or various flavors of Linux (Debian, Ubuntu, Alma, ...), same problem.

                I think something is wrong somewhere, but I don't know where...

                • ph7 @Pilow

                  @Pilow
                  I'll spin up my XOAs and test.

                  • ph7 @ph7

                    Crap.
                    File restore doesn't work on the free version, sorry.

                    • ph7 @ph7

                      But it still works on 76abf from Dec 09.

                      • Pilow @ph7

                        @ph7 Thank you for your tests.

                        Some Vates devs are lurking in these forums; they will probably stumble upon this post soon 😛

                        • olivierlambert Vates 🪐 Co-Founder CEO

                          Feel free to create a trial on an XOA, test both stable and latest, and tell us.

                          It might also be related to your environment, but if it works at some commit and not at a later one, that's weird. @florent & @bastien-nollet maybe?
