XCP-ng

    FILE RESTORE / overlapping loop device exists

      Pilow
      last edited by Pilow

      Hi, on the XOA latest channel, we get this error:

      {
        "id": "0miuqao5o",
        "properties": {
          "method": "backupNg.listFiles",
          "params": {
            "remote": "92ed64a4-e073-4fe9-8db9-11770b7ea2da",
            "disk": "/xo-vm-backups/ec9e8a54-a78e-8ca8-596e-20ebeaaa4308/vdis/70dec2db-a660-4bf4-b8f9-7c90e7e45156/7fe5a104-e9a3-4e16-951c-f88ce78e3b2a/20251206T161309Z.alias.vhd",
            "path": "/",
            "partition": "6f2859cc-5df3-4c47-bd05-37d3b066f11e"
          },
          "name": "API call: backupNg.listFiles",
          "userId": "22728371-56fa-4767-b8a5-0fd59d4b9fd8",
          "type": "api.call"
        },
        "start": 1765051845324,
        "status": "failure",
        "updatedAt": 1765051845346,
        "end": 1765051845346,
        "result": {
          "code": -32000,
          "data": {
            "code": 32,
            "killed": false,
            "signal": null,
            "cmd": "mount --options=loop,ro,norecovery,sizelimit=209715200,offset=1048576 --source=/tmp/g82oettc5oh/vhd0 --target=/tmp/k96ucr0xyd",
            "stack": "Error: Command failed: mount --options=loop,ro,norecovery,sizelimit=209715200,offset=1048576 --source=/tmp/g82oettc5oh/vhd0 --target=/tmp/k96ucr0xyd\nmount: /tmp/k96ucr0xyd: overlapping loop device exists for /tmp/g82oettc5oh/vhd0.\n\n    at genericNodeError (node:internal/errors:984:15)\n    at wrappedFn (node:internal/errors:538:14)\n    at ChildProcess.exithandler (node:child_process:422:12)\n    at ChildProcess.emit (node:events:518:28)\n    at ChildProcess.patchedEmit [as emit] (/usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/@xen-orchestra/log/configure.js:52:17)\n    at maybeClose (node:internal/child_process:1104:16)\n    at Process.ChildProcess._handle.onexit (node:internal/child_process:304:5)\n    at Process.callbackTrampoline (node:internal/async_hooks:130:17)"
          },
          "message": "Command failed: mount --options=loop,ro,norecovery,sizelimit=209715200,offset=1048576 --source=/tmp/g82oettc5oh/vhd0 --target=/tmp/k96ucr0xyd\nmount: /tmp/k96ucr0xyd: overlapping loop device exists for /tmp/g82oettc5oh/vhd0.\n"
        }
      }
      

      Sometimes we can get through the volume/partition selection, but then the restoration never ends...

      The remote is working, tested OK.
      The remote is accessed by an XO proxy that has been rebooted.

      Backups TO this remote are OK.
      Restoring a FULL VM of the same VM from the same remote is also OK.

      Only the granular file restore is not working...

      Any idea?
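
      For reference, the mount error above says a loop device is still attached to the temporary VHD from a previous attempt. Below is a minimal cleanup sketch, assuming shell access on the XO proxy; the /tmp paths are only the examples taken from the log and change on every attempt.

      # minimal sketch, assuming shell access on the XO proxy
      # list all loop devices and their backing files
      losetup -a

      # show only the loop devices still attached to the temporary VHD from the log
      losetup -j /tmp/g82oettc5oh/vhd0

      # if the mount target is still present, unmount it first
      umount /tmp/k96ucr0xyd

      # then detach the stale loop device reported above (replace loopN with the real device)
      losetup -d /dev/loopN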

        Pilow @Pilow
        last edited by Pilow

        Another log, from listPartitions:

        {
          "id": "0miuq9mt5",
          "properties": {
            "method": "backupNg.listPartitions",
            "params": {
              "remote": "92ed64a4-e073-4fe9-8db9-11770b7ea2da",
              "disk": "/xo-vm-backups/b1eef06b-52c1-e02a-4f59-1692194e2376/vdis/87966399-d428-431d-a067-bb99a8fdd67a/5f28aed0-a08e-42ff-8e88-5c6c01d78122/20251206T161106Z.alias.vhd"
            },
            "name": "API call: backupNg.listPartitions",
            "userId": "22728371-56fa-4767-b8a5-0fd59d4b9fd8",
            "type": "api.call"
          },
          "start": 1765051796921,
          "status": "failure",
          "updatedAt": 1765051856924,
          "end": 1765051856924,
          "result": {
            "url": "https://10.xxx.xxx.61/api/v1",
            "originalUrl": "https://10.xxx.xxx.61/api/v1",
            "message": "HTTP connection has timed out",
            "name": "Error",
            "stack": "Error: HTTP connection has timed out\n    at ClientRequest.<anonymous> (/usr/local/lib/node_modules/xo-server/node_modules/http-request-plus/index.js:61:25)\n    at ClientRequest.emit (node:events:518:28)\n    at ClientRequest.patchedEmit [as emit] (/usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/log/configure.js:52:17)\n    at TLSSocket.emitRequestTimeout (node:_http_client:849:9)\n    at Object.onceWrapper (node:events:632:28)\n    at TLSSocket.emit (node:events:530:35)\n    at TLSSocket.patchedEmit [as emit] (/usr/local/lib/node_modules/xo-server/node_modules/@xen-orchestra/log/configure.js:52:17)\n    at TLSSocket.Socket._onTimeout (node:net:595:8)\n    at listOnTimeout (node:internal/timers:581:17)\n    at processTimers (node:internal/timers:519:7)"
          }
        }
        
        {
          "id": "0miunp2s1",
          "properties": {
            "method": "backupNg.listPartitions",
            "params": {
              "remote": "92ed64a4-e073-4fe9-8db9-11770b7ea2da",
              "disk": "/xo-vm-backups/b1eef06b-52c1-e02a-4f59-1692194e2376/vdis/87966399-d428-431d-a067-bb99a8fdd67a/5f28aed0-a08e-42ff-8e88-5c6c01d78122/20251203T161431Z.alias.vhd"
            },
            "name": "API call: backupNg.listPartitions",
            "userId": "22728371-56fa-4767-b8a5-0fd59d4b9fd8",
            "type": "api.call"
          },
          "start": 1765047478609,
          "status": "failure",
          "updatedAt": 1765047530203,
          "end": 1765047530203,
          "result": {
            "code": -32000,
            "data": {
              "code": 5,
              "killed": false,
              "signal": null,
              "cmd": "vgchange -an cl",
              "stack": "Error: Command failed: vgchange -an cl\nFile descriptor 27 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 28 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 35 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 38 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 39 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\n  WARNING: Not using device /dev/loop3 for PV 3C2k4y-QHLd-KfQH-HACR-NVP5-HK3u-qQdEbl.\n  WARNING: PV 3C2k4y-QHLd-KfQH-HACR-NVP5-HK3u-qQdEbl prefers device /dev/loop1 because device is used by LV.\n  Logical volume cl/root in use.\n  Can't deactivate volume group \"cl\" with 1 open logical volume(s)\n\n    at genericNodeError (node:internal/errors:984:15)\n    at wrappedFn (node:internal/errors:538:14)\n    at ChildProcess.exithandler (node:child_process:422:12)\n    at ChildProcess.emit (node:events:518:28)\n    at ChildProcess.patchedEmit [as emit] (/usr/local/lib/node_modules/@xen-orchestra/proxy/node_modules/@xen-orchestra/log/configure.js:52:17)\n    at maybeClose (node:internal/child_process:1104:16)\n    at Process.ChildProcess._handle.onexit (node:internal/child_process:304:5)\n    at Process.callbackTrampoline (node:internal/async_hooks:130:17)"
            },
            "message": "Command failed: vgchange -an cl\nFile descriptor 27 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 28 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 35 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 38 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\nFile descriptor 39 (/dev/fuse) leaked on vgchange invocation. Parent PID 3769972: node\n  WARNING: Not using device /dev/loop3 for PV 3C2k4y-QHLd-KfQH-HACR-NVP5-HK3u-qQdEbl.\n  WARNING: PV 3C2k4y-QHLd-KfQH-HACR-NVP5-HK3u-qQdEbl prefers device /dev/loop1 because device is used by LV.\n  Logical volume cl/root in use.\n  Can't deactivate volume group \"cl\" with 1 open logical volume(s)\n"
          }
        }
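
        The second error points at the same leftover state: LVM sees duplicate PVs on /dev/loop1 and /dev/loop3, and vgchange cannot deactivate the "cl" volume group because cl/root is still open. A rough check/cleanup sketch on the proxy, same assumptions as above (shell access; the device and VG names are taken from the log):

        # sketch only, assuming shell access on the XO proxy
        # show which loop devices and device-mapper volumes are still active
        losetup -a
        lsblk
        dmsetup info -c

        # deactivate the leftover logical volume, then the volume group named in the log
        lvchange -an cl/root
        vgchange -an cl

        # finally detach any loop devices still pointing at old temporary vhd files
        losetup -d /dev/loop1 /dev/loop3    # adjust to what losetup -a reports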
        