XCP-ng

    Problem with file-level restore from delta backup from LVM partition

    Xen Orchestra
    • othmar @Danp

      @Danp yes, it's the same on XOA in trial mode, latest patched.

      • lukas

        It's the same here. The problem is present in both the community version and the XOA trial.

        • olivierlambert (Vates 🪐 Co-Founder & CEO)

          There's obviously something different between the VMs you use and the ones we use for tests. Would you be able to provide a VM export so we can deploy it on our side, back it up, and see why file restore doesn't work?
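
          For reference, a minimal sketch of producing such an export from the CLI, assuming shell access to an XCP-ng host; the VM name and target path below are placeholders, and the export can equally be done from the XO web UI:

          # Find the VM's UUID from its name label (placeholder name)
          xe vm-list name-label="my-test-vm" params=uuid
          # Export the VM (ideally halted, or export a snapshot) to an XVA file
          xe vm-export vm=<vm-uuid> filename=/mnt/share/my-test-vm.xva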

          • shadowdao

            I am seeing the same issue with LVM restores. It exists in both the Trial and Community editions.

            backupNg.listFiles
            {
              "remote": "c6f2b11a-4065-4a8b-b75f-e16bf2aeb5f5",
              "disk": "xo-vm-backups/8a0d70df-9659-b307-43f4-fc37133d9d66/vdis/2dcbfc0b-5d0a-4f84-b7ec-e89b1747e0b4/47f611b1-7c83-4319-9fcf-aad09a025edc/20200319T145807Z.vhd",
              "path": "/",
              "partition": "ea853d6d-01"
            }
            {
              "command": "mount --options=loop,ro,offset=1048576 --source=/tmp/tmp-458p174rEvrE3fU/vhdi1 --target=/tmp/tmp-458llR4TibV7F12",
              "exitCode": 32,
              "stdout": "",
              "stderr": "mount: /tmp/tmp-458llR4TibV7F12: failed to setup loop device for /tmp/tmp-458p174rEvrE3fU/vhdi1.",
              "failed": true,
              "timedOut": false,
              "isCanceled": false,
              "killed": false,
              "message": "Command failed with exit code 32: mount --options=loop,ro,offset=1048576 --source=/tmp/tmp-458p174rEvrE3fU/vhdi1 --target=/tmp/tmp-458llR4TibV7F12",
              "name": "Error",
              "stack": "Error: Command failed with exit code 32: mount --options=loop,ro,offset=1048576 --source=/tmp/tmp-458p174rEvrE3fU/vhdi1 --target=/tmp/tmp-458llR4TibV7F12
                at makeError (/opt/xen-orchestra/node_modules/execa/lib/error.js:56:11)
                at handlePromise (/opt/xen-orchestra/node_modules/execa/index.js:114:26)
                at <anonymous>"
            } 
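
            A rough way to narrow down a "failed to setup loop device" error like this one, assuming shell access on the XO server (the /tmp paths come from the log above and are transient, so they only exist while a restore attempt is in progress):

            # Is the loop driver available, and is there a free loop device?
            lsmod | grep loop || sudo modprobe loop
            sudo losetup -f     # prints the first unused loop device, fails if none are left
            sudo losetup -a     # lists loop devices already in use

            # Retry the same mount by hand and check the kernel log for the real reason
            sudo mkdir -p /mnt/restore-test
            sudo mount -o loop,ro,offset=1048576 /tmp/tmp-458p174rEvrE3fU/vhdi1 /mnt/restore-test
            dmesg | tail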
            
            • olivierlambert (Vates 🪐 Co-Founder & CEO)

              Do you have a VG using multiple PVs?

              • shadowdao

                That one does not. I assumed that was the case for another one that had the issue, and I'm working to move the VMs to VHDs that are not on LVM.

                # pvdisplay
                  --- Physical volume ---
                  PV Name               /dev/xvda2
                  VG Name               centos
                  PV Size               79.51 GiB / not usable 3.00 MiB
                  Allocatable           yes
                  PE Size               4.00 MiB
                  Total PE              20354
                  Free PE               16
                  Allocated PE          20338
                  PV UUID               zkCwDd-03ZP-LhLO-drIw-5fpY-Bf7f-J99g0U
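
                For anyone checking the same thing, a quick way to see whether a VG spans several PVs, run inside the guest (assuming the standard lvm2 tools are installed):

                # One line per VG; the #PV column shows how many physical volumes back it
                sudo vgs
                # One line per PV, with the VG each one belongs to
                sudo pvs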
                
                • olivierlambert (Vates 🪐 Co-Founder & CEO)

                  Same problem on XOA latest?

                  edit: ping @julien-f and/or @badrAZ

                  • shadowdao

                    Both the community edition (built from sources) and the XOA appliance seem to have this issue.

                    • sccf

                      @badrAZ @julien-f

                      A little more info... I'm also having this problem, but it seems only non-Linux file systems have the problem on my side. Let me know if I can run or supply anything on my side that might be useful.

                      Debian, CentOS, and Ubuntu file recoveries are all fine, but Windows Server, Windows 10, and pfSense drives are throwing the same error others have posted previously. The error seems to start a step earlier than what has been previously suggested, though.

                      The server is XO Community on Debian, with an NFS remote (Synology NAS); all LVM, I think.

                      Selecting the backup date is fine. Selecting a drive from the backup gives this error:

                      Mar 25 06:05:32 BKP01 xo-server[431]: { Error: spawn fusermount ENOENT
                      Mar 25 06:05:32 BKP01 xo-server[431]:     at Process.ChildProcess._handle.onexit (internal/child_process.js:190:19)
                      Mar 25 06:05:32 BKP01 xo-server[431]:     at onErrorNT (internal/child_process.js:362:16)
                      Mar 25 06:05:32 BKP01 xo-server[431]:     at _combinedTickCallback (internal/process/next_tick.js:139:11)
                      Mar 25 06:05:32 BKP01 xo-server[431]:     at process._tickCallback (internal/process/next_tick.js:181:9)
                      Mar 25 06:05:32 BKP01 xo-server[431]:   errno: 'ENOENT',
                      Mar 25 06:05:32 BKP01 xo-server[431]:   code: 'ENOENT',
                      Mar 25 06:05:32 BKP01 xo-server[431]:   syscall: 'spawn fusermount',
                      Mar 25 06:05:32 BKP01 xo-server[431]:   path: 'fusermount',
                      Mar 25 06:05:32 BKP01 xo-server[431]:   spawnargs: [ '-uz', '/tmp/tmp-431GWsDT1lOqXyY' ],
                      Mar 25 06:05:32 BKP01 xo-server[431]:   originalMessage: 'spawn fusermount ENOENT',
                      Mar 25 06:05:32 BKP01 xo-server[431]:   command: 'fusermount -uz /tmp/tmp-431GWsDT1lOqXyY',
                      Mar 25 06:05:32 BKP01 xo-server[431]:   exitCode: undefined,
                      Mar 25 06:05:32 BKP01 xo-server[431]:   signal: undefined,
                      Mar 25 06:05:32 BKP01 xo-server[431]:   signalDescription: undefined,
                      Mar 25 06:05:32 BKP01 xo-server[431]:   stdout: '',
                      Mar 25 06:05:32 BKP01 xo-server[431]:   stderr: '',
                      Mar 25 06:05:32 BKP01 xo-server[431]:   failed: true,
                      Mar 25 06:05:32 BKP01 xo-server[431]:   timedOut: false,
                      Mar 25 06:05:32 BKP01 xo-server[431]:   isCanceled: false,
                      Mar 25 06:05:32 BKP01 xo-server[431]:   killed: false }
                      

                      Partitions still load, but then this error occurs when selecting a partition:

                      Mar 25 06:07:51 BKP01 xo-server[431]: { Error: spawn fusermount ENOENT
                      Mar 25 06:07:51 BKP01 xo-server[431]:     at Process.ChildProcess._handle.onexit (internal/child_process.js:190:19)
                      Mar 25 06:07:51 BKP01 xo-server[431]:     at onErrorNT (internal/child_process.js:362:16)
                      Mar 25 06:07:51 BKP01 xo-server[431]:     at _combinedTickCallback (internal/process/next_tick.js:139:11)
                      Mar 25 06:07:51 BKP01 xo-server[431]:     at process._tickCallback (internal/process/next_tick.js:181:9)
                      Mar 25 06:07:51 BKP01 xo-server[431]:   errno: 'ENOENT',
                      Mar 25 06:07:51 BKP01 xo-server[431]:   code: 'ENOENT',
                      Mar 25 06:07:51 BKP01 xo-server[431]:   syscall: 'spawn fusermount',
                      Mar 25 06:07:51 BKP01 xo-server[431]:   path: 'fusermount',
                      Mar 25 06:07:51 BKP01 xo-server[431]:   spawnargs: [ '-uz', '/tmp/tmp-431yl3BiFVbSmWB' ],
                      Mar 25 06:07:51 BKP01 xo-server[431]:   originalMessage: 'spawn fusermount ENOENT',
                      Mar 25 06:07:51 BKP01 xo-server[431]:   command: 'fusermount -uz /tmp/tmp-431yl3BiFVbSmWB',
                      Mar 25 06:07:51 BKP01 xo-server[431]:   exitCode: undefined,
                      Mar 25 06:07:51 BKP01 xo-server[431]:   signal: undefined,
                      Mar 25 06:07:51 BKP01 xo-server[431]:   signalDescription: undefined,
                      Mar 25 06:07:51 BKP01 xo-server[431]:   stdout: '',
                      Mar 25 06:07:51 BKP01 xo-server[431]:   stderr: '',
                      Mar 25 06:07:51 BKP01 xo-server[431]:   failed: true,
                      Mar 25 06:07:51 BKP01 xo-server[431]:   timedOut: false,
                      Mar 25 06:07:51 BKP01 xo-server[431]:   isCanceled: false,
                      Mar 25 06:07:51 BKP01 xo-server[431]:   killed: false }
                      Mar 25 06:07:51 BKP01 xo-server[431]: 2020-03-24T20:07:51.178Z xo:api WARN admin@admin.net | backupNg.listFiles(...) [135ms] =!> Error: Command failed with exit code 32: mount --options=loop,ro,offset=608174080 --source=/tmp/tmp-431yl3BiFVbSmWB/vhdi15 --target=/tmp/tmp-431ieOQYw52Cr6a
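
                      That "spawn fusermount ENOENT" means the fusermount binary itself cannot be found by xo-server, so on a from-sources install this points at a missing package on the Debian host rather than at the backup data. A quick check, assuming Debian's fuse package is what provides fusermount (as it normally does):

                      # Is fusermount on the PATH of the user running xo-server?
                      which fusermount || echo "fusermount not found"
                      # On Debian/Ubuntu it is shipped by the fuse package
                      sudo apt-get install -y fuse
                      # Restart xo-server afterwards so it picks up the binary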
                      
                      • olivierlambert (Vates 🪐 Co-Founder & CEO)

                        @sccf can you confirm the same issue on XOA?

                        • sccf

                          OK, so XOA is working fine on my side except for the pfSense/FreeBSD VM, but to be honest I'm not sure I have ever tried a file-level restore on it before, and I can't see any reason why I would ever need to in the future.

                          I guess I'm looking for an environment issue then, perhaps a dependency? I've got two independent sites (one at home, one at our local church) that are configured the same and are having the same problems. I will try setting up on Ubuntu and see if that resolves it.

                          • badrAZ

                            @shadowdao @sccf

                            Hi,

                            This issue isn't easy to reproduce in our lab. Can you please provide a VM export with steps to reproduce this issue?

                            It will help us diagnose the issue.

                            • shadowdao

                              The VHD is about 80 GB, and this is one of the smaller ones. Do you have a secure location that I can upload it to? The VHD has client data on it, so even if I scrub it, I don't want it accessible publicly.

                              • olivierlambert (Vates 🪐 Co-Founder & CEO)

                                Please open a support ticket on xen-orchestra.com

                                • sccf

                                  Hi again, apologies for my tardiness. I've switched to Ubuntu server rather than Debian and all appears to be well again. I can only assume that it is an environment/dependency issue in Debian.

                                  • wuchererbe

                                    Hi,

                                    I have the same problem. I'm using Ubuntu 20.04 with logical volumes (LVM). If I try to restore a file, it only shows the red triangle. Is there any solution?

                                    backupNg.listFiles
                                    {
                                      "remote": "9f5e3132-e828-4aef-b083-1053891ce2e7",
                                      "disk": "/xo-vm-backups/e28ecb04-027c-00cf-8c30-6977d1f15490/vdis/7d5759ec-475d-428a-a945-57fde5c89562/c83dbbd2-9333-4a6d-aee2-e807c97f4b4b/20220114T210004Z.vhd",
                                      "path": "/",
                                      "partition": "f80fbe2e-0ce9-448a-a56a-42f73bbb6373"
                                    }
                                    {
                                      "killed": false,
                                      "code": 32,
                                      "signal": null,
                                      "cmd": "mount --options=loop,ro,sizelimit=106297294848,offset=1075838976 --source=/tmp/y7s1u7vrnrf/vhdi10 --target=/tmp/xysii549sk9",
                                      "message": "Command failed: mount --options=loop,ro,sizelimit=106297294848,offset=1075838976 --source=/tmp/y7s1u7vrnrf/vhdi10 --target=/tmp/xysii549sk9
                                    mount: /tmp/xysii549sk9: unknown filesystem type 'LVM2_member'.
                                    ",
                                      "name": "Error",
                                      "stack": "Error: Command failed: mount --options=loop,ro,sizelimit=106297294848,offset=1075838976 --source=/tmp/y7s1u7vrnrf/vhdi10 --target=/tmp/xysii549sk9
                                    mount: /tmp/xysii549sk9: unknown filesystem type 'LVM2_member'.
                                    
                                        at ChildProcess.exithandler (child_process.js:383:12)
                                        at ChildProcess.emit (events.js:400:28)
                                        at ChildProcess.emit (domain.js:475:12)
                                        at ChildProcess.patchedEmit [as emit] (/opt/xo/xo-builds/xen-orchestra-202201061020/@xen-orchestra/log/configure.js:118:17)
                                        at maybeClose (internal/child_process.js:1058:16)
                                        at Process.ChildProcess._handle.onexit (internal/child_process.js:293:5)
                                        at Process.callbackTrampoline (internal/async_hooks.js:130:17)"
                                    }
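
                                    The "unknown filesystem type 'LVM2_member'" part says the selected partition is an LVM physical volume rather than a directly mountable filesystem, so a plain offset mount cannot work on it. Roughly what has to happen for its contents to become readable, as a hedged sketch assuming shell access on the XO host, the fuse-mounted image path from the log, and the VG/LV names visible in the fdisk output further down (purely diagnostic; activating a guest VG on a host that also uses LVM can clash with identically named host VGs):

                                    # Attach the PV partition read-only via a loop device, using the offset/size from the log
                                    sudo losetup -f --show -r -o 1075838976 --sizelimit 106297294848 /tmp/y7s1u7vrnrf/vhdi10
                                    # Let LVM discover the new PV and activate its volume group
                                    sudo vgscan
                                    sudo vgchange -ay ubuntu-vg
                                    # The logical volume can then be mounted read-only
                                    sudo mount -o ro /dev/ubuntu-vg/ubuntu-lv /mnt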
                                    
                                    • olivierlambert (Vates 🪐 Co-Founder & CEO)

                                      Are you testing on the latest XO Virtual Appliance, with XO also up to date?

                                      • wuchererbe @olivierlambert

                                        @olivierlambert

                                        xo-server 5.86.3 and xo-web 5.91.2
                                        I have also tested it with the latest XOA.

                                        • olivierlambert (Vates 🪐 Co-Founder & CEO)

                                          Are you using multiple disks under the same VG or LV?

                                          • wuchererbe @wuchererbe


                                            Currently I'm using only one disk:


                                            
                                            Disk /dev/xvda: 100 GiB, 107374182400 bytes, 209715200 sectors
                                            Units: sectors of 1 * 512 = 512 bytes
                                            Sector size (logical/physical): 512 bytes / 512 bytes
                                            I/O size (minimum/optimal): 512 bytes / 512 bytes
                                            Disklabel type: gpt
                                            Disk identifier: B6D16CF5-BE38-48B8-8B94-CDB24AF856B2
                                            
                                            Device       Start       End   Sectors Size Type
                                            /dev/xvda1    2048      4095      2048   1M BIOS boot
                                            /dev/xvda2    4096   2101247   2097152   1G Linux filesystem
                                            /dev/xvda3 2101248 209713151 207611904  99G Linux filesystem
                                            
                                            
                                            
                                            
                                            Disk /dev/mapper/ubuntu--vg-ubuntu--lv: 95 GiB, 102005473280 bytes, 199229440 sectors
                                            Units: sectors of 1 * 512 = 512 bytes
                                            Sector size (logical/physical): 512 bytes / 512 bytes
                                            I/O size (minimum/optimal): 512 bytes / 512 bytes
                                            