XCP-ng

    File Restore: error scanning disk for recent delta backups but not old ones

    Xen Orchestra
    28 Posts 4 Posters 6.4k Views 1 Watching
    • olivierlambert (Vates 🪐 Co-Founder CEO)

      Does it ring any bells, @julien-f?

      Maybe some tests using VHD utils would be interesting, to see whether the files are readable and correct.

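Such a readability test can be sanity-checked without XO involved by validating the VHD file's footer. A minimal sketch (an illustration, not XO's actual code: it only verifies the footer cookie and checksum from the VHD format specification, while tools like `vhd-util check` or vhdimount do far more):

```python
import struct

def vhd_footer_ok(footer: bytes) -> bool:
    """Validate a 512-byte VHD footer.

    Per the VHD specification, the footer starts with the cookie
    b"conectix" and carries a big-endian uint32 checksum at offset 64,
    defined as the one's complement of the byte sum of the footer with
    the checksum field zeroed.
    """
    if len(footer) != 512 or footer[:8] != b"conectix":
        return False
    (stored,) = struct.unpack_from(">I", footer, 64)
    zeroed = footer[:64] + b"\x00\x00\x00\x00" + footer[68:]
    computed = ~sum(zeroed) & 0xFFFFFFFF
    return stored == computed

def check_vhd(path: str) -> bool:
    """Read the last 512 bytes of a .vhd file and validate the footer."""
    with open(path, "rb") as f:
        f.seek(-512, 2)  # the footer is the last 512 bytes of the file
        return vhd_footer_ok(f.read(512))
```

A passing footer check only shows the file tail is intact; a full delta chain can still fail to mount for other reasons.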
      • julien-f (Vates 🪐 Co-Founder XO Team)

        That's a weird error, does it happen each time or is it more or less random?

        • mtango

          Hi,

          It is not random.

          It happens every time with the same backups from May 20 and newer on this particular VM disk (I have one every day), but it works with May 19 and older. On other VMs I have a different threshold date, but the same behaviour. The files on the NAS are all there. I tried a full backup and it worked.

          The backups are on an NFS share on a NAS. The remote is reachable (I've run the test).

          • daKju

            Hi all,
            I see the same behaviour as @mtango on xo-server 5.60.0 / xo-web 5.60.0 on a freshly installed Ubuntu 20.04 LTS.
            I'm able to choose the disk and then the partition.
            If I choose a boot partition with ext4, I can see all files.
            If I choose a partition with LVM, I get an error, as you can see in the log file:

            Jun  9 13:59:46 xoa-server systemd[1]: Starting LVM event activation on device 7:3...
            Jun  9 13:59:46 xoa-server lvm[15934]:   pvscan[15934] PV /dev/loop3 online, VG cl_server2backup is complete.
            Jun  9 13:59:46 xoa-server lvm[15934]:   pvscan[15934] VG cl_server2backup run autoactivation.
            Jun  9 13:59:46 xoa-server lvm[15934]:   PVID Mq7sxO-2CQu-1UJp-ovp0-PnaR-Aumk-gRzgkY read from /dev/loop3 last written to /dev/xvda2.
            Jun  9 13:59:46 xoa-server lvm[15934]:   pvscan[15934] VG cl_server2backup not using quick activation.
            Jun  9 13:59:46 xoa-server lvm[15934]:   2 logical volume(s) in volume group "cl_server2backup" now active
            Jun  9 13:59:47 xoa-server systemd[1]: Finished LVM event activation on device 7:3.
            Jun  9 13:59:47 xoa-server systemd[1]: Started /sbin/lvm pvscan --cache 7:3.
            Jun  9 13:59:47 xoa-server systemd[977]: tmp-tmp\x2d12262nUuODahdRdV9.mount: Succeeded.
            Jun  9 13:59:47 xoa-server systemd[1]: tmp-tmp\x2d12262nUuODahdRdV9.mount: Succeeded.
            Jun  9 13:59:47 xoa-server lvm[15986]:   pvscan[15986] device 7:3 /dev/loop3 excluded by filter.
            Jun  9 13:59:47 xoa-server systemd[1]: Stopping LVM event activation on device 7:3...
            Jun  9 13:59:47 xoa-server systemd[1]: run-rbb9f6e183716490a87f59a7acc3a6db1.service: Succeeded.
            Jun  9 13:59:47 xoa-server lvm[15989]:   pvscan[15989] device 7:3 /dev/loop3 excluded by filter.
            Jun  9 13:59:47 xoa-server systemd[1]: lvm2-pvscan@7:3.service: Succeeded.
            Jun  9 13:59:47 xoa-server systemd[1]: Stopped LVM event activation on device 7:3.
            Jun  9 13:59:52 xoa-server systemd[1]: Starting LVM event activation on device 7:3...
            Jun  9 13:59:52 xoa-server lvm[16001]:   pvscan[16001] PV /dev/loop3 online, VG cl_server2backup is complete.
            Jun  9 13:59:52 xoa-server lvm[16001]:   pvscan[16001] VG cl_server2backup run autoactivation.
            Jun  9 13:59:52 xoa-server lvm[16001]:   PVID Mq7sxO-2CQu-1UJp-ovp0-PnaR-Aumk-gRzgkY read from /dev/loop3 last written to /dev/xvda2.
            Jun  9 13:59:52 xoa-server lvm[16001]:   pvscan[16001] VG cl_server2backup not using quick activation.
            Jun  9 13:59:52 xoa-server lvm[16001]:   2 logical volume(s) in volume group "cl_server2backup" now active
            Jun  9 13:59:52 xoa-server systemd[1]: Finished LVM event activation on device 7:3.
            Jun  9 13:59:53 xoa-server kernel: [ 5776.274595] XFS (loop4): Mounting V5 filesystem in no-recovery mode. Filesystem will be inconsistent.
            Jun  9 13:59:53 xoa-server systemd[977]: tmp-tmp\x2d12262c7TAY6NWwQG5.mount: Succeeded.
            Jun  9 13:59:53 xoa-server systemd[1]: tmp-tmp\x2d12262c7TAY6NWwQG5.mount: Succeeded.
            Jun  9 13:59:53 xoa-server kernel: [ 5776.295448] XFS (loop4): Unmounting Filesystem
            Jun  9 13:59:53 xoa-server systemd[1]: Started /sbin/lvm pvscan --cache 7:3.
            Jun  9 13:59:53 xoa-server systemd[1]: Stopping LVM event activation on device 7:3...
            Jun  9 13:59:53 xoa-server lvm[16069]:   pvscan[16069] device 7:3 /dev/loop3 excluded by filter.
            Jun  9 13:59:53 xoa-server lvm[16070]:   pvscan[16070] device 7:3 /dev/loop3 excluded by filter.
            Jun  9 13:59:53 xoa-server systemd[1]: lvm2-pvscan@7:3.service: Succeeded.
            Jun  9 13:59:53 xoa-server systemd[1]: Stopped LVM event activation on device 7:3.
            Jun  9 13:59:53 xoa-server systemd[1]: run-r4d20647bef3942d2a439dcf7d9b50d9b.service: Succeeded.
            Jun  9 13:59:53 xoa-server systemd[1]: tmp-tmp\x2d12262bv6erTqqwfL9.mount: Succeeded.
            Jun  9 13:59:53 xoa-server systemd[977]: tmp-tmp\x2d12262bv6erTqqwfL9.mount: Succeeded.
            Jun  9 13:59:53 xoa-server xo-server[12262]: 2020-06-09T13:59:53.545Z xo:api WARN admin@admin.net | backupNg.listFiles(...) [879ms] =!> TypeError [ERR_INVALID_ARG_TYPE]: The "path" argument must be of type string. Received undefined
            

            Any ideas?

            Thanks, all.

            • olivierlambert (Vates 🪐 Co-Founder CEO)

              Same with XOA?

              • mtango

                Hi,

                I don't think it is the same failure as @daKju's, because in my case the failure happens earlier, at backupNg.listPartitions, whereas @daKju's is in backupNg.listFiles.

                I'm not sure which function fails after that, since we're not a JavaScript shop, but I'd start looking in the file file-restore-ng.js (on master), starting at line 303:

                303     const diskPath = handler._getFilePath('/' + diskId)
                304     const mountDir = await tmpDir()
                305     $defer.onFailure(rmdir, mountDir)
                306 
                307     await execa('vhdimount', [diskPath, mountDir])
                [...]
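The failure mode in @daKju's log (`The "path" argument must be of type string. Received undefined`) is consistent with the vhdimount directory exposing no `vhdi*` entries, so a later lookup for a partition path yields nothing. A hypothetical illustration in Python (the helper name and entry layout are assumptions, not XO's actual code):

```python
import os

def pick_vhdi_entry(mount_dir: str, index: int = 1):
    """After vhdimount, the mounted directory should expose entries
    vhdi1, vhdi2, ...; pick one by index.

    Returns None when the entry is missing -- the analogue of the
    JavaScript code ending up with `undefined` and later failing with
    'The "path" argument must be of type string. Received undefined'.
    """
    name = f"vhdi{index}"
    return os.path.join(mount_dir, name) if name in os.listdir(mount_dir) else None
```

In other words, the interesting question is why the mount directory ends up empty in the failing case.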
                
                • olivierlambert (Vates 🪐 Co-Founder CEO)

                  @julien-f will take a look when he can, he's pretty busy right now 🙂 In the meantime, feel free to dig deeper into the issue on your side if you can.

                  • mtango

                    If you want me to add additional log messages, I can do that. I just need an efficient method to:

                    • modify the source,
                    • re-compile the modified source,
                    • execute.

                    Basically, I'm not at all familiar with the build system.

                    I do want to help since solving this issue is very important for my company.

                    Thanks

                    • olivierlambert (Vates 🪐 Co-Founder CEO)

                      It's all in the install doc: https://xen-orchestra.com/docs/installation.html#fetching-the-code

                      Also, if it's really important for your company, you might be interested in getting professional support for it 🙂

                      • mtango

                        Thanks, but which command incrementally builds the code when only a single file is modified: yarn, or yarn build? The build process is quite long, and since I'm not familiar with JavaScript I need to iterate a lot.

                        Let me rephrase that: I do want to help since I am helping reproduce and investigate a bug that is very important for both our companies.

                        • olivierlambert (Vates 🪐 Co-Founder CEO)

                          The thing is, we don't have similar reports so far, so it's possibly something on your side. That's why getting pro support makes sense if it's very urgent for you, but it's your call 🙂

                          @julien-f will give you some hints on rebuilding only what you need.

                          • mtango

                            We are considering paid support. How much time do you estimate fixing such a problem would take?

                            • olivierlambert (Vates 🪐 Co-Founder CEO)

                              First step before fixing an issue is to be able to reproduce it. If it's linked to a data issue itself on those VHDs (or during export), then it's not a "bug" per se.

                              That's why it's a bit hard to be able to give you detailed input.

                              Alternatively, you can use vhdimount on your side to access those VHDs and see if it works or not.

                              @julien-f will give you the commands when he's available.

                              edit: you can open a support ticket on xen-orchestra.com. Also open a support tunnel in your XOA so we can take a look remotely.

                              • mtango

                                Following your suggestion, I used vhdimount.

                                I tried to mount VHD files that work and ones that don't:

                                Working version:

                                sudo vhdimount /run/xo-server/mounts/8118efa3-4968-4e87-96b1-7612e80222b8/xo-vm-backups/6b291d91-c7cc-2588-b90b-e53c2e8e21fd/vdis/2a7b744e-d238-4ad6-a032-541c111f72ae/77ef5fea-0040-4b8e-b1ef-1704712870c6/20200519T040005Z.vhd /tmp/vhdimnt/
                                vhdimount 20170223
                                
                                xoa@yul1-xoa-001v:~$ sudo ls -lah /tmp/vhdimnt 
                                total 36K
                                dr-xr-xr-x  2 root root    0 Jun 10 14:24 .
                                drwxrwxrwt 11 root root  32K Jun 10 14:24 ..
                                -r--r--r--  1 root root 960G Jun 10 14:24 vhdi1
                                -r--r--r--  1 root root 960G Jun 10 14:24 vhdi2
                                [... vhdi3 through vhdi97, all identical, elided ...]
                                -r--r--r--  1 root root 960G Jun 10 14:24 vhdi98
                                -r--r--r--  1 root root 960G Jun 10 14:24 vhdi99
                                xoa@yul1-xoa-001v:~$ sudo fusermount -u /tmp/vhdimnt 
                                

                                Non-working version:

                                xoa@yul1-xoa-001v:~$ sudo vhdimount /run/xo-server/mounts/8118efa3-4968-4e87-96b1-7612e80222b8/xo-vm-backups/6b291d91-c7cc-2588-b90b-e53c2e8e21fd/vdis/2a7b744e-d238-4ad6-a032-541c111f72ae/77ef5fea-0040-4b8e-b1ef-1704712870c6/20200520T040005Z.vhd /tmp/vhdimnt/
                                vhdimount 20170223
                                
                                xoa@yul1-xoa-001v:~$ sudo ls -lha /tmp/vhdimnt 
                                total 0
                                

                                The fact that vhdiXX has only two digits in the filename is suspicious: there are exactly 99 vhdi files in the mount directory, and May 19 corresponds to 99 days since February 11th, when the first backup was made on this disk.

                                The bug is therefore not on our side, unless there's a note somewhere in the documentation that only 99 delta backups can be made.

                                Based on this:
                                https://github.com/libyal/libvhdi/blob/58c6aa277fd2b245e8ea208028238654407f364c/vhditools/mount_file_entry.c#L675

                                It looks like vhdimount doesn't like more than 99 backups.
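The date arithmetic above checks out (assuming one delta per day starting 2020-02-11, as described earlier in the thread):

```python
from datetime import date

first = date(2020, 2, 11)    # first backup made on this disk
working = date(2020, 5, 19)  # newest backup that still mounts
chain_length = (working - first).days + 1  # inclusive count of daily deltas
print(chain_length)  # 99 -> exactly the limit hinted at by the 2-digit vhdiXX names
```

The May 20 backup would be delta number 100, the first one past the limit.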

                                • julien-f (Vates 🪐 Co-Founder XO Team)

                                  @mtango said in File Restore : error scanning disk for recent delta backups but not old:

                                  It looks like vhdimount doesn't like more than 99 backups.

                                  Indeed, it's a known issue: https://github.com/vatesfr/xen-orchestra/issues/4032

                                  • mtango (in reply to @julien-f)

                                    @julien-f @olivierlambert

                                    If we pay for support, are there any feasible options to remove this limitation in the next few days? Our target is to be able to keep at least 365 backups.
                                    Since it is very easy to reproduce, you shouldn't need remote access set up, I assume.

                                    • julien-f (Vates 🪐 Co-Founder XO Team)

                                      It probably won't be easy to remove this limitation, especially in the next few days.

                                      If you can use the full backup interval setting, it should remove this issue.

                                      • olivierlambert (Vates 🪐 Co-Founder CEO)

                                        Indeed, the full backup interval seems like the right option.

                                        • mtango (in reply to @olivierlambert)

                                          @olivierlambert @julien-f

                                          It's not clear to me whether I can simply activate that option right now, given that I have 121 days of backups that don't work past 99. Will vhdimount be needed to perform the first full backup or not?

                                          Also, we're planning on setting that interval to somewhere around 60 (2 months). Could this open issue be encountered: https://github.com/vatesfr/xen-orchestra/issues/4987 ?


                                          • julien-f (Vates 🪐 Co-Founder XO Team)

                                            @mtango said in File Restore : error scanning disk for recent delta backups but not old:

                                            Will vhdimount be needed to perform the first full backup or not?

                                            No, it's only used for file restore.

                                            @mtango said in File Restore : error scanning disk for recent delta backups but not old:

                                            Can this open issue be encountered: https://github.com/vatesfr/xen-orchestra/issues/4987 ?

                                            We have no idea what triggers this condition; it may be old corrupted VHD files.

                                            If you want to check your backups, you can take a look at this post 🙂
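As a stopgap, over-long chains can be spotted by counting `.vhd` files per chain directory on the remote. A rough sketch assuming the directory layout visible in the mount paths earlier in this thread (`.../vdis/<vdi-uuid>/<chain-uuid>/<timestamp>.vhd`); a real check should follow VHD parent locators rather than just counting files:

```python
import os

def find_long_chains(vdis_root: str, limit: int = 99):
    """Walk a backup remote and flag directories holding more than
    `limit` .vhd files, i.e. chains vhdimount can no longer assemble.

    Returns a list of (directory, vhd_count) pairs for flagged chains.
    """
    flagged = []
    for dirpath, _dirnames, filenames in os.walk(vdis_root):
        n = sum(1 for f in filenames if f.endswith(".vhd"))
        if n > limit:
            flagged.append((dirpath, n))
    return flagged
```

Running this periodically would warn before a chain crosses the 99-file threshold and file restore silently stops working.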
