XCP-ng

    File Restore: error scanning disk for recent delta backups but not old ones

    Xen Orchestra
    28 Posts 4 Posters 6.4k Views
    • mtango

      Hi,

      With XOA version 5.55.0, I am trying to do a file-level restore from delta backups. I'm getting the error "Error while scanning disk" for recent backups (done in the last 2-3 weeks), but file restore works on older backups. The delta backups run every day (with 365-day retention) to an NFS share, and all the files appear to be there. The VM I'm trying to restore from is an Ubuntu VM and the disk partition is a normal ext4. I've also confirmed that a full VM restore works for one of the dates where file restore fails. This happens on multiple VMs, although the date after which file restore starts failing varies from machine to machine, even though all backups are done the same way.

      Thank you!
      Vlad

      • olivierlambert (Vates 🪐 Co-Founder CEO)
        last edited by olivierlambert

        Hi!

        There's no such thing as XOA 5.55.0. The latest XOA release is 5.47.1. So I assume you are using XO from the sources, and are probably telling us about the xo-server version, which is currently 5.60.0, so you are outdated. As stated here, please always stay fully up-to-date before reporting any problem: https://xen-orchestra.com/docs/community.html

        FYI, XOA is the virtual appliance we are distributing, nothing else is "XOA" (XOA means Xen Orchestra virtual Appliance)
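
        (For reference, keeping a source install current is roughly the sequence below; this is a sketch based on the "from the sources" documentation linked above, so double-check the exact commands there:)

            cd xen-orchestra
            git checkout master
            git pull --ff-only
            yarn
            yarn build
            # then restart the xo-server process/service so it picks up the new build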

        • mtango @olivierlambert

          Thank you. Your assumptions are correct. I will upgrade to xo-server 5.60.0.

          • mtango @olivierlambert

            @olivierlambert

            Hi

            I upgraded to the master commit (c45d00fee82) and I am running xo-server 5.60.0 and xo-web 5.60.0.

            In addition to what was mentioned in the original post, I can see the following in the logs when I'm trying to recover a file:

            Jun 09 03:36:00 yul1-xoa-001v xo-server[1040]: [load-balancer]Execute plans!
            Jun 09 03:36:08 yul1-xoa-001v xo-server[1040]: [Error: ENOENT: no such file or directory, rmdir '/tmp/tmp-1040EgwRpSwcgg4J'] {
            Jun 09 03:36:08 yul1-xoa-001v xo-server[1040]:   errno: -2,
            Jun 09 03:36:08 yul1-xoa-001v xo-server[1040]:   code: 'ENOENT',
            Jun 09 03:36:08 yul1-xoa-001v xo-server[1040]:   syscall: 'rmdir',
            Jun 09 03:36:08 yul1-xoa-001v xo-server[1040]:   path: '/tmp/tmp-1040EgwRpSwcgg4J'
            Jun 09 03:36:08 yul1-xoa-001v xo-server[1040]: }
            Jun 09 03:36:08 yul1-xoa-001v xo-server[1040]: 2020-06-09T03:36:08.059Z xo:api WARN admin | backupNg.listPartitions(...) [470ms] =!> Error: no disks found
            

            Any ideas?
            Thanks
            Vlad

            • olivierlambert (Vates 🪐 Co-Founder CEO)

              Okay good: so it means the function backupNg.listPartitions can't find a disk.

              Before going further, can you please try with XOA on the latest release channel? That way we can rule out an environment problem and see whether it's really an XO bug.

              • mtango @olivierlambert

                @olivierlambert

                Hi Olivier,

                I just tried with XOA on the latest release channel and I get the exact same behaviour in the web interface; however, in the logs I now see the following:

                Jun 09 09:39:29 xoa xo-server[16919]: [Error: ENOENT: no such file or directory, rmdir '/tmp/tmp-16919P2pLpmpYmqD2'] {
                Jun 09 09:39:29 xoa xo-server[16919]:   errno: -2,
                Jun 09 09:39:29 xoa xo-server[16919]:   code: 'ENOENT',
                Jun 09 09:39:29 xoa xo-server[16919]:   syscall: 'rmdir',
                Jun 09 09:39:29 xoa xo-server[16919]:   path: '/tmp/tmp-16919P2pLpmpYmqD2'
                Jun 09 09:39:29 xoa xo-server[16919]: }
                Jun 09 09:39:29 xoa xo-server[16919]: 2020-06-09T13:39:29.108Z xo:api WARN admin@admin.net | backupNg.listPartitions(...) [11s] =!> Error: EIO: i/o error, scandir '/tmp/tmp-16919P2pLpmpYmqD2'
                
                
                • olivierlambert (Vates 🪐 Co-Founder CEO)

                  Does that ring a bell, @julien-f?

                  Maybe some tests with the VHD utils would be interesting, to see if the files are readable and correct.
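
                  (A minimal manual check along those lines, assuming vhdimount from libvhdi is available on the XO host; the .vhd path is a placeholder for one of the backup files on the remote:)

                      sudo vhdimount /path/to/<timestamp>.vhd /tmp/vhdimnt
                      ls -lah /tmp/vhdimnt            # lists the exposed vhdiN raw images
                      sudo fusermount -u /tmp/vhdimnt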

                  • julien-f (Vates 🪐 Co-Founder XO Team)

                    That's a weird error, does it happen each time or is it more or less random?

                    • mtango

                      Hi,

                      It is not random.

                      It happens every time with the same backups: May 20 and newer fail on this particular VM disk (I have one backup per day), but May 19 and older work. On other VMs I have a different threshold date, but the same behaviour. The files on the NAS are all there. I tried a full backup and it worked.

                      The backups are on an NFS share on a NAS. The remote is reachable (I've run the test).

                      • daKju

                        Hi all,
                        I see similar behaviour to @mtango on xo-server 5.60.0 / xo-web 5.60.0, on a freshly installed Ubuntu 20.04 LTS.
                        I'm able to choose the disk and then the partition.
                        If I choose a boot partition with ext4, I can see all files.
                        If I choose a partition with LVM, I get an error, as you can see in the logfile:

                        Jun  9 13:59:46 xoa-server systemd[1]: Starting LVM event activation on device 7:3...
                        Jun  9 13:59:46 xoa-server lvm[15934]:   pvscan[15934] PV /dev/loop3 online, VG cl_server2backup is complete.
                        Jun  9 13:59:46 xoa-server lvm[15934]:   pvscan[15934] VG cl_server2backup run autoactivation.
                        Jun  9 13:59:46 xoa-server lvm[15934]:   PVID Mq7sxO-2CQu-1UJp-ovp0-PnaR-Aumk-gRzgkY read from /dev/loop3 last written to /dev/xvda2.
                        Jun  9 13:59:46 xoa-server lvm[15934]:   pvscan[15934] VG cl_server2backup not using quick activation.
                        Jun  9 13:59:46 xoa-server lvm[15934]:   2 logical volume(s) in volume group "cl_server2backup" now active
                        Jun  9 13:59:47 xoa-server systemd[1]: Finished LVM event activation on device 7:3.
                        Jun  9 13:59:47 xoa-server systemd[1]: Started /sbin/lvm pvscan --cache 7:3.
                        Jun  9 13:59:47 xoa-server systemd[977]: tmp-tmp\x2d12262nUuODahdRdV9.mount: Succeeded.
                        Jun  9 13:59:47 xoa-server systemd[1]: tmp-tmp\x2d12262nUuODahdRdV9.mount: Succeeded.
                        Jun  9 13:59:47 xoa-server lvm[15986]:   pvscan[15986] device 7:3 /dev/loop3 excluded by filter.
                        Jun  9 13:59:47 xoa-server systemd[1]: Stopping LVM event activation on device 7:3...
                        Jun  9 13:59:47 xoa-server systemd[1]: run-rbb9f6e183716490a87f59a7acc3a6db1.service: Succeeded.
                        Jun  9 13:59:47 xoa-server lvm[15989]:   pvscan[15989] device 7:3 /dev/loop3 excluded by filter.
                        Jun  9 13:59:47 xoa-server systemd[1]: lvm2-pvscan@7:3.service: Succeeded.
                        Jun  9 13:59:47 xoa-server systemd[1]: Stopped LVM event activation on device 7:3.
                        Jun  9 13:59:52 xoa-server systemd[1]: Starting LVM event activation on device 7:3...
                        Jun  9 13:59:52 xoa-server lvm[16001]:   pvscan[16001] PV /dev/loop3 online, VG cl_server2backup is complete.
                        Jun  9 13:59:52 xoa-server lvm[16001]:   pvscan[16001] VG cl_server2backup run autoactivation.
                        Jun  9 13:59:52 xoa-server lvm[16001]:   PVID Mq7sxO-2CQu-1UJp-ovp0-PnaR-Aumk-gRzgkY read from /dev/loop3 last written to /dev/xvda2.
                        Jun  9 13:59:52 xoa-server lvm[16001]:   pvscan[16001] VG cl_server2backup not using quick activation.
                        Jun  9 13:59:52 xoa-server lvm[16001]:   2 logical volume(s) in volume group "cl_server2backup" now active
                        Jun  9 13:59:52 xoa-server systemd[1]: Finished LVM event activation on device 7:3.
                        Jun  9 13:59:53 xoa-server kernel: [ 5776.274595] XFS (loop4): Mounting V5 filesystem in no-recovery mode. Filesystem will be inconsistent.
                        Jun  9 13:59:53 xoa-server systemd[977]: tmp-tmp\x2d12262c7TAY6NWwQG5.mount: Succeeded.
                        Jun  9 13:59:53 xoa-server systemd[1]: tmp-tmp\x2d12262c7TAY6NWwQG5.mount: Succeeded.
                        Jun  9 13:59:53 xoa-server kernel: [ 5776.295448] XFS (loop4): Unmounting Filesystem
                        Jun  9 13:59:53 xoa-server systemd[1]: Started /sbin/lvm pvscan --cache 7:3.
                        Jun  9 13:59:53 xoa-server systemd[1]: Stopping LVM event activation on device 7:3...
                        Jun  9 13:59:53 xoa-server lvm[16069]:   pvscan[16069] device 7:3 /dev/loop3 excluded by filter.
                        Jun  9 13:59:53 xoa-server lvm[16070]:   pvscan[16070] device 7:3 /dev/loop3 excluded by filter.
                        Jun  9 13:59:53 xoa-server systemd[1]: lvm2-pvscan@7:3.service: Succeeded.
                        Jun  9 13:59:53 xoa-server systemd[1]: Stopped LVM event activation on device 7:3.
                        Jun  9 13:59:53 xoa-server systemd[1]: run-r4d20647bef3942d2a439dcf7d9b50d9b.service: Succeeded.
                        Jun  9 13:59:53 xoa-server systemd[1]: tmp-tmp\x2d12262bv6erTqqwfL9.mount: Succeeded.
                        Jun  9 13:59:53 xoa-server systemd[977]: tmp-tmp\x2d12262bv6erTqqwfL9.mount: Succeeded.
                        Jun  9 13:59:53 xoa-server xo-server[12262]: 2020-06-09T13:59:53.545Z xo:api WARN admin@admin.net | backupNg.listFiles(...) [879ms] =!> TypeError [ERR_INVALID_ARG_TYPE]: The "path" argument must be of type string. Received undefined
                        

                        Any ideas ??

                        THX2all
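
                        (For what it's worth, the log above shows xo-server exposing the image through a loop device and letting LVM auto-activate the volume group. A rough manual equivalent, with the VG name taken from the log and the LV name left as a placeholder, would be:)

                            sudo vhdimount /path/to/backup.vhd /tmp/vhdimnt
                            sudo losetup --find --show --partscan /tmp/vhdimnt/vhdi1   # e.g. /dev/loop3, partitions as /dev/loop3pN
                            sudo pvscan --cache                                        # let LVM discover the PV on the loop device
                            sudo vgchange -ay cl_server2backup                         # activate the VG seen in the log
                            sudo mount -o ro,norecovery /dev/cl_server2backup/<lv> /mnt   # norecovery matches the XFS mount in the log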

                        • olivierlambert (Vates 🪐 Co-Founder CEO)

                          Same with XOA?

                          • mtango

                            Hi,

                            I don't think it is the same failure as @daKju, because in my case the failure is before that, at function backupNg.listPartitions, whereas @daKju's is with backupNg.listFiles.

                            I'm not sure which function fails after that, since we're not a JavaScript shop, but I'd start looking in file-restore-ng.js (on master), starting at line 303:

                            303     const diskPath = handler._getFilePath('/' + diskId)
                            304     const mountDir = await tmpDir()
                            305     $defer.onFailure(rmdir, mountDir)
                            306 
                            307     await execa('vhdimount', [diskPath, mountDir])
                            [...]
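
                            (For readers not fluent in JavaScript: those lines are roughly the scripted equivalent of the manual steps below; the mapping is an interpretation, not actual XO code:)

                                mountDir=$(mktemp -d)               # line 304: const mountDir = await tmpDir()
                                vhdimount "$diskPath" "$mountDir"   # line 307: execa('vhdimount', [diskPath, mountDir])
                                # line 305 registers a cleanup (rmdir of the temp dir) that runs only if a later
                                # step fails, which may be where the earlier "rmdir ... ENOENT" messages come from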
                            
                            • olivierlambert (Vates 🪐 Co-Founder CEO)

                              @julien-f will take a look when he can; he's pretty busy right now 🙂 In the meantime, feel free to dig deeper into the issue on your side if you can.

                              • mtango

                                If you want me to add additional log messages, I can do that. I just need an efficient method to:

                                • modify the source,
                                • re-compile the modified source,
                                • execute.

                                Basically, I'm not at all familiar with the build system.

                                I do want to help since solving this issue is very important for my company.

                                Thanks

                                • olivierlambert (Vates 🪐 Co-Founder CEO)

                                  It's all in the install doc: https://xen-orchestra.com/docs/installation.html#fetching-the-code

                                  Also, if it's really important for your company, you might be interested in getting professional support for it 🙂

                                  • mtango

                                    Thanks, but which command incrementally rebuilds the code if only a single file is modified: yarn, or yarn build? The build process is quite long, and since I am not familiar with JavaScript I need to iterate a lot.

                                    Let me rephrase that: I do want to help since I am helping reproduce and investigate a bug that is very important for both our companies.

                                    • olivierlambert (Vates 🪐 Co-Founder CEO)

                                      Thing is, we don't have similar reports for now, so it's possibly something on your side. That's why having pro support if it's very urgent for you makes sense, but it's your call 🙂

                                      @julien-f will give you some hints to rebuild only what you need.
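
                                      (In the meantime, a possible shortcut, assuming the standard yarn-workspaces layout of the xen-orchestra repository and that the package defines a build script; this is not an official recipe:)

                                          cd xen-orchestra
                                          yarn                    # make sure dependencies are installed/linked
                                          cd packages/xo-server
                                          yarn build              # rebuild only this workspace
                                          # then restart xo-server to pick up the change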

                                      • mtango

                                        We are considering paid support. How much time do you estimate fixing such a problem would take?

                                        • olivierlambert (Vates 🪐 Co-Founder CEO)
                                          last edited by olivierlambert

                                          First step before fixing an issue is to be able to reproduce it. If it's linked to a data issue itself on those VHDs (or during export), then it's not a "bug" per se.

                                          That's why it's a bit hard to give you a detailed estimate.

                                          Alternatively, you can use vhdimount on your side to access those VHDs and see if it works or not.

                                          @julien-f will give you the commands when he's available.

                                          edit: you can open a support ticket on xen-orchestra.com. Also, open a support tunnel in your XOA so we can take a look remotely.

                                          • mtango
                                            last edited by mtango

                                            Following your suggestion, I used vhdimount.

                                            I tried to mount VHD files that work and ones that don't:

                                            Working version:

                                            sudo vhdimount /run/xo-server/mounts/8118efa3-4968-4e87-96b1-7612e80222b8/xo-vm-backups/6b291d91-c7cc-2588-b90b-e53c2e8e21fd/vdis/2a7b744e-d238-4ad6-a032-541c111f72ae/77ef5fea-0040-4b8e-b1ef-1704712870c6/20200519T040005Z.vhd /tmp/vhdimnt/
                                            vhdimount 20170223
                                            
                                            xoa@yul1-xoa-001v:~$ sudo ls -lah /tmp/vhdimnt 
                                            total 36K
                                            dr-xr-xr-x  2 root root    0 Jun 10 14:24 .
                                            drwxrwxrwt 11 root root  32K Jun 10 14:24 ..
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi1
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi10
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi11
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi12
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi13
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi14
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi15
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi16
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi17
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi18
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi19
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi2
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi20
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi21
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi22
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi23
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi24
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi25
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi26
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi27
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi28
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi29
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi3
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi30
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi31
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi32
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi33
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi34
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi35
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi36
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi37
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi38
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi39
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi4
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi40
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi41
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi42
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi43
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi44
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi45
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi46
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi47
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi48
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi49
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi5
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi50
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi51
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi52
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi53
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi54
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi55
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi56
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi57
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi58
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi59
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi6
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi60
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi61
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi62
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi63
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi64
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi65
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi66
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi67
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi68
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi69
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi7
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi70
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi71
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi72
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi73
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi74
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi75
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi76
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi77
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi78
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi79
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi8
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi80
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi81
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi82
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi83
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi84
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi85
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi86
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi87
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi88
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi89
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi9
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi90
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi91
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi92
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi93
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi94
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi95
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi96
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi97
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi98
                                            -r--r--r--  1 root root 960G Jun 10 14:24 vhdi99
                                            xoa@yul1-xoa-001v:~$ sudo fusermount -u /tmp/vhdimnt 
                                            

                                            Not working one:

                                            xoa@yul1-xoa-001v:~$ sudo vhdimount /run/xo-server/mounts/8118efa3-4968-4e87-96b1-7612e80222b8/xo-vm-backups/6b291d91-c7cc-2588-b90b-e53c2e8e21fd/vdis/2a7b744e-d238-4ad6-a032-541c111f72ae/77ef5fea-0040-4b8e-b1ef-1704712870c6/20200520T040005Z.vhd /tmp/vhdimnt/
                                            vhdimount 20170223
                                            
                                            xoa@yul1-xoa-001v:~$ sudo ls -lha /tmp/vhdimnt 
                                            total 0
                                            

                                             The fact that vhdiXX has only 2 digits in the filename is suspicious... there are exactly 99 vhdi files in the mount directory, and May 19 corresponds to 99 days since February 11th, when the first backup was made on this disk.

                                             The bug is therefore not on our side, unless there's a note somewhere in the documentation that only 99 delta backups can be made.

                                            Based on this:
                                            https://github.com/libyal/libvhdi/blob/58c6aa277fd2b245e8ea208028238654407f364c/vhditools/mount_file_entry.c#L675

                                            It looks like vhdimount doesn't like more than 99 backups.
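
                                             (A quick way to test the chain-depth hypothesis, assuming one linear chain of daily deltas since the first full backup; cd into the VDI directory used above, then:)

                                                 ls *.vhd | sort | awk '$1 <= "20200519T235959Z.vhd"' | wc -l   # expect 99 (last working day)
                                                 ls *.vhd | sort | awk '$1 <= "20200520T235959Z.vhd"' | wc -l   # expect 100 (first failing day)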
