File Restore: error scanning disk for recent delta backups but not old ones
-
Hi Olivier,
I just tried with XOA on the latest release channel and I have the exact same behaviour in the web interface; however, in the logs I now see the following:
Jun 09 09:39:29 xoa xo-server[16919]: [Error: ENOENT: no such file or directory, rmdir '/tmp/tmp-16919P2pLpmpYmqD2'] {
Jun 09 09:39:29 xoa xo-server[16919]: errno: -2,
Jun 09 09:39:29 xoa xo-server[16919]: code: 'ENOENT',
Jun 09 09:39:29 xoa xo-server[16919]: syscall: 'rmdir',
Jun 09 09:39:29 xoa xo-server[16919]: path: '/tmp/tmp-16919P2pLpmpYmqD2'
Jun 09 09:39:29 xoa xo-server[16919]: }
Jun 09 09:39:29 xoa xo-server[16919]: 2020-06-09T13:39:29.108Z xo:api WARN admin@admin.net | backupNg.listPartitions(...) [11s] =!> Error: EIO: i/o error, scandir '/tmp/tmp-16919P2pLpmpYmqD2'
-
Does it ring any bells, @julien-f?
Maybe a test using VHD utils would be interesting, to see if the files are readable and correct.
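For example, something like this could be a first sanity check (just a sketch, not from this thread: it assumes the blktap vhd-util tool is available wherever the backup VHDs can be read, and the path components are placeholders):
# report structural problems in the VHD footer/header/BAT, if any
sudo vhd-util check -n /path/to/remote/xo-vm-backups/<vm-uuid>/vdis/<job-uuid>/<vdi-uuid>/<timestamp>.vhd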
-
That's a weird error; does it happen every time, or is it more or less random?
-
Hi,
It is not random.
It happens every time with the same backups from May 20 and newer on this particular VM disk (I have one every day), but it works with May 19 and older. On other VMs I have a different threshold date, but the same behaviour. The files on the NAS are all there. I tried a full backup and it worked.
The backups are on an NFS share on a NAS. The remote is reachable (I've run the test).
-
Hi all,
I see a similar behaviour to @mtango's on xo-server 5.60.0 / xo-web 5.60.0, on a freshly installed Ubuntu 20.04 LTS.
I'm able to choose the disk and then the partition.
If I choose a boot partition with ext4, I can see all files.
If I choose a partition with LVM, I get an error, as you can see in the logfile:
Jun 9 13:59:46 xoa-server systemd[1]: Starting LVM event activation on device 7:3...
Jun 9 13:59:46 xoa-server lvm[15934]: pvscan[15934] PV /dev/loop3 online, VG cl_server2backup is complete.
Jun 9 13:59:46 xoa-server lvm[15934]: pvscan[15934] VG cl_server2backup run autoactivation.
Jun 9 13:59:46 xoa-server lvm[15934]: PVID Mq7sxO-2CQu-1UJp-ovp0-PnaR-Aumk-gRzgkY read from /dev/loop3 last written to /dev/xvda2.
Jun 9 13:59:46 xoa-server lvm[15934]: pvscan[15934] VG cl_server2backup not using quick activation.
Jun 9 13:59:46 xoa-server lvm[15934]: 2 logical volume(s) in volume group "cl_server2backup" now active
Jun 9 13:59:47 xoa-server systemd[1]: Finished LVM event activation on device 7:3.
Jun 9 13:59:47 xoa-server systemd[1]: Started /sbin/lvm pvscan --cache 7:3.
Jun 9 13:59:47 xoa-server systemd[977]: tmp-tmp\x2d12262nUuODahdRdV9.mount: Succeeded.
Jun 9 13:59:47 xoa-server systemd[1]: tmp-tmp\x2d12262nUuODahdRdV9.mount: Succeeded.
Jun 9 13:59:47 xoa-server lvm[15986]: pvscan[15986] device 7:3 /dev/loop3 excluded by filter.
Jun 9 13:59:47 xoa-server systemd[1]: Stopping LVM event activation on device 7:3...
Jun 9 13:59:47 xoa-server systemd[1]: run-rbb9f6e183716490a87f59a7acc3a6db1.service: Succeeded.
Jun 9 13:59:47 xoa-server lvm[15989]: pvscan[15989] device 7:3 /dev/loop3 excluded by filter.
Jun 9 13:59:47 xoa-server systemd[1]: lvm2-pvscan@7:3.service: Succeeded.
Jun 9 13:59:47 xoa-server systemd[1]: Stopped LVM event activation on device 7:3.
Jun 9 13:59:52 xoa-server systemd[1]: Starting LVM event activation on device 7:3...
Jun 9 13:59:52 xoa-server lvm[16001]: pvscan[16001] PV /dev/loop3 online, VG cl_server2backup is complete.
Jun 9 13:59:52 xoa-server lvm[16001]: pvscan[16001] VG cl_server2backup run autoactivation.
Jun 9 13:59:52 xoa-server lvm[16001]: PVID Mq7sxO-2CQu-1UJp-ovp0-PnaR-Aumk-gRzgkY read from /dev/loop3 last written to /dev/xvda2.
Jun 9 13:59:52 xoa-server lvm[16001]: pvscan[16001] VG cl_server2backup not using quick activation.
Jun 9 13:59:52 xoa-server lvm[16001]: 2 logical volume(s) in volume group "cl_server2backup" now active
Jun 9 13:59:52 xoa-server systemd[1]: Finished LVM event activation on device 7:3.
Jun 9 13:59:53 xoa-server kernel: [ 5776.274595] XFS (loop4): Mounting V5 filesystem in no-recovery mode. Filesystem will be inconsistent.
Jun 9 13:59:53 xoa-server systemd[977]: tmp-tmp\x2d12262c7TAY6NWwQG5.mount: Succeeded.
Jun 9 13:59:53 xoa-server systemd[1]: tmp-tmp\x2d12262c7TAY6NWwQG5.mount: Succeeded.
Jun 9 13:59:53 xoa-server kernel: [ 5776.295448] XFS (loop4): Unmounting Filesystem
Jun 9 13:59:53 xoa-server systemd[1]: Started /sbin/lvm pvscan --cache 7:3.
Jun 9 13:59:53 xoa-server systemd[1]: Stopping LVM event activation on device 7:3...
Jun 9 13:59:53 xoa-server lvm[16069]: pvscan[16069] device 7:3 /dev/loop3 excluded by filter.
Jun 9 13:59:53 xoa-server lvm[16070]: pvscan[16070] device 7:3 /dev/loop3 excluded by filter.
Jun 9 13:59:53 xoa-server systemd[1]: lvm2-pvscan@7:3.service: Succeeded.
Jun 9 13:59:53 xoa-server systemd[1]: Stopped LVM event activation on device 7:3.
Jun 9 13:59:53 xoa-server systemd[1]: run-r4d20647bef3942d2a439dcf7d9b50d9b.service: Succeeded.
Jun 9 13:59:53 xoa-server systemd[1]: tmp-tmp\x2d12262bv6erTqqwfL9.mount: Succeeded.
Jun 9 13:59:53 xoa-server systemd[977]: tmp-tmp\x2d12262bv6erTqqwfL9.mount: Succeeded.
Jun 9 13:59:53 xoa-server xo-server[12262]: 2020-06-09T13:59:53.545Z xo:api WARN admin@admin.net | backupNg.listFiles(...) [879ms] =!> TypeError [ERR_INVALID_ARG_TYPE]: The "path" argument must be of type string. Received undefined
Any ideas?
THX2all
-
Same with XOA?
-
Hi,
I don't think it is the same failure as @daKju's, because in my case the failure happens earlier, in backupNg.listPartitions, whereas @daKju's is in backupNg.listFiles.
I'm not sure which function fails after that, since we're not a JavaScript shop, but I'd start looking in the file file-restore-ng.js (on master), starting at line 303:
303 const diskPath = handler._getFilePath('/' + diskId)
304 const mountDir = await tmpDir()
305 $defer.onFailure(rmdir, mountDir)
306
307 await execa('vhdimount', [diskPath, mountDir])
[...]
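If extra logging is wanted around that call, a minimal sketch could look like the following (purely hypothetical, not the actual fix; it only assumes the execa call shown above):
// hypothetical: capture vhdimount's stderr when the mount fails
try {
  await execa('vhdimount', [diskPath, mountDir])
} catch (error) {
  console.error('vhdimount failed:', error.stderr || error)
  throw error
}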
-
@julien-f will take a look when he can; he's pretty busy right now. In the meantime, feel free to dig deeper into the issue on your side if you can.
-
If you want me to add additional log messages, I can do that. Just need an efficient method to:
- modify the source,
- re-compile the modified source,
- execute.
Basically, I'm not at all familiar with the build system.
I do want to help since solving this issue is very important for my company.
Thanks
-
It's all in the install doc: https://xen-orchestra.com/docs/installation.html#fetching-the-code
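For reference, the basic flow from that doc looks roughly like this (a sketch; double-check the exact commands against the doc for your version):
git clone -b master https://github.com/vatesfr/xen-orchestra
cd xen-orchestra
yarn        # install dependencies
yarn build  # build all the packages
# then restart xo-server so it runs the rebuilt code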
Also, if it's really important for your company, you might be interested in professional support for it.
-
Thanks, but exactly which command incrementally builds the code, if only a single file is modified? yarn, or yarn build? The build process is quite long, and since I am not familiar with JavaScript I need to iterate a lot.
Let me rephrase that: I want to help, since I am reproducing and investigating a bug that is very important for both our companies.
-
Thing is, we don't have similar reports for now, so it's possibly something on your side. That's why, if it's very urgent for you, having pro support makes sense, but it's your call.
@julien-f will give you some hints on rebuilding only what you need.
-
We are considering paid support. How much time do you estimate fixing such a problem would take?
-
The first step before fixing an issue is being able to reproduce it. If it's linked to a data issue on those VHDs themselves (or during export), then it's not a "bug" per se.
That's why it's a bit hard to give you detailed input.
Alternatively, you can use vhdimount on your side to access those VHDs and see if it works or not. @julien-f will give you the commands when he's available.
Edit: you can open a support ticket on xen-orchestra.com. Also open a support tunnel in your XOA so we can take a look remotely.
-
Following your suggestion to use vhdimount, I tried to mount VHD files that work and ones that don't:
Working version:
sudo vhdimount /run/xo-server/mounts/8118efa3-4968-4e87-96b1-7612e80222b8/xo-vm-backups/6b291d91-c7cc-2588-b90b-e53c2e8e21fd/vdis/2a7b744e-d238-4ad6-a032-541c111f72ae/77ef5fea-0040-4b8e-b1ef-1704712870c6/20200519T040005Z.vhd /tmp/vhdimnt/
vhdimount 20170223
xoa@yul1-xoa-001v:~$ sudo ls -lah /tmp/vhdimnt
total 36K
dr-xr-xr-x 2 root root 0 Jun 10 14:24 .
drwxrwxrwt 11 root root 32K Jun 10 14:24 ..
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi1
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi10
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi11
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi12
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi13
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi14
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi15
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi16
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi17
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi18
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi19
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi2
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi20
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi21
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi22
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi23
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi24
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi25
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi26
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi27
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi28
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi29
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi3
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi30
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi31
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi32
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi33
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi34
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi35
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi36
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi37
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi38
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi39
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi4
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi40
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi41
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi42
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi43
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi44
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi45
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi46
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi47
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi48
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi49
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi5
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi50
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi51
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi52
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi53
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi54
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi55
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi56
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi57
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi58
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi59
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi6
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi60
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi61
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi62
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi63
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi64
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi65
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi66
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi67
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi68
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi69
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi7
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi70
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi71
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi72
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi73
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi74
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi75
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi76
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi77
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi78
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi79
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi8
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi80
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi81
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi82
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi83
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi84
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi85
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi86
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi87
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi88
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi89
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi9
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi90
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi91
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi92
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi93
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi94
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi95
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi96
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi97
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi98
-r--r--r-- 1 root root 960G Jun 10 14:24 vhdi99
xoa@yul1-xoa-001v:~$ sudo fusermount -u /tmp/vhdimnt
Not working one:
xoa@yul1-xoa-001v:~$ sudo vhdimount /run/xo-server/mounts/8118efa3-4968-4e87-96b1-7612e80222b8/xo-vm-backups/6b291d91-c7cc-2588-b90b-e53c2e8e21fd/vdis/2a7b744e-d238-4ad6-a032-541c111f72ae/77ef5fea-0040-4b8e-b1ef-1704712870c6/20200520T040005Z.vhd /tmp/vhdimnt/
vhdimount 20170223
xoa@yul1-xoa-001v:~$ sudo ls -lha /tmp/vhdimnt
total 0
The fact that vhdiXX has only 2 digits in the filename is suspicious... there are exactly 99 vhdi files in the mount directory. And May 19 is the 99th daily backup since February 11th, when the first backup was made on this disk.
The bug is therefore not on our side, unless there's a note somewhere in the documentation that only 99 delta backups can be made.
Based on this:
https://github.com/libyal/libvhdi/blob/58c6aa277fd2b245e8ea208028238654407f364c/vhditools/mount_file_entry.c#L675
It looks like vhdimount doesn't like more than 99 backups.
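As a quick cross-check (just a sketch, assuming each retained delta is one .vhd file in the VDI's directory on the remote, which is roughly what vhdimount has to expose as vhdiN entries):
# count the VHDs in this disk's backup chain
ls /run/xo-server/mounts/8118efa3-4968-4e87-96b1-7612e80222b8/xo-vm-backups/6b291d91-c7cc-2588-b90b-e53c2e8e21fd/vdis/2a7b744e-d238-4ad6-a032-541c111f72ae/77ef5fea-0040-4b8e-b1ef-1704712870c6/*.vhd | wc -l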
-
@mtango said in File Restore: error scanning disk for recent delta backups but not old ones:
It looks like vhdimount doesn't like more than 99 backups.
Indeed, it's a known issue: https://github.com/vatesfr/xen-orchestra/issues/4032
-
If we pay for support, are there any feasible options to remove this limitation in the next few days? Our target is to be able to have at least 365 backups.
Since it is very easy to reproduce, you shouldn't need access set up, I assume.
-
It probably won't be easy to remove this limitation, especially in the next few days.
If you can use the full backup interval setting, it should remove this issue.
-
Indeed, the full backup interval seems the right option.
-
It is not clear to me whether I can simply activate that option right now, in my current state, with 121 days of backups where everything past the 99th doesn't work. Will vhdimount be needed to perform the first full backup or not?
Also, we're planning on setting that interval to somewhere around 60 (2 months). Could we run into this open issue: https://github.com/vatesfr/xen-orchestra/issues/4987 ?