Delta Backup Showing Success, No Delta Saved
-
Another update: it does happen on Synology via an SMB share, it just does not happen as often. Going to set up a test with NFS and with the new "Store backup as multiple data blocks" option.
-
Sounds like a client-side error if it happens with two different remotes on different systems, which probably explains why there's nothing in the server-side logs.
Which OS is XO running on? Could this be related: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=949394
Could you try mv on the SMB share without XO involved, to verify it's not related? Surely it should report something other than success if such a renaming fails.
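For example, a small loop like this exercises the same rename() call xo-server makes, with no XO involved (a minimal sketch; the mount path is a placeholder, point it at your actual SMB remote under /run/xo-server/mounts/):

// repro-rename.mjs: hammer rename() on the SMB mount to see if EACCES ever surfaces
import { writeFile, rename, unlink } from 'node:fs/promises'

const DIR = '/run/xo-server/mounts/<remote-uuid>' // placeholder: your SMB mount point

const a = `${DIR}/repro-a.vhd`
const b = `${DIR}/repro-b.vhd`
await writeFile(a, Buffer.alloc(4 * 1024 * 1024)) // 4 MiB dummy file
for (let i = 0; i < 10000; i += 1) {
  await rename(a, b) // an EACCES thrown here would reproduce the backup failure
  await rename(b, a)
}
await unlink(a)
console.log('20000 renames completed without error')
-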
That would indeed be interesting, to see whether it happens with NFS or not.
-
@ronivay Good point. I am running Debian Bookworm, but I could rebuild it on Ubuntu and see if that changes anything. I have tried the mv command but I am unable to reproduce the error that way. The inconsistency of the issue is what makes this harder to solve, because it works most of the time.
-
@lawrencesystems only the delta backups perform a rename (when merging older backups together).
I am also very interested in your findings, and in anything that needs to be done on the XO side.
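For context, the flow is roughly like this (an illustrative sketch only, not xo-server's actual code; mergeIntoParent is a hypothetical helper): the child delta's blocks are merged into the parent VHD, then the parent is renamed to take over the child's timestamped name, and that rename is the call failing with EACCES:

// merge-then-rename sketch (illustrative, not XO's real implementation)
import { rename } from 'node:fs/promises'

async function mergeDelta(parentPath, childPath) {
  await mergeIntoParent(parentPath, childPath) // hypothetical helper: fold the child's blocks into the parent
  // the merged parent takes over the child's name; this is the rename that
  // intermittently fails with EACCES on some SMB mounts
  await rename(parentPath, childPath)
}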
-
I set it up on NFS and had it running four times per hour for the last couple of days, and there has been no rename error, but I do have a new message. On Jul 31, 2022 at 03:00:00 PM, about 25 hrs after the job started, I got "incorrect backup size in metadata", and this message has been in every backup since then as the job is still running. The backup job still reports success, and I did a test restore of the deltas and they seem to restore just fine. The rename issue seems to only occur when using SMB. The last test I was thinking of doing is changing the job to "Delete First", as I think that would skip the rename process, but I am not sure. The full job log is below.
{ "data": { "mode": "delta", "reportWhen": "failure" }, "id": "1659186900001", "jobId": "bfbc352f-faef-437c-9de3-335a7a043aef", "jobName": "Blumira", "message": "backup", "scheduleId": "554d14f2-3613-407d-bdc1-85f9e64e3b3f", "start": 1659186900001, "status": "success", "infos": [ { "data": { "vms": [ "a66859dd-45e8-fae5-8830-ffd918780375" ] }, "message": "vms" } ], "tasks": [ { "data": { "type": "VM", "id": "a66859dd-45e8-fae5-8830-ffd918780375" }, "id": "1659186901009", "message": "backup VM", "start": 1659186901009, "status": "success", "tasks": [ { "id": "1659186902384", "message": "clean-vm", "start": 1659186902384, "status": "success", "warnings": [ { "data": { "path": "/xo-vm-backups/a66859dd-45e8-fae5-8830-ffd918780375/20220730T111523Z.json", "actual": 109490688, "expected": 43619237376 }, "message": "incorrect backup size in metadata" } ], "end": 1659186902488, "result": { "merge": false } }, { "id": "1659186902662", "message": "snapshot", "start": 1659186902662, "status": "success", "end": 1659186905027, "result": "76c7bb45-f70e-8009-5581-faaf2e2d1eea" }, { "data": { "id": "f1318c7a-5691-409e-baa2-2535f8e8a458", "isFull": false, "type": "remote" }, "id": "1659186905028", "message": "export", "start": 1659186905028, "status": "success", "tasks": [ { "id": "1659186905057", "message": "transfer", "start": 1659186905057, "status": "success", "end": 1659186909884, "result": { "size": 117881344 } }, { "id": "1659186910269", "message": "clean-vm", "start": 1659186910269, "status": "success", "end": 1659186910336, "result": { "merge": true } } ], "end": 1659186910369 } ], "end": 1659186910369 } ], "end": 1659186910370 }
-
@lawrencesystems this is a known bug; the PR is nearly finalized: https://github.com/vatesfr/xen-orchestra/pull/6331. It will be merged soon.
-
So we are back to a potentially weird behaviour in the SMB mount from Debian itself.
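One thing worth capturing is the exact cifs mount options Debian negotiated (vers=, cache=, and so on), for example by reading /proc/mounts (a minimal sketch):

// show-cifs-mounts.mjs: print the options of every cifs mount, to compare
// the negotiated SMB dialect/cache settings between working and failing setups
import { readFile } from 'node:fs/promises'

const mounts = await readFile('/proc/mounts', 'utf8')
for (const line of mounts.split('\n')) {
  const [, target, fstype, options] = line.split(' ')
  if (fstype === 'cifs') console.log(target, options)
}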
-
Yes, aside from the known bug listed above, the NFS mount never had an issue, and I had it running four backups per hour for a few days. The "Error: EACCES: permission denied, rename" so far seems to be exclusively an error that occurs when using an SMB mount (I have not tested S3 yet). I had talked to @tekwendell, who had some suggestions such as making sure "server multi channel support = no" was set (see the snippet below), but the error still occurs. The error does occur on both Synology and TrueNAS SMB shares.
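For reference, that setting goes in the [global] section of smb.conf on the NAS (the option name is real Samba configuration; whether it helps with this bug is unverified, it is just what was suggested):

# smb.conf on the SMB server: suggested workaround, not a confirmed fix
[global]
    server multi channel support = no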
Below are some logs of the event happening, with logs from both XO and TrueNAS (trinity in the logs). There are not really any errors that I can find on the TrueNAS side, as it keeps reporting that the files were opened and closed with status OK. Because this error does not occur at every run, it is not easy to troubleshoot.
trinity 1 2022-08-02T02:00:11.976688-04:00 trinity.local smbd 85507 - - xcpng opened file xo-vm-backups/a66859dd-45e8-fae5-8830-ffd918780375/vdis/bfbc352f-faef-437c-9de3-335a7a043aef/52506089-c186-4f0f-93ea-d7649fa7107b/20220802T040005Z.vhd read=Yes write=Yes (numopen=10)
trinity 1 2022-08-02T02:00:11.959478-04:00 trinity.local smbd 85507 - - smbd_dirptr_get_entry mask=[*] found xo-vm-backups/a66859dd-45e8-fae5-8830-ffd918780375/vdis/bfbc352f-faef-437c-9de3-335a7a043aef/52506089-c186-4f0f-93ea-d7649fa7107b/20220802T040005Z.vhd fname=20220802T040005Z.vhd (20220802T040005Z.vhd)
xo-pool-xen xo-server[875460]: path: '/run/xo-server/mounts/b0323f4d-1828-4ad1-b9bd-550f38ff6cfa/xo-vm-backups/a66859dd-45e8-fae5-8830-ffd918780375/vdis/bfbc352f-faef-437c-9de3-335a7a043aef/52506089-c186-4f0f-93ea-d7649fa7107b/20220802T040005Z.vhd',
xo-pool-xen xo-server[875460]: error: [Error: EACCES: permission denied, rename '/run/xo-server/mounts/b0323f4d-1828-4ad1-b9bd-550f38ff6cfa/xo-vm-backups/a66859dd-45e8-fae5-8830-ffd918780375/vdis/bfbc352f-faef-437c-9de3-335a7a043aef/52506089-c186-4f0f-93ea-d7649fa7107b/20220802T040005Z.vhd' -> '/run/xo-server/mounts/b0323f4d-1828-4ad1-b9bd-550f38ff6cfa/xo-vm-backups/a66859dd-45e8-fae5-8830-ffd918780375/vdis/bfbc352f-faef-437c-9de3-335a7a043aef/52506089-c186-4f0f-93ea-d7649fa7107b/20220802T041522Z.vhd'] {
trinity 1 2022-08-02T02:00:11.933092-04:00 trinity.local smbd 85507 - - xcpng closed file xo-vm-backups/a66859dd-45e8-fae5-8830-ffd918780375/vdis/bfbc352f-faef-437c-9de3-335a7a043aef/52506089-c186-4f0f-93ea-d7649fa7107b/20220802T040005Z.vhd (numopen=2) NT_STATUS_OK
trinity 1 2022-08-02T02:00:11.936665-04:00 trinity.local smbd 85507 - - xcpng closed file xo-vm-backups/a66859dd-45e8-fae5-8830-ffd918780375/vdis/bfbc352f-faef-437c-9de3-335a7a043aef/52506089-c186-4f0f-93ea-d7649fa7107b/20220802T040005Z.vhd (numopen=0) NT_STATUS_OK
trinity 1 2022-08-02T02:00:11.932835-04:00 trinity.local smbd 85507 - - xcpng opened file xo-vm-backups/a66859dd-45e8-fae5-8830-ffd918780375/vdis/bfbc352f-faef-437c-9de3-335a7a043aef/52506089-c186-4f0f-93ea-d7649fa7107b/20220802T040005Z.vhd read=No write=No (numopen=4)
trinity 1 2022-08-02T02:00:11.932929-04:00 trinity.local smbd 85507 - - smbd_do_setfilepathinfo: xo-vm-backups/a66859dd-45e8-fae5-8830-ffd918780375/vdis/bfbc352f-faef-437c-9de3-335a7a043aef/52506089-c186-4f0f-93ea-d7649fa7107b/20220802T040005Z.vhd (fnum 1820097187) info_level=65290 totdata=322
-
@lawrencesystems
On the latest TrueNAS Core 13.1 there were issues with SMB and permissions; I had to revert to 13.0.
https://www.truenas.com/community/threads/smb-not-working-after-upgrade-from-12-0-u6-to-13-0.102487/
Please check this...
-
@AlexanderK This occurs on Synology as well. Without any corresponding errors coming from the TrueNAS target regarding permissions, I don't see this as the same bug. It's also not occurring consistently, so there is some combination of conditions that has to occur for this bug to trigger, which is why I am testing so much.
-
@florent
Yes, I have the cifs-utils package installed.