Update: this seems to happen every time I reboot the server or, in particular, update XO. I get the same 3 errors and have to rebuild my backup schedules from scratch each time; once rebuilt, they run perfectly until the next update. It may be because I run it in Docker (I'm not sure), but I'd love to understand what causes this and whether there's any way to fix it without the rebuild. I don't really understand it and would appreciate any insight.
I get the following 3 problems every time.
EEXIST - this happens on my configuration backups.
Error: EEXIST: file already exists, open '/run/xo-server/mounts/f5bb7b65-ddea-496b-b193-878f19ba137c/xo-config-backups/d166d7fa-5101-4aff-9e9d-11fb58ec1694/20240819T140003Z/data.json'
ENOENT - this also happens on my configuration backups, on the same job.
Error: ENOENT: no such file or directory, rmdir '/run/xo-server/mounts/f5bb7b65-ddea-496b-b193-878f19ba137c/xo-pool-metadata-backups/d166d7fa-5101-4aff-9e9d-11fb58ec1694/ff3e6fa0-6552-e96a-989c-fc8db748d984/20240729T140002Z'
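Both of these errors suggest the remote's on-disk state no longer matches what xo-server expects after the restart: EEXIST means a timestamped run directory (or its data.json) was left behind, and ENOENT means a directory xo-server tried to clean up is already gone. A quick way to check is to list the per-run directories on the remote and look for leftovers. This is a hedged sketch, not an official XO procedure; it simulates the layout from the logs above in a temp directory, where `MOUNT` stands in for `/run/xo-server/mounts/<remote-uuid>`:

```shell
# Simulate the remote layout seen in the error messages (hypothetical paths;
# on a real system set MOUNT to /run/xo-server/mounts/<remote-uuid> instead).
MOUNT=$(mktemp -d)
mkdir -p "$MOUNT/xo-config-backups/d166d7fa-5101-4aff-9e9d-11fb58ec1694/20240819T140003Z"
touch "$MOUNT/xo-config-backups/d166d7fa-5101-4aff-9e9d-11fb58ec1694/20240819T140003Z/data.json"

# List the timestamped run directories two levels down. A directory left over
# from an interrupted run would collide (EEXIST) if a later run reuses the name.
find "$MOUNT/xo-config-backups" -mindepth 2 -maxdepth 2 -type d
```

On a real remote you would run only the `find` against the actual mount path and compare what's on disk with what the XO UI thinks exists, before deleting anything.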
LOCKFILE HELD - this happens on my VM incremental backups. This log is from an earlier run, but I expect the next run to fail the same way since I've just rebooted.
>> the writer IncrementalRemoteWriter has failed the step writer.beforeBackup() with error Lock file is already being held. It won't be used anymore in this job execution.
Retry the VM backup due to an error
the writer IncrementalRemoteWriter has failed the step writer.beforeBackup() with error Lock file is already being held. It won't be used anymore in this job execution.
Start: 2024-06-29 01:01
End: 2024-06-29 01:41
Duration: 41 minutes
Error: Lock file is already being held
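If the container is killed while a backup is running, an on-disk lock file can survive the restart and the next run will refuse to take it. The sketch below is an assumption-heavy illustration, not a documented XO command: I'm assuming the lock is a file whose name contains "lock" somewhere under the remote mount (the `.lock` name and `xo-vm-backups/<vm-uuid>` layout here are hypothetical). It simulates a stale lock in a temp directory and shows how to locate it:

```shell
# Hypothetical layout: MOUNT stands in for /run/xo-server/mounts/<remote-uuid>.
MOUNT=$(mktemp -d)
mkdir -p "$MOUNT/xo-vm-backups/example-vm-uuid"
touch "$MOUNT/xo-vm-backups/example-vm-uuid/.lock"   # simulate a leftover lock

# Search the mount for anything lock-like left behind by an unclean stop.
find "$MOUNT" -name '*lock*' -type f
```

On the real mount, run only the `find`, and only consider removing a lock file after confirming xo-server is fully stopped and no backup job is actually running, so it really is stale.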
I only have one schedule for config and one for VMs. The files for the config backup don't change, and I never reboot mid-backup, yet restarting still completely breaks the chain. For the VMs there is only the one schedule, so no other job should ever be holding the lock file. Something about restarting the container causes the issue; it feels like state is being cached somewhere but not flushed on restart, leaving some sort of zombie file(s) behind.
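One mitigation worth trying, purely as a hedge against unclean shutdowns rather than a confirmed fix: give the container more time to stop gracefully, so xo-server can release locks and unmount its remotes before Docker sends SIGKILL (the default grace period is only 10 seconds). If you use Compose, that's the `stop_grace_period` setting; the service name and image below are placeholders for whatever you actually run:

```yaml
# docker-compose fragment (hypothetical service name and image)
services:
  xoa:
    image: your-xo-image          # placeholder for your actual XO image
    stop_grace_period: 60s        # time for xo-server to shut down cleanly
```

The equivalent for a plain `docker stop` is the `-t`/`--time` flag, e.g. `docker stop -t 60 <container>`. If the errors stop recurring after clean shutdowns, that would point at interrupted jobs (not the update itself) as the cause.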