Latest posts made by andrewreid

- RE: Understanding backup sizes

@olivierlambert Do tools exist to defragment VHDs?

- RE: Understanding backup sizes

Update: part of this is my own error: the NFS storage is on ZFS with compression; `du -Ash` shows a much larger amount of data, more consistent with what the backup report is showing. Still doesn't reconcile the backup size with the amount of data being stored on the VHD.
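For anyone comparing the same numbers, the check boils down to something like this (the mount point and dataset name below are placeholders for my setup):

```
# On-disk size (after ZFS compression) vs. apparent size of the backup folder
du -sh /mnt/nfs/xo-backups     # compressed size on disk
du -Ash /mnt/nfs/xo-backups    # apparent size, closer to what the backup report says

# ZFS's own accounting for the dataset backing the NFS share
zfs get compressratio,used,logicalused tank/xo-backups
```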
- Understanding backup sizes

I'm running XO `af5d8` and have been following along with the delta backup issues over the last week or so. This is a homelab, so I cleared the decks and started with a fresh backup last night: new remote, all old backups and snapshots cleared.

I'm trying to understand transfer sizes. I've configured a nightly delta backup job, and as you'd expect, the first backup was a full (given I'd started with a clean slate). I've got two remotes configured for this job: a local NFS share and a Backblaze S3 bucket:
- The e-mailed backup report states 258.3GiB transferred
- The XO backup log on the backup overview screen shows 129.15GiB transferred
- The output of `du -sh` on the NFS share has 39GiB in the folder
- The S3 bucket size according to Backblaze is 46.7GiB

I'm confused!
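(The S3 figure above is from the Backblaze dashboard; something like this should give an independent number over the S3 API, with the bucket name and endpoint as placeholders:)

```
# Total size of everything in the backup bucket, as seen over the S3 API
# (bucket name and B2 endpoint are placeholders)
aws s3 ls s3://xo-backups --recursive --human-readable --summarize \
  --endpoint-url https://s3.us-west-000.backblazeb2.com
```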
Equally confusing, looking into the detail of one of my container VMs, `cont-01`:

- Has a reported transfer size of 59.18GiB in the XO backup overview screen logs
- Has 3.8GiB reported disk usage according to `df -h`
- The associated VHD file on disk reports 189MiB from `du -h`

I'm missing something here. How do I make sense of this, and what is causing the disparity in sizes? Why would a VM which seemingly has a maximum of about 4GB of data result in a reported transfer of nearly 60GB, when the amount of data actually being stored on the remote is clearly a fraction of that?
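A sketch of how the VHD itself could be cross-checked, assuming `qemu-img` is available on the host; the SR path and UUIDs are placeholders:

```
# Virtual size vs. actual space consumed by the VHD backing cont-01
qemu-img info /run/sr-mount/<sr-uuid>/<vdi-uuid>.vhd
du -h /run/sr-mount/<sr-uuid>/<vdi-uuid>.vhd
```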
Is this a bug, a misunderstanding by me (almost certainly), or something else?
Cheers,
Andrew
- RE: Delta Backup Changes in 5.66 ?

I'm not sure if mine is an edge case or another "me too", but `d3071` has partially fixed the always-a-full-backup issue on my end.

I'm not sure exactly what's going on, but this morning I did wonder if it's to do with having two delta backup jobs (backing up to two different remotes), and whether something strange was happening with the snapshots between the two jobs.
For example, from my backup report for the first backup to a local NFS remote:

[screenshot]

Half an hour or so later, that same VM yields a full backup-sized transfer:

[screenshot]
(Which, incidentally, I don't understand how it's a 3.33GiB transfer, when the VHD size on the SR is ~111MB and the output of `df -h` on the VM shows 1.8GB used. I'm confused!)

I'll roll onto `feat-add-debug-backup` and see what that yields!
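In the meantime, to rule out the two jobs tripping over each other's snapshots, I figure listing what's hanging off the VMs on the host can't hurt; a minimal sketch using `xe`:

```
# List every snapshot in the pool and the VM it belongs to
xe vm-list is-a-snapshot=true params=uuid,name-label,snapshot-of,snapshot-time
```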
- RE: Backblaze B2 as a backup remote

@olivierlambert Ah! I did read your blog post. That's fantastic. I'm in no screaming hurry to implement S3 backups, but it's great that you've got some promising work in the pipeline!
- Backblaze B2 as a backup remote

I'm consistently seeing timeout errors at the six-minute mark when attempting to use Backblaze B2 as a remote for VM backups. This doesn't happen with smaller backups (i.e., metadata), only the larger backups of VMs.

The relevant error seems to boil down to (full log):

`Error calling AWS.S3.upload: HTTP connection has timed out`
I'm not sure if this is a bug per se, or a misconfiguration issue; I don't feel like it's a connectivity issue, as I'm able to make smaller backups without a problem. It's either a Backblaze thing, or there's an implementation problem in XO around the writing of large files.
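One way I can think of to separate the two would be to push a similarly large file to the same bucket from outside XO; a rough sketch with the aws CLI, where the bucket name, endpoint and test size are all placeholders:

```
# Create a file large enough to force a multipart upload, then copy it to B2
dd if=/dev/urandom of=/tmp/b2-test.bin bs=1M count=8192
aws s3 cp /tmp/b2-test.bin s3://xo-backups/test/b2-test.bin \
  --endpoint-url https://s3.us-west-000.backblazeb2.com
```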
I've read previous forum posts about the XO server running out of RAM and confirmed that's not happening here; I watched it closely during the backup job and it doesn't swap.
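(For anyone wanting to repeat that check, watching memory and swap on the XO VM for the length of the job is enough, e.g.:)

```
# Print memory and swap activity every 5 seconds while the backup job runs
vmstat 5
```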
Grateful for any pointers!
Cheers,
— andrew