Best posts made by aknisly
-
RE: 'Smart' Backup Retention Scheme
@Darkbeldin Done. Thanks for the reply. It took a while to make sure I was understanding it, but I think I've got the hang of it now.
Latest posts made by aknisly
-
RE: XO (Delta) Backup to BackBlaze B2 with Error: getaddrinfo EAI_AGAIN--Workaround
@olivierlambert Well, I spoke (wrote) a bit too soon! That did indeed resolve the problem. Thanks for the reply!
-
XO (Delta) Backup to BackBlaze B2 with Error: getaddrinfo EAI_AGAIN--Workaround
This is primarily informational; I wish I had time to work with the devs on troubleshooting if they wanted that. I'm reporting an issue similar to one in a post from several years ago (backblaze b2 / amazon s3 as remote in xoa). My error message was
getaddrinfo EAI_AGAIN s3.us-west-004.backblazeb2.com
I had successfully backed up the initial full with 9 VMs, mostly small but two under 500 GB. The following attempts at a delta backup failed, though. I found the aforementioned thread, bumped my XO VM's RAM to 16 GB, and voilà!
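In case it helps anyone else hitting this: EAI_AGAIN is getaddrinfo reporting a temporary name-resolution failure, so it's worth first confirming that the endpoint resolves at all from inside the XO VM. A quick check (plain Python, nothing XO-specific; the hostname is just the one from my error above):

import socket

# Resolve the remote's S3 endpoint through the system resolver, the same
# layer Node's getaddrinfo uses; a transient failure here is what surfaces
# as EAI_AGAIN / 'Temporary failure in name resolution'.
try:
    infos = socket.getaddrinfo("s3.us-west-004.backblazeb2.com", 443)
    print(f"resolved to {len(infos)} address(es), e.g. {infos[0][4][0]}")
except socket.gaierror as err:
    print(f"resolution failed: {err}")

In my case resolution was fine once the VM had more headroom, which suggests memory pressure rather than a DNS misconfiguration.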
I'm running XO Community, commit 5c0b2, XCP-ng v8.2.1.
I intend to make a note in the documentation, unless someone advises otherwise.
-
RE: 'Smart' Backup Retention Scheme
@Darkbeldin Done. Thanks for the reply. It took a while to make sure I was understanding it, but I think I've got the hang of it now.
-
RE: 'Smart' Backup Retention Scheme
@julien-f Ah, I've known about the multiple-schedules feature, but I hadn't (still haven't) wrapped my mind around the implications. Are you saying that multiple schedules in one backup job work off a single 'copy' of the backup? If so, then all we really need is a little documentation describing the feature--which I would be happy to write.
-
'Smart' Backup Retention Scheme
I continue to be impressed with the functionality of XCP-ng/XO; even (especially?) for a small school like ours, it is proving very useful. DR/Backups are, not surprisingly, a major consideration for us. I've latched onto a retention strategy from Duplicati--they call it 'Smart Backup Retention'--that optimizes long-term retention like this:
Assume you run your backup every day, then you have 365 backups during a year which can occupy a lot of storage space. Backup retention will delete old backups in a way that you keep [fewer] backups the older they get. For instance, you can have 7 backups for the last days, 4 backups for the last month, 12 backups for the last year. And all this is happening automatically.
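To make that concrete, here is a rough sketch of the selection logic as I understand it (my own illustration in Python, not Duplicati's actual code), using the tier counts from the quote: bucket existing backups by day, ISO week, and month, keep the newest backup in each of the most recent buckets per tier, and prune the rest.

from datetime import date, timedelta

def smart_retention(backup_dates, dailies=7, weeklies=4, monthlies=12):
    # Keep the newest backup in each of the most recent `dailies` days,
    # `weeklies` ISO weeks, and `monthlies` months; everything else
    # becomes a candidate for pruning.
    newest_first = sorted(backup_dates, reverse=True)
    keep = set()

    def take(bucket_of, limit):
        seen = set()
        for d in newest_first:
            bucket = bucket_of(d)
            if bucket not in seen:
                seen.add(bucket)
                keep.add(d)
                if len(seen) == limit:
                    break

    take(lambda d: d, dailies)                     # one per calendar day
    take(lambda d: d.isocalendar()[:2], weeklies)  # one per (year, ISO week)
    take(lambda d: (d.year, d.month), monthlies)   # one per (year, month)
    return keep

# A year of daily backups collapses to roughly 7 + 4 + 12 restore points
# (minus overlaps between tiers), instead of 365.
dates = [date(2023, 6, 1) - timedelta(days=i) for i in range(365)]
kept = smart_retention(dates)
print(f"keeping {len(kept)} of {len(dates)} backups")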
I'm attempting to approximate this strategy with Continuous Replication and Delta Backup, but it's rather costly in storage (CR) and bandwidth (DB): I have two CR jobs, one hourly and one nightly, plus two DB jobs (to BackBlaze B2), one weekly and one monthly, with Rolling Snapshots included in each. Obviously, this means multiple full backups (or snapshots) where one would suffice if we had a retention mechanism like the one Duplicati designed. Is there a reasonable way to implement this? I can't help with coding, and our school can't help much financially, but we'd be willing to put a small bounty on this.
Regards!