Posts
-
RE: Can .keeper files be deleted from the backup share?
they are created to ensure the samba mount is not removed by the OS or another thread on XO
I would not delete them, especially since they are 0-byte files
-
RE: Feedback on immutability
@rtjdamen said in Feedback on immutability:
@florent so this does mean it will never work when a forever incremental is used?
you can't have an immutable forever-incremental backup without an infinite chain length, and an infinite immutability duration.
It may be possible only if we relax the constraints.
The immutability script could release the immutability and merge the disks, but that means: the immutability will be lifted from time to time, the responsibilities of the immutability script will be greater, and we'll need a way to track the VHDs to merge and to transmit that information to the immutability script
-
RE: Feedback on immutability
@rtjdamen for the immutability to be useful, the full chain must be immutable and must never fall out of immutability
the merge process can't lift / put back the immutability, and increasing synchronization between processes would widen the attack surface.
the immutability duration must be greater than or equal to 2 times the full backup interval minus 1
the retention must be strictly longer than the immutability. For example, if you have a full backup interval of 7, a retention of 14 and an immutability duration of 13: key (full) backups are K, deltas are D, the oldest backup is on the left, and only the 13 most recent backups are immutable.
KDDDDDDKDDDDDD worst case: the oldest K is mutable, so only one full chain is protected
KDDDDDKDDDDDDK
KDDDDKDDDDDDKD
KDDDKDDDDDDKDD
KDDKDDDDDDKDDD
KDKDDDDDDKDDDD
KKDDDDDDKDDDDD best case: almost 2 full chains protected
-
RE: Feedback on immutability
@rtjdamen great work
- the immutability duration is per repository, to limit the attack surface to the bare minimum
- nothing can really be protected in software against the root user. This is where physical write-once devices win
- it should ignore the cache.json.gz, but the JSON files containing the backup metadata are protected along with the disk data. Same for the pool metadata / XO config
An additional note: to ensure that an incremental backup is really protected for n days, you must have
- a full backup interval smaller than n
- a retention greater than 2n - 1
That way an attacker won't be able to modify the base disk used for restore
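To make these rules concrete, here is a small, purely illustrative simulation (the function and the model are mine, not XO code) using the earlier example of a full backup interval of 7, a retention of 14 and an immutability duration of 13: it slides the retention window and counts how many chains (a K and all of its following Ds) are entirely immutable.

```python
def fully_immutable_chains(full_interval, retention, immutability, offset=0):
    """Count chains (a key backup K and its following deltas D) in which
    every backup is still under immutability.

    Model: the retained window holds `retention` backups, oldest first;
    a backup is a key when its absolute position is a multiple of
    `full_interval`; only the `immutability` most recent backups are locked.
    """
    protected = 0
    chain_locked = None  # None until the first K inside the window
    for i in range(retention):
        is_key = (offset + i) % full_interval == 0
        locked = i >= retention - immutability
        if is_key:
            if chain_locked:  # close the previous chain if fully locked
                protected += 1
            chain_locked = locked
        elif chain_locked is not None:
            chain_locked = chain_locked and locked
    if chain_locked:
        protected += 1
    return protected

# Full backup interval 7, retention 14, immutability 13 (= 2 * 7 - 1):
counts = [fully_immutable_chains(7, 14, 13, offset) for offset in range(7)]
print(min(counts), max(counts))  # worst case: 1 full chain; best case: 2
```

The worst day of the rotation still keeps one complete chain immutable, which is exactly what the 2n - 1 bound is for.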
-
RE: Backup from replicas possible?
@flakpyro for now there is no tag selector, but you can now select the list of VMs to be replicated
-
RE: New GFS strategy
The rule is "any backup that matches a condition to be kept is kept": you can combine LTR retention and retention by number.
So you can keep the last 3 backups and also have a GFS strategy. It should remove most of the use cases for backups with multiple schedules.
First point: a deleted backup is not recoverable, and this is a new feature, so test it progressively, especially on critical backups, in case we missed anything; we can iterate on the feature and improve it.
We'll write more documentation shortly
-
RE: Designing a backup strategy
We have some interesting things in the works, but I think a chain of 84 snapshots for the CR/DR is quite long.
You can mitigate the risk of chain corruption by setting the full backup interval; this will transfer a full from time to time
-
RE: CBT: the thread to centralize your feedback
@flakpyro said in CBT: the thread to centralize your feedback:
This is a completely different 5 host pool backed by a Pure storage array with SRs mounted via NFSv3, migrating a VM between hosts results in the same issue.
Before migration:
[01:41 xcpng-prd-03 b04d9910-8671-750f-050e-8b55c64fbede]# cbt-util get -c -n 83035854-b5a9-4f7e-869f-abe43ddc658d.cbtlog
e28065ff-342f-4eae-a910-b91842dd39ca
After migration:
[01:41 xcpng-prd-03 b04d9910-8671-750f-050e-8b55c64fbede]# cbt-util get -c -n 83035854-b5a9-4f7e-869f-abe43ddc658d.cbtlog
00000000-0000-0000-0000-000000000000
I don't think I have anything "custom" running that would be causing this, so no idea why it's happening, but it's happening on multiple pools for us.
This is a very interesting clue, and we will investigate it with Damien.
There are a lot of edge cases that can happen (a lying network/drive/...),
and most of the time xcp/xapi are self-healing, but sometimes XO has to do a little cleanup work. The CBT should be reset correctly after storage migration.
We'll add the async call to enable/disable CBT since it could lead to a bogus state, and maybe a more in-depth cleaning of CBT after a "VDI not related" error.
-
RE: CBT: the thread to centralize your feedback
@rtjdamen we found a clue with @Bastien-Nollet: there was a race condition between the timeframe allowed to enable CBT and the snapshot, leading to a snapshot taken before CBT was enabled, and thus failing to compute the list of changed blocks correctly at the next backup.
The fix is deployed, and we'll see tonight. If everything goes well, tonight's backup will be a full, but the disks will keep CBT enabled. And the next night, we'll have a delta.
If everything is OK, it will be released in a second patch (5.100.2)
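The race can be sketched like this; a minimal, purely illustrative model where `enable_cbt` and `take_snapshot` are hypothetical stand-ins, not real XO or XAPI calls:

```python
import asyncio

# Minimal sketch of the race described above: if the snapshot is not
# strictly ordered after CBT activation, it may be taken before CBT is
# on, so it carries no CBT log and the next backup cannot compute the
# changed-block list.
async def snapshot_with_cbt(vdi, enable_cbt, take_snapshot):
    await enable_cbt(vdi)            # must fully complete first
    return await take_snapshot(vdi)  # only then take the snapshot
```

Awaiting CBT activation before snapshotting removes the window in which the snapshot could run first.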
-
RE: CBT: the thread to centralize your feedback
this branch (already deployed on @rtjdamen's systems) adds better handling of hosts that take too much time to compute the changed block list:
https://github.com/vatesfr/xen-orchestra/pull/8120
It will be released in a patch this week.
I am still investigating an error that still occurs occasionally: XapiError: SR_BACKEND_FAILURE_460(, Failed to calculate changed blocks for given VDIs. [opterr=Source and target VDI are unrelated], )
-
RE: NBD used even when disabled
@Tristis-Oris do you have preferNbd in your config.toml? It was the first way to enable NBD, but it is not recommended anymore
-
RE: NBD used even when disabled
@Tristis-Oris I will try to reproduce it here ASAP and will keep you informed
-
RE: Immutable S3 Backups (Backblaze) and Merging; A Little Confused
@planedrop
with your setup, you'll have 15 real days of immutability at worst.
here is a little schema with key backups (K, full) and deltas (d). A chain of backups is a key backup and its delta descendants; mutable backups are noted with a point:
K.d.dKdddddKd // in this case the first chain is not protected, because parts of the chain are mutable, but the 2 most recent chains are
K.KdddddKddd // 2 more backups: here we have the longest immutable chains
K.dddddKdddd // here is the critical part, with the shortest immutable chains: only the last one
K.ddddKdddd // the protected chain grows
To ensure a usable immutability of n days, you must have a full backup retention of at least n, and a backup retention of at least 2n.
I am not really sure about the error message that the UI shows on Backblaze, but the backup will be merged and deleted as soon as possible, once the object lock is lifted.
You can trust Backblaze on their object lock, and you can check the real number of backups stored by looking at the restore tab of XO
-
RE: Problem with differential restore
@frank-s said in Problem with differential restore:
So my guess is it will work if I choose not to delete the snapshot??? If that is correct it is a pity as differential restore is very fast compared to regular restore. Differential restore, for instance, would be a brilliant way to recover quickly from a ransomware attack. Also, deleting the snapshot makes coalescing much faster. Am I correct in my assumptions? Is there a way it could be made to work without the snapshot?
Thank you,
Frank.
differential restore works by not transferring the data that the snapshot and the backup have in common, without reading any of it.
It clones the last snapshot, and then reverts the blocks changed between the older backup and the snapshot. In theory, if you run a delta backup without destroying the data, it should provide the anchor to make this work.
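A minimal sketch of that idea, with block maps as plain Python dicts (illustrative only, not XO's implementation): clone the snapshot, then rewrite only the blocks that differ between the backup and the snapshot.

```python
def differential_restore(backup_blocks, snapshot_blocks):
    """Restore a backup using an existing snapshot as the anchor.

    Only the blocks that differ between the backup and the snapshot are
    written; identical blocks are reused from the snapshot clone, so
    they are neither read nor transferred.
    """
    restored = dict(snapshot_blocks)  # clone the latest snapshot
    for index, data in backup_blocks.items():
        if snapshot_blocks.get(index) != data:
            restored[index] = data  # revert only the changed blocks
    # blocks in the snapshot but absent from the backup must be dropped
    for index in set(snapshot_blocks) - set(backup_blocks):
        del restored[index]
    return restored
```

The cost is proportional to the drift between snapshot and backup, which is why it is so much faster than a full restore.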
-
RE: Schedules not showing any task (progress) in XO tasks
@manilx said in Schedules not showing any task (progress) in XO tasks:
checkbasevdi must be called before updateUuidAndChain for incremental backups
hi manilx, could you post the full JSON?
It is the "download log" button, with the downward arrow, at the top of the backup log window.
-
RE: Question on backup sequence
the backup chaining does not change the way we handle settings (retention or health check), to ease the conversion to backup sequences.
The main change is that the schedules should be disabled if you only want them to run in sequences
-
RE: Backup Fail: Trying to add data in unsupported state
Hi,
what is the size of the failing VMs? Is there anything in the syslog before the cipher error message?
To be fair, uploading a full backup (> 50 GB) without knowing the size first is full of hurdles, and the xapi doesn't tell the size of the exports. Incremental backup with block storage completely circumvents this by uploading separate blocks of known size.
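A sketch of the block approach (illustrative only, the block size is an arbitrary choice): instead of streaming one object of unknown total length, the data is cut into fixed-size blocks, so every upload has a size known up front.

```python
import io

BLOCK_SIZE = 1024 * 1024  # 1 MiB, purely illustrative

def iter_blocks(stream, block_size=BLOCK_SIZE):
    """Yield fixed-size blocks from a stream of unknown total length.

    Every block except possibly the last has a known size, so each one
    can be uploaded with its length declared up front instead of
    pushing one huge object of unknown size.
    """
    while True:
        block = stream.read(block_size)
        if not block:
            return
        yield block
```

Each yielded block can then be stored as its own object, which also makes retries cheap: only the failed block is re-sent.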
Regards
-
RE: CBT: the thread to centralize your feedback
@rtjdamen it's still fresh, but on the other hand, the worst that can happen is falling back to a full backup. So for now I would not use it on the bigger VMs (multiple terabytes).
We are sure that it will be a game changer on thick provisioning (because a snapshot costs the full virtual size) or on fast-changing VMs, where coalescing an older snapshot is a major hurdle. If everything goes well it will be on stable by the end of July, and we'll probably enable it by default on new backups in the near future
-
RE: CBT: the thread to centralize your feedback
dataDestroy will be enable-able (not sure if that's really a word) today; in the meantime, the latest commits in the fix_cbt branch add an additional check on dom0 connect and more error handling.
Please note that the metadata snapshot won't be visible in the UI since it's not a VM snapshot, but only the metadata of the VDI snapshots.