@andrewreid Yes, I can't wait to share it with everybody.
Best posts made by florent
-
RE: Backblaze B2 as a backup remote
-
RE: Externalised backup LTO
@b-dietrich said in Externalised backup LTO:
Hi everyone,
I would like to know if it's possible to externalise backups to a tape library with XOA?
Is it on the roadmap for 2024?
I will let @olivierlambert answer on the backlog point. It is still planned, but there is a lot of groundwork to do first:
- tapes can't easily seek, so you'll have to write the backup in one pass, with no going back to update a previously written block or metadata
- you'll have to build a catalog mapping tapes to backups, to know which tape to use to restore or rewrite. This is a huge change, since XO doesn't use any database: the backup repositories are self-contained, so you can mount a backup repository on a new XO and the backups will be listed. With tapes, you'll have to keep (and back up) the backup catalog. Yes, we'll need to back up the backups and ensure the catalog is recoverable (see the sketch at the end of this post)
That being said, the mirror backup feature has been built to pave the way for tape backup.
For now, the easiest way to do tape backup is to run full backups to a backup repository dedicated to this purpose, and to mirror it to tapes. At our scale, priorities can also change if there is a big enough sponsor, ready to take on part of the financial load of this feature and give us access to real-world hardware and processes.
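To illustrate what such a tape<->backup catalog could look like, here is a small sketch of the concept only (TypeScript, nothing that exists in XO today; all names are made up):

// sketch of the concept: a catalog mapping backups to the tapes that hold them,
// which itself has to be stored (and backed up) outside the tapes
type TapeCatalogEntry = {
  backupId: string       // id of the VM backup / job run
  vmUuid: string
  tapeBarcodes: string[] // tapes needed to restore this backup
  writtenAt: Date
}

type TapeCatalog = TapeCatalogEntry[]

// to restore, look up which tapes must be loaded
function tapesFor(catalog: TapeCatalog, backupId: string): string[] {
  return catalog.find(entry => entry.backupId === backupId)?.tapeBarcodes ?? []
}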
-
RE: VMware migration tool: we need your feedback!
The test with @dumarjo showed that there is still a bug during the import. I am still investigating it and will keep you informed, hopefully today or tomorrow.
-
RE: VMware migration tool: we need your feedback!
@ismo-conguairta said in VMware migration tool: we need your feedback!:
I have two different behaviours on two different XO instances. Each XO instance refers to a different pool (different hosts, same XCP-ng version). In both instances I try to connect to the same Private Virtual Datacenter based on VMware/vSphere at OVH.
In the first one I get the following error message by using the web UI: "invalid parameters" (take a look at this logfile 2023-02-28T19_25_21.933Z - XO.txt )
In the second one, I get the following error message by using the web UI "404 Not Found https://<vsphere-ip>/folder/<vm-name>/<vm-name>.vmx?dsName=<datastore-name>"
By using the xo-cli I get the "404 Not Found" on both the instances.
Regarding the "404 Not Found", I want to point out that at OVH I have a VMware datacenter (with 2 hosts), and in order to access the storage I need to specify the parameter
dcPath=<datacenter-name>
So the right URL should be https://<vsphere-ip>/folder/<vm-name>/<vm-name>.vmx?dcPath=<datacenter-name>&dsName=<datastore-name>
Simply adding (in a static way) the dcPath specification at line 54 of the esxi.mjs file makes it work.
I thought it was constant. I will look into the API to get it, and if that's not possible, expose it in the UI.
@Seclusion: noted, I will look into this error message, this one is a first for me.
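For illustration, here is a minimal sketch of how the datastore URL could be built with an optional dcPath (a hypothetical helper, not the actual esxi.mjs code; the host and names below are made up):

function buildVmxUrl(host: string, vmName: string, dsName: string, dcPath?: string): string {
  const url = new URL(`https://${host}/folder/${vmName}/${vmName}.vmx`)
  if (dcPath !== undefined) {
    url.searchParams.set('dcPath', dcPath) // needed on multi-datacenter setups such as OVH's
  }
  url.searchParams.set('dsName', dsName)
  return url.toString()
}

// buildVmxUrl('vsphere.example.org', 'myVm', 'datastore1', 'pcc-datacenter')
// -> https://vsphere.example.org/folder/myVm/myVm.vmx?dcPath=pcc-datacenter&dsName=datastore1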
-
RE: VMware migration tool: we need your feedback!
@brezlord MAC address and UEFI should work now
-
RE: Xen-Orchestra Terraform provider and Windows
@rochemike patch done this morning
-
RE: Xen-Orchestra Terraform provider and Windows
@rochemike great, that will be even easier
Can you open a support ticket and open a support tunnel? I will connect and patch your installation.
-
RE: Import from VMware fails after upgrade to XOA 5.91
@acomav you're up to date on your XOA
I pushed a new commit, fixing an async condition, on the fix_xva_import_thin branch. Feel free to test it on your XO from source.
-
RE: Continuous Replication job fails "TypeError: Cannot read properties of undefined (reading 'uuid')" at #isAlreadyOnHealthCheckSr
@techjeff thanks for your effort, I found the problem
Can you test this branch: fix_cr_healthcheck? (
git checkout fix_cr_healthcheck
from the xen-orchestra folder) It will be merged soon.
Latest posts made by florent
-
RE: Can .keeper files be deleted from the backup share?
They are created to ensure the Samba mount is not removed by the OS or another thread on XO.
I would not delete them, especially since they are 0-byte files.
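As a rough sketch of the idea (not XO's actual code; the file name pattern is made up): an empty marker file on the mount point keeps the path in use while a task runs.

import { writeFile, rm } from 'node:fs/promises'
import { join } from 'node:path'

async function withKeeper<T>(mountPoint: string, task: () => Promise<T>): Promise<T> {
  const keeper = join(mountPoint, `.keeper_${Date.now()}`) // 0-byte marker file
  await writeFile(keeper, '')
  try {
    return await task()
  } finally {
    await rm(keeper, { force: true })
  }
}
-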
RE: Feedback on immutability
@rtjdamen said in Feedback on immutability:
@florent so this does mean it will never work when a forever incremental is used?
You can't have an immutable forever-incremental backup without an infinite chain length, and thus infinite retention.
It may be possible only if we relax the constraints.
The immutability script could lift the immutability and merge the disks, but that means the immutability would be lifted from time to time, the responsibilities of the immutability script would be greater, and we would need a way to track the VHDs to merge and transmit that information to the immutability script.
-
RE: Feedback on immutability
@rtjdamen for the immutability to be useful, the full chain must be immutable and must never fall out of immutability.
The merge process can't lift/put back the immutability, and increasing synchronization between processes would extend the attack surface.
The immutability duration must be longer than or equal to 2 times the full backup interval minus 1.
The retention must be strictly longer than the immutability. For example, if you have a full backup interval of 7, a retention of 14 and an immutability duration of 13 (key backups are K, deltas are D), only the 13 most recent backups are immutable, so the oldest backup in the window is not, and any chain containing it is unprotected (shown here in parentheses):
(KDDDDDD)KDDDDDD worst case, only one full chain protected
(KDDDDD)KDDDDDDK
(KDDDD)KDDDDDDKD
(KDDD)KDDDDDDKDD
(KDD)KDDDDDDKDDD
(KD)KDDDDDDKDDDD
(K)KDDDDDDKDDDDD best case, almost 2 full chains protected
-
RE: Feedback on immutability
@rtjdamen great work
- the immutability duration is per repository, to limit the attack surface to the bare minimum
- nothing can really be protected in software against the root user. This is where physically write-once devices win
- it should ignore the cache.json.gz, but the JSON files containing the backup metadata are protected along with the disk data. Same for the pool metadata / XO config
An additional note: to ensure that an incremental backup is really protected for n days, you must have the following (illustrated in the sketch below):
- a full backup interval smaller than n
- a retention greater than 2n - 1
That way an attacker won't be able to modify the base disk used for restore
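To make these rules concrete, here is a small illustrative sketch (TypeScript, not XO code). Under the assumptions of one backup per day, a key backup every fullInterval days, and the oldest delta being merged into its key when the retention is exceeded, it rebuilds rolling windows like the K/D table in the previous post and marks a chain as protected only when every backup in it is still immutable (unprotected chains are shown in parentheses).

type Backup = { key: boolean; age: number } // age in days, 0 = newest

// build the chains after `days` daily backups
function simulate(days: number, fullInterval: number, retention: number): Backup[][] {
  const chains: Backup[][] = []
  for (let day = 0; day < days; day++) {
    for (const chain of chains) for (const b of chain) b.age++ // everything gets one day older
    if (day % fullInterval === 0) chains.push([{ key: true, age: 0 }]) // new key backup
    else chains[chains.length - 1].push({ key: false, age: 0 }) // new delta on the latest chain
    // enforce retention: merge the oldest delta into the oldest key, or drop a lone old key
    let count = chains.reduce((n, c) => n + c.length, 0)
    while (count > retention) {
      const oldest = chains[0]
      if (oldest.length > 1) oldest.splice(1, 1)
      else chains.shift()
      count--
    }
  }
  return chains
}

function render(chains: Backup[][], immutabilityDuration: number): string {
  return chains
    .map(chain => {
      const text = chain.map(b => (b.key ? 'K' : 'D')).join('')
      const fullyImmutable = chain.every(b => b.age < immutabilityDuration)
      return fullyImmutable ? text : `(${text})` // parentheses = unprotected chain
    })
    .join('')
}

// full backup interval 7, retention 14, immutability duration 13, over one week
for (let days = 14; days <= 20; days++) {
  console.log(render(simulate(days, 7, 14), 13))
}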
-
RE: Backup from replicas possible?
@flakpyro for now there is no tag selector, but you can now select the list of VMs to be replicated
-
RE: New GFS strategy
The rule is "any backup that matches a condition to be kept is kept": you can combine LTR retention and retention by number.
So you can keep the last 3 backups and also have a GFS strategy. It should remove most of the use cases for backups with multiple schedules.
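As a minimal sketch of that rule (illustration only, not XO's actual implementation; the policies below are simplified): a backup is kept as soon as at least one retention policy wants to keep it.

type Backup = { id: string; timestamp: number }
type RetentionPolicy = (backup: Backup, all: Backup[]) => boolean

// keep the N most recent backups
const keepLast = (n: number): RetentionPolicy => (backup, all) =>
  [...all].sort((a, b) => b.timestamp - a.timestamp).slice(0, n).includes(backup)

// keep the first backup of each of the last N weeks (one flavour of long term retention)
const keepWeekly = (n: number): RetentionPolicy => (backup, all) => {
  const week = (b: Backup) => Math.floor(b.timestamp / (7 * 24 * 3600 * 1000))
  const recent = Math.max(...all.map(week)) - week(backup) < n
  const firstOfItsWeek = !all.some(b => week(b) === week(backup) && b.timestamp < backup.timestamp)
  return recent && firstOfItsWeek
}

// "any backup that matches a condition to be kept is kept"
function shouldKeep(backup: Backup, all: Backup[], policies: RetentionPolicy[]): boolean {
  return policies.some(policy => policy(backup, all))
}

// example: keep the last 3 backups and also a weekly backup for 4 weeks
// shouldKeep(someBackup, allBackups, [keepLast(3), keepWeekly(4)])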
First point: a deleted backup is not recoverable, and this is a new feature, so roll it out progressively, in case we missed anything, especially on critical backups, as we can iterate on the feature and improve it.
We'll write more documentation shortly.
-
RE: Designing a backup strategy
We have some interesting things in the works, but I think a chain of 84 snapshots for CR/DR is quite long.
You can mitigate the risk of chain corruption by setting the full backup interval; this will transfer a full backup from time to time.
-
RE: CBT: the thread to centralize your feedback
@flakpyro said in CBT: the thread to centralize your feedback:
This is a completely different 5-host pool backed by a Pure storage array with SRs mounted via NFSv3; migrating a VM between hosts results in the same issue.
Before migration:
[01:41 xcpng-prd-03 b04d9910-8671-750f-050e-8b55c64fbede]# cbt-util get -c -n 83035854-b5a9-4f7e-869f-abe43ddc658d.cbtlog
e28065ff-342f-4eae-a910-b91842dd39ca
After migration:
[01:41 xcpng-prd-03 b04d9910-8671-750f-050e-8b55c64fbede]# cbt-util get -c -n 83035854-b5a9-4f7e-869f-abe43ddc658d.cbtlog
00000000-0000-0000-0000-000000000000
I don't think I have anything "custom" running that would be causing this, so I have no idea why this is happening, but it's happening on multiple pools for us.
This is a very interesting clue, and we will investigate it with Damien.
There are a lot of edge cases that can happen (a lying network/drive/...),
and most of the time XCP-ng/XAPI are self-healing, but sometimes XO has to do a little work to clean up. The CBT should be reset correctly after storage migration.
We'll add the async call to enable/disable CBT, since it could lead to a bogus state, and maybe a more in-depth cleanup of CBT after a "VDI not related" error.
-
RE: CBT: the thread to centralize your feedback
@rtjdamen we found a clue with @Bastien-Nollet: there was a race condition between the time frame allowed to enable CBT and the snapshot, leading to a snapshot being taken before CBT was enabled, and thus failing to correctly compute the list of changed blocks at the next backup.
The fix is deployed, and we'll see tonight. If everything goes well, tonight's backup will be a full, but the disks will keep CBT enabled. The next night, we'll have a delta.
If everything is OK, it will be released in a second patch (5.100.2).
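A sketch of the general idea behind the fix (hypothetical helper names, not the actual XO/XAPI client code): make sure the call enabling CBT has completed before the snapshot is taken, so the snapshot itself is CBT-enabled and the next delta can list its changed blocks.

interface Xapi {
  call(method: string, ...args: unknown[]): Promise<unknown>
}

async function snapshotWithCbt(xapi: Xapi, vdiRef: string): Promise<string> {
  // await the CBT activation instead of firing it and snapshotting right away
  await xapi.call('VDI.enable_cbt', vdiRef)
  // only then take the snapshot used as the base of the next delta backup
  return (await xapi.call('VDI.snapshot', vdiRef, {})) as string
}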
-
RE: CBT: the thread to centralize your feedback
This branch (already deployed on @rtjdamen's systems) adds better handling of hosts that take too much time to compute the changed block list:
https://github.com/vatesfr/xen-orchestra/pull/8120
It will be released in a patch this week.
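For illustration, a generic sketch of that kind of handling (not the code from the PR above; the timeout value and the fallback to a full are assumptions): give the host a bounded amount of time to answer, and react to the timeout instead of failing the whole job.

function withTimeout<T>(promise: Promise<T>, ms: number, what: string): Promise<T> {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => reject(new Error(`${what} timed out after ${ms} ms`)), ms)
    promise.then(
      value => { clearTimeout(timer); resolve(value) },
      error => { clearTimeout(timer); reject(error) }
    )
  })
}

// usage sketch: bound the changed block list computation, and fall back to
// transferring the disk as a full when it takes too long (assumed behaviour)
async function listChangedBlocksBounded(
  list: () => Promise<string[]>,
  timeoutMs = 600_000
): Promise<string[] | 'fallback-to-full'> {
  try {
    return await withTimeout(list(), timeoutMs, 'changed block list')
  } catch {
    return 'fallback-to-full'
  }
}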
I am still investigating an error that occurs occasionally: XapiError: SR_BACKEND_FAILURE_460(, Failed to calculate changed blocks for given VDIs. [opterr=Source and target VDI are unrelated], )