@andrewreid Yes, I can't wait to share it with everybody.
Best posts made by florent
-
RE: Backblaze B2 as a backup remote
-
RE: Externalised backup LTO
@b-dietrich said in Externalised backup LTO:
Hi everyone,
I would like to know if it's possible to externalise backups to a tape library with XOA?
Is it in the roadmap for 2024 ?
I will let @olivierlambert answer on the backlog point. It is still planned, but there is a lot of groundwork to do first:
- tapes can't seek easily, so you'll have to write the backup in one pass; there is no going back to update a previously written block or metadata
- you'll have to build a tape<->backup catalog to know which tape to use for a restore or a rewrite. This is a huge change, since XO doesn't use any database: the backup repositories are self-contained. You can mount a backup repository on a new XO and the backups will be listed. With tapes, you'll have to keep (and back up) the backup catalog. Yep, we'll need to back up the backups and ensure it's recoverable
That being said, the mirror backup feature has been built to pave the way for tape backup.
For now, the easiest way to do tape backup is to use full backups to a backup repository dedicated to this, and to mirror it to tapes. At our scale, priorities can also change if there is a big enough sponsor, ready to take on part of the financial load of this feature and give us access to real-world hardware and processes.
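The catalog problem described above can be sketched as a tiny in-memory map; all names here (recordBackup, tapeForRestore) are hypothetical illustrations of the idea, not XO code:

```javascript
// Hypothetical sketch of a tape<->backup catalog: since tapes are written
// append-only, a separate catalog must record which tape holds which backup
// so a restore knows what to load. backupId -> { tapeLabel, offset }
const catalog = new Map()

function recordBackup(backupId, tapeLabel, offset) {
  catalog.set(backupId, { tapeLabel, offset })
}

function tapeForRestore(backupId) {
  const entry = catalog.get(backupId)
  if (entry === undefined) {
    // a lost catalog means unrestorable tapes, hence "back up the backups"
    throw new Error(`backup ${backupId} not in catalog`)
  }
  return entry.tapeLabel
}
```

This is why the catalog itself becomes a critical asset: unlike XO's self-contained repositories, losing it means losing the mapping to the tapes.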
-
RE: VMware migration tool: we need your feedback!
The test with @dumarjo showed that there is still a bug during the import. I am still investigating it and will keep you informed, hopefully today or tomorrow.
-
RE: VMware migration tool: we need your feedback!
@ismo-conguairta said in VMware migration tool: we need your feedback!:
I have two different behaviours on two different XO instances. Each XO instance refers to a different pool (different hosts, same xcp-ng version). In both instances I try to connect to the same Private Virtual Datacenter based on VMware/vSphere at OVH.
In the first one I get the following error message by using the web UI: "invalid parameters" (take a look at this logfile 2023-02-28T19_25_21.933Z - XO.txt )
In the second one, I get the following error message by using the web UI "404 Not Found https://<vsphere-ip>/folder/<vm-name>/<vm-name>.vmx?dsName=<datastore-name>"
By using the xo-cli I get the "404 Not Found" on both the instances.
Regarding the "404 Not Found", I want to point out that at OVH I have a VMware datacenter (with 2 hosts) and in order to access the storage I need to specify the parameter
dcPath=<datacenter-name>
So the right URL should be https://<vsphere-ip>/folder/<vm-name>/<vm-name>.vmx?dcPath=<datacenter-name>&dsName=<datastore-name>
Simply adding (statically) the dcPath parameter on line 54 of the esxi.mjs file makes it work.
I thought it was constant. I will look into the API to get it, and if that's not possible, expose it in the UI.
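For illustration, here is a minimal sketch of building such a URL with dcPath included; buildVmxUrl and its parameters are assumptions for this example, not the actual esxi.mjs code:

```javascript
// Sketch: build the vSphere datastore URL for fetching a VM's .vmx file,
// including the dcPath parameter needed on multi-datacenter setups.
function buildVmxUrl(host, dcPath, dsName, vmName) {
  // URLSearchParams handles the query-string encoding of both parameters
  const params = new URLSearchParams({ dcPath, dsName })
  const name = encodeURIComponent(vmName)
  return `https://${host}/folder/${name}/${name}.vmx?${params}`
}
```

With a single datacenter the dcPath parameter is often unnecessary, which would explain why the 404 only shows up on setups like the OVH one described above.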
@Seclusion: noted, I will look into this error message, this one is a first for me
-
RE: VMware migration tool: we need your feedback!
@brezlord MAC address and UEFI should work now
-
RE: Xen-Orchestra Terraform provider and Windows
@rochemike patch done this morning
-
RE: Xen-Orchestra Terraform provider and Windows
@rochemike great, that will be even easier
can you open a support ticket and a support tunnel? I will connect and patch your installation
-
RE: Import from VMware fails after upgrade to XOA 5.91
@acomav you're up to date on your XOA.
I pushed a new commit fixing an async condition on the fix_xva_import_thin branch. Feel free to test it on your XO from source.
-
RE: Continuous Replication job fails "TypeError: Cannot read properties of undefined (reading 'uuid')" at #isAlreadyOnHealthCheckSr
@techjeff thanks for your effort, I found the problem.
Can you test this branch: fix_cr_healthcheck? (run "git checkout fix_cr_healthcheck" from the xen-orchestra folder) It will be merged soon.
Latest posts made by florent
-
RE: CBT: the thread to centralize your feedback
@rtjdamen we found a clue with @Bastien-Nollet: there was a race condition between the timeframe allowed to enable CBT and the snapshot, leading to a snapshot taken before CBT was enabled, thus failing to correctly compute the list of changed blocks at the next backup.
The fix is deployed, and we'll see tonight. If everything goes well, tonight's backup will be a full, but the disks will keep CBT enabled, and the next night we'll have a delta.
If everything is ok, it will be released in a second patch (5.100.2).
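The ordering constraint behind that race can be sketched like this (a simplified, synchronous illustration; the vdi object and its methods are assumptions, not the real XAPI client):

```javascript
// Sketch of the fix described above: CBT must be enabled *before* the
// snapshot is taken. If the snapshot predates CBT, the changed-block list
// cannot be computed at the next backup and a full is forced.
function snapshotWithCbt(vdi) {
  if (!vdi.cbtEnabled) {
    vdi.enableCbt() // must complete first, never race with the snapshot
  }
  return vdi.snapshot()
}
```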
-
RE: CBT: the thread to centralize your feedback
this branch (already deployed on @rtjdamen's systems) adds better handling of hosts that take too much time to compute the changed block list:
https://github.com/vatesfr/xen-orchestra/pull/8120
It will be released in a patch this week.
I am still investigating an error that still occurs occasionally: XapiError: SR_BACKEND_FAILURE_460(, Failed to calculate changed blocks for given VDIs. [opterr=Source and target VDI are unrelated], )
-
RE: NBD used even when disabled
@Tristis-Oris do you have preferNbd in your config.toml? It was the first way to enable NBD, but it's not recommended anymore.
-
RE: NBD used even when disabled
@Tristis-Oris I will try to reproduce it here ASAP and will keep you informed
-
RE: Immutable S3 Backups (Backblaze) and Merging; A Little Confused
@planedrop
with your setup, you'll have 15 real days of immutability at worst.
here is a little schema with key backups (K, full) and deltas (d). A chain of backups is a key backup and its delta descendants; mutable backups are noted with a dot:
K.d.dKdddddKd // here the first chain is not protected, because parts of the chain are mutable, but the 2 most recent are
K.KdddddKddd // 2 more backups: here we have the longest immutable chains
K.dddddKdddd // here is the critical part with the shortest immutable chains: only the last one
K.ddddKdddd // the protected chain grows
To ensure a usable immutability of n days, you must have a full backup retention of at least n, and a backup retention of at least 2×n.
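That retention rule can be restated as a tiny helper (an assumed formula restating the sentence above, not XO code; names are illustrative):

```javascript
// Sketch of the stated rule: for n days of usable immutability,
// keep full backups for at least n days and set total backup
// retention to at least 2 * n days.
function minimumRetention(immutabilityDays) {
  return {
    fullBackupRetention: immutabilityDays,
    backupRetention: 2 * immutabilityDays,
  }
}
```

For the 15-day setup discussed above, that would mean a full backup retention of 15 and a total retention of 30.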
I am not really sure about the error message that the UI shows on Backblaze, but the backup will be merged and deleted as soon as possible, once the object lock is lifted.
You can trust Backblaze on their object lock, and you can check the real number of backups stored by looking at the restore tab of XO.
-
RE: Problem with differential restore
@frank-s said in Problem with differential restore:
So my guess is it will work if I choose not to delete the snapshot??? If that is correct it is a pity as differential restore is very fast compared to regular restore. Differential restore, for instance, would be a brilliant way to recover quickly from a ransomware attack. Also, deleting the snapshot makes coalescing much faster. Am I correct in my assumptions? Is there a way it could be made to work without the snapshot?
Thank you,
Frank.
Differential restore works by not transferring the data that the snapshot and the backup have in common, without reading any of it.
It clones the last snapshot, and then reverts the blocks changed between the older backup and the snapshot. In theory, if you run a delta backup without destroying the data, it should provide the anchor to make this work.
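Conceptually, that clone-then-revert flow might look like this sketch (all object shapes and method names are illustrative assumptions, not XO internals):

```javascript
// Sketch of differential restore as described: clone the last snapshot,
// then overwrite only the blocks that differ between the backup and the
// snapshot, instead of transferring the whole disk.
function differentialRestore(snapshot, backup) {
  const disk = snapshot.clone() // shared data comes for free from the clone
  for (const [blockIndex, data] of backup.changedBlocksSince(snapshot)) {
    disk.writeBlock(blockIndex, data) // revert only what changed
  }
  return disk
}
```

This is why the snapshot matters: without it there is no local anchor to clone, and every block has to be transferred as in a regular restore.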
-
RE: Schedules not showing any task (progress) in XO tasks
@manilx said in Schedules not showing any task (progress) in XO tasks:
checkbasevdi must be called before updateUuidAndChain for incremental backups
hi manilx, could you post the full json?
It is the "download log" button, with the downward arrow at the top of the backup log window.
-
RE: Question on backup sequence
The backup chaining does not change the way we handle settings (retention or health check), to ease the conversion to backup sequences.
The main change is that the schedules should be disabled if you only want them to run in sequences.
-
RE: Backup Fail: Trying to add data in unsupported state
Hi,
what is the size of the failing VMs? Is there anything in the syslog before the cipher error message?
To be fair, uploading a full backup (> 50GB) without knowing its size first is full of hurdles, and the XAPI doesn't tell the size of the exports. Incremental backup with block storage completely circumvents this by uploading separate blocks of known size.
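The known-size-blocks idea can be sketched as cutting a stream's data into fixed-size chunks, each with a length known before upload (splitIntoBlocks is an illustrative name, not an XO API):

```javascript
// Sketch: splitting data into fixed-size blocks sidesteps the unknown-size
// problem, since every piece (except possibly the last) has a known length
// before it is sent to the remote.
function splitIntoBlocks(buffer, blockSize) {
  const blocks = []
  for (let offset = 0; offset < buffer.length; offset += blockSize) {
    blocks.push(buffer.subarray(offset, offset + blockSize))
  }
  return blocks
}
```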
Regards