@manilx yes, that's it
we should change the "full backup interval" to "base/complete backup interval", to clarify what is a full . And we will do it.
we are reworking the import code, alongside the new backup code ( https://xcp-ng.org/forum/topic/10664/our-future-backup-code-test-it ), to unify the different disk transformation paths
For now:
@andrewreid Yes, I can't wait to share it with everybody.
@olivierlambert @Gheppy that is a nice catch and it gives us an interesting clue
I am currently working on it
@manilx said in Our future backup code: test it!:
@florent Yes, just fine.
ok, so that's an issue with the mirror. I reproduced it last night in my lab
I am working on a fix
@olivierlambert yes, I started working on it on Friday, and hopefully will have a fix today
@flakpyro today the backup code uses a binary stream in the VHD format. This format is limited, by design, to 2 TB disks
The XCP-ng team introduced the qcow2 format to handle bigger disks
By using an independent format, we'll be able to handle both VHD and qcow2 on the backup side without multiplying complexity. We'll also be able to build the adapter to handle the various VMDK sub-formats (raw, COWD, SESparse and stream-optimized) used by V2V, and import bigger disks directly
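To make that concrete, here is a minimal sketch of what a format-agnostic disk abstraction could look like (TypeScript, hypothetical names, not the actual xen-orchestra code): each source format (VHD, qcow2, a VMDK sub-format adapter) implements the same small interface, and the backup/import pipeline only consumes that interface.

// Hypothetical sketch of a unified disk abstraction (names are illustrative).
interface DiskBlock {
  index: number   // position of the block in the virtual disk
  data: Buffer    // block payload
}

interface SourceDisk {
  readonly virtualSize: number   // bytes; qcow2 sources may exceed the 2 TB VHD limit
  getAllocatedBlockIndexes(): Promise<number[]>
  readBlock(index: number): Promise<DiskBlock>
}

// The pipeline iterates allocated blocks without caring about the on-disk format.
async function* blocksOf(disk: SourceDisk): AsyncGenerator<DiskBlock> {
  for (const index of await disk.getAllocatedBlockIndexes()) {
    yield await disk.readBlock(index)
  }
}

// A backup target, a V2V import or a mirror job would all consume blocksOf()
// the same way, whatever adapter produced the SourceDisk.
async function copyDisk(disk: SourceDisk, write: (block: DiskBlock) => Promise<void>): Promise<void> {
  for await (const block of blocksOf(disk)) {
    await write(block)
  }
}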
@b-dietrich said in Externalised backup LTO:
Hi everyone,
I would like to know if it's possible to externalise backups to a tape library with XOA?
Is it on the roadmap for 2024?
I will let @olivierlambert answer on the backlog point. It is still planned, but there is a lot of groundwork needed first:
That being said, the mirror backup feature has been built to pave the way to tape backup
For now the easiest way to do tape backup is to run a full backup to a backup repository used only for this, and to mirror it to tapes. At our scale, priorities can also change if there is a big enough sponsor that is ready to take part of the financial load of this feature and give us access to real-world hardware and processes.
@katapaltes said in XO Community edition backups dont work as of build 6b263:
Error: invalid HTTP header in response body
Error: invalid HTTP header in response body should come with a trace in the logs ( journalctl under the same account that runs xo )
could you extract it?
The second fix is merged
Can you retest master, latest commit?
@uwood the fix regarding the "catch" error is still in review, and will probably be merged in a few hours
the second fix regarding the stuck stream ( so pool metadata, full backup and disaster recovery ) is merged. This one was effectively added in Friday's PR
@uwood a metadata backup shouldn't trigger any timeout
I am retesting it
It may be related to the other topic, but I see a few things:
invalid HTTP header in response body: there is an issue while trying to download the disk. The XO logs ( journalctl ) should contain lines with the xo:xapi:vdi key regarding this.
It may be worth testing without the purge snapshot option, to see if this error reoccurs
SIGTERM is probably more of an out-of-memory issue on the backup worker. How much memory does your XO have?
Hi Lulina
you need to disable the schedules on the backup jobs (but don't remove them)
Then you schedule the sequence, and backups A, B and C should run
Note that if these jobs use the same VM ( like a delta backup followed by a mirror to S3 ), you should enable the "merge backup synchronously" option on the Advanced tab of the first job
@archw said in XO Community edition backups dont work as of build 6b263:
The last argument to .catch() must be a function, got [object Object]
thanks for the reports
we are testing the fix ( https://github.com/vatesfr/xen-orchestra/pull/8739 )
This may be bigger than a simple typo; the last change to this code was 2 years ago
We reworked the timeout/retry mechanism of the unlink method, as it doesn't perform as well as expected on some object storage backends, hoping for new clues on how to improve the situation.
And your reports (24h after the merges) pointed us to a possible solution: we were deleting up to 10^depth files in parallel instead of 2^depth.
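To illustrate why that matters, here is a small, self-contained sketch (not the actual xo-server remote handler code, just the fan-out math): when a tree is deleted recursively and every level launches its children in parallel, the number of in-flight deletions compounds as fanout^depth, so 10 children per level at depth 3 means up to 1000 concurrent unlink calls hitting the object storage backend, while 2 per level keeps it at 8.

// Illustration only: per-level parallelism compounds across a recursive delete.
import { promises as fs } from 'node:fs'
import { join } from 'node:path'

async function rmTree(dir: string, fanout: number): Promise<void> {
  const entries = await fs.readdir(dir, { withFileTypes: true })
  // Each level processes up to `fanout` children at once, and every child
  // directory does the same, so concurrency grows roughly as fanout ** depth.
  for (let i = 0; i < entries.length; i += fanout) {
    await Promise.all(
      entries.slice(i, i + fanout).map(entry => {
        const path = join(dir, entry.name)
        return entry.isDirectory() ? rmTree(path, fanout) : fs.unlink(path)
      })
    )
  }
  await fs.rmdir(dir)
}

// rmTree(root, 10) on a 3-level tree can have ~10^3 = 1000 unlinks in flight;
// rmTree(root, 2) caps that at 2^3 = 8.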
@Tristis-Oris we are documenting it while migrating the API to the new REST API
http://<xo_url>/rest/v0/docs/ should show you the Swagger doc
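For reference, a minimal sketch of querying the REST API with an existing token (the authenticationToken cookie name and the /rest/v0/vms route are assumptions on my side; check the Swagger page above for the exact routes and auth details of your build):

// Minimal sketch, assuming token auth via the authenticationToken cookie.
const XO_URL = 'https://xo.example.org'      // hypothetical XO address
const TOKEN = process.env.XO_TOKEN ?? ''     // a token created in XO beforehand

async function listVms(): Promise<unknown> {
  const response = await fetch(`${XO_URL}/rest/v0/vms`, {
    headers: { cookie: `authenticationToken=${TOKEN}` },
  })
  if (!response.ok) {
    throw new Error(`XO REST API returned ${response.status}`)
  }
  return response.json()
}

listVms().then(vms => console.log(vms), err => console.error(err))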
@olivierlambert yes the fix is in the pipeline (on the XAPI side)
it won't migrate a snapshot with CBT enabled, and won't allow disabling CBT on a snapshot
@yeopil21
I would add a file config.tokenValidity.toml in /etc/xo-server with this content:
[authentication]
defaultTokenValidity = '1 year'
maxTokenValidity = '1 year'