@manilx yes, that's it
we should change the "full backup interval" to "base/complete backup interval", to clarify what a full backup is. And we will do it.
@andrewreid Yes, I can't wait to share it with everybody.
@b-dietrich said in Externalised backup LTO:
Hi everyone,
I would like to know if it's possible to externalise backups to a tape library with XOA?
Is it in the roadmap for 2024 ?
I will let @olivierlambert comment on the backlog point. It is still planned, but there is a lot of groundwork to do first:
That being said, the mirror backup feature has been built to pave the way to tape backup.
For now the easiest way to do tape backup is to use full backup to a backup repository used only for this, and to mirror it to tapes. At our scale, priorities can also change if there is a big enough sponsor who is ready to take on part of the financial load of this feature and give us access to real-world hardware and processes.
The test with @dumarjo showed that there is still a bug during the import. I am still investigating it and will keep you informed, hopefully today or tomorrow.
@ismo-conguairta said in VMware migration tool: we need your feedback!:
I have two different behaviours on two different XO instances. Each XO instance refers to a different pool (different hosts, same xcp-ng version). In both instances I try to connect to the same Private Virtual Datacenter based on VMware/vSphere at OVH.
In the first one I get the following error message by using the web UI: "invalid parameters" (take a look at this logfile 2023-02-28T19_25_21.933Z - XO.txt )
In the second one, I get the following error message by using the web UI "404 Not Found https://<vsphere-ip>/folder/<vm-name>/<vm-name>.vmx?dsName=<datastore-name>"
By using the xo-cli I get the "404 Not Found" on both the instances.
Regarding the "404 Not Found", I want to point out that at OVH I have a VMware datacenter (with 2 hosts) and in order to access the storage I need to specify the parameter
dcPath=<datacenter-name>
So the right URL should be https://<vsphere-ip>/folder/<vm-name>/<vm-name>.vmx?dcPath=<datacenter-name>&dsName=<datastore-name>
Simply adding (in a static way) the dcPath specification on line 54 of the esxi.mjs file makes it work.
I thought it was constant. I will look into the API to get it, and if that's not possible, expose it in the UI.
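As an illustration, building such a datastore URL with the dcPath parameter included could look like the following minimal sketch (the function and variable names are hypothetical, not the actual esxi.mjs code):

```javascript
// Hypothetical helper: build the vSphere datastore URL for a VM's .vmx file,
// including the dcPath parameter needed on multi-datacenter setups like OVH's.
function buildVmxUrl(host, vmName, datastoreName, datacenterName) {
  const params = new URLSearchParams({
    dcPath: datacenterName, // datacenter name, required when it is not the default
    dsName: datastoreName,
  })
  return `https://${host}/folder/${vmName}/${vmName}.vmx?${params}`
}

// Example:
// buildVmxUrl('vsphere.example', 'myVm', 'datastore1', 'DC1')
// → 'https://vsphere.example/folder/myVm/myVm.vmx?dcPath=DC1&dsName=datastore1'
```

The real fix would fetch the datacenter name from the vSphere API rather than hard-coding it.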
@Seclusion: noted, I will look into this error message, this one is a first for me
@brezlord MAC address and UEFI should work now
@rochemike patch done this morning
@rochemike great, that will be even easier
Can you open a support ticket and a support tunnel? I will connect and patch your installation.
@acomav you're up to date on your XOA
I pushed a new commit fixing a race condition on the fix_xva_import_thin branch. Feel free to test on your XO from source.
Thanks for pointing out the exact issue in the AWS SDK. It gave us enough material to find the fix.
@mguimond @jr-m4 I would like to be able to keep the library up to date.
Would it be possible to patch a file in your install and check if the backup runs? The patch will disable the checksum computation.
Note that if you use backup encryption, the checksums are already verified on restore.
you can either use this branch: fix_s3_backblaze, or modify the code directly in your installation in <xo>/node_modules/@xen-orchestra/fs/dist/s3.js
replace:
requestHandler: new _nodeHttpHandler.NodeHttpHandler({
  socketTimeout: 600000,
  httpAgent: new _http.Agent({
    keepAlive: true
  }),
  httpsAgent: new _https.Agent({
    rejectUnauthorized: !allowUnauthorized,
    keepAlive: true
  })
})
with:
requestHandler: new _nodeHttpHandler.NodeHttpHandler({
  socketTimeout: 600000,
  httpAgent: new _http.Agent({
    keepAlive: true
  }),
  httpsAgent: new _https.Agent({
    rejectUnauthorized: !allowUnauthorized,
    keepAlive: true
  })
}),
requestChecksumCalculation: "WHEN_REQUIRED",
responseChecksumValidation: "WHEN_REQUIRED"
(the difference is the trailing comma after the requestHandler block, plus the two added lines requestChecksumCalculation: "WHEN_REQUIRED" and responseChecksumValidation: "WHEN_REQUIRED")
There is a lot of room to improve the UX of the backup forms, and we'll do better when rewriting them for XO 6. Mainly, there is confusion between a full backup (an XVA file exported by XCP-ng) and the base backups used during an incremental backup.
Mirror incremental will retransfer the base + deltas. You can check this by looking into the backup job. A mirror job will always transfer all new data, and will then prune the backups on the targets depending on the retention set.
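As a rough illustration of retention-based pruning on the mirror target (a hypothetical sketch, not XO's code): after each transfer, only the newest `retention` backups are kept.

```javascript
// Keep only the `retention` most recent backups on the target.
// backups: array of { id, timestamp } objects.
function prune(backups, retention) {
  return [...backups]
    .sort((a, b) => b.timestamp - a.timestamp) // newest first
    .slice(0, retention)
}

const kept = prune(
  [
    { id: 'a', timestamp: 1 },
    { id: 'b', timestamp: 3 },
    { id: 'c', timestamp: 2 },
  ],
  2
)
console.log(kept.map(b => b.id)) // ['b', 'c']
```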
For the schedule: the source backup job must be disabled, since you don't want it to run on its own. Only the mirror schedule must be enabled and run.
@afk the agent is as dumb as possible
Also, if you encrypt the backup, the agent would need to decrypt the metadata to detect the chains, and thus have access to the encryption key, which means getting the encryption key out of XO and transferring it to the immutability agent.
I think it will be easier to provide more feedback on backup immutability from XO's side, since XO has access to the chain, and/or to alert when something seems strange.
@peo do you have anything in the XO logs? Can you post the JSON of a failed backup job?
@peo in the object storage world (S3-compatible), there is no real notion of a "directory"; it's more a convention that / marks a logical level in the file key. But rclone errors with "Directory not empty", which makes me think rclone does things a little differently than the AWS S3 SDK.
We try to be as compatible as possible with S3 implementations, but each object storage has its own quirks, so we only support the most common setups for now.
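The key-prefix convention can be shown with a small self-contained sketch (not rclone's or XO's code): "directories" are just shared key prefixes, similar to what S3's ListObjectsV2 returns as CommonPrefixes when given Delimiter='/'.

```javascript
// List the "logical directories" directly under `prefix`, given a flat list
// of object keys. No directory objects exist; a directory "exists" exactly
// as long as at least one key uses it as a prefix.
function logicalDirs(keys, prefix = '') {
  const dirs = new Set()
  for (const key of keys) {
    if (!key.startsWith(prefix)) continue
    const rest = key.slice(prefix.length)
    const i = rest.indexOf('/')
    if (i !== -1) dirs.add(prefix + rest.slice(0, i + 1))
  }
  return [...dirs]
}

const keys = [
  'backups/vm1/full.xva',
  'backups/vm1/delta1.vhd',
  'backups/vm2/full.xva',
]
console.log(logicalDirs(keys, 'backups/')) // ['backups/vm1/', 'backups/vm2/']
```

Delete the last key under a prefix and the "directory" is simply gone, which is why tools that model real directories (like rclone) can disagree with plain S3 semantics.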
@SudoOracle could you post the full JSON log of the backup?
(You can get it by clicking the download button at the top of a failed job execution.)
If possible, one per type of failed backup job.
@manilx are there multiple jobs running on the same VM?