Hi @cbaguzman,
Currently there's no way to run the vhd-cli check command on an encrypted remote. This is annoying for us as well, and it shouldn't be too hard to fix, so I'll take care of it.
Hi @KPS,
Thank you for reporting this behavior. We haven't been able to reproduce the bug yet, but we'll look into it with @MathieuRA. We're a bit busy at the moment, so we probably won't be able to fix this issue before the November release.
The fix comes from @florent, so all the kudos go to him.
@jshiells this value is the average CPU load across all cores on a host; more precisely, it's a weighted average of that load over the last 30 minutes. Migrations are triggered when this average exceeds 85% of the critical threshold defined in the plugin configuration, which is roughly 64% if you set the critical threshold at 75%.
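As a rough illustration (not the plugin's actual code), assuming you configured the critical threshold at 75%:

```js
// Illustrative sketch only, not the load-balancer plugin's implementation.
const criticalThreshold = 75 // % CPU, as configured in the plugin
const migrationThreshold = 0.85 * criticalThreshold // migrations trigger above this
console.log(migrationThreshold) // 63.75, i.e. roughly 64%
```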
Other circumstances can also trigger migrations:
Hi @Pilow,
We plan to make LTR available for mirror backups and metadata backups as well in the future, but we haven't had the time to do it yet.
Smart mode on mirror incremental backups would be a bit tricky to do, as it would require us to handle incomplete chains of backups, for cases when a tag is removed from a VM and then added back. We might still implement it in the future, though.
As for the bug you noticed, VMs showing in the backup log despite being excluded, I think it was intentional at some point, but it would make sense to remove it now. Thanks for the feedback, we'll change this.
Hi @Pilow,
This is a great idea, we'll plan it so it gets implemented in the future.
A fix has been merged on master; the LTR now properly picks the first backup of each day, week, month and year instead of the last one: https://github.com/vatesfr/xen-orchestra/pull/9180
We plan to make this configurable in the upcoming months.
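As a rough sketch of what "first backup of each day" means (illustrative only, not the actual code from the PR):

```js
// Illustrative sketch only: keep the first (oldest) backup of each calendar day,
// assuming `backups` is sorted by ascending timestamp.
function firstBackupPerDay(backups) {
  const byDay = new Map()
  for (const backup of backups) {
    const day = new Date(backup.timestamp).toISOString().slice(0, 10) // YYYY-MM-DD
    if (!byDay.has(day)) {
      byDay.set(day, backup) // the first backup seen for that day wins
    }
  }
  return [...byDay.values()]
}
```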
@Forza said in Full backup - new long-retention options:
Thanks for the quick feedback. Does it mean that the schedule's own retention is also honored separately in addition to the LTR?
Yes, a backup is kept if it matches either retention criterion, the schedule's own retention or the LTR. (The backup is not duplicated; we just check both criteria to decide whether to keep it.)
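In other words, roughly (illustrative sketch only, the names are made up and this is not the actual xo-server code):

```js
// A backup is kept as soon as one of the two criteria matches.
// `matchesScheduleRetention` and `matchesLtr` are hypothetical predicates.
const shouldKeep = (backup, { matchesScheduleRetention, matchesLtr }) =>
  matchesScheduleRetention(backup) || matchesLtr(backup)
```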
The fix has been released in XO 5.110
OK, so 1s is not quite enough, thanks for the update.
Thanks again @Pilow
I don't think the remotes being S3 changes anything here.
@cbaguzman for information, I made some changes to vhd-cli, so in the future we'll get a more explanatory error message when a command fails because an incorrect argument was passed: https://github.com/vatesfr/xen-orchestra/pull/9386
Hi @Pilow,
I've done some more testing and looked at the code, and I wasn't able to reproduce this behaviour even once. It's also unclear to me why it could happen.
We may just add the delay as you did, but 10s is probably too long. Could you try replacing it with a 1s delay instead, and tell us if that's enough?
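That would be the same two lines as in my earlier instructions, just with a shorter timeout:

```js
const delay = ms => new Promise(resolve => setTimeout(resolve, ms));
await delay(1000); // 1 second instead of 10
```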
Hi @cbaguzman,
I tested on my own and got the same result as you, but then I realized the AI you used had tricked us both into thinking that --chain was a valid option for the info command (it's not).
I removed this option and the command worked properly.
Can you try the same command without this option?
Hi @Pilow,
Thanks again for the feedback, I think now we have enough data to be sure it's indeed a race condition.
We noticed that the log you sent earlier in this topic is from a backup job using a proxy. Could you tell us whether the backup jobs that ended up with a wrong status in the report were all using a proxy, or only some of them?
Thanks @Pilow for the report, I'll try to reproduce it on my side to get a better understanding of what creates the fallback.
Thanks @Pilow for the tests.
We'll have to investigate this to fix it more properly than by adding an ugly delay.
I agree, let's wait for more runs.
If it's indeed a race condition, we'll still have to figure out a better way to settle this than just adding a delay.
Hi @pilow,
Currently, I don't know what would cause this or why this would happen more frequently.
Could you try adding some delay before sending the report on your side, to see if it's indeed a race condition?
To do that, you just need to edit the file packages/xo-server-backup-reports/dist/index.js by adding these two lines:

```js
const delay = ms => new Promise(resolve => setTimeout(resolve, ms));
await delay(10000);
```

at the beginning of the _report function, like this:

```js
async _report(runJobId, {
  type
} = {}, force) {
  // the two added lines: wait 10 seconds before building the report
  const delay = ms => new Promise(resolve => setTimeout(resolve, ms));
  await delay(10000);
  if (type === 'call') {
    return;
  }
```

Then just restart xo-server.