Hi @cbaguzman,
Currently there's no way of performing the vhd-cli check command on an encrypted remote, but this is also annoying for us and it shouldn't be too hard to fix, so I'll fix it.
Hi @KPS,
Thank you for reporting this behavior. We haven't been able to reproduce the bug yet, but we'll look into it with @MathieuRA. We're a bit busy at the moment, so we probably won't be able to fix this issue before the November release.
The fix comes from @florent, so all the kudos go to him.
@jshiells this value is the average load across all cores on a host; more precisely, a weighted average of that load over the last 30 minutes. Migrations are triggered when this average exceeds 85% of the critical threshold defined in the plugin configuration, which is roughly 64% if you set the critical threshold to 75%.
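If it helps make the math concrete, here is a minimal sketch of that trigger condition (the names are illustrative, not the plugin's actual code):

```python
# Hypothetical sketch: a host's 30-minute weighted-average CPU load must
# exceed 85% of the configured critical threshold before the performance
# plan considers migrating VMs away from it.
TRIGGER_RATIO = 0.85

def migration_threshold(critical_threshold: float) -> float:
    """Effective load (as a fraction of capacity) that triggers migrations."""
    return TRIGGER_RATIO * critical_threshold

def should_consider_migration(avg_load_30min: float, critical_threshold: float) -> bool:
    return avg_load_30min > migration_threshold(critical_threshold)

# With a critical threshold of 75%, migrations start around 64% average load:
# 0.85 * 0.75 = 0.6375
```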
Other circumstances can also trigger migrations:
Hi @Pilow,
We plan to make LTR available also for mirror backups and metadata backups in the future, but we didn't have the time to do it yet.
Smart mode on mirror incremental backups would be a bit tricky to do, as it would require us to handle incomplete chains of backups, for cases when a tag is removed from a VM and then added back. We might still implement it in the future, though.
About the bug you noticed, where excluded VMs still show up in the backup log: I think it was intentional at some point, but now it would make sense to remove it. Thanks for the feedback, we'll change this.
Hi @Pilow,
This is a great idea, we'll plan it so it gets implemented in the future.
A fix has been merged on master, now the LTR should properly pick the first backup of each day, week, month and year instead of the last one: https://github.com/vatesfr/xen-orchestra/pull/9180
We plan to make it configurable during the upcoming months.
@Forza said in Full backup - new long-retention options:
Thanks for the quick feedback. Does it mean that the schedule's own retention is also honored separately in addition to the LTR?
Yes, a backup is kept if it matches one of the retention criteria, either the schedule's retention or the LTR. (the backup is not duplicated, we just check for both criteria to know if we should keep the backup or not)
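A small sketch of that rule, with a hypothetical data model (not XO's actual code): a backup survives if it falls within the schedule's own retention window OR is matched by an LTR rule, and it is stored only once.

```python
from dataclasses import dataclass

@dataclass
class Backup:
    id: str
    ltr_tagged: bool  # matched by a long-term retention rule

def backups_to_keep(newest_first: list[Backup], schedule_retention: int) -> set[str]:
    """A backup is kept if it matches either retention criterion."""
    # Schedule retention: the N most recent backups.
    keep = {b.id for b in newest_first[:schedule_retention]}
    # LTR: any backup matched by a long-term retention rule, however old.
    keep |= {b.id for b in newest_first if b.ltr_tagged}
    return keep
```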
The fix has been released in XO 5.110
Thanks @ph7, I'll try to have a look to understand what's going on
Hi @Pilow,
Thanks for the report.
We are aware that there are many problems with the FLR. We would like to fix them, but they are not easy to fix, and we can't give an estimated date for a fix. I've linked this topic to our investigation ticket.
For the moment, when FLR fails, we recommend restoring your files manually by following this documentation: https://github.com/vatesfr/xen-orchestra/blob/master/%40vates/fuse-vhd/README.md#restore-a-file-from-a-vhd-using-fuse-vhd-cli
@Pilow The backups kept by LTR are just regular backups with a specific tag, which doesn't change how we treat them.
If you want to avoid each of your LTR backups depending on the previous one, we recommend setting a full backup interval on your backup job, which will regularly force a full backup. (Even without LTR, an infinite chain of backups can cause problems in the long term, especially if no health checks are run.)
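A hypothetical sketch of how a full backup interval breaks infinite incremental chains (names are illustrative, not XO's actual scheduler): every Nth backup of the job is forced to be a full, so no delta chain grows longer than N - 1 backups.

```python
# Every full_interval-th backup is a full; the rest are deltas that depend
# on the previous backup in the chain.
def backup_kind(backup_index: int, full_interval: int) -> str:
    return "full" if backup_index % full_interval == 0 else "delta"

# With full_interval=7, backups 0, 7, 14, ... are fulls, so a corrupted
# delta can only invalidate at most the 6 backups that follow it.
```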
Thanks @Pilow for the explanations.
It may be configurable in the future, but for now LTR picks the first backup of the day, week, month and year. Depending on the timezone of your XOA, the first day of the week may either be Monday or Sunday.
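Here is a simplified sketch of the weekly case (illustrative only, not the real picker): group backups by ISO calendar week, then keep the earliest one in each group. Note that Python's `isocalendar()` weeks always start on Monday, whereas the real selection follows the XOA timezone, which also decides whether the week starts on Monday or Sunday.

```python
from datetime import datetime

def first_backup_per_week(timestamps: list[datetime]) -> list[datetime]:
    """Keep the earliest backup of each ISO (year, week) pair."""
    by_week: dict[tuple[int, int], datetime] = {}
    for ts in sorted(timestamps):
        year, week, _ = ts.isocalendar()
        by_week.setdefault((year, week), ts)  # earliest timestamp wins
    return list(by_week.values())
```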
There was initially a bug that made LTR pick the last backup of a time period instead of the first, but this was fixed a couple of months ago.
@ph7 Ok, thanks.
But the most important thing will be to run this test while some VMs are missing from the backup restore page (if it happens again).
@ph7 Ok, let's do this.
If it happens again, can you check that XO can still access the remote on which the missing VM backups are stored? (by using the "test your remote" button on the Settings > Remotes page)
It may just be a network issue.
Hi @acebmxer,
I've run some tests on a small infrastructure, which helped me understand the behaviour you're encountering.
With the performance plan, the load balancer can trigger migrations in the following cases:
After a host restart, your VMs will be unevenly distributed, but this will not trigger a migration if there are no anti-affinity constraints to satisfy, no memory or CPU usage thresholds are exceeded, and no host has more vCPUs than CPUs.
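That decision can be sketched as follows (hypothetical flags, not the plugin's real data model): uneven VM placement alone never triggers a migration; at least one of these conditions must hold.

```python
def performance_plan_would_migrate(
    anti_affinity_violated: bool,     # an anti-affinity constraint is unsatisfied
    usage_threshold_exceeded: bool,   # memory or CPU usage is over a threshold
    vcpu_overcommit: bool,            # a host runs more vCPUs than it has CPUs
) -> bool:
    """Uneven placement alone is not enough to trigger a migration."""
    return anti_affinity_violated or usage_threshold_exceeded or vcpu_overcommit
```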
If you want migrations to happen after a host restart, you should probably try the "preventive" behaviour, which can trigger migrations even when thresholds are not reached. However, it's based on CPU usage, so if your VMs use a lot of memory but not much CPU, this might not be ideal either.
We've received very little feedback about the "preventive" behaviour, so we'd be happy to have yours.
As we said before, lowering the critical thresholds might also be a solution, but I think it will make the load balancer less effective if you encounter heavy load at some point.
@Greg_E The RPU is supposed to disable the load balancer, but it's possible that when the load balancer restarts at the end of the RPU, it takes into account the host stats during the RPU, which may create some unexpected migrations.
We'll have to investigate that. Thanks for the feedback.
@acebmxer at the moment I don't know what could cause this behaviour. I'll try to reproduce it in the coming days.
I think setting the memory limit to half of the host RAM is fine if you don't expect too much load, but if you're getting a lot of RAM use on your hosts at some point, I'm not sure the load balancer will migrate VMs from a host at 90% RAM use to a host at 60% RAM use, as both exceed the limit.
Also, could you try to reproduce the bug again after changing the "performance plan behaviour" setting to conservative, to see if it changes anything? The "vCPU balancing" mode is quite recent, so maybe there's some bug with it that we haven't discovered yet.