Hi @cbaguzman,
Currently there's no way to run the `vhd-cli check` command on an encrypted remote, but this is also annoying for us and it shouldn't be too hard to fix, so I'll fix it.
Hi @KPS,
Thank you for reporting this behavior. We haven't been able to reproduce the bug yet, but we'll look into it with @MathieuRA. We're a bit busy at the moment, so we probably won't be able to fix this issue before the November release.
The fix comes from @florent, so all the kudos go to him 
@jshiells this value is the average load across all cores on a host. To be more precise, it is a weighted average of that load over the last 30 minutes. Migrations are triggered if this average exceeds 85% of the critical threshold defined in the plugin configuration, which is roughly 64% if you set the critical threshold to 75%.
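To make the arithmetic concrete, here is a small illustrative sketch (this is not the actual plugin code, just the relationship described above):

```javascript
// Illustrative only: migrations start when the 30-minute weighted load
// average exceeds 85% of the configured critical threshold.
function effectiveThreshold(criticalThreshold) {
  return criticalThreshold * 0.85;
}

// With the critical threshold at 75%, migrations start around 64% load.
console.log(Math.round(effectiveThreshold(75))); // 64
```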
Other circumstances can also trigger migrations:
Hi @Pilow,
We plan to also make LTR available for mirror backups and metadata backups in the future, but we haven't had the time to do it yet.
Smart mode on mirror incremental backups would be a bit tricky to do, as it would require us to handle incomplete chains of backups, for cases when a tag is removed from a VM and then added back. We might still implement it in the future, though.
About the bug you noticed, where VMs show up in the backup log despite being excluded: I think it was intentional at some point, but now it would make sense to remove this behaviour. Thanks for the feedback, we'll change it.
Hi @Pilow,
This is a great idea, we'll plan it so it gets implemented in the future.
A fix has been merged on master, now the LTR should properly pick the first backup of each day, week, month and year instead of the last one: https://github.com/vatesfr/xen-orchestra/pull/9180
We plan to make it configurable during the upcoming months.
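For clarity, the fixed selection rule could be sketched like this (an illustrative snippet, not XO's actual implementation; `timestamp` is an assumed field in milliseconds):

```javascript
// Illustrative sketch of the fixed LTR selection: for each day, keep the
// FIRST backup instead of the last. The same idea applies to weeks,
// months and years.
function firstBackupPerDay(backups) {
  const byDay = new Map();
  const sorted = [...backups].sort((a, b) => a.timestamp - b.timestamp);
  for (const backup of sorted) {
    const day = new Date(backup.timestamp).toISOString().slice(0, 10);
    if (!byDay.has(day)) {
      byDay.set(day, backup); // first backup of that day wins
    }
  }
  return [...byDay.values()];
}
```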
@Forza said in Full backup - new long-retention options:
Thanks for the quick feedback. Does it mean that the schedule's own retention is also honored separately in addition to the LTR?
Yes, a backup is kept if it matches at least one of the retention criteria, either the schedule's own retention or the LTR. (The backup is not duplicated; we simply check both criteria to decide whether to keep it.)
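In short, the rule amounts to something like this (names are hypothetical, not XO's API):

```javascript
// Hypothetical sketch of the rule described above: a backup is kept if it
// matches EITHER the schedule's own retention OR the LTR; nothing is
// duplicated, the same backup is simply checked against both criteria.
function shouldKeepBackup(backup, matchesScheduleRetention, matchesLtr) {
  return matchesScheduleRetention(backup) || matchesLtr(backup);
}
```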
The fix has been released in XO 5.110.
Hi @acebmxer,
I've made some tests on a small infrastructure, which helped me understand the behaviour you encountered.
With the performance plan, the load balancer can trigger migrations in the following cases:
After a host restart, your VMs will be unevenly distributed, but this will not trigger a migration if there are no anti-affinity constraints to satisfy, no memory or CPU usage thresholds are exceeded, and no host has more vCPUs than physical CPUs.
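If it helps, those trigger conditions could be sketched like this (all field names are hypothetical, this is not the plugin's actual code):

```javascript
// Hedged sketch of the conditions above: with the performance plan, a
// migration can be triggered by an anti-affinity violation, an exceeded
// CPU or memory threshold, or a host with more vCPUs than physical CPUs.
function hostNeedsMigration(host) {
  return (
    host.antiAffinityViolated ||
    host.cpuUsage > host.cpuCriticalThreshold ||
    host.memoryUsage > host.memoryCriticalThreshold ||
    host.vcpuCount > host.physicalCpuCount
  );
}
```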
If you want migrations to happen after a host restart, you should probably try the "preventive" behaviour, which can trigger migrations even when thresholds are not reached. However, it's based on CPU usage, so if your VMs use a lot of memory but little CPU, it might not be ideal either.
We've received very little feedback about the "preventive" behaviour, so we'd be happy to have yours. 
As we said before, lowering the critical thresholds might also be a solution, but I think it would make the load balancer less effective if you encounter heavy load at some point.
@Greg_E The RPU is supposed to disable the load balancer, but it's possible that when the load balancer restarts at the end of the RPU, it takes into account the host stats recorded during the RPU, which may cause some unexpected migrations.
We'll have to investigate that. Thanks for the feedback.
@acebmxer at the moment I don't know what could cause this behaviour. I'll try to reproduce it in the coming days.
I think setting the memory limit to half of the host RAM is fine if you don't expect too much load, but if you're getting a lot of RAM use on your hosts at some point, I'm not sure the load balancer will migrate VMs from a host at 90% RAM use to a host at 60% RAM use, as both exceed the limit.
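To illustrate the concern (hypothetical logic, not the plugin's actual code): with the limit at 50% of host RAM, a host at 60% usage is already over the limit and may be rejected as a destination, so the VM could stay on the 90% host:

```javascript
// Hypothetical illustration: a host is only a valid migration destination
// if its memory usage is below the configured limit.
function canReceiveVm(host, memoryLimitPercent = 50) {
  return host.memoryUsagePercent < memoryLimitPercent;
}

console.log(canReceiveVm({ memoryUsagePercent: 60 })); // false
console.log(canReceiveVm({ memoryUsagePercent: 40 })); // true
```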
Also, could you try to reproduce the bug again after changing the "performance plan behaviour" setting to conservative, to see if it changes anything? The "vCPU balancing" mode is quite recent, so maybe there's a bug in it that we haven't discovered yet.
Hi @acebmxer,
I think the reason for this is a feature we recently added that prevents VMs from moving back and forth between hosts: VMs now have a cooldown (30 minutes by default) between two load-balancer-triggered migrations.
Can you try setting the migration cooldown to 0 (in the "Advanced" section of the load balancer configuration) and tell us whether it fixes this behaviour?
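For reference, the cooldown check amounts to something like this (an illustrative sketch, not XO's actual code):

```javascript
// Illustrative sketch: a VM is skipped if its last load-balancer-triggered
// migration happened less than `cooldownMs` ago; a cooldown of 0 disables
// the check entirely. Times are in milliseconds.
const DEFAULT_COOLDOWN_MS = 30 * 60 * 1000; // 30 minutes

function canMigrate(vm, now, cooldownMs = DEFAULT_COOLDOWN_MS) {
  return vm.lastMigration === undefined || now - vm.lastMigration >= cooldownMs;
}
```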
Hi @MajorP93,
This PR only changes the way we delete old logs (it's part of a bigger effort to make backups use XO tasks instead of their own task system); it won't fix the issue discussed in this topic.
Hi @champagnecharly,
On the XO side, it seems that this PCI has an empty string as its ID, which prevents us from deleting it.
We'll have to do some tests to find out how to prevent that.
We might have trouble reproducing the issue, so would you mind helping us with the tests?
You would need to add this piece of code in the file `xo-server/dist/xapi-object-to-xo.mjs`, just before the line that starts with `if (isHvm) {` (it should be near line 475):

```js
if ((_vm$attachedPcis = vm.attachedPcis) !== null && _vm$attachedPcis !== void 0 && _vm$attachedPcis.includes('')) {
  warn('Empty string PCI id:', otherConfig.pci);
}
```

Then restart xo-server and look at the output of `journalctl`; there should be some lines looking like:

```
2026-01-30T09:26:17.763Z xo:server:xapi-objects-to-xo WARN Empty string PCI id:
```
We just merged the delay: https://github.com/vatesfr/xen-orchestra/pull/9400
We increased it to 5s to keep a safety margin, as the optimal delay may not be the same on every configuration.
We're still investigating a bit to see if we can find the root cause of the problem, but if we don't find it, we'll add this delay.
Thanks @Pilow for the tests once again 
OK, so 1s is not quite enough; thanks for the update.
Thanks again @Pilow
I don't think the remotes being S3 changes anything here.