XO Team

Developers of Xen Orchestra

Posts

  • RE: Timestamp lost in Continuous Replication

    @Pilow

    we had as many "VMs with a timestamp in the name" as the number of replicas, and multiple snapshots on the source VM
    now we have "one replica VM with multiple snapshots"? Veeam-replica-style...

    we didn't look at Veeam, but it's reassuring to see that we converge toward the solutions used elsewhere

    it shouldn't change anything on the source
    I am currently doing more tests to see if we missed something

    edit: as an additional benefit, it should use less space on the target if you have a retention > 1, since we will only have one active disk
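    To illustrate the space saving, here is a back-of-envelope estimate (the numbers are illustrative assumptions, not measured XO figures):

```python
# Back-of-envelope comparison of replica storage, using assumed numbers.
disk_gib = 100   # size of the replicated VM disk
retention = 3    # number of replication points kept
delta_gib = 5    # data changed between two replications

# Old scheme: one full replica VM per retention point
old_space = retention * disk_gib                     # 3 * 100 = 300 GiB

# New scheme: one active disk plus delta snapshots
new_space = disk_gib + (retention - 1) * delta_gib   # 100 + 2 * 5 = 110 GiB

print(old_space, new_space)  # 300 110
```

    The bigger the retention and the smaller the daily delta, the larger the saving.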

  • RE: Best pratice : Add dedicated host for CR or DR.

    @Dezerd and you can either add the replication to the same backup job if you have a good connection between hosts, or create separate replication jobs

    a replication is quite neat, especially for a fast restart, since the replicated VM is ready to start

  • RE: 🛰️ XO 6: dedicated thread for all your feedback!

    @acebmxer Reproducing the focus issue, we'll fix that. Thanks for the report!

    @shanenp SR management is coming to XO 6 soon! 🙂

    @benapetr XO 6 works like XO 5: with a web server behind that handles connections to the hosts. Do you get any error messages in the console or in xo-server logs? Does XO 5 work for you?

  • RE: Timestamp lost in Continuous Replication

    @Pilow the timestamp is on the snapshot, but you're right, we can add a note on the VM with the latest replication information

    note that the older replicated VMs will be purged once we are sure they don't contain any useful data, so you will only have one replicated VM, with multiple snapshots

  • RE: Timestamp lost in Continuous Replication

    @ph7 yes, this is expected, even if we haven't had time to communicate about it yet: https://github.com/vatesfr/xen-orchestra/pull/9524

    the goal is to have more symmetry between the source (one VM with snapshots) and the replica (one VM with snapshots). The end goal is to use this to be able to reverse a PRD, and to allow advanced scenarios like VM on an edge site => replica on a central site => backup on central servers

    but it should not do full backups: it should respect the delta/full chain

    fbeauchamp opened pull request #9524 in vatesfr/xen-orchestra (closed): feat(backups): symetrical backups

  • RE: Migrations after updates

    Hi @acebmxer,

    I've run some tests on a small infrastructure, which helped me understand the behaviour you encountered.

    With the performance plan, the load balancer can trigger migrations in the following cases:

    • to better satisfy affinity or anti-affinity constraints
    • if a host's memory or CPU usage exceeds a threshold (85% of the CPU critical threshold, or 1.2 times the free memory critical threshold)
    • with vCPU balancing behaviour, if the vCPU/CPU ratio differs too much from one host to another AND at least one host has more vCPUs than CPUs
    • with preventive behaviour, if CPU usage differs too much from one host to another AND at least one host has more than 25% CPU usage
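    The conditions above can be sketched roughly as follows (a simplified, hypothetical model in Python; the names, thresholds, and spread values are assumptions, not the actual xen-orchestra load balancer code):

```python
# Hypothetical sketch of the performance plan triggers described above.
# Threshold values and spreads are illustrative assumptions.
CPU_CRITICAL = 90.0       # % — example critical CPU threshold
FREE_MEM_CRITICAL = 2.0   # GiB — example critical free-memory threshold

def may_trigger_migration(hosts, behaviour, anti_affinity_violated=False):
    """Return True if one of the four conditions would allow a migration."""
    # 1. Affinity / anti-affinity constraints not satisfied
    if anti_affinity_violated:
        return True
    # 2. A host's memory or CPU usage exceeds a threshold
    for h in hosts:
        if h["cpu_usage"] > 0.85 * CPU_CRITICAL:
            return True
        if h["free_mem"] < 1.2 * FREE_MEM_CRITICAL:
            return True
    # 3. vCPU balancing: ratios differ too much AND a host is oversubscribed
    if behaviour == "vcpu":
        ratios = [h["vcpus"] / h["cpus"] for h in hosts]
        oversubscribed = any(h["vcpus"] > h["cpus"] for h in hosts)
        if max(ratios) - min(ratios) > 0.5 and oversubscribed:  # 0.5: assumed spread
            return True
    # 4. Preventive: CPU usage differs too much AND one host is above 25%
    if behaviour == "preventive":
        usages = [h["cpu_usage"] for h in hosts]
        if max(usages) - min(usages) > 20 and max(usages) > 25:  # 20: assumed spread
            return True
    return False
```

    This is only meant to make the decision rules explicit; the real plan also weighs which VM to move and where.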

    After a host restart, your VMs will be unevenly distributed, but this will not trigger a migration if there are no anti-affinity constraints to satisfy, no memory or CPU usage thresholds are exceeded, and no host has more vCPUs than CPUs.

    If you want migrations to happen after a host restart, you should probably try the "preventive" behaviour, which can trigger migrations even if thresholds are not reached. However, it's based on CPU usage, so if your VMs use a lot of memory but little CPU, it might not be ideal either.

    We've received very little feedback about the "preventive" behaviour, so we'd be happy to have yours. 🙂

    As we said before, lowering the critical thresholds might also be a solution, but I think it would make the load balancer less effective if you encounter heavy load at some point.

  • RE: REQUEST: Add PATCH /vms/{id} for updating VM properties (name_description, name_label)

    Hi @14wkinnersley.
    That's something in our backlog, but it's not planned yet.
    ping @gregoire, card XO-2204

  • RE: Migrations after updates

    @Greg_E The RPU is supposed to disable the load balancer, but it's possible that when the load balancer restarts at the end of the RPU, it takes into account the host stats during the RPU, which may create some unexpected migrations.

    We'll have to investigate that. Thanks for the feedback.

  • RE: Migrations after updates

    @acebmxer at the moment I don't know what could cause this behaviour. I'll try to reproduce it in the coming days.

    I think setting the memory limit to half of the host RAM is fine if you don't expect too much load, but if you're getting a lot of RAM use on your hosts at some point, I'm not sure the load balancer will migrate VMs from a host at 90% RAM use to a host at 60% RAM use, as both exceed the limit.

    Also, could you try again to reproduce the bug after changing the "performance plan behaviour" setting to conservative, to see if it changes something? The "vCPU balancing" mode is quite recent, so maybe there's some bug with it that we didn't discover yet.

  • RE: Unable to define count of CPUs during VM create

    @akihu2 Perfect, thanks.

    Yes, you can close this topic