XCP-ng
    • DustyArmstrong

      AMD 'Barcelo' passthrough issues - any success stories?

      Hardware · 0 Votes · 12 Posts · 480 Views
      @DustyArmstrong Thanks for responding to the GitHub issue. It's great that more people want this working; it's difficult to gain traction otherwise. Regarding your list, it's correct, though the reboot should come second: you only need to reboot to detach your PCI device (video card) from its driver and assign it to the pciback driver on the next boot. This effectively creates a reservation for the device and allows you to dynamically assign it to VMs. Once your card is free from other kernel drivers, the rest doesn't require a reboot.
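      For reference, on XCP-ng the usual way to make that reservation is to hide the device from dom0 at boot, reboot once, and then attach it to a VM. A sketch, assuming the GPU sits at PCI address 0000:01:00.0 (check yours with lspci) and `<vm-uuid>` is a placeholder:

      ```shell
      # Find the PCI address of the GPU (address below is assumed)
      lspci -D | grep -i vga

      # Hide the device from dom0 so xen-pciback claims it on the next boot
      /opt/xensource/libexec/xen-cmdline --set-dom0 "xen-pciback.hide=(0000:01:00.0)"

      # Reboot once so the driver reassignment takes effect
      reboot

      # After the reboot, attach the device to a VM (replace <vm-uuid>)
      xe vm-param-set uuid=<vm-uuid> other-config:pci=0/0000:01:00.0
      ```

      After that one reboot, the device can be moved between VMs by changing `other-config:pci` and restarting the VM, with no further host reboots, which matches the list above.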
    • U

      Loss of connection during an action BUG

      Xen Orchestra · 0 Votes · 2 Posts · 26 Views
      @User-cxs can you check under SETTINGS / SERVERS? It looks like all the master hosts are disabled/disconnected, or you had an unexpected toolstack restart...
    • K

      Suggestion: Check if requirements for backups are met before shutting down VMs...

      Backup · 0 Votes · 3 Posts · 87 Views
      @olivierlambert done.
    • A

      Migrations after updates

      Xen Orchestra · 0 Votes · 13 Posts · 357 Views
      Bastien Nollet
      Hi @acebmxer, I've run some tests on a small infrastructure, which helped me understand the behaviour you're encountering. With the performance plan, the load balancer can trigger migrations in the following cases:
      - to better satisfy affinity or anti-affinity constraints
      - if a host's memory or CPU usage exceeds a threshold (85% of the CPU critical threshold, or 1.2 times the free-memory critical threshold)
      - with the vCPU balancing behaviour, if the vCPU/CPU ratio differs too much from one host to another AND at least one host has more vCPUs than CPUs
      - with the preventive behaviour, if CPU usage differs too much from one host to another AND at least one host has more than 25% CPU usage
      After a host restart, your VMs will be unevenly distributed, but this will not trigger a migration if there are no anti-affinity constraints to satisfy, no memory or CPU usage thresholds are exceeded, and no host has more vCPUs than CPUs. If you want migrations to happen after a host restart, you should probably try the "preventive" behaviour, which can trigger migrations even when thresholds are not reached. However, it is based on CPU usage, so if your VMs use a lot of memory but little CPU, it might not be ideal either. We've received very little feedback about the "preventive" behaviour, so we'd be happy to have yours. As we said before, lowering the critical thresholds might also be a solution, but I think it would make the load balancer less effective if you hit heavy load at some point.
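      The trigger conditions above can be sketched roughly like this. All names, thresholds, and divergence margins are illustrative assumptions, not Xen Orchestra's actual code:

      ```python
      # Hypothetical sketch of the migration-trigger conditions described above.
      # Thresholds and field names are assumptions, not Xen Orchestra internals.
      from dataclasses import dataclass

      CPU_CRITICAL = 90.0          # critical CPU threshold (percent), assumed
      FREE_MEM_CRITICAL = 1024.0   # critical free-memory threshold (MiB), assumed

      @dataclass
      class Host:
          cpu_usage: float      # percent
          free_memory: float    # MiB
          vcpus: int            # total vCPUs of resident VMs
          cpus: int             # physical CPUs

      def performance_plan_triggers(hosts, anti_affinity_violated=False,
                                    vcpu_balancing=False, preventive=False):
          """Return True if any described condition would trigger migrations."""
          # 1. Affinity / anti-affinity constraints not satisfied
          if anti_affinity_violated:
              return True
          # 2. A host exceeds 85% of the CPU critical threshold, or its free
          #    memory drops below 1.2x the free-memory critical threshold
          for h in hosts:
              if h.cpu_usage > 0.85 * CPU_CRITICAL:
                  return True
              if h.free_memory < 1.2 * FREE_MEM_CRITICAL:
                  return True
          # 3. vCPU balancing: ratios diverge AND some host is oversubscribed
          if vcpu_balancing:
              ratios = [h.vcpus / h.cpus for h in hosts]
              if (max(ratios) - min(ratios) > 0.5
                      and any(h.vcpus > h.cpus for h in hosts)):
                  return True
          # 4. Preventive: CPU usage diverges AND some host is above 25% CPU
          if preventive:
              usages = [h.cpu_usage for h in hosts]
              if max(usages) - min(usages) > 20 and any(u > 25 for u in usages):
                  return True
          return False
      ```

      Note that an uneven VM count per host appears nowhere in these conditions, which is why a host restart alone does not cause the load balancer to rebalance.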
    • A

      "NOT_SUPPORTED_DURING_UPGRADE()" error after yesterday's update

      Backup · 0 Votes · 24 Posts · 1k Views
      @MajorP93 said:
      - disable HA at the pool level
      - disable the load balancer plugin
      - upgrade the master
      - upgrade all other nodes
      - restart the toolstack on the master
      - restart the toolstack on all other nodes
      - live migrate all VMs running on the master to other node(s)
      - reboot the master
      - reboot the next node (live migrating all VMs on that particular node away before doing so)
      - repeat until all nodes have been rebooted (one node at a time)
      - re-enable HA at the pool level
      - re-enable the load balancer plugin
      Never had any issues with that. No downtime for any of the VMs.

      It's update time again, and the same issue is back. I followed these steps:
      - upgrade the master
      - upgrade all other nodes
      - restart the toolstack on the master
      - restart the toolstack on all other nodes
      - live migrate all VMs running on the master to other node(s)
      - reboot the master
      Now I can't migrate anything else:
      - live migration: NOT_SUPPORTED_DURING_UPGRADE
      - warm migration: fails, and the VM shuts down immediately and needs to be forced back to life
      - CR backup to another server: NOT_SUPPORTED_DURING_UPGRADE
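      For anyone following along, the quoted procedure maps roughly onto these host-level commands (a sketch; `<host-uuid>` is a placeholder, the package upgrade itself happens via yum or the ISO as usual, and the load balancer plugin is toggled in the Xen Orchestra UI):

      ```shell
      # On the pool master, before the upgrade:
      xe pool-ha-disable                 # disable HA at the pool level

      # On each host after upgrading its packages:
      xe-toolstack-restart               # restart the toolstack

      # Move all VMs off a host before rebooting it:
      xe host-evacuate uuid=<host-uuid>  # live-migrates resident VMs away
      reboot

      # Once every host has been rebooted:
      xe pool-ha-enable                  # re-enable HA at the pool level
      ```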