Categories

  • All news regarding Xen and XCP-ng ecosystem

    139 Topics
    4k Posts
    @dinhngtu said in XCP-ng Windows PV tools announcements: "@probain The canonical way is to check the product_id instead: https://docs.ansible.com/projects/ansible/latest/collections/ansible/windows/win_package_module.html#parameter-product_id. The ProductCode changes every time a new version of XCP-ng Windows PV tools is released, and you can get it from each release's MSI."
    No problem. If you ever decide to offer the .exe file as a separate item rather than bundled inside the zip file, I would be even happier. But until then, thanks for everything!
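    As a side note, one way to read the ProductCode out of a release's MSI on a Linux machine is a sketch like the following, assuming the msitools package is available; the .msi filename is a placeholder, not the actual release artifact name:

    ```shell
    # Hypothetical sketch: dump the MSI Property table and pick out the ProductCode.
    # Requires msiinfo from the msitools package; the filename is a placeholder.
    msiinfo export XenPVTools.msi Property | grep ProductCode
    ```

    The GUID printed there is what would go into the win_package module's product_id parameter.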
  • Everything related to the virtualization platform

    1k Topics
    14k Posts
    Forza
    @ditzy-olive I think 'warm migration' should have worked, but perhaps it doesn't if your old pool lost its pool master?
  • 3k Topics
    26k Posts
    I am running into the same issue as in this post, and I'm confused about how to upgrade a cluster of hosts from 8.2.1 to 8.3 without massive downtime.

    I have three hosts: A, B, and C, with A as the master. I moved all the workloads off A and then upgraded it to 8.3. Next I'd like to move the workloads off one of the slaves, so that slave can take as long as necessary to upgrade; the upgrade is not quick. The only way to upgrade from 8.2.1 to 8.3 is to boot from the ISO, which is fine. But once a node is upgraded, I can't migrate workloads to it from the non-upgraded nodes. How do I roll this upgrade through the cluster without taking an entire host and all of its workloads offline for 45 minutes while it upgrades?

    I have been able to move workloads from old to new by shutting down a VM on an old node, using the copy function in Xen Orchestra to copy it to the upgraded master, and then booting the new copy. But that takes the VM offline for the duration of the copy, and only a few of my VMs can tolerate that. What am I missing?
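    For reference, the copy workaround described above can also be sketched from the CLI. This is a rough outline under the same constraints (the VM stays down for the whole copy), not an endorsed procedure; all UUIDs and the name label are placeholders:

    ```shell
    # Rough CLI equivalent of the Xen Orchestra copy workaround; placeholders throughout.
    xe vm-shutdown uuid=<VM UUID>                # the VM is offline from here on
    xe vm-copy vm=<VM UUID> new-name-label=myvm-copy sr-uuid=<SR UUID reachable from the upgraded host>
    xe vm-start uuid=<copied VM UUID>            # downtime lasts for the full duration of the copy
    ```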
  • Our hyperconverged storage solution

    38 Topics
    694 Posts
    From another post I gathered that there is an auto-scan feature that runs by default every 30 seconds, and it seems to cause a lot of issues when the storage contains a lot of disks or you simply have a lot of storage. It is not completely clear whether this auto-scan feature is actually necessary; to some customers, the Vates helpdesk has suggested reducing the scan frequency from 30 seconds to 2 minutes, and that seems to have improved the overall experience. The command would be: xe host-param-set other-config:auto-scan-interval=120 uuid=<Host UUID>, where the UUID is the pool master's UUID. Of course, I won't run that in production without reassurance from Vates support that it won't have a negative impact, but I think it is worth mentioning. In my situation I can see how frequent scans would delay other tasks, considering that my system is effectively always scanning, with the scan task itself probably being affected as well.
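    A minimal sketch of checking the value before and after changing it, using standard xe commands; the host UUID is a placeholder, and the auto-scan-interval key is the one mentioned above:

    ```shell
    # Find the pool master's host UUID:
    xe pool-list params=master --minimal
    # Read the current interval; empty output means the key is unset (the 30 s default):
    xe host-param-get uuid=<Host UUID> param-name=other-config param-key=auto-scan-interval
    # Raise it to 2 minutes, as reportedly suggested by the Vates helpdesk:
    xe host-param-set uuid=<Host UUID> other-config:auto-scan-interval=120
    ```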
  • 31 Topics
    90 Posts
    olivierlambert
    Yes, accounts aren't related