Categories

  • All news regarding the Xen and XCP-ng ecosystem

    139 Topics
    4k Posts
    @dinhngtu said in XCP-ng Windows PV tools announcements: "@probain The canonical way is to check the product_id instead (https://docs.ansible.com/projects/ansible/latest/collections/ansible/windows/win_package_module.html#parameter-product_id). The ProductCode changes every time a new version of the XCP-ng Windows PV tools is released, and you can get it from each release's MSI."
    No problem... If you ever decide to offer the .exe file as a separate item rather than bundled within the zip file, I would be even happier. But until then, thanks for everything!
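    The ProductCode mentioned above lives in the MSI's Property table, so it can be read programmatically before being wired into an Ansible win_package task. A minimal sketch in Python, assuming a Windows host and Python 3.12 or older (the standard-library msilib module was deprecated in 3.11 and removed in 3.13); the MSI file name is a placeholder, not the real release artifact:

    ```python
    # Read the ProductCode GUID from an MSI's Property table (Windows only).
    # Uses the standard-library msilib module (removed in Python 3.13).
    import msilib

    def msi_product_code(path: str) -> str:
        db = msilib.OpenDatabase(path, msilib.MSIDBOPEN_READONLY)
        view = db.OpenView(
            "SELECT Value FROM Property WHERE Property = 'ProductCode'"
        )
        view.Execute(None)
        product_code = view.Fetch().GetString(1)  # a GUID like "{...}"
        view.Close()
        return product_code

    print(msi_product_code("xcp-ng-windows-pv-tools.msi"))  # placeholder name
    ```

    That GUID is what win_package's product_id parameter expects, and since it changes with every release, it has to be refreshed whenever the playbook is bumped to a new version of the tools.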
  • Everything related to the virtualization platform

    1k Topics
    14k Posts
    I did some research and found two (old) forum threads where other people ran into the issue I am currently facing (/var/log full after some time): one in this forum and the other in the Citrix XenServer forum. In both cases the advice was to check for old or incompatible XenTools, since apparently they can cause exactly this behaviour of filling up /var/log on the pool master. Apparently there was even a confirmed bug for this issue in one version of the Citrix XenTools. My 105 virtual machines are all either Windows Server or Debian (a mix of Debian 11, 12 and 13). On most of my Debian systems I use the latest version of these XenTools: https://github.com/xenserver/xe-guest-utilities. On all of my Windows systems I use "XenServer VM Tools for Windows 9.4.2" from https://www.xenserver.com/downloads. Are those XenTools expected to cause issues? Which XenTools are expected to work best with a fully updated XCP-ng 8.3 as of now? Best regards
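    As a first diagnostic step for the situation above, it helps to see which files actually dominate /var/log on the pool master before suspecting a particular guest-tools version. A minimal sketch, assuming Python is available on the host; the path and the ten-file cutoff are arbitrary choices:

    ```python
    # List the largest files under /var/log to see what is eating the space.
    import os

    def largest_files(root: str, top: int = 10):
        sizes = []
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    sizes.append((os.path.getsize(path), path))
                except OSError:
                    continue  # file rotated away or unreadable; skip it
        return sorted(sizes, reverse=True)[:top]

    for size, path in largest_files("/var/log"):
        print(f"{size / 2**20:9.1f} MiB  {path}")
    ```

    Whichever log dominates points at the component to investigate next; on an XCP-ng pool master, /var/log/xensource.log (the XAPI log) and daemon.log are worth checking first.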
  • 3k Topics
    26k Posts
    @manilx There are some cheap Netgate appliances (Netgate 1100 or 2100) for keeping your pfSense+ out of the virtual infrastructure. This is the way.
  • Our hyperconverged storage solution

    37 Topics
    690 Posts
    ronan-a:
    @TestForEcho No ETA for now. Even before supporting QCOW2 on LINSTOR, we have several points to harden (HA performance, potential race conditions, etc.). Regarding other important points:
    • Supporting volumes larger than 2 TB has significant impacts on synchronization, RAM usage, coalesce, etc. We need to find a way to cope with these changes.
    • The coalesce algorithm should be changed to no longer depend on the write speed to the SR, in order to prevent potential coalesce interruptions; this is even more crucial for LINSTOR.
    • The coalesce behavior is not exactly the same for QCOW2, and we believe that currently this could negatively impact the API implementation in the case of LINSTOR.
    In short: QCOW2 has led to changes that require a long-term investment in several areas before even considering supporting this format on XOSTOR.
  • 31 Topics
    90 Posts
    olivierlambert:
    Yes, accounts aren't related.