Categories

  • All news regarding the Xen and XCP-ng ecosystem

    139 Topics
    4k Posts
    @dinhngtu said in XCP-ng Windows PV tools announcements: "@probain The canonical way is to check the product_id instead (https://docs.ansible.com/projects/ansible/latest/collections/ansible/windows/win_package_module.html#parameter-product_id). The ProductCode changes every time a new version of XCP-ng Windows PV tools is released, and you can get it from each release's MSI."

    No problem... If you ever decide to offer the .exe file as a separate download rather than bundling it inside the zip file, I'd be even happier. But until then, thanks for everything! (See the win_package sketch after this category list.)
  • Everything related to the virtualization platform

    1k Topics
    14k Posts
    bvitnik
    @MajorP93 The amount of logging is directly proportional to the number of hosts, VMs, SRs, and clients (Xen Orchestra, XCP-ng Center...). If you have a lot of those, huge logs are rather normal. That said, 5 hosts and 2 SRs isn't much, so I wouldn't expect you to have problems with huge logs; something else could be going on there. Try restarting your hosts to clear any stuck processes and internal tasks that could be spamming the logs.

    We started having problems with /var/log size when we got into the range of 15+ hosts, 10+ SRs, and 1000+ VMs per pool. Unfortunately, the log partition cannot be expanded, as it sits at the end of the disk, followed only by the swap partition. Our workaround was to patch the installer to create a large 8 GB log partition instead of the standard 4 GB one; of course, that meant reinstalling all of our hosts. (A quick usage-check sketch follows after this category list.)
  • 3k Topics
    26k Posts
    @fred974 It used to be (and probably still is) the case that you have to be reasonably close to the correct time for NTP to accept any changes; ntpd, for example, refuses to step the clock once the offset exceeds its panic threshold (1000 seconds by default) unless it was started with -g. (See the clock-step sketch after this category list.)
  • Our hyperconverged storage solution

    37 Topics
    690 Posts
    ronan-a
    @TestForEcho No ETA for now. Even before supporting QCOW2 on LINSTOR, we have several points to make more robust (HA performance, potential race conditions, etc.). Regarding other important points:

    • Supporting volumes larger than 2 TB has significant impacts on synchronization, RAM usage, coalesce, etc. We need to find a way to cope with these changes.
    • The coalesce algorithm should be changed to no longer depend on the write speed to the SR, in order to prevent potential coalesce interruptions; this is even more crucial for LINSTOR.
    • The coalesce behavior is not exactly the same for QCOW2, and we believe that currently this could negatively impact the API implementation in the case of LINSTOR.

    In short: QCOW2 has led to changes that require a long-term investment in several topics before we can even consider supporting this format on XOSTOR.
  • 31 Topics
    90 Posts
    olivierlambert
    Yes, accounts aren't related.
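
The win_package advice quoted in the PV tools post above lends itself to a short example. Below is a minimal sketch, not an official snippet: the MSI path, the `windows` host group, and the ProductCode GUID are all placeholder assumptions, and the real GUID must be read from the MSI of the specific release being deployed.

```yaml
# Minimal sketch: idempotent install of XCP-ng Windows PV tools.
# path and product_id are HYPOTHETICAL placeholders; each release of
# the PV tools ships a new ProductCode, so take the real GUID from
# that release's MSI.
- name: Install XCP-ng Windows PV tools
  hosts: windows
  tasks:
    - name: Install the PV tools MSI
      ansible.windows.win_package:
        path: C:\Temp\XenPVTools.msi                          # assumed path
        product_id: '{12345678-90AB-CDEF-1234-567890ABCDEF}'  # placeholder
        state: present
```

win_package compares product_id against the installed products on the target to decide whether the MSI needs to run at all, which is exactly why the per-release ProductCode matters for idempotence.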
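
For the /var/log discussion above, here is a minimal sketch of an ad-hoc usage check across pool hosts, assuming SSH access and a hypothetical inventory group named xcpng_hosts:

```yaml
# Hypothetical play: report log partition usage on every pool host,
# so a filling 4 GB /var/log partition is caught early.
- name: Check /var/log usage across the pool
  hosts: xcpng_hosts          # assumed inventory group
  gather_facts: false
  tasks:
    - name: Read log partition usage
      ansible.builtin.command: df -h /var/log
      register: log_usage
      changed_when: false     # read-only check

    - name: Show the result
      ansible.builtin.debug:
        var: log_usage.stdout_lines
```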
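
And for the NTP note above: when a clock is too far off for the daemon to slew it gradually, the usual fix is to step it once and then let NTP keep it. A minimal sketch, assuming the hosts run chrony (an assumption, not a given on every install) and reusing the same hypothetical xcpng_hosts group:

```yaml
# Hypothetical play: force one immediate clock step via chrony,
# then leave routine timekeeping to the NTP daemon.
- name: Step badly skewed clocks
  hosts: xcpng_hosts          # assumed inventory group
  gather_facts: false
  tasks:
    - name: Ask chrony to step the clock now
      ansible.builtin.command: chronyc makestep
```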