Categories

  • All news regarding Xen and XCP-ng ecosystem

    139 Topics
    4k Posts
    @dinhngtu said in XCP-ng Windows PV tools announcements: @probain The canonical way is to check the product_id instead https://docs.ansible.com/projects/ansible/latest/collections/ansible/windows/win_package_module.html#parameter-product_id The ProductCode changes every time a new version of XCP-ng Windows PV tools is released, and you can get it from each release's MSI. No problem... If you ever decide to offer the .exe file as a separate item, not bundled within the zip file, I would be even happier. But until then, thanks for everything!
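    For reference, a minimal sketch of that approach as an Ansible task, assuming a hypothetical local MSI path and a placeholder ProductCode (the real one must be read from each release's MSI, as noted above):

    ```yaml
    # Sketch only: install the PV tools MSI idempotently via its ProductCode.
    # The path and the GUID below are placeholders; the ProductCode changes
    # with every release and must be read from that release's MSI.
    - name: Install XCP-ng Windows PV tools
      ansible.windows.win_package:
        path: C:\Temp\xcp-ng-pv-tools.msi
        product_id: '{XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX}'
        state: present
    ```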
  • Everything related to the virtualization platform

    1k Topics
    14k Posts
    @bvitnik Thanks for your reply! Yes, all of the XCP-ng hosts have been restarted since I started monitoring the /var/log directory, due to package upgrades. I also restarted the toolstack 2 or 3 times in that time frame, so I don't think the issue was caused by some sort of stuck process or similar. I did some research and noticed that most people who run an environment of my scale do not encounter this issue (I currently have 105 VMs running), so I suspect something unusual is happening in my pool. I thought about working around this issue by implementing a remote syslog server (like Graylog) that has enough storage and letting all my XCP-ng hosts write to it, but I would really prefer to fix the underlying issue. Does anybody know common causes of this that I could check? That would be really awesome. Thanks and best regards
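    For what it's worth, a sketch of the remote-syslog workaround mentioned above, driving the `xe` CLI that XCP-ng inherits from XenServer through Ansible's command module; 192.0.2.10 and host_uuid are placeholders for the Graylog server and the host's UUID (see xe host-list):

    ```yaml
    # Sketch only: point a host's logging at a remote syslog server, then apply it.
    - name: Set the remote syslog destination on the host
      ansible.builtin.command: >-
        xe host-param-set uuid={{ host_uuid }}
        logging:syslog_destination=192.0.2.10

    - name: Apply the new syslog configuration
      ansible.builtin.command: xe host-syslog-reconfigure host-uuid={{ host_uuid }}
    ```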
  • 3k Topics
    26k Posts
    Bastien Nollet
    @Forza As the names are greyed out, could you confirm that the delta backup job saves the backup to the remotes srv04-incremental and srv12-incremental, and that the mirror backup job copies the backups from srv12-incremental to srv04-incremental? (Or at least confirm that these are the same pair of remotes.) Also, could you tell me whether the VM for which you got the "No new data to upload for this VM" message was backed up by your backup job "Incremental backup every 4 hours - 8 days retention"?
  • Our hyperconverged storage solution

    37 Topics
    690 Posts
    ronan-a
    @TestForEcho No ETA for now. Even before supporting QCOW2 on LINSTOR, we have several points to harden (HA performance, potential race conditions, etc.). Regarding other important points:
    • Supporting volumes larger than 2 TB has significant impacts on synchronization, RAM usage, coalesce, etc. We need to find a way to cope with these changes.
    • The coalesce algorithm should be changed to no longer depend on the write speed to the SR, in order to prevent potential coalesce interruptions; this is even more crucial for LINSTOR.
    • The coalesce behavior is not exactly the same for QCOW2, and we believe this could currently have a negative impact on the API implementation in the case of LINSTOR.
    In short: QCOW2 has led to changes that require a long-term investment in several topics before we can even consider supporting this format on XOSTOR.
  • 31 Topics
    90 Posts
    olivierlambert
    Yes, accounts aren't related.