Categories

  • All news regarding the Xen and XCP-ng ecosystem

    139 Topics
    4k Posts
    @uberiain At this point, when I uninstalled the old XCP-ng Center software and installed the new MSI, I realized that XCP-ng Center keeps its settings file in the Roaming folder (C:\Users\user\AppData\Roaming\XCP-ng). Once I deleted it, I could re-register the servers.
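    A minimal sketch of that cleanup step, assuming the folder path quoted in the post; the function names are my own, not part of any XCP-ng tooling, and XCP-ng Center should be closed before the folder is removed:

    ```python
    import ntpath
    import os
    import shutil

    def center_settings_dir(appdata: str) -> str:
        # XCP-ng Center settings live under %APPDATA%\XCP-ng (path from the post)
        return ntpath.join(appdata, "XCP-ng")

    def clear_stale_settings(appdata: str) -> bool:
        # Delete the settings folder so servers can be re-registered.
        # Returns True if a folder was actually removed.
        path = center_settings_dir(appdata)
        if os.path.isdir(path):
            shutil.rmtree(path)
            return True
        return False
    ```

    On a real machine you would pass `os.environ["APPDATA"]`; the deletion wipes all saved server registrations, so they must be re-added afterwards.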
  • Everything related to the virtualization platform

    1k Topics
    14k Posts
    [edit] I just realized this was from 2024.

    So far I have changed only one of my two Server 2022 VMs over to the XCP-ng drivers and management agent. The only thing that happened was the NIC reverting to DHCP, followed by a short struggle to shift it back to static. I use the e1000 profile when I set up a VM. This VM was previously using the Xen 9.4.x drivers downloaded from XenServer.

    The only other glitch is that if you migrate the VM to another host in the pool, the management agent is no longer detected. Additional migrations, rolling pool updates, and reboots are not bothered by this, and I'm told it will be fixed in the next version.

    As for the drivers themselves, I don't see any issues on this one VM. I think I have another 2022 VM in my lab with the XCP-ng tools installed; I'll have to check later, but I've had no issues with it. Both were installed from the new Tools ISO that is the default with the latest XCP-ng 8.3.x.
  • 3k Topics
    26k Posts
    I haven't had too much difficulty hitting the Esc key in time to get into the EFI config: click the Start VM button, quickly click into the display area, then tap Esc until I see it take effect. I know I have a couple of VMs running at 1920x1080, but that's actually kind of a pain. I only did it to try to get a larger RDP window; RDP may be limited by the original "monitor" resolution, but this might also be fixed in later updates. That one VM has been up for a few years. (edit: yes, this has been fixed; for VMs with a 4:3 monitor, RDP now uses whatever resolution I have set.)
  • Our hyperconverged storage solution

    37 Topics
    690 Posts
    ronan-a
    @TestForEcho No ETA for now. Even before supporting QCOW2 on LINSTOR, we have several points to robustify (HA performance, potential race conditions, etc.). Regarding other important points:

    - Supporting volumes larger than 2 TB has significant impacts on synchronization, RAM usage, coalesce, etc. We need to find a way to cope with these changes.
    - The coalesce algorithm should be changed to no longer depend on the write speed to the SR, in order to prevent potential coalesce interruptions; this is even more crucial for LINSTOR.
    - The coalesce behavior is not exactly the same for QCOW2, and we believe that currently this could negatively impact the API implementation in the case of LINSTOR.

    In short: QCOW2 has led to changes that require a long-term investment in several topics before we can even consider supporting this format on XOSTOR.
  • 30 Topics
    85 Posts
    Glitch
    @Davidj-0 Thanks for the feedback, I was also using a Debian box for my test ^^