Categories

  • All news regarding the Xen and XCP-ng ecosystem

    139 Topics
    4k Posts
    @uberiain At this point, when I uninstalled the old XCP-ng Center software and installed the new MSI, I realized that XCP-ng keeps its settings file in the Roaming folder (C:\Users\user\AppData\Roaming\XCP-ng). Once I deleted it, I could re-register the servers. A minimal cleanup sketch is shown after this list.
  • Everything related to the virtualization platform

    1k Topics
    14k Posts
    @Pilow I run a single host, so I've been fully offline for a few days now. I know it can be recovered; I'm just not sure of the right next steps and am looking for the best way forward. I do have a metadata backup on one of the USB hard drives. Would restoring from the metadata backup actually resolve this?
  • 3k Topics
    26k Posts
    mx
    We recently had a relevant experience with this odd renaming to UUIDs. We had one Xen Orchestra instance managing one pool. ISOs were in an ISO SR, served over NFSv4 underneath. All was fine until we added a second pool to the orchestra, just a single host by itself. Within the next few days we discovered that all names in the ISO SR had been replaced by UUIDs. Removing and re-adding the SR on the new pool helped temporarily and the usual names reappeared, but after a few more days the UUIDs came back. Wherever UUIDs appeared, we could not select anything from the CD-ROM dropdown list in the console; the list for that pool was unpopulated. We tried separating the shares by giving the new pool its own NFSv4 share from the NAS, actually exposing the same source directory. It did mount, but then a UUID uniqueness constraint was violated, so we could not see any files at all in this new SR. It would not be an illogical design to have an 'ISO SR' attached once to the orchestra and offered by the orchestra to all managed pools, without UUIDs or uniqueness constraints. There seems to be an unnecessary complication here, I think.
  • Our hyperconverged storage solution

    37 Topics
    690 Posts
    ronan-a
    @TestForEcho No ETA for now. Even before supporting QCOW2 on LINSTOR, we have several points to make more robust (HA performance, potential race conditions, etc.). Regarding other important points: supporting volumes larger than 2 TB has significant impacts on synchronization, RAM usage, coalesce, etc., and we need to find a way to cope with these changes. The coalesce algorithm should be changed to no longer depend on the write speed to the SR, in order to prevent potential coalesce interruptions; this is even more crucial for LINSTOR. The coalesce behavior is not exactly the same for QCOW2, and we believe that this could currently have a negative impact on the API implementation in the case of LINSTOR. In short: QCOW2 has led to changes that require a long-term investment in several topics before we can even consider supporting this format on XOSTOR.
  • 30 Topics
    85 Posts
    Glitch
    @Davidj-0 Thanks for the feedback, I was also using Debian for my test ^^
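
The reset step described in the first post above boils down to removing XCP-ng Center's per-user settings directory before re-adding servers. Here is a minimal sketch of scripting that cleanup, assuming the default %APPDATA%\XCP-ng location mentioned in the post and that XCP-ng Center is closed first; the helper name and the idea of automating the deletion are illustrative, not an official procedure.

```python
import os
import shutil
from pathlib import Path

# Hypothetical helper: remove XCP-ng Center's saved per-user state so servers
# can be re-registered after switching from the old XCP-ng Center to the new
# MSI build. Assumes the settings live under %APPDATA%\XCP-ng (per the post
# above) and that XCP-ng Center is not running while this runs.
def reset_xcpng_center_settings() -> None:
    settings_dir = Path(os.environ["APPDATA"]) / "XCP-ng"  # e.g. C:\Users\<user>\AppData\Roaming\XCP-ng
    if settings_dir.is_dir():
        shutil.rmtree(settings_dir)  # deletes the saved server list and settings file
        print(f"Removed {settings_dir}; re-register your servers on next start.")
    else:
        print(f"Nothing to do: {settings_dir} does not exist.")

if __name__ == "__main__":
    reset_xcpng_center_settings()
```

Note that deleting this folder discards all saved connection settings, so server addresses and credentials have to be re-entered afterwards.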