Categories

  • All news regarding the Xen and XCP-ng ecosystem

    139 Topics
    4k Posts
    @uberiain At this point, when I uninstalled the old XCP-ng Center software and installed the new MSI, I realized that XCP-ng keeps its settings file in the Roaming folder (C:\Users\user\AppData\Roaming\XCP-ng). Once I deleted it, I could re-register the servers.
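The clean-up described above can be scripted. A minimal sketch, assuming a bash-like shell on Windows (e.g. Git Bash) where `APPDATA` points at the Roaming folder; the helper name is hypothetical:

```shell
# Hypothetical helper: remove the stale XCP-ng Center settings folder
# so the new install starts clean. APPDATA is assumed to point at
# C:\Users\<user>\AppData\Roaming (as it does under Git Bash).
clear_xcpng_settings() {
  rm -rf "${APPDATA:?APPDATA not set}/XCP-ng"
}
```

After running it, re-adding the servers in the new client should work as described.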
  • Everything related to the virtualization platform

    1k Topics
    14k Posts
    @bvitnik You're right. I added a lot of details but neglected to mention that I'm not booting from USB. I'm really not convinced I've destroyed my system; I truly think that's an over-reaction. I think I ruined my initrd and initramfs files, yes, but that should be recoverable. I haven't done nearly as much as you think I have.

    The reason I haven't succeeded yet is that I'm not really convinced I've been doing it the right way. Since my disks run in RAID, my system has six partitions: md127p1 through md127p6. From memory, p1 and p2 are very similar, but p1 doesn't include GRUB (/boot/efi/EFI). p4 is GRUB. p2 looks very similar to p1, but it includes GRUB. p3 is my VHDs. p5 is maybe swap, and I can't remember what the last one is.

    My point is that I don't believe I've mounted everything correctly from the shell to be able to chroot into the device and run the dracut commands successfully. When I run the dracut commands, I see failures for applications that I can see in the sbin folder, so there is something I'm missing when mounting these disks in the shell that is preventing me from solving this issue. That's why I'm here; I'm not here for lectures about the dangers of USB.

    Alternatively, I could boot the install media and simply perform a metadata/pool restore from backup, but I just want someone to tell me that's an actual viable option. I'm not going to simply re-install the OS. If I do, I'll clone it first, then boot the clone and test a metadata restore. But that's a lot of work for it to fail.
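For reference, the usual bind-mount-then-chroot sequence for regenerating an initramfs looks roughly like this. It is only a sketch: the `md127pN` device roles are assumptions based on the layout described above, and the `run()` wrapper just prints the commands unless `DRY_RUN=0`.

```shell
# Sketch of a rescue-shell chroot for regenerating the initramfs.
# Device roles are guesses from the partition layout described above;
# verify with lsblk/blkid before running for real.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

run mount /dev/md127p1 /mnt             # assumed root filesystem
run mount /dev/md127p4 /mnt/boot/efi    # assumed EFI/GRUB partition
# Bind the virtual filesystems so tools inside the chroot (dracut,
# grub, etc.) can see devices and kernel state:
for fs in dev proc sys; do
  run mount --bind /$fs /mnt/$fs
done
run chroot /mnt dracut --force --regenerate-all
```

Missing bind mounts for /dev, /proc, and /sys are a common cause of dracut failing inside a chroot even though the binaries are present in sbin.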
  • 3k Topics
    26k Posts
    @bazzacad You do not seem to have the same number of PIFs on each host. Hopefully you won't have to, but you can swap names if needed; read the doc here: https://docs.xcp-ng.org/networking/#renaming-nics
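To see the mismatch the reply describes, you can list the PIFs per host with the xe CLI. A dry-run sketch (the command is only printed unless `DRY_RUN=0`, since it needs a real pool):

```shell
# Sketch: compare PIF counts across hosts before renaming NICs.
# Requires the xe CLI on an XCP-ng host; here the command is only
# printed unless DRY_RUN=0.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

# One record per PIF, including its host; uneven counts per host
# show up immediately in the output.
run xe pif-list params=host-name-label,device,MAC
```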
  • Our hyperconverged storage solution

    37 Topics
    690 Posts
    ronan-a
    @TestForEcho No ETA for now. Even before supporting QCOW2 on LINSTOR, we have several points to make robust (HA performance, potential race conditions, etc.). Regarding other important points:

    - Supporting volumes larger than 2TB has significant impacts on synchronization, RAM usage, coalesce, etc. We need to find a way to cope with these changes.
    - The coalesce algorithm should be changed to no longer depend on the write speed to the SR, in order to prevent potential coalesce interruptions; this is even more crucial for LINSTOR.
    - The coalesce behavior is not exactly the same for QCOW2, and we believe that currently this could negatively impact the API impl in the case of LINSTOR.

    In short: QCOW2 has led to changes that require a long-term investment in several topics before even considering supporting this format on XOSTOR.
  • 30 Topics
    85 Posts
    Glitch
    @Davidj-0 Thanks for the feedback, I was also using Debian for my test ^^