Categories

  • All news regarding the Xen and XCP-ng ecosystem

    139 Topics
    4k Posts
    @uberiain At this point, when I uninstalled the old XCP-ng Center software and installed the new MSI, I realized that XCP-ng Center keeps its settings file in the Roaming folder (C:\Users\user\AppData\Roaming\XCP-ng). Once I deleted it, I could re-register the servers.
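    A minimal cleanup sketch, assuming the default profile location mentioned in the post and that XCP-ng Center is closed first (Windows cmd; %APPDATA% expands to the Roaming folder):
    rem delete the stale XCP-ng Center settings folder so servers can be re-registered
    rmdir /s /q "%APPDATA%\XCP-ng"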
  • Everything related to the virtualization platform

    1k Topics
    14k Posts
    Long story short: I was working with an Arch Linux VM within XCP-ng, trying to debug why ZFSBootMenu won't boot my VM with its / partition on ZFS. If interested, the details are documented here: https://www.reddit.com/r/zfs/comments/1p1veqf/damn_struggling_to_get_zfsbootmenu_to_work/ One of the proposed methods for debugging why the ZFSBootMenu EFI won't boot the kernel was to attach a serial port to the VM to gather output. Adding a serial port to a VM was new to me, and I tried two methods. First, SSH'd into dom0, I ran:
    # xl vm-list                     (this got me the VM ID)
    # xl console -t serial <ID #>
    This didn't seem to produce any output, so the other method was to add a serial port that dumps its output to a TCP port:
    # xe vm-param-add uuid=<uuid> param-name=platform hvm_serial=tcp::7001,server,nodelay
    I then tried to connect to TCP port 7001 from a remote host:
    # nc -v <ip_address> 7001
    That didn't produce any output either. Am I going about this the right way in terms of attaching a serial port? For the record, if anyone has worked with ZFSBootMenu, the parameters I'm adjusting are:
    # zfs set org.zfsbootmenu:commandline="spl.spl_hostid=$(hostid) loglevel=7 rw console=ttyS0" tank/sys/arch/ROOT/default
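    A hedged sketch of an alternative approach, assuming an HVM guest; <vm-uuid> and <domid> are placeholders (from xe vm-list and xl list), not values from the post:
    # point the emulated serial port at a local pty instead of a TCP socket
    xe vm-param-set uuid=<vm-uuid> platform:hvm_serial=pty
    # platform changes are picked up on the next domain start, so power-cycle the VM
    xe vm-shutdown uuid=<vm-uuid>
    xe vm-start uuid=<vm-uuid>
    # find the numeric domain ID, then attach to the emulated serial console from dom0
    xl list
    xl console -t serial <domid>
    With the TCP variant (hvm_serial=tcp::7001,server,nodelay), note that QEMU opens the listening socket in dom0, so nc should target the XCP-ng host's address, and the guest kernel still needs console=ttyS0 (already set above) to write anything to that port.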
  • 3k Topics
    26k Posts
    I will tell you a little secret: you taught me more than that one CLI command lol. One more important note for my Bitwarden. Thanks again bud!
  • Our hyperconverged storage solution

    37 Topics
    690 Posts
    ronan-a
    @TestForEcho No ETA for now. Even before supporting QCOW2 on LINSTOR, we have several points to make more robust (HA performance, potential race conditions, etc.). Other important points:
    - Supporting volumes larger than 2 TB has significant impacts on synchronization, RAM usage, coalesce, etc.; we need to find a way to cope with these changes.
    - The coalesce algorithm should be changed so it no longer depends on the write speed to the SR, to prevent potential coalesce interruptions; this is even more crucial for LINSTOR.
    - The coalesce behavior is not exactly the same for QCOW2, and we believe this could currently have a negative impact on the API implementation in the case of LINSTOR.
    In short: QCOW2 has led to changes that require long-term investment in several areas before we can even consider supporting this format on XOSTOR.
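    For context on the 2 TB point: VHD-based virtual disks are capped at roughly 2 TiB, which is one of the limits QCOW2 is meant to lift. A hedged sketch (not from the post) of how an admin could list virtual disk sizes on a given SR; <sr-uuid> is a placeholder:
    # list each VDI's UUID, name and virtual size (in bytes) on one SR
    xe vdi-list sr-uuid=<sr-uuid> params=uuid,name-label,virtual-size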
  • 30 Topics
    85 Posts
    Glitch
    @Davidj-0 Thanks for the feedback, I was also using Debian for my test ^^