Categories

  • All news regarding the Xen and XCP-ng ecosystem

    139 Topics
    4k Posts
    @uberiain At this point, when I uninstalled the old XCP-ng Center software and installed the new MSI, I realized that XCP-ng keeps its settings file in the Roaming folder (C:\Users\user\AppData\Roaming\XCP-ng). Once I deleted it, I could re-register the servers.
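    The reset described above can be sketched as a small shell snippet. This is a hypothetical helper, not an official uninstall step: it assumes XCP-ng Center stores its settings under the Windows Roaming folder (%APPDATA%\XCP-ng, as in the post), and that `$APPDATA` or `$HOME/AppData/Roaming` resolves to that folder when run from Git Bash or WSL.

    ```shell
    # Assumed settings location (from the post above): %APPDATA%\XCP-ng.
    # $APPDATA is used if the environment provides it; otherwise fall back
    # to the conventional Roaming path under the user's home directory.
    settings_dir="${APPDATA:-$HOME/AppData/Roaming}/XCP-ng"

    echo "Settings folder to remove: $settings_dir"
    # rm -rf "$settings_dir"   # uncomment only after closing XCP-ng Center
    ```

    The destructive step is left commented out so the snippet can be reviewed safely before use; deleting the folder discards all registered servers, which then need to be re-added.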
  • Everything related to the virtualization platform

    1k Topics
    14k Posts
    @tmk Do you not monitor elsewhere (Zabbix/Centreon/...)?
  • 3k Topics
    26k Posts
    @Pilow That seems to have done the trick!

    xo@xo-ce:/run/xo-server/mounts/LAB-NFS$ sudo mount -v -t nfs -o vers=3 192.168.221.20:/LAB-NFS /run/xo-server/mounts/LAB-NFS/
    mount.nfs: timeout set for Thu Nov 20 15:42:55 2025
    mount.nfs: trying text-based options 'vers=3,addr=192.168.221.20'
    mount.nfs: prog 100003, trying vers=3, prot=6
    mount.nfs: trying 192.168.221.20 prog 100003 vers 3 prot TCP port 2049
    mount.nfs: prog 100005, trying vers=3, prot=17
    mount.nfs: trying 192.168.221.20 prog 100005 vers 3 prot UDP port 54698
    mount.nfs: mount(2): Device or resource busy
    xo@xo-ce:/run/xo-server/mounts/LAB-NFS$ ping 192.168.221.20
    PING 192.168.221.20 (192.168.221.20) 56(84) bytes of data.
    64 bytes from 192.168.221.20: icmp_seq=1 ttl=64 time=0.884 ms
    64 bytes from 192.168.221.20: icmp_seq=2 ttl=64 time=0.531 ms
    64 bytes from 192.168.221.20: icmp_seq=3 ttl=64 time=0.507 ms
    64 bytes from 192.168.221.20: icmp_seq=4 ttl=64 time=0.648 ms
    64 bytes from 192.168.221.20: icmp_seq=5 ttl=64 time=0.496 ms
    ^C
    --- 192.168.221.20 ping statistics ---
    5 packets transmitted, 5 received, 0% packet loss, time 4082ms
    rtt min/avg/max/mdev = 0.496/0.613/0.884/0.145 ms

    Why would it time out if it does mount an NFS SR from the same QNAP with the same permission settings?! My head is starting to smoke lol
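    One point worth noting about the log above: the failure is EBUSY ("Device or resource busy"), not a timeout, and the pings show the QNAP is reachable. EBUSY from mount.nfs usually means the target directory is already a mount point (for example, xo-server has already mounted the SR there). A minimal sketch of that check, using the path from the post, assuming the util-linux `mountpoint` tool is available:

    ```shell
    # Return success if the given directory is already a mount point.
    is_mounted() {
        mountpoint -q "$1"    # util-linux: exit 0 iff path is a mount point
    }

    # Path taken from the post above.
    if is_mounted /run/xo-server/mounts/LAB-NFS; then
        echo "target is already mounted; umount it before retrying the mount"
    else
        echo "target is free; the mount can be retried"
    fi
    ```

    If the directory is already mounted, `findmnt /run/xo-server/mounts/LAB-NFS` shows which source is attached there, which helps tell an xo-server-managed mount apart from a stale manual one.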
  • Our hyperconverged storage solution

    37 Topics
    690 Posts
    @TestForEcho No ETA for now. Even before supporting QCOW2 on LINSTOR, we have several points to robustify (HA performance, potential race conditions, etc.). Regarding other important points:

    - Supporting volumes larger than 2 TB has significant impacts on synchronization, RAM usage, coalesce, etc. We need to find a way to cope with these changes.
    - The coalesce algorithm should be changed to no longer depend on the write speed to the SR, in order to prevent potential coalesce interruptions; this is even more crucial for LINSTOR.
    - The coalesce behavior is not exactly the same for QCOW2, and we believe that currently this could negatively impact the API implementation in the case of LINSTOR.

    In short: QCOW2 has led to changes that require a long-term investment in several topics before even considering supporting this format on XOSTOR.
  • 30 Topics
    85 Posts
    @Davidj-0 Thanks for the feedback, I was also using Debian for my test ^^