Categories

  • All news regarding Xen and XCP-ng ecosystem

    142 Topics
    4k Posts
@ph7 This I can confirm with an XO VM. But the discussion was about the host....
  • Everything related to the virtualization platform

    1k Topics
    15k Posts
The problem persists months later, even with the latest XCP-ng 8.3 updates (in case it might be related)... Anyone else encountered the same problem?
  • 3k Topics
    27k Posts
Hi, one of our VMs was in a stopped state this morning because our daily delta backup (with offline snapshot, 4 VMs in total in the backup job) had failed with the error "NOT_SUPPORTED_DURING_UPGRADE()". The other 3 VMs were up and running. I updated the pool master yesterday and rebooted, but I did not update the second server on which the stopped VM was running. The VM also had this server assigned as its home server. This is what happened: Snapshots failed for all 4 VMs, which is expected because of the update state. 3 VMs started up again on the pool master. 1 VM could not be started, because it is not possible to start a VM when its host has not been updated. Is this correct? Perhaps the update logic could be enhanced by checking a few things before the backup job shuts down a VM. The key questions would be: Is the backup target available? Can the VM be started again after the offline snapshot?
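The pre-checks proposed in that post could look something like the sketch below. This is purely illustrative: the `HostState`/`Vm` types and the `safe_to_shut_down` function are hypothetical names, not part of Xen Orchestra's actual backup code.

```python
# Hypothetical sketch of pre-checks a backup job could run before an
# offline snapshot powers a VM off. All names here are illustrative.
from dataclasses import dataclass


@dataclass
class HostState:
    name: str
    pending_update: bool  # host has not yet been patched to the pool level


@dataclass
class Vm:
    name: str
    home_server: HostState


def safe_to_shut_down(vm: Vm, backup_target_reachable: bool) -> tuple[bool, str]:
    """Return (ok, reason): only power the VM off when we know we can
    both write the backup and start the VM again afterwards."""
    if not backup_target_reachable:
        return False, "backup target unreachable; skip offline snapshot"
    if vm.home_server.pending_update:
        # Starting a VM on a host that lags behind the pool master fails,
        # so an offline snapshot would strand the VM in a stopped state.
        return False, f"host {vm.home_server.name} not updated; keep VM running"
    return True, "ok"
```

Running both checks up front would turn the failure mode in the post (VM stuck in a stopped state) into a skipped snapshot with a clear reason.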
  • Our hyperconverged storage solution

    43 Topics
    728 Posts
alcoralcor
Hello, I'm new to XCP-ng but it seems I have the same issue. I have done some tests to resolve this (but failed), and if I remember correctly, when I looked at /var/log/audit.log on the first host of my cluster, I saw this error: ... [ failure: repodata/repomd.xml from xcp-ng-linstor: [Errno 256] No more mirrors to try. https://repo.vates.tech/xcp-ng/8/8.3/linstor/x86_64/repodata/repomd.xml: [Errno -1] repomd.xml signature could not be verified for xcp-ng-linstor ... but manually installing xcp-ng-linstor with yum does not solve the issue. PS: In your case, it seems you want more than 7 disks in your XOSTOR, and I think you need between 3 and 7 disks.
  • 32 Topics
    94 Posts
olivierlambert
It's difficult to talk about "reality" with benchmarks. Also check the iodepth (which depends on the type of hardware you have; on flash/NVMe you can go up to 128 or 256), and of course latency comes into play too. The main bottleneck is tapdisk's single-threading: if you test across several different VMs, the total will scale fairly steadily.
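A deep queue like the one suggested above can be exercised with an fio job file along these lines. This is a sketch, not a recommended benchmark: the `filename` target and the sizes are placeholders to adjust for the disk inside the test VM.

```ini
; Hypothetical fio job file: random reads at the queue depth suggested
; for flash/NVMe. Replace filename with a scratch disk inside the VM.
[global]
ioengine=libaio
direct=1
bs=4k
runtime=30
time_based

[randread-deep-queue]
rw=randread
iodepth=128
size=1G
filename=/path/to/scratch-disk
```

Because tapdisk is single-threaded per virtual disk, running the same job in several VMs at once and summing the results shows the aggregate scaling the post describes.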