Categories

  • All news regarding the Xen and XCP-ng ecosystem

    142 Topics
    4k Posts
    @gduperrey Installed on my home lab via rolling pool update: both hosts updated with no issues, and the VMs migrated back to the 2nd host as expected this time. Fingers crossed the work servers have the same luck. I do have an open support ticket from the last round of updates for the work servers; waiting for a response before installing these patches.
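    For reference, a rolling pool update automates roughly the following per-host sequence. A minimal sketch using the standard xe CLI and yum, one host at a time (<host-uuid> is a placeholder):

      xe host-disable uuid=<host-uuid>    # stop new VMs from starting on this host
      xe host-evacuate uuid=<host-uuid>   # live-migrate its VMs to the other hosts
      yum update                          # apply the pending XCP-ng patches
      reboot                              # if the patches require it
      xe host-enable uuid=<host-uuid>     # put the host back into service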
  • Everything related to the virtualization platform

    1k Topics
    15k Posts
    Danp
    Yes. In case you missed it, this is from https://docs.xcp-ng.org/installation/upgrade/: [screenshot of the upgrade documentation]
  • 3k Topics
    28k Posts
    florent
    @joeymorin said: I observed similar behaviour. Two pools: pool A composed of two hosts, pool B single-host. B runs a VM with XO from source. Two VMs on host A1 (on local SR), one VM on host A2 (on local SR). Host A2 has a second local SR (separate physical disc) used as the target for a CR job. The CR job would back up all four VMs to that second local SR on host A2. The behaviour observed was that, although the VM on B would be backed up (as expected) as a single VM with multiple snapshots (up to the 'replication retention'), the three other VMs on the same pool as the target SR would see a new full VM created for each run of the CR job. That rather quickly filled up the target SR. I noticed the situation was corrected by a commit on or about the same date reported by @ph7. Incidentally, whatever broke this, and subsequently corrected it, appears to have corrected another issue I reported here. I never got a satisfactory answer regarding that question; questions were raised about the stability of my test environment, even though I could easily reproduce it with a completely fresh install. Thanks for the work! (edit: corrected "B1" to "A2")

    Sometimes it's hard to find a complete explanation without connecting to the hosts and XO and going through a lot of logs, which is out of the scope of community support. I am glad the continuous improvement of the code base fixed the issue. We will release a new patch today, because migrating from 6.2.2 to 6.3 forces a full replication (users that updated to the intermediate version are not affected).
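    If you want to check whether a CR target SR is accumulating full copies rather than rotating snapshots, a minimal sketch using the standard xe CLI (the SR UUID is a placeholder):

      # List the VDIs sitting on the CR target SR, with their sizes
      xe vdi-list sr-uuid=<target-sr-uuid> params=name-label,virtual-size,physical-utilisation
      # Check overall usage of the SR itself
      xe sr-param-list uuid=<target-sr-uuid> | grep -E 'physical-(size|utilisation)'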
  • Our hyperconverged storage solution

    43 Topics
    729 Posts
    SuperDuckGuy
    @alcoralcor Thanks for the info. I thought maybe I was using too many disks, so I tried creating disk groups of 3-4 drives, but hit the same issue.
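    For context, XOSTOR is built on LINSTOR on top of LVM, so a "disk group" ultimately maps to a volume group on each host. A minimal sketch of grouping three drives, assuming /dev/sdb through /dev/sdd are the drives in question and linstor_group is the group name referenced at SR creation:

      vgcreate linstor_group /dev/sdb /dev/sdc /dev/sdd   # group the drives into one VG
      vgs linstor_group                                   # verify capacity and PV count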
  • 34 Topics
    100 Posts
    AtaxyaNetwork
    @bvivi57 Hello! First off, congratulations on the writing: producing several articles that long, with screenshots and integrated CLI commands, is a crazy amount of work! I quickly skimmed through all your articles; I'll take the time to reread everything and give you full feedback with a few tips, if you'd like! In the last article, I saw that you restarted xo-server because you couldn't reconnect to the master. You can (in XO5, not yet in XO6) just go to Settings > Servers and click the "disconnect" and then "reconnect" buttons. That forces a reconnection to the XAPI and, if it is available, instantly reconnects your pool to your XO. Restarting an XO, especially if you have several pools, can have consequences: any backups in progress get killed instantly, and the same goes for migrations between two different pools. Go easy with systemctl restart xo-server. In any case, thank you for these articles; I'm happy to see the French-speaking community taking more and more interest in the XCP-ng/XO ecosystem (and I say that as an "early" adopter who has been using the solution for... a long time!)
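    To make that concrete, a minimal sketch of the cautious version of that restart (the log filter is only an illustration):

      # Check for in-flight work before bouncing the service
      systemctl status xo-server
      journalctl -u xo-server --since "15 min ago" | grep -iE 'backup|migrat' || true
      # Only once nothing important is running:
      systemctl restart xo-server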