Categories

  • All news regarding Xen and XCP-ng ecosystem

    143 Topics
    4k Posts
    @rzr Always a reboot after big updates, as instructed/required.
  • Everything related to the virtualization platform

    1k Topics
    15k Posts
    Hi,
    We are currently using an XCP-ng pool with an XOA (Xen Orchestra) instance hosted within the same pool. Recently, the Master node crashed. At that precise moment, High Availability (HA) was disabled: it had been turned off a few days earlier for specific maintenance on the pool. Although the XOA VM remained operational on a Slave node, we found ourselves in a "blind management" situation: the XOA interface could no longer communicate with the pool because the XAPI entry point (the Master) was down.
    To avoid this scenario in the future, I would appreciate your opinion on the feasibility and best practices regarding the following points (see also the sketch just below):
    - Out-of-band management: Is it recommended to move XOA to a physical server (or a management pool) completely independent of the production pool it manages, to ensure visibility in case of quorum loss?
    - Cross-configuration: If we have two separate pools (Pool A and Pool B), is it advisable to host XOA-A on Pool B to manage Pool A, and vice versa?
    - High Availability (HA) behavior: Even with HA enabled, while the system elects a new Master and the XAPI stack restarts, will there always be a period of unavailability for the XOA interface?
    We want to ensure that our management tools remain available and "visible" even in the event of a critical Master failure. Thank you in advance for your advice and for all the work done on XCP-ng.
    Best regards
    Olivier
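    On the out-of-band point: one small safety net, independent of XOA, is a watchdog that asks each pool member directly who the Master is. Below is a minimal sketch assuming the XenAPI Python bindings (pip install XenAPI); the host addresses and credentials are hypothetical. It relies on the XAPI behavior that a Slave rejects a login with a HOST_IS_SLAVE error naming the current Master, so it still reports something useful when the usual entry point is down.

        # Minimal sketch: locate the current pool master without going
        # through XOA. Hosts and credentials below are placeholders.
        import XenAPI

        POOL_HOSTS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]  # hypothetical
        USER, PASSWORD = "root", "secret"                      # hypothetical

        def find_master(hosts):
            """Return the address of the current pool master, or None."""
            for host in hosts:
                try:
                    # Note: self-signed certificates may need extra handling.
                    session = XenAPI.Session("https://%s" % host)
                    session.xenapi.login_with_password(USER, PASSWORD)
                    session.xenapi.session.logout()
                    return host  # only the master accepts a normal login
                except XenAPI.Failure as err:
                    # A slave refuses the login and names the master instead.
                    if err.details[0] == "HOST_IS_SLAVE":
                        return err.details[1]
                except Exception:
                    continue  # host unreachable (e.g. the crashed master)
            return None

        if __name__ == "__main__":
            master = find_master(POOL_HOSTS)
            print("Current master: %s" % (master or "unknown (pool headless?)"))

    Running this from a machine outside the pool (cron or a monitoring job) gives you a master address to point a browser or CLI at even while XOA itself is blind.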
  • 3k Topics
    28k Posts
    simonp
    Hi, thanks for the heads-up. We will run some comparisons with the backups refactoring on our dev environment to check whether we lost some speed, and try to fix it if so. Very happy to hear that the issue is mostly resolved. We will patch this ASAP.
  • Our hyperconverged storage solution

    44 Topics
    731 Posts
    olivierlambert
    Different use cases: Ceph works better with more hosts (a minimum of 6 or 7), while XOSTOR is better suited to pools of 3 to 7/8 hosts. We might have better Ceph support in the future for large clusters.
  • 34 Topics
    102 Posts
    The remark has been incorporated into the article: https://www.myprivatelab.tech/xcp_lab_v2_ha#perte-master Thanks again for the feedback.