Categories

  • All news regarding Xen and XCP-ng ecosystem

    137 Topics
    4k Posts
    gduperrey
    @manilx We recently upgraded our Koji build system. This may have caused disruptions in yesterday's update release, where an XML file was generated multiple times. This has now been fixed and should not happen again, and it may explain the issue you encountered this time, particularly with the update notifications via XO. Note that yum metadata normally expires after a few hours, so things should return to normal on their own.
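    If you would rather not wait for the cache to expire, a minimal sketch of a manual refresh on a host could look like the following (standard yum commands, not an XCP-ng-specific procedure):

      # drop the cached repository metadata so yum re-downloads the repo XML files
      yum clean metadata
      # rebuild the cache and check for pending updates again
      yum makecache
      yum check-update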
  • Everything related to the virtualization platform

    1k Topics
    13k Posts
    I experienced the same issue! What I found was that just doing a "yum update" was not enough, though it was needed. When I was initially getting the error, I was trying to import on a host that was not the master of the pool. After I designated that host as the pool master, the same command worked easily. I had to do both the update and import on the master only.
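    For reference, a minimal sketch of that sequence using the standard xe CLI (the host UUID is a placeholder, and the final import command is whatever you were running originally):

      # update the host packages first
      yum update
      # check which host is currently the pool master
      xe pool-list params=master
      # if needed, promote the host you are importing on to pool master
      xe pool-designate-new-master host-uuid=<uuid-of-this-host>
      # then re-run the import command on that (now master) host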
  • 3k Topics
    24k Posts
    I have two remotes that I use for delta backups. One is on a 10 GbE network and the other only has a single 1 GbE port. Since XO writes those backups simultaneously, the backup takes a really long time and frequently fails on random VMs.

    I was thinking it would be better to have XO perform the backup to the 10 GbE remote (running UNRAID) and then schedule an rsync job to mirror that backup to the 1 GbE remote on a Synology array. I've already NFS-mounted the Synology share to UNRAID and confirmed I can rsync to it from a prompt; I just need to create a cron job to do it daily. In addition to, hopefully, preventing the random failures, it would let XO complete the backup a lot faster, and my UNRAID server and Synology will take care of the rest, reducing the load on my main servers during the workday. The current backup takes long enough that it sometimes spills into the workday.

    I'm wondering if that will cause any restore problems. When XO writes to two remotes, does it write exactly the same files to both remotes, such that it can restore from the secondary even if it never wrote the files to that remote? I would assume they are all the same but want to make sure I don't create a set of backup files that won't be usable. Granted, I can, and will, test a restore from that other location, but I don't want to destroy all the existing backups on that remote if this would be a known-bad setup.

    Note, I did a --dry-run with rsync and see quite a few files that it would delete and a bunch it would copy. That surprises me because that is still an active remote for the delta backups. I'm suspicious that the differences are due to previously failed backups.
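    A minimal sketch of the mirror job described above, assuming the XO backups live under /mnt/user/xo-backups on UNRAID and the Synology NFS share is mounted at /mnt/synology-backup (both paths are hypothetical placeholders):

      # preview first: list what would be copied or deleted without changing anything
      rsync -a --delete --dry-run /mnt/user/xo-backups/ /mnt/synology-backup/
      # crontab entry to mirror the primary remote to the Synology every night at 01:30
      30 1 * * * rsync -a --delete /mnt/user/xo-backups/ /mnt/synology-backup/ >> /var/log/xo-mirror.log 2>&1

    The --delete flag keeps the mirror identical to the primary remote, which is what you want if the goal is a byte-for-byte copy of the backup file layout; running the --dry-run form first, as you already did, is the safe way to review what would be removed before the scheduled job takes over.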
  • Our hyperconverged storage solution

    32 Topics
    666 Posts
    henri9813
    Hello @ronan-a Just to be sure, do you want the logs from the node that is trying to join, or from the master? Have a good day
  • 30 Topics
    85 Posts
    Glitch
    @Davidj-0 Thanks for the feedback, I was also using Debian for my test ^^