Categories

  • All news regarding Xen and XCP-ng ecosystem

    143 Topics
    4k Posts
bogikornel
@olivierlambert said: "Might be interesting to test a different cluster size and see the impact." I tested it with a cluster size of 2 megabytes, and nothing changed. [image: 1778012606947-qcow2-2m_bandwidth_summary.png] [image: 1778012609162-qcow2-2m_latency_summary.png]
  • Everything related to the virtualization platform

    1k Topics
    15k Posts
@vlamincktr said: @acebmxer I may just need to re-evaluate our backup strategy and adjust it so there is more time for the backups. I could also run only the daily deltas; the main issue is the weekly fulls that I run as a precaution. I'm always paranoid about something happening to the daily delta chain and ending up with an unusable backup, so I also pull dedicated weekly full backups, which take a lot of time to run. I've also considered running the full backups on different days to spread them out more. It sounds like one of those is my best option rather than adding more cost/complexity.
I would absolutely change this backup plan to monthly full backups (weekly fulls are overkill for most). The backup mechanism in XO has improved a ton since launch. Without more detail (types of VMs, workloads, etc.) it's difficult for anyone to offer a perfect answer, but most people here would likely agree that weekly fulls aren't a benefit here. Changing the backup window is also an option, as you mentioned, but that only shifts when the work is performed, not the type of work performed. If you have a 1 TB server and you're backing it up daily with deltas and weekly with fulls, you're backing up something like 1300 GB every week (depending, of course, on how much data your deltas capture).
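The ~1300 GB/week figure above can be sketched as a quick back-of-the-envelope calculation. The delta size here is an illustrative assumption (roughly 5% daily change on a 1 TB server), not a number from the post:

```python
# Weekly backup volume: one full plus daily deltas on the other six days.
full_backup_gb = 1000   # weekly full of a 1 TB server
daily_delta_gb = 50     # ASSUMED ~5% daily change rate (illustrative)
deltas_per_week = 6     # delta backups on the six non-full days

weekly_total_gb = full_backup_gb + deltas_per_week * daily_delta_gb
print(weekly_total_gb)  # 1300
```

Switching the full to monthly, as suggested, removes ~1000 GB from three out of every four weeks while the deltas stay the same size.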
  • 3k Topics
    28k Posts
acebmxer
    @florent Just checking to make sure you got the memory dump? [image: 1778011838638-screenshot-2026-05-05-160905.png]
  • Our hyperconverged storage solution

    46 Topics
    734 Posts
dthenot
@ccooke Hello, you should be able to make the XOSTOR SR work again if you update sm and sm-fairlock on the other hosts: yum update sm sm-fairlock. Then you should be able to re-plug the SR on the master and proceed with the RPU.
  • 34 Topics
    102 Posts
The remark has been incorporated into the article: https://www.myprivatelab.tech/xcp_lab_v2_ha#perte-master Thanks again for the feedback.