XCP-ng

    SR_BACKEND_FAILURE_109

Xen Orchestra
36 Posts 4 Posters 2.1k Views
• Darkbeldin Vates 🪐 Pro Support Team @ntithomas

  @ntithomas Hi,

  Each time you remove a snapshot, the main disk has to coalesce the changes ("merge the differences") to integrate the removed snapshot. The depth is the number of removed snapshots that still have to be merged into the main disk. 29 is a lot; it will need a lot of time.
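
  As a side note, you can inspect the chain yourself on the host. A minimal sketch, assuming a file-based SR mounted under /var/run/sr-mount (the SR UUID is a placeholder, and the match pattern may need adjusting for your SR type):

  # Pretty-print the VHD parent/child tree for this SR
  cd /var/run/sr-mount/<sr-uuid>
  vhd-util scan -f -m '*.vhd' -p

  The deeper the printed tree, the more merges are still pending.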

• ntithomas

  So is there a way to speed this up, or do I just have to wait for it?

• ntithomas

  And by a lot, do you mean 1 day? 2 days? 1 week?

• ntithomas

  I'm trying to migrate these servers off of this pool, since this pool is having some sort of issue with the VDI chain being overloaded, but when I try to migrate them it says the snapshot chain is too long. What does migrating have to do with snapshots when they're two completely different actions?

• Darkbeldin Vates 🪐 Pro Support Team @ntithomas

  @ntithomas No, you will have to wait for it to be done. You can grep for the coalesce process on your host to be sure it's running:

  ps axf | grep coalesce
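
  If you want more detail than the process list, coalesce activity is also logged by the storage manager on the host. A rough way to follow it (a sketch; /var/log/SMlog is the standard log location on XCP-ng hosts, and the grep pattern is just a suggestion):

  # Follow the storage manager log and keep only coalesce-related lines
  tail -f /var/log/SMlog | grep -i coalesce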
              
• Darkbeldin Vates 🪐 Pro Support Team @ntithomas

  @ntithomas It really depends on your host resources, so I can't give you a hint on that, but by checking the coalesce page you will see the progress.
  The thing is, the more depth you have, the longer it takes, because the merge has to happen across all the disks in the chain, and that's a lot of operations to perform.

• Darkbeldin Vates 🪐 Pro Support Team @ntithomas

  @ntithomas When you migrate a VM you migrate its disks, so the coalesce has to happen before the disks can be migrated.

• ntithomas

  So when I grep, this comes back:

  24347 pts/4 S+ 0:00 _ grep --color=auto coalesce

  And if I check the XOA VDI to Coalesce page, I will see the progress?

• Darkbeldin Vates 🪐 Pro Support Team @ntithomas

  @ntithomas You should see at least 2 lines (the grep itself plus the actual coalesce process), so apparently coalesce is not happening there. You should rescan all disks in XOA.
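
  For what it's worth, the same rescan can also be triggered from the host CLI instead of XOA; a minimal sketch (the SR name is a placeholder):

  # Find the SR UUID, then ask the storage manager to rescan it
  xe sr-list name-label=<your-sr-name> params=uuid
  xe sr-scan uuid=<sr-uuid>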

• ntithomas

  Okay, this is the error I get when I rescan all disks:

  SR_BACKEND_FAILURE_47(, The SR is not available [opterr=directory not mounted: /var/run/sr-mount/4e765024-1b4d-fca2-bc51-1a09dfb669b6], )
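
  This error says the SR's mount point is gone, which usually means its PBD is no longer attached on the host. A possible way to check from the host CLI, using the SR UUID from the error message (the PBD UUID comes out of the first command):

  # Is the SR currently attached on this host?
  xe pbd-list sr-uuid=4e765024-1b4d-fca2-bc51-1a09dfb669b6 params=uuid,host-uuid,currently-attached
  # If currently-attached is false, try re-plugging it
  xe pbd-plug uuid=<pbd-uuid>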

• ntithomas

  We did recently have a failed drive that finished rebuilding a few days ago.
