    SR_BACKEND_FAILURE_109

ntithomas:

One question: does the Xen Orchestra manager have to be run on XCP-ng, or can it be hosted off XCP-ng altogether?

ntithomas:

I'm also getting this error in the Xen Orchestra web interface (attached log: 0_1629315103732_2021-08-18T19_31_00.542Z - XO.log):

SR_BACKEND_FAILURE_47(, The SR is not available [opterr=directory not mounted: /var/run/sr-mount/4e765024-1b4d-fca2-bc51-1a09dfb669b6], )

olivierlambert (Vates 🪐 Co-Founder & CEO):

1. You can run Xen Orchestra anywhere you like, as long as it can reach your pool master.
2. Please do as I said earlier: go to the SR view, click on the SR in question, then open the Advanced tab. Do you have uncoalesced disks in there? (A rough CLI sketch of the same check follows below.)
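
For reference, a rough CLI equivalent of that check, run on the pool master, is sketched here. This is only a generic xe sketch, not XO's own query: the SR UUID shown is the one from the error message above, so substitute your own SR's UUID if it differs.

    # List the SRs to find the UUID of the affected storage repository.
    xe sr-list params=uuid,name-label,type

    # List the VDIs on that SR; is-a-snapshot marks snapshot disks that still reference the chain.
    xe vdi-list sr-uuid=4e765024-1b4d-fca2-bc51-1a09dfb669b6 params=uuid,name-label,is-a-snapshot,managed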
ntithomas:

So in the iSCSI virtual disk storage pool 1 there are 606 items, and I don't see any option for coalesce at all. Here are some screenshots.

Also, after some research I have found that there are 352 items that need to be coalesced.

On the VDI to Coalesce page, what is the Depth?

[Screenshots: SR Screenshot 1.PNG, SR Screenshot 2.PNG, SR Screenshot 3.PNG]

Darkbeldin (Vates 🪐 Pro Support Team) @ntithomas:

@ntithomas Hi,

Each time you remove a snapshot, the main disk has to coalesce the change ("merge the difference") to integrate the removed snapshot. The depth is the number of removed snapshots that still have to be merged into the main disk. 29 is a lot; it will need a lot of time.
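
To make "depth" concrete: on a file-based SR each VDI is a VHD file, and the host can print how long its parent chain is. This is only a minimal sketch; the path below reuses the sr-mount directory from the error earlier in the thread, the VDI UUID is a placeholder you would substitute, and on LVM/iSCSI SRs the VHDs are logical volumes rather than files, so the exact path differs.

    # Print the VHD chain depth for one virtual disk (placeholder UUID, adjust the path for your SR type).
    vhd-util query -d -n /var/run/sr-mount/4e765024-1b4d-fca2-bc51-1a09dfb669b6/<vdi-uuid>.vhd

A depth around 29 means the garbage collector has that many layers to fold back into the base disk, one merge at a time, which is why it can take so long.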

ntithomas:

So is there a way to speed this up, or do I just have to wait for it?

ntithomas:

And by a lot, do you mean 1 day? 2 days? 1 week?

ntithomas:

I'm trying to migrate these servers off of this pool, since the pool is having some sort of issue with the VDI chain being overloaded, but when I try to migrate them it says the snapshot chain is too long. What does migrating have to do with snapshots, when they are two completely different actions?

Darkbeldin (Vates 🪐 Pro Support Team) @ntithomas:

@ntithomas No, you will have to wait for it to be done. You can grep for the coalesce process on your host to be sure it's running:

    ps axf | grep coalesce
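
If that ps output only shows the grep itself, a second hedged way to check from the host is to look at the storage manager log, where coalesce / garbage-collector activity gets recorded:

    # Show the most recent coalesce-related messages from the storage manager.
    grep -i coalesce /var/log/SMlog | tail -n 20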
                      
Darkbeldin (Vates 🪐 Pro Support Team) @ntithomas:

@ntithomas It really depends on your host resources, so I can't give you an estimate, but if you check the coalesce page you will see the progress.
The thing is, the more depth you have, the longer it takes, because the merge has to happen on all the disks at the same time, and that's a lot of operations.

Darkbeldin (Vates 🪐 Pro Support Team) @ntithomas:

@ntithomas When you migrate a VM you also migrate its disks, so the coalesce has to happen before the disks can be migrated.
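
As a rough way to see how close a VM is to the chain limit, you can count the snapshots still attached to it from the pool master. This is only a sketch: the VM UUID is a placeholder, and the snapshot count is only a proxy, since the real chain also includes hidden base copies that have not coalesced yet.

    # Count the snapshots attached to a VM (substitute a real VM UUID).
    xe snapshot-list snapshot-of=<vm-uuid> --minimal | tr ',' '\n' | grep -c .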

ntithomas:

So when I grep, this comes back:

    24347 pts/4 S+ 0:00 _ grep --color=auto coalesce

And if I check the XOA VDI to Coalesce page, I will see the progress?

Darkbeldin (Vates 🪐 Pro Support Team) @ntithomas:

@ntithomas You should have at least 2 lines there, so apparently coalesce is not happening. You should rescan all disks on XOA.
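
If you prefer to trigger the rescan from the host rather than from XOA, the usual CLI equivalent is xe sr-scan; the UUID below is the SR from the error in this thread, so substitute yours if it differs. Rescanning should also prod the garbage collector that performs the coalesce.

    # Rescan the SR from the pool master.
    xe sr-scan uuid=4e765024-1b4d-fca2-bc51-1a09dfb669b6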

ntithomas:

Okay, this is the error I get when I rescan all disks:

SR_BACKEND_FAILURE_47(, The SR is not available [opterr=directory not mounted: /var/run/sr-mount/4e765024-1b4d-fca2-bc51-1a09dfb669b6], )
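
The [opterr=directory not mounted ...] part of that error often means the SR's PBD (the host-to-SR connection) is no longer plugged on one of the hosts. A minimal diagnostic sketch, not a confirmed fix for this case; the SR UUID is the one from the error, and the PBD UUID is a placeholder taken from the first command's output:

    # Check whether the SR is still attached on each host.
    xe pbd-list sr-uuid=4e765024-1b4d-fca2-bc51-1a09dfb669b6 params=uuid,host-uuid,currently-attached

    # If currently-attached is false, try replugging (placeholder PBD UUID).
    xe pbd-plug uuid=<pbd-uuid>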

ntithomas:

We did recently have a failed drive that finished rebuilding a few days ago.
