XCP-ng

    Backup fails even though the XOSTOR cluster is "healthy"

    XOSTOR
    • henri9813

      Hello,

      My post title may be misleading.

      I built a pool with 3 nodes and 3 replicas.

      • node1
      • node2
      • node3

      When I remove node 3 (linstor forget + forget the node in XO), disk snapshots are no longer possible:

      SR_BACKEND_FAILURE_1200(, local variable 'e' referenced before assignment, )
      


      However, I still have 2 healthy nodes out of 3. It should work, shouldn't it?
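
      For what it's worth, the message itself looks like a plain Python UnboundLocalError leaking out instead of the real storage error. A minimal sketch of the kind of pattern that can produce it (purely illustrative, this is not the actual sm code):

        # Illustrative sketch only -- NOT the actual LinstorSR/sm code.
        # Shape of the bug behind "local variable 'e' referenced before assignment":
        # 'e' is only bound on one except branch but is referenced unconditionally later.

        def snapshot(vdi):
            try:
                do_snapshot(vdi)        # fails because a replica node is missing
            except ValueError as e:     # only this branch binds 'e'
                print('expected error: %s' % e)
            except Exception:           # the real failure lands here; 'e' stays unbound
                pass
            # Referencing 'e' now raises UnboundLocalError and masks the real error,
            # which is what then surfaces to XO as SR_BACKEND_FAILURE_1200.
            raise Exception('SR_BACKEND_FAILURE_1200: %s' % e)

        def do_snapshot(vdi):
            # Stand-in for the underlying failure (e.g. a replica is unreachable).
            raise RuntimeError('snapshot of %s failed: replica node unreachable' % vdi)

        snapshot('dummy-vdi')  # -> UnboundLocalError instead of the real storage error

      In a sketch like that, the underlying failure (the stand-in RuntimeError) never reaches the caller; only the UnboundLocalError does.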

      ℹ️ I don't need support; this is just an observation on our test pool. It may be a bug, so I'm reporting it 🙂

      Best regards

    • olivierlambert Vates 🪐 Co-Founder CEO

        Hi,

        I think this issue will be fixed soon. Adding @dthenot to the loop to be sure.

        • ronan-a Vates 🪐 XCP-ng Team @henri9813

          @henri9813 Hi! Just to confirm, can you share the exception from the SMlog file on the master host? We are aware of an exception of this kind: it is a side effect of another exception raised in the _snapshot function of LinstorSR. We already plan to fix it in a new version of the sm; it has been there for a while, but it is not critical. Thanks!
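
          In case it helps others hitting the same thing, something like this quick sketch can pull the last Python traceback out of SMlog on a host (just an illustration; /var/log/SMlog is the usual location, and the parsing is deliberately naive):

            # Rough illustrative helper, not part of sm: print the last Python
            # traceback found in SMlog (usually /var/log/SMlog on an XCP-ng host).
            import re
            import sys

            LOG_PATH = sys.argv[1] if len(sys.argv) > 1 else '/var/log/SMlog'

            tracebacks = []
            current = None
            with open(LOG_PATH) as f:
                for line in f:
                    if 'Traceback (most recent call last)' in line:
                        current = [line.rstrip('\n')]
                    elif current is not None:
                        current.append(line.rstrip('\n'))
                        # A traceback usually ends on the exception line,
                        # e.g. "UnboundLocalError: local variable 'e' ..."
                        if re.search(r'\w+(Error|Exception):', line):
                            tracebacks.append(current)
                            current = None

            if tracebacks:
                print('\n'.join(tracebacks[-1]))
            else:
                print('No traceback found in %s' % LOG_PATH)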

          • henri9813 @ronan-a

            Hello @ronan-a

            I will reproduce the case: I will destroy one hypervisor again and retrigger the issue.

            Thank you @ronan-a and @olivierlambert 🙂

            If you need me to test some special cases, don't hesitate; we have a pool dedicated to this.
