
SR.scan performance within XOSTOR

denis.grilli

Hello, we have a pool of 3 hosts with XOSTOR, and we are having various performance issues related to tasks like starting/stopping VMs, VM migrations and similar, which according to Vates support are down to the task waiting for an SR.scan to complete.

I can see SR.scan running all the time, and some of the scans take around 4-5 minutes...
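For reference, here is roughly how I watch the scans. It is a minimal sketch using the XenAPI Python bindings (pip install XenAPI); the pool master address and credentials are placeholders for ours:

```python
# Minimal sketch: list SR.scan tasks still pending and how long they have run.
# Address and credentials are placeholders; point this at your pool master.
import datetime
import XenAPI

session = XenAPI.Session("https://pool-master.example")
session.xenapi.login_with_password("root", "secret")
try:
    now = datetime.datetime.utcnow()
    for ref, rec in session.xenapi.task.get_all_records().items():
        if rec["name_label"] != "SR.scan" or rec["status"] != "pending":
            continue
        # 'created' arrives as an XML-RPC DateTime, e.g. 20240101T12:00:00Z
        started = datetime.datetime.strptime(
            rec["created"].value.rstrip("Z"), "%Y%m%dT%H:%M:%S")
        age = (now - started).total_seconds()
        print("SR.scan %s: pending for %ds, progress %.0f%%"
              % (rec["uuid"], age, rec["progress"] * 100))
finally:
    session.xenapi.session.logout()
```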

The XOSTOR storage is around 27 TB and stores about 107 VM base disks, plus the same number of snapshots, plus what I think are the current changes for each of them, for a total of 348 disks.
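If you want to compare like for like, this is the kind of quick count behind those numbers, again a rough sketch with placeholder credentials and assuming the SR is labelled "XOSTOR":

```python
# Rough sketch: count VDIs on the XOSTOR SR and how many are snapshots.
# Credentials and the SR label are placeholders for our setup.
import XenAPI

session = XenAPI.Session("https://pool-master.example")
session.xenapi.login_with_password("root", "secret")
try:
    sr = session.xenapi.SR.get_by_name_label("XOSTOR")[0]
    vdis = [session.xenapi.VDI.get_record(v)
            for v in session.xenapi.SR.get_VDIs(sr)]
    snaps = sum(1 for rec in vdis if rec["is_a_snapshot"])
    print("%d VDIs total, %d of them snapshots" % (len(vdis), snaps))
finally:
    session.xenapi.session.logout()
```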

What do your environments look like? How long do your SR.scans take? And most importantly, do you have problems starting/stopping and migrating VMs?

For instance, when we migrate VMs, the VM freezes for about 3-4 minutes just before re-attaching the VDI on the target host, and according to support this is due to the SR.scan.

Pilow @denis.grilli

@denis.grilli My experience with XOSTOR was very similar (3 hosts, 18 TB) on XCP-ng 8.2 at the time

but with fewer VMs... ~20

We had catastrophic failures, with tapdisk locking the VDIs, and VMs were hard to start/stop (10+ minutes to start a VM, while the same VM on local RAID5 storage, or even NFS storage, took 5 seconds max).

There were more problems with large VDIs (1.5 TB) on XOSTOR, and backups were painful to obtain.

After many back-and-forths with support, we decided to get our VMs off XOSTOR for the time being, back to local RAID5 with inter-host replicas. No VM mobility, but redundancy anyway.

I think that the way XOSTOR is implemented is not really the source of the problem: the DRBD + SMAPIv1 combo is OK for a small number of small VMs, but at scale it's another story.

We still have to upgrade to 8.3 and give it another try.

The more VDIs we moved off XOSTOR, the more 'normal' and expected the behavior became.
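If it helps anyone, the moves themselves were simple enough. Something along these lines, a sketch assuming the XenAPI Python bindings and VDI.pool_migrate (live storage migration, so the VDI has to be attached to a running VM); the disk and SR labels are placeholders:

```python
# Sketch: live-migrate one VDI from XOSTOR to a local SR via VDI.pool_migrate.
# Labels and credentials are placeholders; the VDI must belong to a running VM.
import XenAPI

session = XenAPI.Session("https://pool-master.example")
session.xenapi.login_with_password("root", "secret")
try:
    vdi = session.xenapi.VDI.get_by_name_label("my-vm-disk")[0]
    dest = session.xenapi.SR.get_by_name_label("Local storage")[0]
    new_vdi = session.xenapi.VDI.pool_migrate(vdi, dest, {})  # returns new VDI ref
    print("moved, new VDI uuid:", session.xenapi.VDI.get_uuid(new_vdi))
finally:
    session.xenapi.session.logout()
```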

denis.grilli

@pilow: thanks for letting me know about your experience.

          I was afraid someone would say that.

Unfortunately for us, moving off XOSTOR is not so simple, because we really need VM mobility to allow for host maintenance, which we otherwise would not be able to perform, and budgeting for redundant external storage is not an option either.

The annoying thing is that before starting this journey with Vates we engaged with them and made them aware of our environment (it is a migration from VMware), and no one ever mentioned that storing so many VMs in XOSTOR could be a problem, so I really hope that support can shed some light and provide a fix for the situation.

Overall it is not a bad experience, but waiting 3-4 minutes for a VM to start when you are in a hurry is not really great.
