XCP-ng

    First SMAPIv3 driver is available in preview

    • Paolo

      Is there an ETA for a fully functional deployment (on local storage) with differential backup, live migration, statistics, and so on?
      Perhaps with the 8.3 stable release?
      I'm interested mainly because of ZFS's bit-rot detection.

    • olivierlambert (Vates 🪐 Co-Founder & CEO)

      Hi,

      The 8.3 release is eating a lot of resources, so it's rather the opposite: once it's out, that will leave more time to move forward on SMAPIv3 🙂

    • cg @Paolo

      @Paolo If it's only for that: any HW RAID with dual parity (DP) should do the job (in case you don't want to go fully with SW RAID).

    • hsnyder @cg

      @cg Sorry if this is off-topic, but do you know of any HW RAID controllers that actually do this? Storing checksums or something similar?

    • hsnyder

      A question or observation about the SMAPIv3 ZFS driver: I had a power failure in my testing lab last week and noticed that the VMs with SMAPIv3 disks attached did not come back up automatically, despite being set to automatically power on. Perhaps this is related to the ZFS driver? My first thought was that there might be a race condition between the VM start and the zpool import at boot, but I don't know how to verify that.

      I just figured I would report this in case it's useful to anyone at Vates.
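
      A minimal, untested sketch of one way to check that ordering after a reboot, run directly on the host: it just polls until the pool is visible and until a given VM reports "running", printing a timestamp for each. The pool name and VM UUID are placeholders, and nothing here is part of the driver itself - it is only an assumption about how the race could be observed.

        #!/usr/bin/env python3
        # Untested sketch: timestamp when the ZFS pool becomes visible and when a
        # given VM reaches "running" after boot, to see whether auto power-on
        # raced ahead of the zpool import. POOL and VM_UUID are placeholders.
        import subprocess
        import time

        POOL = "tank"                                      # assumed pool name
        VM_UUID = "00000000-0000-0000-0000-000000000000"   # assumed VM UUID

        def pool_imported() -> bool:
            out = subprocess.run(["zpool", "list", "-H", "-o", "name"],
                                 capture_output=True, text=True)
            return POOL in out.stdout.split()

        def vm_running() -> bool:
            out = subprocess.run(["xe", "vm-param-get", f"uuid={VM_UUID}",
                                  "param-name=power-state"],
                                 capture_output=True, text=True)
            return out.stdout.strip() == "running"

        # Loops until both events have been seen; if the VM never auto-starts,
        # the pool timestamp alone already tells you which side came up first.
        seen = {"pool": None, "vm": None}
        while None in seen.values():
            now = time.strftime("%H:%M:%S")
            if seen["pool"] is None and pool_imported():
                seen["pool"] = now
                print(f"{now}  zpool '{POOL}' is imported")
            if seen["vm"] is None and vm_running():
                seen["vm"] = now
                print(f"{now}  VM is running")
            time.sleep(2)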

    • cg @hsnyder

      @hsnyder AFAIK every - even not so - modern RAID controller can do a 'verification read', 'disk scrubbing', or whatever they call it. With single parity that won't fix bitrot, but with dual parity it can fix a single failure and detect a double one.
      That's why the only option for our SAN is RAID6, or any other dual-parity (DP) algorithm.
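
      Not how any real controller (or ZFS) is implemented, but a toy Python illustration of the difference being described: with plain XOR parity a scrub can see that a stripe is inconsistent without knowing which block went bad, while an independent per-block checksum - the ZFS approach - can both locate the rotten block and rebuild it from the surviving blocks plus parity.

        # Toy illustration only - not a real controller or ZFS implementation.
        import hashlib
        from functools import reduce

        def xor_blocks(blocks):
            # Byte-wise XOR of equally sized blocks.
            return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

        def checksum(block):
            return hashlib.sha256(block).digest()

        data = [b"AAAA", b"BBBB", b"CCCC"]          # 3-block stripe
        parity = xor_blocks(data)                   # RAID5-style XOR parity
        sums = [checksum(b) for b in data]          # ZFS-style per-block checksums

        data[1] = b"BxBB"                           # silent corruption (bit rot)

        # Parity-only scrub: we learn something is wrong, but not which block.
        print("stripe consistent?", xor_blocks(data) == parity)        # False

        # Checksum scrub: locate the bad block, then rebuild it from the others.
        bad = next(i for i, b in enumerate(data) if checksum(b) != sums[i])
        data[bad] = xor_blocks([b for i, b in enumerate(data) if i != bad] + [parity])
        print("repaired block", bad, ":", data[bad])                    # b'BBBB'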

    • cg @olivierlambert

      @olivierlambert said in First SMAPIv3 driver is available in preview:

        Hi,

        The 8.3 release is eating a lot of resources, so it's rather the opposite: once it's out, that will leave more time to move forward on SMAPIv3 🙂

      Lots of work means lots of changes, which means I'm excited about it. It also sounds more like a 9.0, if that much work is going into it. 😉

    • nikade @cg

      @cg said in First SMAPIv3 driver is available in preview:

        @hsnyder AFAIK every - even not so - modern RAID controller can do a 'verification read', 'disk scrubbing', or whatever they call it. With single parity that won't fix bitrot, but with dual parity it can fix a single failure and detect a double one.
        That's why the only option for our SAN is RAID6, or any other dual-parity (DP) algorithm.

      Totally agree with you on RAID6 / dual parity, that's our standard as well.

    • cg @nikade

      @nikade It's also about RAS: the risk of a 2nd disk failing during a rebuild is a lot higher than usual.
      Our B2D2T server needs about 24 hours for that.

    • nikade @cg

      @cg How big are your disks?
      Our primary SAN has NVMe SSDs, so a rebuild is just a couple of hours, but like you said, a 2nd failure during the rebuild would be a disaster, so it isn't worth the risk.

      Our secondary boxes are ZFS, which need close to 12h to rebuild, so the extra parity is good to avoid biting all your nails off 🙂

    • cg @nikade

      @nikade I found out the HPE MSA2060 has a full-flash bundle option, which is surprisingly cheap, so our SAN has 3.84 TB SAS SSDs - those rebuild within a few hours - but our backup server has a RAID6 with 10 TB HDDs.
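
      As a rough back-of-envelope check (the throughput figures are assumptions, not measurements from this thread, and controller overhead plus foreground I/O stretch real-world numbers): a rebuild has to rewrite the whole replacement disk, so time is roughly capacity divided by sustained write rate, which lines up with the ~24 h and "a few hours" figures above.

        # Back-of-envelope rebuild-time estimate: time ~= capacity / write rate.
        # Throughput values below are assumed, not taken from this thread.
        def rebuild_hours(capacity_tb: float, write_mb_s: float) -> float:
            return capacity_tb * 1e12 / (write_mb_s * 1e6) / 3600

        print(round(rebuild_hours(10, 115), 1))    # ~24 h: 10 TB HDD at ~115 MB/s sustained
        print(round(rebuild_hours(3.84, 800), 1))  # ~1.3 h: 3.84 TB SAS SSD at ~800 MB/s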

    • nikade @cg

      @cg Those HDDs will take their fair time to rebuild.
      It's always stressful watching how far along it is while crossing your fingers that another drive won't pop during the process.
