XCP-ng

    First SMAPIv3 driver is available in preview

    Development · 64 posts · 18 posters · 16.3k views
• bufanda

  Hi,

  I see this driver is ZFS-only at the moment. I have a question regarding ZFS, though: I am not really familiar with it and only started to read into it recently (still an ext4 enjoyer). From what I gathered, ZFS can be pretty memory hungry, but how does that fit into the dom0 with only 3 GB of RAM or so, or in general with a hypervisor where the memory should primarily go to guest OSes? Are there any drawbacks to using ZFS? Will it perform poorly if it does not get enough RAM for cache operations, or is that more of a concern for NAS boxes that use ZFS to serve shares? Maybe an expert on the matter can enlighten me.

  Thanks, and I will see if I can free up a drive in my test pool to test the driver.

• olivierlambert (Vates 🪐 Co-Founder & CEO)

  That should adapt. Note that we'll have a BTRFS driver this summer, which will be even better in terms of capabilities (still for local storage), so if you are not confident with ZFS, it will be a viable alternative 🙂

• nikade (Top contributor) @bufanda

  @bufanda said in First SMAPIv3 driver is available in preview:

    From what I gathered, ZFS can be pretty memory hungry, but how does that fit into the dom0 with only 3 GB of RAM or so, or in general with a hypervisor where the memory should primarily go to guest OSes? Are there any drawbacks to using ZFS? Will it perform poorly if it does not get enough RAM for cache operations?

  1. You are correct: ZFS can be memory hungry if you give it a lot of RAM, since it caches read/write data in the ARC. How much this matters depends on your disks; with datacenter SSDs or NVMe drives, performance will not be a problem even with a small amount of RAM.

  2. dom0 will get more RAM if the server has a lot of RAM, and the dom0 RAM value can also be adjusted if needed. Note that ZFS performance will not differ much if you have fast disks.

  ZFS was designed to be fast even on "spinning rust", provided you gave the system plenty of RAM for the ARC, an SSD for L2ARC, or an SSD as SLOG, so data could be cached before hitting the slower disks. Since then a lot has changed around SSDs and NVMe, and around the pricing of those disks.
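
  For reference, a minimal sketch of how the ARC can be inspected and capped on a Linux/OpenZFS dom0; the 1 GiB figure is purely illustrative, and the paths assume the stock OpenZFS module interface rather than anything specific to the XCP-ng driver:

      # Show the current ARC size and its upper limit (c_max)
      awk '$1 == "size" || $1 == "c_max" {print $1, $3}' /proc/spl/kstat/zfs/arcstats

      # Cap the ARC at 1 GiB for the running system (illustrative value)
      echo 1073741824 > /sys/module/zfs/parameters/zfs_arc_max

      # Persist the cap across reboots via the module options file
      echo "options zfs zfs_arc_max=1073741824" >> /etc/modprobe.d/zfs.conf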

• cg @bufanda

  @bufanda RAM is mostly "required" if you go for things like online deduplication, as the dedup tables need to be held in RAM.
  I configured our servers with 12 GB of memory for dom0 anyway, as it helps with overall performance. 3 GB can be pretty tight if you have a bunch of VMs running.
  IIRC, around 8 GB is generally recommended nowadays (unless you have a rather small environment).
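
  For anyone wanting to try the same, a sketch of the usual way to raise dom0 memory on XCP-ng; the 12288M value simply mirrors the 12 GB mentioned above, and the command should be double-checked against the official docs before use:

      # Set the dom0 memory boot parameters (static and max), then reboot the host
      /opt/xensource/libexec/xen-cmdline --set-xen dom0_mem=12288M,max:12288M
      reboot

      # After the reboot, confirm how much memory dom0 actually received
      xe vm-list is-control-domain=true params=name-label,memory-actual
      free -m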

• nikade (Top contributor) @cg

  @cg said in First SMAPIv3 driver is available in preview:

    I configured our servers with 12 GB of memory for dom0 anyway, as it helps with overall performance. 3 GB can be pretty tight if you have a bunch of VMs running. IIRC, around 8 GB is generally recommended nowadays (unless you have a rather small environment).

  Yea totally, we also do 8-16 GB on our dom0 if they run a lot of VMs.

• bufanda @nikade

  @nikade Yeah, for my small homelab I don't need as much, but I can see it being useful in an enterprise environment, yes.

• Paolo

  Is there an ETA for a fully functional deployment (on local storage) with differential backup, live migration, statistics and so on? Perhaps with the 8.3 stable release?
  I'm interested mainly because of ZFS's bit rot detection.
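
  For context on the bit rot point, ZFS detects silent corruption through per-block checksums, typically surfaced by a scrub; a minimal sketch, with "tank" as a stand-in pool name:

      # Re-read every allocated block and verify its checksum
      zpool scrub tank

      # Watch progress; non-zero CKSUM counters or listed errors mean bit rot was detected
      zpool status -v tank

      # With redundancy (mirror or raidz), ZFS repairs bad blocks from a good copy during the scrub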

• olivierlambert (Vates 🪐 Co-Founder & CEO)

  Hi,

  The 8.3 release is eating a lot of resources, so it's the opposite: once it's out, that will leave more time to move forward on SMAPIv3 🙂

• cg @Paolo

  @Paolo If it's only for that, any HW RAID with dual parity (DP) should do the job (in case you don't fully go for SW RAID).

• hsnyder @cg

  @cg Sorry if this is off-topic, but do you know of any HW RAID controllers that actually do this? Storing checksums or whatever?

• hsnyder

  A question, or rather an observation, about the SMAPIv3 ZFS driver: I had a power failure in my testing lab last week and noticed that the VMs with SMAPIv3 disks attached did not come back up automatically, despite being set to auto power on. Perhaps this is related to the ZFS driver? My first thought was that there might be a race condition between the VM start and the zpool import at boot, but I don't know how to verify that.

  I just figured I would report this in case it's useful to anyone at Vates.
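
  In case it helps with reproducing this, a rough sketch of how the ordering could be checked after such a boot; the log paths are the standard dom0 ones, and the SMAPIv3 driver may well log elsewhere:

      # Did the pool come back at all, and is the ZFS SR's PBD currently plugged?
      zpool status
      xe pbd-list params=sr-name-label,currently-attached

      # Compare timestamps: storage activity at boot vs. the attempted VM starts
      journalctl -b | grep -iE "zfs|zpool" | head
      grep -i "VM.start" /var/log/xensource.log | tail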

• cg @hsnyder

  @hsnyder AFAIK every - even not so - modern RAID controller can do a 'verification read', 'disk scrubbing' or whatever they call it. It won't fix bit rot with single parity, but it can fix single-bit and detect double-bit failures.
  That's why the only option for our SAN is RAID6, or rather any dual-parity algorithm.
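
  As a concrete illustration, on Broadcom/LSI controllers managed with storcli these features are usually exposed as "patrol read" and "consistency check"; the syntax differs per vendor and firmware, so treat this only as a sketch (controller /c0 assumed):

      # Enable automatic patrol reads (a background media scan of all member drives)
      storcli /c0 set patrolread=on mode=auto
      storcli /c0 show patrolread

      # Start a consistency check on all virtual drives to verify parity against data
      storcli /c0/vall start cc
      storcli /c0/vall show cc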

• cg @olivierlambert

  @olivierlambert said in First SMAPIv3 driver is available in preview:

    The 8.3 release is eating a lot of resources, so it's the opposite: once it's out, that will leave more time to move forward on SMAPIv3 🙂

  Lots of work means lots of changes, which means I'm excited about it. It also sounds more like a 9.0, if that much work is going into it. 😉

• nikade (Top contributor) @cg

  @cg said in First SMAPIv3 driver is available in preview:

    AFAIK every - even not so - modern RAID controller can do a 'verification read', 'disk scrubbing' or whatever they call it. It won't fix bit rot with single parity, but it can fix single-bit and detect double-bit failures. That's why the only option for our SAN is RAID6, or rather any dual-parity algorithm.

  Totally agree with you on the RAID6 / dual parity, that's our standard as well.

• cg @nikade

  @nikade It's also about RAS. The risk of a 2nd disk failing during a rebuild is a lot higher than usual.
  Our B2D2T server needs about 24 hours for that.

• nikade (Top contributor) @cg

  @cg How big are your disks?
  Our primary SAN has NVMe SSDs, so a rebuild takes just a couple of hours, but like you said, a 2nd failure during the rebuild would be a disaster, so it isn't worth the risk.

  Our secondary boxes are ZFS and need close to 12h to rebuild, so the extra parity is good to avoid biting all your nails off 🙂

• cg @nikade

  @nikade I found out the HPE MSA 2060 has an all-flash bundle option, which is surprisingly cheap, so our SAN has 3.84 TB SAS SSDs - they'll be rebuilt within a few hours, but our backup server has a RAID6 with 10 TB HDDs.

• nikade (Top contributor) @cg

  @cg Those HDDs will take their fair time to rebuild.
  It's always stressful watching how far along it is while crossing your fingers that another drive won't pop during the process.
