XCP-ng

    First SMAPIv3 driver is available in preview

    Development
    64 Posts 18 Posters 18.5k Views 23 Watching
      SylvainB @john.c
      last edited by

      @john-c You're right, thanks for the clarification.

      However, thin provisioning on iSCSI is a real blocker for me, and I'm sure I'm not alone 🙂

      Will SMAPIv3 enable thin provisioning on iSCSI SRs?

        cg @john.c
        last edited by

        @john-c As well as FC. Basically all shared storage that is production ready.

        What are the up/downsides of qcow2 vs. VHDX?

          cg @SylvainB
          last edited by cg

          @still_at_work The question is slightly misdirected: it depends less on SMAPI itself and more on the "drivers" it will be able to use.
          Someone needs to implement a driver for thin-provisioned shared storage that can handle it,
          e.g. via GFS2 or something else.

          You could write your own "adapter"/"driver" (I forget what they call it) for it, like they did with ZFS.

            olivierlambert Vates 🪐 Co-Founder CEO @SylvainB
            last edited by olivierlambert

             @still_at_work The current ZFS driver doesn't have any volume-size limitation, but it's local by definition. We don't yet have an iSCSI driver for SMAPIv3.

              olivierlambert Vates 🪐 Co-Founder CEO @cg
              last edited by

               @cg File formats (VHDX, qcow2…) by definition require a file system on top. When you have a single block device shared between multiple hosts, you need either a clustered filesystem (VMFS, GFS2, etc.) or something able to share the block space between hosts, and in all cases the right locking mechanism. That's the tricky part. As soon as you have that, the rest doesn't really matter.
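               As a toy illustration of the locking idea (not XCP-ng code): on a single host, flock(1) is enough to serialize access to shared metadata. The point of the "tricky part" above is that on shared block storage you need the *distributed* equivalent, e.g. the DLM that GFS2 relies on. The lock-file path here is a hypothetical placeholder.

```shell
#!/bin/sh
# Sketch: serialize a critical section with flock(1).
# This only works between processes on one host (or on a shared POSIX
# filesystem); clustered setups need a distributed lock manager instead.

LOCKFILE=/tmp/sr-metadata.lock    # hypothetical lock file

exec 9>"$LOCKFILE"                # open a dedicated file descriptor
flock 9                           # block until we hold the exclusive lock
echo "holding lock: safe to update shared metadata"
flock -u 9                        # release the lock
```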

                cg @olivierlambert
                last edited by

                 @olivierlambert I know the problem of a shared FS; the question I had was rather: do qcow2 and VHDX have benefits over each other? What are the pros/cons of choosing one?
                 Does it matter at all?

                  olivierlambert Vates 🪐 Co-Founder CEO
                  last edited by

                   It could, but the difference will likely be slim. Happy to test it if we can.

                    bufanda
                    last edited by

                    Hi,

                     I see this driver is ZFS-only at the moment. I have a question regarding ZFS, though I'm not really familiar with it and only started reading into it recently (still an ext4 enjoyer). From what I've gathered, ZFS can be pretty memory-hungry, but how does that fit into a dom0 with only 3 GB of RAM or so, or in general with a hypervisor where memory should primarily go to guest OSes? Are there any drawbacks to using ZFS? Will it perform poorly without enough RAM for cache operations, or is that more of a thing for NASes that use ZFS to serve shares? Maybe an expert on the matter can enlighten me.

                     Thanks, and I'll see if I can free up a drive in my test pool to test the driver.

                      olivierlambert Vates 🪐 Co-Founder CEO
                      last edited by

                       That should adapt. Note we'll have a BTRFS driver this summer that will be even better in terms of capabilities (still for local storage), so if you're not confident with ZFS, it will be a viable alternative 🙂

                        nikade Top contributor @bufanda
                        last edited by

                        @bufanda said in First SMAPIv3 driver is available in preview:

                        Hi,

                         I see this driver is ZFS-only at the moment. I have a question regarding ZFS, though I'm not really familiar with it and only started reading into it recently (still an ext4 enjoyer). From what I've gathered, ZFS can be pretty memory-hungry, but how does that fit into a dom0 with only 3 GB of RAM or so, or in general with a hypervisor where memory should primarily go to guest OSes? Are there any drawbacks to using ZFS? Will it perform poorly without enough RAM for cache operations, or is that more of a thing for NASes that use ZFS to serve shares? Maybe an expert on the matter can enlighten me.

                        Thanks and I will see if I can free up a drive in my test pool to test the driver.

                         1. You're correct: ZFS can be memory-hungry if you give it a lot of RAM, since it will cache read/write data in the ARC. Depending on what disks you have, this will behave differently; for example, with datacenter SSDs or even NVMe, performance won't be a problem even with a small amount of RAM.

                         2. dom0 will get more RAM if the server has a lot of RAM, and it's also possible to adjust the dom0 RAM value if needed. Note that ZFS performance won't differ much if you have fast disks.

                         ZFS was designed to be fast even with "spinning rust", provided you gave the system plenty of RAM for the ARC, an SSD for L2ARC, or an SSD for the SLOG to act as a cache before data hits the slower disks.
                         Since then, a lot has changed with SSDs and NVMe, but also in the pricing of those disks.
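                         For reference, a rough sketch of the two knobs discussed above, assuming an XCP-ng host with root access (verify the paths and helper on your version; the sizes are just examples, not recommendations):

```shell
# Cap the ZFS ARC (here at 2 GiB) so it doesn't compete with dom0 for RAM.
# Takes effect immediately; make it persistent via /etc/modprobe.d/zfs.conf.
echo $((2 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_arc_max

# Give dom0 more memory (here 8 GiB), then reboot the host for it to apply.
/opt/xensource/libexec/xen-cmdline --set-xen dom0_mem=8192M,max:8192M
```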

                          cg @bufanda
                          last edited by cg

                           @bufanda RAM is mostly "required" if you go for things like online deduplication, since that has to be handled in RAM.
                           I configured our servers with 12 GB of memory for dom0 anyway, as it helps overall performance. 3 GB can be pretty tight if you have a bunch of VMs running.
                           IIRC, around 8 GB is generally recommended nowadays (unless you have a rather small environment).

                            nikade Top contributor @cg
                            last edited by

                            @cg said in First SMAPIv3 driver is available in preview:

                             @bufanda RAM is mostly "required" if you go for things like online deduplication, since that has to be handled in RAM.
                             I configured our servers with 12 GB of memory for dom0 anyway, as it helps overall performance. 3 GB can be pretty tight if you have a bunch of VMs running.
                             IIRC, around 8 GB is generally recommended nowadays (unless you have a rather small environment).

                             Yeah, totally. We also give 8-16 GB to our dom0s if they run a lot of VMs.

                              bufanda @nikade
                              last edited by

                               @nikade Yeah, for my small homelab I don't need as much, but I can see it being useful in an enterprise environment.

                                  Paolo
                                  last edited by

                                   Is there an ETA for a fully functional deployment (on local storage) with differential backup, live migration, statistics and so on?
                                   Perhaps with the 8.3 stable release?
                                   I'm mainly interested because of ZFS's bit rot detection.
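                                   For what it's worth, the bit rot detection (and repair, given redundancy) is exposed through standard ZFS commands; this is plain OpenZFS, not SMAPIv3-specific, and "tank" is a placeholder pool name:

```shell
# A scrub reads every block in the pool and verifies its checksum,
# repairing from redundancy (mirror/raidz) where possible.
zpool scrub tank

# Shows scrub progress plus any checksum errors found per device.
zpool status -v tank
```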

                                    olivierlambert Vates 🪐 Co-Founder CEO
                                    last edited by

                                    Hi,

                                     The 8.3 release is eating a lot of resources, so it's the opposite: once it's out, we'll have more time to move forward on SMAPIv3 🙂

                                      cg @Paolo
                                      last edited by cg

                                       @Paolo If it's only for that, any HW RAID with dual parity (DP) should do the job (in case you don't go fully for SW RAID).

                                        hsnyder @cg
                                        last edited by

                                         @cg Sorry if this is off-topic, but do you know of any HW RAID controllers that actually do this? Storing checksums or whatever?

                                          hsnyder
                                          last edited by

                                           A question, or rather a report, about the SMAPIv3 ZFS driver: I had a power failure in my testing lab last week, and I noticed that the VMs with SMAPIv3 disks attached did not come back up automatically, despite being set to automatically power on. Perhaps this is related to the ZFS driver? My first thought was that there might be a race condition between VM start and the zpool import at boot, but I don't know how to verify that.

                                          I just figured I would report this in case it's useful to anyone at Vates.
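                                           If it is that race, a crude workaround would be to poll until the pool is imported before starting the affected VMs. A hedged sketch (not a Vates-endorsed fix); the pool name "tank" and the VM UUID are hypothetical placeholders:

```shell
#!/bin/sh
# Generic retry helper: run a command up to <max_tries> times,
# one second apart, until it succeeds.
wait_for() {          # wait_for <max_tries> <command...>
    tries=$1; shift
    i=0
    while [ "$i" -lt "$tries" ]; do
        "$@" && return 0
        i=$((i + 1))
        sleep 1
    done
    return 1
}

# Usage on the host (commented out here; requires zpool and xe):
# wait_for 30 zpool list -H -o name tank >/dev/null \
#     && xe vm-start uuid=<your-vm-uuid>
```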

                                            cg @hsnyder
                                            last edited by

                                             @hsnyder AFAIK every, even not-so-modern, RAID controller can do "verification reads", "disk scrubbing" or whatever they call it. It won't fix bit rot with single parity, but it can fix a single-bit failure and detect dual-bit failures.
                                             That's why the only option for our SAN is RAID6, or any dual-parity algorithm respectively.
