
    First SMAPIv3 driver is available in preview

    • hsnyder

      I've started using the SMAPIv3 driver too. It's working well so far. I'm keeping my VM boot disks on md raid1, and using a zfs mirror via SMAPIv3 for large data disks.

      I have a question about backups... Is it safe to use syncoid to directly synchronize the ZFS volumes to an external backup? syncoid creates a snapshot at the start of the send process. But, I also have rolling snapshots configured through Xen-Orchestra. Will the syncoid snapshot mess up Xen-Orchestra?

      If this isn't safe or isn't a good idea, I'll just use rsync to back up the filesystem contents inside the VM that the volume is mounted to...
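
      A quick way to see whether the two tools' snapshot names could even collide is to list what the rolling snapshots are actually called on the SR's dataset; a minimal sketch, with a placeholder dataset name:

        # Show every snapshot and its creation time under the SR's dataset
        # (replace tank/xcpng-sr with your actual dataset):
        zfs list -H -t snapshot -o name,creation -r tank/xcpng-sr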

      • olivierlambert (Vates 🪐 Co-Founder & CEO)

        On my side, I have no idea, because I never used syncoid. Have you asked their dev about this?

        • rfx77 @olivierlambert

          @olivierlambert @hsnyder

          If I understand correctly, I would rephrase the question this way:

          Does Xen Orchestra name its snapshots in a way that is unique to Xen Orchestra, and does XOA know which snapshots belong to it, or does it just use the latest snapshot no matter how it is named?

          @hsnyder: I don't think you can simply use ZFS snapshots without Xen snapshots, because I don't think they will be crash-consistent.

          If syncoid is similar to zrepl, you have to check that it doesn't prune the ZFS snapshots created by XOA.
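
          One simple sanity check that a replication run leaves Xen Orchestra's snapshots alone is to compare the snapshot list before and after the run; a rough sketch, with a placeholder dataset name:

            # Record the snapshot list, run the replication job, then diff:
            zfs list -H -t snapshot -o name -r tank/xcpng-sr > /tmp/snaps.before
            # ... run the syncoid / zrepl job here ...
            zfs list -H -t snapshot -o name -r tank/xcpng-sr > /tmp/snaps.after
            diff /tmp/snaps.before /tmp/snaps.after   # lines prefixed "<" were removed by the run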

          • olivierlambert (Vates 🪐 Co-Founder & CEO)

            Question for @yann probably then 🙂

            • hsnyder @rfx77

              @rfx77 Thanks for clarifying my question, your reading of it was correct.

              I've just realized that syncoid has an option, --no-sync-snap, which I think avoids creating a dedicated snapshot for the purpose of the transfer and instead just transfers the pre-existing snapshots. If that's indeed what it does, then this solves all potential problems, because the existing snapshots are taken by Xen Orchestra. I'll do a test to confirm this is indeed the behavior and will then reply again.
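
              For reference, such a run might look roughly like this (pool/dataset and host names are placeholders; --no-sync-snap makes syncoid replicate the most recent existing snapshot instead of creating its own):

                # Replicate the SR's dataset to a backup host without creating a syncoid snapshot:
                syncoid --no-sync-snap tank/xcpng-sr root@backuphost:backup/xcpng-sr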

              • yann (Vates 🪐 XCP-ng Team) @rfx77

                If I understand the question correctly, the requirement is that the snapshot naming conventions used by ZFS-vol and by syncoid don't collide.
                What convention does syncoid use? The current ZFS-vol driver just assigns a unique integer name to each volume/snapshot, and there would be an error if it tried to create a snapshot with an integer name that another tool had already created on its own.
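
                Purely as an illustration, and assuming the driver's snapshots really are named with bare integers (the dataset name is a placeholder), foreign snapshots under the SR's dataset could be spotted like this:

                  # Show snapshots whose names are NOT a bare integer, i.e. probably
                  # created by something other than the ZFS-vol driver:
                  zfs list -H -t snapshot -o name -r tank/zfs-vol-sr | grep -Ev '@[0-9]+$'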

                • rfx77 @hsnyder

                  @hsnyder Hi!

                  I would let syncoid create a snapshot, check its name, and see whether there could be any naming conflict. If not, I would keep things as they are.
                  You can also check whether syncoid keeps its snapshots on the target.

                  Anyhow, I would recommend zrepl for your tasks. It's the tool used by nearly everyone who does ZFS replication. We use it extensively for many hub-and-spoke sync architectures.
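
                  A concrete way to do both checks might look like this (host and dataset names are placeholders; syncoid's sync snapshots normally contain "syncoid" in their name):

                    # What did syncoid name its snapshot on the source?
                    zfs list -H -t snapshot -o name -r tank/xcpng-sr | grep syncoid
                    # Is that snapshot also kept on the target?
                    ssh root@backuphost 'zfs list -H -t snapshot -o name -r backup/xcpng-sr | grep syncoid'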

                  • hsnyder @rfx77

                    @rfx77 Thanks for the recommendations. I looked into zrepl and it seems like a good solution as well. However, since I'm using this new ZFS beta driver in production, I've decided to do the backup at the VM filesystem level, i.e. with rsync, instead of at the ZFS level. I figure that strategy is slightly safer in the event of bugs in the driver. I know that's debatable - it would depend on the bug - but this approach feels safer to me.
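
                    For illustration only (the paths and host name are placeholders), a filesystem-level backup from inside the VM could be as simple as:

                      # Push the mounted data filesystem to a backup host, preserving permissions,
                      # hard links, ACLs and xattrs, and deleting files removed at the source:
                      rsync -aHAX --delete /data/ root@backuphost:/backups/vm-data/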

                    • SylvainB

                      Hello @olivierlambert ,

                      I am joining this topic as I have a few questions about SMAPIv3:

                      • Will it allow provisioning of VDIs larger than 2TB?

                      • Will it enable thin provisioning on iSCSI SRs?

                      Currently, the blockers I encounter are related to my iSCSI storage. This is a major differentiating factor compared to other vendors, and resolving these blockers would significantly increase your market share.

                      Thanks !

                      • nikade @SylvainB

                        @still_at_work What blockers regarding iSCSI storage? Let me guess: thin provisioning and the 2TB VDI size limit.

                        • john.c @SylvainB

                          @still_at_work The VDI size limit comes from the file format used for VDIs, which is VHD (https://en.wikipedia.org/wiki/VHD_(file_format)). That format can't support more than 2TB; the issue is known and is being dealt with. It will likely result in a change to, or the addition of, a new VDI format, most likely qcow2, unless the software needed for the VHDX format is fully open sourced and software is created for Xen that can create, read, write and use that format.

                          It's not a limitation of iSCSI, as it also shows up with NFS and SMB based connections.
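
                          For what it's worth, qcow2 itself has no 2TB ceiling; as a quick illustration (the file name and size are arbitrary), qemu-img will happily create a much larger virtual disk:

                            # Create a 4 TiB thin-provisioned qcow2 image (allocates almost nothing up front):
                            qemu-img create -f qcow2 big-disk.qcow2 4T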

                          • SylvainB @john.c

                            @john-c You're right, thanks for the clarification.

                            However, thin provisioning on iSCSI is a real blocker for me, and I'm sure I'm not alone 🙂

                            Will SMAPIv3 enable thin provisioning on iSCSI SRs?

                            • cg @john.c

                              @john-c As well as FC - basically all shared storage that is production-ready.

                              What are the up/downsides of qcow2 vs. VHDX?

                              • cg @SylvainB

                                @still_at_work The question is technically wrong: it depends less on SMAPI and more on the "drivers" it will be able to use.
                                Someone needs to implement something for thin-provisioned shared storage that can handle it,
                                e.g. via GFS2 or something else.

                                You could write your own "adapter"/"driver" (I forget what they call it) for it, like they did with ZFS.

                                • olivierlambert (Vates 🪐 Co-Founder & CEO) @SylvainB

                                  @still_at_work The current ZFS driver doesn't have any limitation on volume size, but it's local by definition. We do not yet have an iSCSI driver for SMAPIv3.

                                  • olivierlambert (Vates 🪐 Co-Founder & CEO) @cg

                                    @cg File formats (VHDX, qcow2…) by definition require a filesystem to sit on. When you have a single block device shared between multiple hosts, you need either a clustered filesystem (VMFS, GFS2, etc.) or something able to share the block space between hosts, and in all cases with the right locking mechanism. That's the tricky part. As soon as you have that, the rest doesn't really matter.

                                    • cg @olivierlambert

                                      @olivierlambert I know the problem of a shared FS; my question was rather: do qcow2 and VHDX have benefits over each other? What are the pros and cons of choosing one?
                                      Does it matter at all?

                                      • olivierlambert (Vates 🪐 Co-Founder & CEO)

                                        It could, but the difference will likely be thin. Happy to test it if we can.

                                        • bufanda

                                          Hi,

                                          I see this driver is ZFS-only at the moment. I have a question regarding ZFS, though: I'm not really familiar with it and only started reading into it recently (still an ext4 enjoyer). From what I gathered, ZFS can be pretty memory-hungry, so how does that fit into a dom0 with only 3GB of RAM or so, or in general with a hypervisor where memory should be used primarily for guest OSes? Are there any drawbacks to using ZFS? Will it perform poorly if it doesn't get enough RAM for cache operations? Or is that more of a concern for NAS boxes that use ZFS to serve shares? Maybe an expert on the matter can enlighten me.

                                          Thanks and I will see if I can free up a drive in my test pool to test the driver.
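
                                          As a generic OpenZFS note, not specific to XCP-ng: the ARC (ZFS's read cache) can be capped so it doesn't compete with a small dom0's memory; the 1 GiB value below is only an example.

                                            # Cap the ZFS ARC at 1 GiB (value in bytes); applied at module load:
                                            echo "options zfs zfs_arc_max=1073741824" > /etc/modprobe.d/zfs.conf
                                            # Or adjust it on a running system:
                                            echo 1073741824 > /sys/module/zfs/parameters/zfs_arc_max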

                                          • olivierlambert (Vates 🪐 Co-Founder & CEO)

                                            That should adapt. Note that we'll have a BTRFS driver this summer that will be even better in terms of capabilities (still for local storage), so if you're not confident with ZFS, it will be a viable alternative 🙂
