XCP-ng

    First SMAPIv3 driver is available in preview

    Development
    64 Posts 18 Posters 16.4k Views
    • olivierlambert Vates 🪐 Co-Founder CEO

      The goal is to test that it runs reliably for a while, so we are sure not to miss anything. fio is your friend for benchmarking in a VM; remember that it's still blktap behind the scenes, so if you want better performance numbers, run it against multiple VDIs at once.

    • gskger Top contributor @olivierlambert

        I tested SMAPIv1 on XCP-ng 8.2.1 against SMAPIv3 on XCP-ng 8.3b2 using the same host (an HP ProDesk 400 G6 with an i5-10500T CPU and 32 GB RAM). A 1 TB Samsung 860 EVO SSD was used as the test SR, while XCP-ng booted from a 512 GB M.2 KIOXIA NVMe drive. Fio (fio-3.37) was compiled from source on an up-to-date Debian 12 VM (2 vCPU, 4 GiB RAM, 32 GiB drive), which was copied twice so that three identical VMs could run fio in parallel.

        After an initial fio run to create the files, a script ran three sequential write and read tests (e.g. fio --name=fio --ioengine=libaio --randrepeat=1 --direct=1 --fallocate=none --ramp_time=10 --size=4G --iodepth=64 --loops=50 --group_reporting --numjobs=1 --rw=write --bs=1M). The script first ran on one VM, then on three VMs in parallel. IOPS and bandwidths were averaged.
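For readability, here is the same parameter set expressed as a fio job file. This is just an equivalent form of the command line quoted above, not the poster's actual script:

```ini
; seq-write.fio -- job-file equivalent of the fio command line above
[global]
ioengine=libaio
randrepeat=1
direct=1
fallocate=none
ramp_time=10
size=4G
iodepth=64
loops=50
group_reporting
numjobs=1

[seq-write]
rw=write
bs=1M
```

Run it with `fio seq-write.fio`; swapping `rw=write` for `read`, `randwrite`, or `randread` covers the other test cases.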

        (attachment: d4beab9b-1328-4d67-8794-49b45093572b-grafik.png — benchmark results chart)

        v1-1VM are the results for one VM on a SMAPIv1 SR (XCP-ng 8.2.1), while v3-3VM are the results for three VMs in parallel on a SMAPIv3 SR (XCP-ng 8.3b2).

        While I'm not sure this approach is really valid (e.g. the average load of the host went through the roof when three VMs ran fio in parallel), it does suggest that the bandwidth of SMAPIv3 is not yet on par with that of SMAPIv1. But I could be wrong, and this is an early preview of SMAPIv3. Looking forward to more performance results on SMAPIv3.

        • olivierlambert Vates 🪐 Co-Founder CEO

          Hi,

          I'm not sure I understand. What kind of SMAPIv1 SR did you compare against ZFS on v3?

          • rfx77 @olivierlambert

            @olivierlambert

            Can you provide a link to the GitHub repo where we can find the source code of this SMAPIv3 driver?

            • olivierlambert Vates 🪐 Co-Founder CEO

              https://github.com/xcp-ng/xcp-ng-xapi-storage

              • rfx77 @olivierlambert

                @olivierlambert
                I meant the source for this package: xcp-ng-xapi-storage-volume-zfsvol,

                so that we can see how this new driver is implemented.

                • olivierlambert Vates 🪐 Co-Founder CEO

                  That's inside the repo I posted 🙂

                  • CJ

                    Has anyone tried a backup using the new driver? I created a new test pool with one of my previous hosts and created a SMAPIv3 ZFS SR. I can create a VM just fine, but when I try to add it to my existing backup job, it keeps erroring out with "stream has ended with not enough data (actual: 485, expected: 512)"

                    Is this expected?
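A note on that error message: XVA exports are tar streams, and tar frames everything in 512-byte blocks, so "expected: 512" very likely means the stream ended partway through a block. A minimal, hypothetical Python illustration of a reader hitting a short block (this is not XO's actual code):

```python
import io
import tarfile

# Build a tiny tar archive in memory; tar pads everything to 512-byte blocks.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    info = tarfile.TarInfo(name="disk.chunk")
    payload = b"x" * 1000
    info.size = len(payload)
    tar.addfile(info, io.BytesIO(payload))

blob = buf.getvalue()
assert len(blob) % 512 == 0  # a well-formed archive is a whole number of blocks

# Cut the stream mid-block, like a sender that stops early:
# one 512-byte header block plus only 485 of the first 512 payload bytes.
truncated = io.BytesIO(blob[:512 + 485])

err = None
tar = tarfile.open(fileobj=truncated, mode="r")
member = tar.next()
try:
    tar.extractfile(member).read()
except tarfile.ReadError as exc:  # "unexpected end of data"
    err = exc
tar.close()
print("reader error on short block:", err)
```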

                    • olivierlambert Vates 🪐 Co-Founder CEO

                      You can only do full backup for now, not incremental.

                      • CJ @olivierlambert

                        @olivierlambert Since it's the first backup, it should be full, correct? Does Delta backup not work at all even if force full is enabled?

                        • olivierlambert Vates 🪐 Co-Founder CEO

                          I mean the backup feature: it only works with XVA underneath (i.e. the full backup feature, which does a full backup every time).

                          • hsnyder

                            I've started using the SMAPIv3 driver too. It's working well so far. I'm keeping my VM boot disks on md raid1, and using a zfs mirror via SMAPIv3 for large data disks.

                            I have a question about backups... Is it safe to use syncoid to directly synchronize the ZFS volumes to an external backup? syncoid creates a snapshot at the start of the send process. But, I also have rolling snapshots configured through Xen-Orchestra. Will the syncoid snapshot mess up Xen-Orchestra?

                            If this isn't safe or isn't a good idea, I'll just use rsync to back up the filesystem contents inside the VM that the volume is mounted to...

                            • olivierlambert Vates 🪐 Co-Founder CEO

                              On my side I have no idea, because I've never used syncoid. Have you asked its devs about this?

                              • rfx77 @olivierlambert

                                @olivierlambert @hsnyder

                                 If I understand correctly, I would rephrase the question this way:

                                 Does Xen Orchestra name its snapshots in a way that is unique to Xen Orchestra, and does XO know which snapshots belong to it, or does it just use the latest snapshots no matter how they are named?

                                 @hsnyder: I don't think you can simply use ZFS snapshots without Xen snapshots, because I don't think they will be crash-consistent.

                                 If syncoid is similar to zrepl, you have to check that it doesn't prune the ZFS snapshots from XO.

                                 • olivierlambert Vates 🪐 Co-Founder CEO

                                  Question for @yann probably then 🙂

                                   • hsnyder @rfx77

                                    @rfx77 Thanks for clarifying my question, your reading of it was correct.

                                     I've just realized that syncoid has an option, --no-sync-snap, which I think avoids creating a dedicated snapshot for the transfer and instead just sends the pre-existing snapshots. If that's what it does, it solves all the potential problems, because the existing snapshots are taken by Xen Orchestra. I'll do a test to confirm that's the behavior and then reply again.
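For reference, the invocation being discussed would look roughly like this; dataset and host names are placeholders, and `--no-sync-snap` is the flag mentioned above. Treat it as a sketch, not tested advice:

```sh
# Replicate a dataset and its pre-existing snapshots to a backup host,
# skipping syncoid's own sync snapshot (--no-sync-snap).
# tank/data, backupuser, and backuphost are placeholder names.
syncoid --no-sync-snap tank/data backupuser@backuphost:backup/tank-data
```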

                                     • yann Vates 🪐 XCP-ng Team @rfx77

                                       If I understand the question correctly, the requirement is that the snapshot naming conventions used by ZFS-vol and by syncoid don't collide.
                                       What convention does syncoid use? The current ZFS-vol driver just assigns a unique integer name to each volume/snapshot, and there would be an error if it attempted to create a snapshot with an integer name that another tool had already created on its own.
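To make the collision concrete, here is a toy model of the integer naming scheme described above; `create_snapshot` is a hypothetical stand-in, not the actual ZFS-vol driver code:

```python
# Toy model of the naming collision: the driver names snapshots with
# bare integers, so only another tool using bare integers can clash.

def create_snapshot(existing, name):
    """Record a snapshot name, failing on collision like `zfs snapshot` would."""
    if name in existing:
        raise FileExistsError(f"snapshot {name!r} already exists")
    existing.add(name)

snapshots = {"1", "2", "3"}  # integer names assigned by the driver so far
driver_next = "4"            # the driver's idea of the next free name

# A tool with its own prefix (e.g. a syncoid sync snapshot) cannot collide:
create_snapshot(snapshots, "syncoid_host_2024-01-01")

# But a bare integer created behind the driver's back takes the very name
# the driver will try next:
create_snapshot(snapshots, "4")
collided = False
try:
    create_snapshot(snapshots, driver_next)
except FileExistsError:
    collided = True
print("driver hit a name collision:", collided)
```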

                                       • rfx77 @hsnyder

                                        @hsnyder Hi!

                                         I would let syncoid create a snapshot, check the name, and see if there could be any naming conflict. If that's not the case, I would keep it as it is.
                                         You can also check whether syncoid keeps the snapshots on the target.

                                         Anyhow, I would recommend zrepl for your tasks. It's the tool used by nearly everyone who does ZFS replication. We use it extensively for many hub-and-spoke sync architectures.

                                         • hsnyder @rfx77

                                           @rfx77 Thanks for the recommendations. I looked into zrepl and it seems like a good solution as well. However, since I'm using this new ZFS beta driver in production, I've decided to do the backup at the VM filesystem level, i.e. with rsync, instead of at the ZFS level. I figure that strategy is slightly safer in the event of driver bugs. I know that's debatable - it depends on the bug - but this approach feels safer to me.

                                           • SylvainB

                                            Hello @olivierlambert ,

                                            I am joining this topic as I have a few questions about SMAPIv3:

                                            • Will it allow provisioning of VDIs larger than 2TB?

                                            • Will it enable thin provisioning on iSCSI SRs?

                                            Currently, the blockers I encounter are related to my iSCSI storage. This is a major differentiating factor compared to other vendors, and resolving these blockers would significantly increase your market share.

                                             Thanks!
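For context on the first question: the familiar 2 TiB ceiling on SMAPIv1 comes from the VHD format, whose sector offsets are 32-bit values addressing 512-byte sectors; SMAPIv3 drivers are not tied to VHD, which is what makes larger VDIs possible in principle (whether a given driver enables them is a separate question). The arithmetic behind the ceiling, as a quick check:

```python
# Back-of-envelope for the VHD size ceiling: 32-bit sector offsets
# times 512-byte sectors caps a virtual disk at 2 TiB.
SECTOR_BYTES = 512
MAX_SECTORS = 2 ** 32

limit = SECTOR_BYTES * MAX_SECTORS
print(limit // 1024 ** 4, "TiB")  # 2 TiB
```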
