First SMAPIv3 driver is available in preview
-
@rfx77 thanks for the recommendations. I looked into zrepl and it seems like a good solution as well. However, since I'm using this new zfs beta driver in production, I've decided I'm going to do the backup at the VM filesystem level, i.e. with rsync, instead of at the ZFS level. I figure that strategy is slightly safer in the event of bugs with the driver. I know that's debatable - it would depend on the bug, but this approach feels safer to me.
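To give an idea of what I mean by filesystem-level backup, here's a rough sketch of the kind of rsync pull I have in mind (the hostname, destination path and exclude list are just placeholders, not my real setup):

```python
# Rough sketch of a filesystem-level pull backup with rsync over SSH.
# Hostname, paths and excludes below are placeholders for illustration.
import subprocess

GUEST = "vm1.example.lan"   # hypothetical guest
DEST = "/backup/vm1/"       # hypothetical destination on the backup host

subprocess.run(
    [
        "rsync", "-aAX", "--delete", "--numeric-ids",
        "--exclude=/proc/*", "--exclude=/sys/*", "--exclude=/dev/*",
        "--exclude=/run/*", "--exclude=/tmp/*",
        f"root@{GUEST}:/", DEST,
    ],
    check=True,
)
```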
-
Hello @olivierlambert ,
I am joining this topic as I have a few questions about SMAPIv3:
-
Will it allow provisioning of VDIs larger than 2TB?
-
Will it enable thin provisioning on iSCSI SRs?
Currently, the blockers I encounter are related to my iSCSI storage. This is a major differentiating factor compared to other vendors, and resolving these blockers would significantly increase your market share.
Thanks !
-
-
@still_at_work said in First SMAPIv3 driver is available in preview:
Hello @olivierlambert ,
I am joining this topic as I have a few questions about SMAPIv3:
-
Will it allow provisioning of VDIs larger than 2TB?
-
Will it enable thin provisioning on iSCSI SRs?
Currently, the blockers I encounter are related to my iSCSI storage. This is a major differentiating factor compared to other vendors, and resolving these blockers would significantly increase your market share.
Thanks !
What blockers regarding iSCSI storage? Let me guess: thin provisioning and the 2TB VDI size limit.
-
-
@still_at_work said in First SMAPIv3 driver is available in preview:
Hello @olivierlambert ,
I am joining this topic as I have a few questions about SMAPIv3:
-
Will it allow provisioning of VDIs larger than 2TB?
-
Will it enable thin provisioning on iSCSI SRs?
Currently, the blockers I encounter are related to my iSCSI storage. This is a major differentiating factor compared to other vendors, and resolving these blockers would significantly increase your market share.
Thanks !
@still_at_work The VDI size limit comes from the file format used for them, which is VHD (https://en.wikipedia.org/wiki/VHD_(file_format)). That format can't support more than 2TB; the issue is well known and is being worked on. The fix will likely mean changing to, or adding, a new VDI format, most likely qcow2, unless the necessary software for the VHDX format is fully open sourced and tooling for Xen is created that can create, read, write and use that format.
It's not a limitation of iSCSI, as it also shows up with NFS and SMB based connections.
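As a quick illustration that qcow2 itself doesn't carry the 2TB ceiling (this is outside of XCP-ng entirely; the path and size are made up, and it assumes qemu-img is installed):

```python
# Sketch only: create a qcow2 image larger than 2 TB to show the format
# itself has no such ceiling (unlike VHD). Path and size are placeholders.
import subprocess

subprocess.run(
    ["qemu-img", "create", "-f", "qcow2", "/tmp/big-disk.qcow2", "4T"],
    check=True,
)
# "qemu-img info" should report a virtual size of 4 TiB.
subprocess.run(["qemu-img", "info", "/tmp/big-disk.qcow2"], check=True)
```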
-
-
@john-c You're right, thanks for the clarification.
However, thin provisioning on iSCSI is a real blocker for me, and I'm sure I'm not alone.
Will SMAPIv3 enable thin provisioning on iSCSI SRs?
-
@john-c As well as FC. Basically all shared storage that is production ready.
What are the up/downsides of qcow2 vs. VHDX?
-
@still_at_work the question is technically a bit off. It depends less on SMAPI itself and more on the "drivers" it will be able to use.
Someone needs to implement something that can handle thin provisioned shared storage, e.g. via GFS2 or something else.
You could write your own "adapter"/"driver" (I forget what they call it) for it, like they did with ZFS.
-
@still_at_work The current ZFS driver doesn't have any limitation on volume size, but it's local by definition. We do not yet have an iSCSI driver for SMAPIv3.
-
@cg File formats (VHDX, qcow2…) by definition require a filesystem underneath. When you have a single block device shared between multiple hosts, you need either a clustered filesystem (VMFS, GFS2, etc.) or something able to share the block space between hosts, and in all cases with the right locking mechanism. That's the tricky part. As soon as you have that, the rest doesn't really matter.
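To illustrate the locking point, here's a conceptual sketch only (this is not how SMAPIv3, GFS2 or VMFS actually do it, and the path is hypothetical): before any host touches a shared image, it must hold an exclusive lock that every other host can see. A local advisory lock like the one below only protects processes on one machine, which is exactly why a clustered filesystem or a distributed lock manager is needed once a block device is shared between hosts:

```python
# Conceptual sketch: take an exclusive advisory lock before writing to an
# image. This only works on a single host; shared storage needs a
# cluster-wide equivalent (DLM, clustered FS, etc.).
import fcntl

def write_with_lock(image_path: str, offset: int, data: bytes) -> None:
    """Write to the image only while holding an exclusive lock."""
    with open(image_path, "r+b") as img:
        fcntl.flock(img.fileno(), fcntl.LOCK_EX)      # block until we own it
        try:
            img.seek(offset)
            img.write(data)
            img.flush()
        finally:
            fcntl.flock(img.fileno(), fcntl.LOCK_UN)  # release for others
```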
-
@olivierlambert I know the problem of a shared FS; the question I had was rather: do qcow2 and VHDX have advantages over each other? What are the pros/cons of choosing one?
Does it matter at all?
-
It could, but the difference will likely be thin. Happy to test it if we can.
-
Hi,
I see this driver is ZFS-only at the moment. I have a question regarding ZFS, though I'm not really familiar with it and only started reading into it recently (still an ext4 enjoyer). From what I gathered, ZFS can be pretty memory hungry, but how does that fit into a dom0 with only 3GB of RAM or so, or in general with a hypervisor where memory should primarily go to guest OSes? Are there any drawbacks to using ZFS? Will it perform poorly if it doesn't get enough RAM for cache operations, or is that more of a concern for a NAS that uses ZFS to serve shares? Maybe an expert on the matter can enlighten me.
Thanks, and I'll see if I can free up a drive in my test pool to try the driver.
-
That should adapt. Note that we'll have a BTRFS driver this summer that will be even better in terms of capabilities (still for local storage), so if you're not comfortable with ZFS, it will be a viable alternative.
-
@bufanda said in First SMAPIv3 driver is available in preview:
Hi,
I see this driver is ZFS-only at the moment. I have a question regarding ZFS, though I'm not really familiar with it and only started reading into it recently (still an ext4 enjoyer). From what I gathered, ZFS can be pretty memory hungry, but how does that fit into a dom0 with only 3GB of RAM or so, or in general with a hypervisor where memory should primarily go to guest OSes? Are there any drawbacks to using ZFS? Will it perform poorly if it doesn't get enough RAM for cache operations, or is that more of a concern for a NAS that uses ZFS to serve shares? Maybe an expert on the matter can enlighten me.
Thanks, and I'll see if I can free up a drive in my test pool to try the driver.
-
You are correct, ZFS can be memory hungry if you give it a lot of RAM, since it will cache read/write data in the ARC. Depending on what disks you have, this will behave differently; for example, if you have datacenter SSDs or even NVMe, performance will not be a problem even with a small amount of RAM.
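If you want to see what the ARC is actually using on a host, here's a minimal sketch, assuming a Linux dom0 with the OpenZFS module loaded (it reads the standard OpenZFS kstat file):

```python
# Minimal sketch: read the current ARC size and its cap from the OpenZFS
# kstat interface. Assumes the ZFS kernel module is loaded.
def read_arcstats(path="/proc/spl/kstat/zfs/arcstats"):
    stats = {}
    with open(path) as f:
        for line in f.readlines()[2:]:       # first two lines are headers
            name, _type, value = line.split()
            stats[name] = int(value)
    return stats

stats = read_arcstats()
print(f"ARC size : {stats['size'] / 2**30:.1f} GiB")
print(f"ARC c_max: {stats['c_max'] / 2**30:.1f} GiB")  # effective cap
```

If dom0 memory is tight, the cap can be lowered via the zfs_arc_max module parameter.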
-
dom0 will get more RAM if the server has a lot of RAM, and it's also possible to adjust the dom0 RAM value if needed. Please note that ZFS performance will not differ much if you have fast disks.
ZFS was designed to be fast even with "spinning rust", provided you gave the system plenty of RAM for ARC, an SSD for L2ARC, or an SSD for SLOG to act as a cache before the data hits the slower disks.
Since then a lot has changed when it comes to SSDs and NVMe, but also the pricing of those disks.
-
-
@bufanda RAM is mostly "required" if you go for things like online deduplication, as that needs to be handled in RAM.
I configured our servers to 12 GB memory for dom0 anyways, as it helps with the overall performance. 3 GB can be pretty tight if you have a bunch of VMs running.
IIRC, 8 GB is generally recommended nowadays (unless you have a rather small environment).
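If anyone wants to do the same, the dom0 memory bump on XCP-ng is done roughly along these lines (please double-check the exact syntax against the current XCP-ng docs; the 12 GiB value is just what I chose, and a reboot is needed afterwards):

```python
# Sketch only: set dom0 memory to 12 GiB on an XCP-ng host.
# Verify the exact syntax against the XCP-ng documentation first.
import subprocess

subprocess.run(
    [
        "/opt/xensource/libexec/xen-cmdline",
        "--set-xen", "dom0_mem=12288M,max:12288M",
    ],
    check=True,
)
# A host reboot is required for the new dom0 memory size to take effect.
```
-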
@cg said in First SMAPIv3 driver is available in preview:
@bufanda RAM is mostly "required" if you go for things like online deduplication, as that needs to be handled in RAM.
I configured our servers to 12 GB memory for dom0 anyways, as it helps with the overall performance. 3 GB can be pretty tight if you have a bunch of VMs running.
IIRC, 8 GB is generally recommended nowadays (unless you have a rather small environment).
Yeah, totally, we also do 8-16GB on our dom0s if they run a lot of VMs.
-
@nikade Yeah, for my small homelab I don't need as much, but I can see it being useful in an enterprise environment, yes.
-
-
Is there an ETA for a fully functional deployment (on local storage) with differential backup, live migration, statistics and so on?
Perhaps with the 8.3 stable release?
I'm interested mainly because of ZFS's bit rot detection.
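(For anyone wondering what I mean by bit rot detection: ZFS checksums every block, and a periodic scrub walks the pool and reports or repairs anything that no longer matches. Something along these lines, with a made-up pool name:)

```python
# Sketch: start a scrub and check the pool health afterwards.
# "tank" is a placeholder pool name.
import subprocess

subprocess.run(["zpool", "scrub", "tank"], check=True)
# Later, check whether any checksum errors were found or repaired:
subprocess.run(["zpool", "status", "-v", "tank"], check=True)
```
-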
Hi,
The 8.3 release is eating a lot of resources, so it's rather the opposite: once it's out, that will leave more time to move forward on SMAPIv3.