First SMAPIv3 driver is available in preview
-
@bufanda said in First SMAPIv3 driver is available in preview:
Hi,
I see this driver is ZFS-only at the moment. I have a question regarding ZFS, though: I am not really familiar with it and only started reading into it recently (still an ext4 enjoyer). From what I gather, ZFS can be pretty memory hungry, but how does that fit with a dom0 that has only 3 GB of RAM or so, or with a hypervisor in general, where memory should primarily go to the guest OSes? Are there any drawbacks to using ZFS? Will it perform poorly if it doesn't get enough RAM for cache operations, or is that more of a concern for NAS boxes that use ZFS to serve shares? Maybe an expert on the matter can enlighten me.
Thanks, and I will see if I can free up a drive in my test pool to test the driver.
-
You are correct: ZFS can be memory hungry if you give it a lot of RAM, since it will cache read/write data in the ARC. How much this matters depends on what disks you have; for example, with datacenter SSDs or even NVMe drives, performance will not be a problem even with a small amount of RAM.
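If ARC memory use is a concern on a RAM-constrained dom0, the ARC can be capped with the standard OpenZFS `zfs_arc_max` module parameter. A minimal sketch; the 2 GiB value is purely illustrative, not a recommendation:

```shell
# Cap the ZFS ARC at 2 GiB (example value only, pick what fits your dom0).
arc_max_bytes=$((2 * 1024 * 1024 * 1024))

# Print the modprobe option line; place it in /etc/modprobe.d/zfs.conf
# to make the cap persistent across reboots:
echo "options zfs zfs_arc_max=$arc_max_bytes"

# At runtime the same value can be written to
# /sys/module/zfs/parameters/zfs_arc_max (needs root and the zfs module loaded).
```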
-
dom0 will get more RAM if the server has a lot of RAM, and there is also the possibility to adjust the dom0 RAM value if needed. Please note that ZFS performance will not differ a lot if you have fast disks.
ZFS was designed to be fast even with "spinning rust", provided you gave the system a lot of RAM for the ARC, an SSD for L2ARC, or an SSD for the SLOG to act as a cache before the data hits the slower disks.
Since then a lot has changed with SSDs and NVMe, but also with the pricing of those disks.
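For reference, on XCP-ng the dom0 memory allocation can be raised with the bundled `xen-cmdline` helper. The 8 GiB value below is just an example, and a host reboot is required for the change to take effect:

```shell
# Give dom0 a fixed 8 GiB allocation (example value only).
/opt/xensource/libexec/xen-cmdline --set-xen dom0_mem=8192M,max:8192M

# Verify the resulting Xen boot parameter:
/opt/xensource/libexec/xen-cmdline --get-xen dom0_mem
```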
-
@bufanda Mostly, RAM is "required" if you go for things like online deduplication, as the dedup table needs to be held in RAM.
I configured our servers with 12 GB of memory for dom0 anyway, as it helps with overall performance. 3 GB can be pretty tight if you have a bunch of VMs running.
IIRC, around 8 GB is generally recommended nowadays (unless you have a rather small environment).
-
@cg said in First SMAPIv3 driver is available in preview:
@bufanda Mostly, RAM is "required" if you go for things like online deduplication, as the dedup table needs to be held in RAM.
I configured our servers with 12 GB of memory for dom0 anyway, as it helps with overall performance. 3 GB can be pretty tight if you have a bunch of VMs running.
IIRC, around 8 GB is generally recommended nowadays (unless you have a rather small environment).
Yeah, totally. We also give 8-16 GB to our dom0 hosts if they run a lot of VMs.
-
@nikade Yeah, for my small homelab I don't need as much, but I can see it being useful in an enterprise environment, yes.
-
Is there an ETA for a fully functional deployment (on local storage) with differential backup, live migration, statistics and so on?
Perhaps with the 8.3 stable release?
I'm interested mainly because of ZFS's bit-rot detection.
-
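For context, ZFS verifies checksums on every read and during scrubs, and with redundancy it can also self-heal corrupted blocks. A minimal example of triggering and checking a scrub, assuming a pool named `tank` (placeholder name):

```shell
# Start a scrub: ZFS re-reads all data in the pool and verifies checksums,
# repairing silently corrupted blocks where redundancy allows it.
zpool scrub tank

# Check progress and any checksum errors found (CKSUM column):
zpool status -v tank
```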
Hi,
The 8.3 release is eating a lot of resources, so it's the opposite: once it's out, we'll have more time to move forward on SMAPIv3.
-
@Paolo If it's only for that, any HW RAID with dual parity (DP) should do the job (in case you don't go fully for SW RAID).
-
@cg Sorry if this is off-topic, but do you know of any HW RAID controllers which actually do this? Storing checksums or whatever?
-
A question or point about the SMAPIv3 ZFS driver. I had a power failure in my testing lab last week and I noticed that the VMs with SMAPIv3 disks attached did not come back up automatically, despite being set to power on automatically. Perhaps this is related to the ZFS driver? My first thought was that there might be a race condition between the VM start and the zpool import at boot, but I don't know how to verify that.
I just figured I would report this in case it's useful to anyone at Vates.
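One way to check whether the boot-time race is real is to compare the timestamps of the pool import and the toolstack start in the boot log. A rough sketch, assuming a systemd-based dom0 (XCP-ng 8.x); the unit names are assumptions and may differ on your host:

```shell
# When did the ZFS pool import run during the last boot?
# (zfs-import-cache / zfs-import-scan are the usual OpenZFS unit names.)
journalctl -b -u zfs-import-cache -u zfs-import-scan -o short-monotonic | head

# When did the Xen toolstack (which drives VM autostart) come up?
journalctl -b -u xapi -o short-monotonic | head

# If xapi starts before the import finishes, the SR backing the
# VM disks is not yet available when autostart fires.
```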
-
@hsnyder AFAIK every (even not-so-)modern RAID controller can do a 'verification read', 'disk scrubbing' or whatever they call it. It won't fix bit rot with single parity, but it can fix a single failure and detect dual failures.
That's why the only option for our SAN is RAID6, or rather any dual-parity algorithm.
-
@olivierlambert said in First SMAPIv3 driver is available in preview:
Hi,
8.3 release is eating a lot of resources, so that's the opposite: when it's out, this will leave more time to move forward on SMAPIv3
Lots of work means lots of changes, which means I'm excited about it. It also sounds more like a 9.0, if that much work is going into it.
-
@cg said in First SMAPIv3 driver is available in preview:
@hsnyder AFAIK every (even not-so-)modern RAID controller can do a 'verification read', 'disk scrubbing' or whatever they call it. It won't fix bit rot with single parity, but it can fix a single failure and detect dual failures.
That's why the only option for our SAN is RAID6, or rather any dual-parity algorithm.
Totally agree with you on the RAID6 / dual parity, that's our standard as well.
-
@nikade It's also about RAS: the risk of a 2nd disk failing during a rebuild is a lot higher than usual.
Our B2D2T server needs about 24 hours for that.
-
@cg How big are your disks?
Our primary SAN has NVMe SSDs, so a rebuild takes just a couple of hours, but like you said, a 2nd failure during a rebuild would be a disaster, so it isn't worth the risk.
Our secondary boxes are ZFS, which need close to 12h to rebuild, so the extra parity is good for avoiding biting all your nails off.
-
@nikade I found out the HPE MSA2060 has an all-flash bundle option, which is surprisingly cheap, so our SAN has 3.84 TB SAS SSDs; they'll be done within a few hours, but our backup server has a RAID6 with 10 TB HDDs.
-
@cg Those HDDs will take their fair share of time to rebuild.
It's always stressful watching how far along it is while crossing your fingers that another drive won't pop during the process.