[DEPRECATED] SMAPIv3 - Feedback & Bug reports
-
The differences will depend on the "when". In the end, the goal is to keep the flexibility of v1 (i.e. live storage motion between any kind of storage repository, CBT/diff capabilities, etc.) without any of its current limitations (no 2 TiB limit, potential new/other datapaths).
To me, the best thing about SMAPIv3 is its flexibility (which is a challenge to deal with: it has to appear seamless to the user regardless of the storage backend). But this flexibility could offer fast paths and offloading to devices.
E.g. right now with SMAPIv1, it's ultra flexible, but the entire concept is managed by the dom0 (we have to VHD everything, coalesce chains, garbage-collect snapshots, etc.).
With SMAPIv3, you can do the same, but you can also, in some cases, delegate to specific hardware. Let's take an example: a Pure Storage flash array. Those arrays have an API, so you could have a specific driver talking to the array to do snapshots and so on. So no more coalesce to deal with on XCP-ng; the storage will do it for you. That's just one example, but it gives us a degree of freedom to provide many different capabilities, some of them more "native" to a given storage tech.
The downside is that it's up to us to develop a way that works universally when you want to move from one storage to another, and to export your data too.
-
Sounds like there will be a lot to think about!
I'm just happy this is finally happening. It will be a huge improvement for everyone, including new users who haven't had to struggle with the coalesce process in the past.
-
@olivierlambert as someone who only uses zfs for vm storage on all of their xcp-ng hosts, this makes me very happy.
-
@swivvle said in SMAPIv3 - Feedback & Bug reports:
@olivierlambert as someone who only uses zfs for vm storage on all of their xcp-ng hosts, this makes me very happy.
Yea I totally agree, this sounds promising.
We've used NexentaStor and FreeNAS/TrueNAS as backends for our XCP-ng hosts for many, many years, and even arrays with spinning rust backed by RAM and SSD cache are pretty fast. Being able to snapshot the zvol regularly is also a huge plus for ransomware protection and disaster recovery, since those snapshots can be sent to another machine.
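For reference, the regular zvol snapshot and off-site replication workflow mentioned above can be sketched with standard ZFS commands. The pool, dataset, snapshot, and host names below are hypothetical placeholders, not anything from this thread:

```shell
# Take a point-in-time snapshot of the zvol backing the VM disks
# ("tank/vm-disks" is a placeholder dataset name).
zfs snapshot tank/vm-disks@daily-2024-06-01

# Replicate it to another machine for disaster recovery.
# An incremental send (-i) only transfers the blocks changed since
# the previous snapshot, so regular runs stay cheap.
zfs send -i tank/vm-disks@daily-2024-05-31 tank/vm-disks@daily-2024-06-01 | \
    ssh backup-host zfs receive backup/vm-disks
```

Snapshots on the receiving side are read-only copies, which is what makes this useful against ransomware: an attacker on the source host can't rewrite history on the backup machine.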
-
@olivierlambert @nikade I'm all local ZFS on the hypervisor, no external storage. With what is available now, I prefer VHDs sitting on a ZFS dataset on XCP-ng over LVM on top of a zvol, for sure. Super stable and easy to manage, add more space with `zpool add`, etc. I wouldn't use the new v3 without snapshot and backup options, but no migration would be OK for my use case as long as a VM copy still works.
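Growing the pool the way the post above describes is a one-liner; the pool name and device paths here are hypothetical:

```shell
# Extend the existing pool "tank" with a new mirrored vdev.
# The added capacity is immediately available to every dataset
# (and therefore every VHD) in the pool, no resize step needed.
zpool add tank mirror /dev/sdc /dev/sdd
```

Note that `zpool add` is permanent: a top-level vdev cannot easily be removed afterwards, so it's worth double-checking the device names before running it.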
-
Snapshots will work out of the box: it's a raw datapath on top of zvols (so no VHD size limitation). Backup will come after that, and migration will come last.
-
I read through the intro article but I'm still a bit unclear on how this will work. Right now I have local ext storage and nfs zfs storage. What would I have to do in order to use SMAPIv3?
-
Wait for a blog post coming next week
-
@olivierlambert said in SMAPIv3 - Feedback & Bug reports:
Wait for a blog post coming next week
You have my attention
-
Locking this thread since the feedback on our "new new" SMAPIv3 driver will be here: https://xcp-ng.org/forum/topic/8859/first-smapiv3-driver-is-available-in-preview