[DEPRECATED] SMAPIv3 - Feedback & Bug reports
-
It's a bit more subtle than this. SMAPIv3 decouples the volume from the datapath. It means you can store your volumes however you like (with a regular SMAPIv1-style driver, for example) BUT also choose the datapath (in v1, tapdisk with VHD or raw is the only choice).
With v3, you could use any other datapath, like qemu-dp and other future solutions.
Since VHD isn't mandatory for snapshots and the like (as long as you implement a way to do it yourself), it allows you to delegate some operations to the storage itself.
Here is an example with ZFS-ng driver: https://xcp-ng.org/blog/2022/09/23/zfs-ng-an-intro-on-smapiv3/
In short, we use tapdisk in raw mode, and let ZFS handle the snapshots and so on.
The first driver will be available in XCP-ng 8.3, with some features still missing (no migration path from v1 to v3, no storage migration and no backup). We are prioritizing a way for XO to back up this first implementation. That way, you could back up a SMAPIv1-based VM disk and restore it on SMAPIv3 ZFS-ng, providing a "cold/warm" migration to it.
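The ZFS-ng approach described above can be sketched with plain zfs commands. This is only an illustration of what the driver delegates to ZFS; pool and volume names are hypothetical, and the actual driver automates all of this:

```shell
# Hypothetical pool/volume names, for illustration only.
POOL=tank
VOL=vm-0-disk-0

# Create a sparse zvol that tapdisk can attach in raw mode
# (no VHD format involved, so no 2 TiB VHD limit).
zfs create -s -V 100G "$POOL/$VOL"

# Snapshots are delegated to ZFS instead of being managed
# as VHD chains in the dom0.
zfs snapshot "$POOL/$VOL@before-update"

# Restoring or cloning is a ZFS operation, not a coalesce job.
zfs clone "$POOL/$VOL@before-update" "$POOL/$VOL-restored"
```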
-
@olivierlambert said in SMAPIv3 - Feedback & Bug reports:
It's a bit more subtle than this. SMAPIv3 decouples the volume from the datapath. It means you can store your volumes however you like (with a regular SMAPIv1-style driver, for example) BUT also choose the datapath (in v1, tapdisk with VHD or raw is the only choice).
With v3, you could use any other datapath, like qemu-dp and other future solutions.
Since VHD isn't mandatory for snapshots and the like (as long as you implement a way to do it yourself), it allows you to delegate some operations to the storage itself.
Here is an example with ZFS-ng driver: https://xcp-ng.org/blog/2022/09/23/zfs-ng-an-intro-on-smapiv3/
In short, we use tapdisk in raw mode, and let ZFS handle the snapshots and so on.
The first driver will be available in XCP-ng 8.3, with some features still missing (no migration path from v1 to v3, no storage migration and no backup). We are prioritizing a way for XO to back up this first implementation. That way, you could back up a SMAPIv1-based VM disk and restore it on SMAPIv3 ZFS-ng, providing a "cold/warm" migration to it.
Alright!
So a clean installation of 8.3 with SMAPIv3 would probably be the best way to test it once 8.3 is released.
If you were to summarize the 3 biggest differences (positive or negative) between SMAPIv1 and v3, what would those be? I'm thinking one would be this migration limitation, but other than that?
-
The differences will depend on the "when". In the end, the goal is to keep the flexibility of v1 (i.e., live storage motion between any kind of storage repository, CBT/diff capabilities, etc.) without any of its current limitations (no 2 TiB limit, potential new/other datapaths).
To me, the best thing about SMAPIv3 is the flexibility (which is a challenge to deal with: it has to appear seamless to you as a user regardless of the storage). But this flexibility could offer fast paths and offloading to devices.
E.g., right now with SMAPIv1, it's ultra flexible, but the entire concept is managed by the dom0 (we have to VHD everything, coalesce chains, garbage collect snapshots, etc.).
With SMAPIv3, you can do the same, but also, in some cases, delegate to specific hardware. Let's take an example: a Pure Storage Flash array. Those things have an API, so you could have a specific driver talking to the array to do snapshots and so on. So no more coalesce to deal with on XCP-ng: the storage will do it for you. That's just an example, but it will give a degree of freedom to provide many different capabilities, some more "native" to a given storage tech.
The downside is that it's up to us to develop something that works universally when you want to go from one storage to another, and to export your data too.
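The "driver talks to the array" idea could look roughly like this. Everything here is hypothetical (endpoint, path, payload): real arrays each have their own REST API, and this only illustrates the shape of an API-delegated snapshot:

```shell
# Hypothetical REST call: ask the array itself to snapshot a volume,
# instead of the dom0 building and later coalescing a VHD chain.
ARRAY=https://array.example.lan/api/v1   # hypothetical endpoint
TOKEN=...                                # credential, elided

curl -s -X POST "$ARRAY/volumes/vm-0-disk-0/snapshots" \
  -H "Authorization: Bearer $TOKEN" \
  -d '{"name": "before-update"}'
```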
-
Sounds like there will be a lot to think about!
I'm just happy this is finally happening. It will be a huge improvement for everyone, including new users who haven't had to struggle with the coalesce train in the past.
-
@olivierlambert as someone who only uses zfs for vm storage on all of their xcp-ng hosts, this makes me very happy.
-
@swivvle said in SMAPIv3 - Feedback & Bug reports:
@olivierlambert as someone who only uses zfs for vm storage on all of their xcp-ng hosts, this makes me very happy.
Yeah, I totally agree, this sounds promising.
We've used NexentaStor and FreeNAS/TrueNAS as backends for our XCP-ng hosts for many, many years, and even arrays of spinning rust backed by RAM and SSD caches are pretty fast. Being able to snapshot the zvol regularly is also a huge plus in case of a ransomware attack, and for disaster recovery, since those snapshots can be sent to another machine.
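The off-site replication mentioned here is plain zfs send/receive. A minimal sketch, with hypothetical pool, volume, and host names:

```shell
# Snapshot the zvol backing the VM disk (names are hypothetical).
SNAP="tank/vm-0-disk-0@$(date +%Y%m%d-%H%M)"
zfs snapshot "$SNAP"

# Ship the snapshot to another machine for ransomware/DR protection.
# -u on the receive side keeps the replica unmounted.
zfs send "$SNAP" | ssh backup-host zfs receive -u backup/vm-0-disk-0

# Later runs can use incremental sends (zfs send -i old new)
# to only transfer the delta since the previous snapshot.
```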
-
@olivierlambert @nikade I'm all local ZFS on the hypervisor, no external storage. With what is available now, I prefer VHDs sitting on a ZFS dataset with XCP-ng versus LVM on top of a zvol, for sure. Super stable and easy to manage, add more space with zpool add, etc. I wouldn't use the new v3 without snapshot and backup options, but going without migration would be OK for my use case as long as a VM copy still works.
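For the "add more space with zpool add" part, a minimal sketch (pool and device names are hypothetical; note that adding a vdev grows the pool permanently and cannot be undone):

```shell
# Check current capacity, then extend the pool with another vdev.
zpool list tank
zpool add tank mirror /dev/sdc /dev/sdd   # hypothetical disks

# All datasets and zvols in the pool immediately see the extra space.
zpool list tank
```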
-
Snapshots will work out of the box: it's a raw datapath on top of zvols (so no VHD size limitation). Backup will come after that, and migration will be last.
-
I read through the intro article but I'm still a bit unclear on how this will work. Right now I have local ext storage and nfs zfs storage. What would I have to do in order to use SMAPIv3?
-
Wait for a blog post coming next week
-
@olivierlambert said in SMAPIv3 - Feedback & Bug reports:
Wait for a blog post coming next week
You have my attention
-
Locking this thread since the feedback on our "new new" SMAPIv3 driver will be here: https://xcp-ng.org/forum/topic/8859/first-smapiv3-driver-is-available-in-preview