[DEPRECATED] SMAPIv3 - Feedback & Bug reports
-
A big blog post is coming soon. I need to check with @matiasvl whether TRIM is passed through the raw tapdisk datapath.
-
Please let us know when we can test that new zfs-ng!
-
SMAPIv3 looks very exciting. Unfortunately, at the bottom there's still tapdisk, and that has one very serious limitation: no IO/bandwidth limits ;(
-
It's not obvious (nor 100% certain) that tapdisk is the bottleneck.
-
@olivierlambert
Hmm, if we created a volume plugin that combined Linux cgroups (IOPS/bandwidth limits) with a filesystem (a ZFS block device, i.e. a zvol), that would be one possible workaround, no matter what's at the bottom.
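For what it's worth, here's a minimal sketch of that idea, assuming cgroup v2 and entirely made-up device/cgroup names (this is not an existing SMAPIv3 or XCP-ng mechanism):

```python
# Illustrative sketch only: throttle a zvol-backed disk with the cgroup v2
# "io.max" controller. All paths, names and limits here are made up.
import os

ZVOL = "/dev/zvol/tank/vm-disk-1"      # hypothetical zvol device
CGROUP = "/sys/fs/cgroup/vm-disk-1"    # hypothetical cgroup v2 group

# io.max is keyed by the device's major:minor numbers.
st = os.stat(ZVOL)
major, minor = os.major(st.st_rdev), os.minor(st.st_rdev)

# Requires the "io" controller enabled in the parent's cgroup.subtree_control.
os.makedirs(CGROUP, exist_ok=True)

# Cap writes at 100 MiB/s and 1000 IOPS for this device.
with open(os.path.join(CGROUP, "io.max"), "w") as f:
    f.write(f"{major}:{minor} wbps={100 * 1024 * 1024} wiops=1000\n")

# The I/O-issuing process (e.g. the datapath process serving this disk)
# would then be added to the group via CGROUP/cgroup.procs.
```
-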
I think you are oversimplifying how storage works in Xen. It's not KVM.
See https://xcp-ng.org/blog/2022/07/27/grant-table-in-xen/ for more details.
-
Sorry to resurrect an old topic; I wasn't sure if updates were being made in a new topic. I wanted to ask how the implementation is going, and whether there is code (or will be code) to support SMAPIv3 in Xen Orchestra (xoce/XOA), even as a development version? If there's a newer topic that I missed, please point me in that direction! Thanks!
-
Nothing new right now (yet). For now, migrating stuff from Python 2 to 3 is taking its toll…
-
@olivierlambert sorry for piggybacking on an old thread but I thought it would be best to keep it together.
We are (like many others) looking for alternatives to our VMware platform. We're already using XCP-ng and feel that it could be a good alternative once XOSTOR is ready.
One thing that we and others are struggling with (for example, I read this in pretty much every thread on Reddit) is the 2 TiB VDI limit. Many on-prem enterprises run file servers or large SQL servers which require a big VDI (which is not ideal, I know).
Is this being resolved in SMAPIv3?
-
Short answer: yes. Our goal is to have a local SMAPIv3 SR available in 8.3 in the "short term", to demonstrate what's already doable with it. It will likely be ZFS-based behind the scenes, allowing any VDI size while enjoying ZFS perks (compression).
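(To illustrate why the VDI size ceiling goes away: VHD caps the virtual size at roughly 2 TiB, while a raw zvol doesn't. A rough sketch of the kind of command such a driver might run under the hood, with made-up pool/volume names rather than actual ZFS-ng code:)

```python
# Illustrative only: create a 4 TiB sparse zvol with compression enabled,
# the kind of volume a raw (non-VHD) datapath can sit on top of.
import subprocess

subprocess.run(
    ["zfs", "create",
     "-s",                     # sparse (thin-provisioned) volume
     "-V", "4T",               # 4 TiB virtual size: beyond the VHD ~2 TiB cap
     "-o", "compression=lz4",  # one of the "ZFS perks" mentioned above
     "tank/vdi-example"],      # made-up pool/volume name
    check=True,
)
# The resulting block device appears at /dev/zvol/tank/vdi-example.
```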
-
@olivierlambert said in SMAPIv3 - Feedback & Bug reports:
Short answer: yes. Our goal is to have a local SMAPIv3 SR available in 8.3 in the "short term", to demonstrate what's already doable with it. It will likely be ZFS-based behind the scenes, allowing any VDI size while enjoying ZFS perks (compression).
This sounds great!
So it is really that close to becoming a reality?
Can't wait for it to be released; this will probably be a huge performance increase as well.
-
It's a bit more subtle than that. SMAPIv3 decouples the volume from the data path. It means you can store your volumes any way you like (with a regular SMAPIv1-style driver, for example) BUT also choose the datapath (tapdisk with VHD or raw is the only choice in v1).
With v3, you could use any other datapath, like qemu-dp and other future solutions.
Since VHD isn't mandatory for snapshots and the like (as long as you implement a way to do it yourself), you can delegate some operations to the storage itself.
Here is an example with ZFS-ng driver: https://xcp-ng.org/blog/2022/09/23/zfs-ng-an-intro-on-smapiv3/
In short, we use tapdisk in raw mode, and let ZFS handle the snapshots and so on.
The first driver will be available in XCP-ng 8.3, though some features are still missing (no migration path from v1 to v3, no storage migration, and no backup). We are prioritizing a way for XO to back up this first implementation. That way, you could back up a SMAPIv1-based VM disk and restore it on SMAPIv3 ZFS-ng, providing a "cold/warm" migration to it.
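(To make the "delegate to the storage" idea concrete, here's a rough sketch of what a snapshot operation can boil down to once ZFS owns the snapshots. Function and dataset names are illustrative, not the actual ZFS-ng implementation:)

```python
# Rough sketch: with a raw datapath, "snapshot" is no longer a VHD chain
# operation in dom0, it's a single ZFS command. Names are illustrative.
import subprocess

def snapshot_volume(dataset: str, snap_name: str) -> str:
    """Delegate a VDI snapshot to ZFS instead of building a VHD chain."""
    snap = f"{dataset}@{snap_name}"
    subprocess.run(["zfs", "snapshot", snap], check=True)
    return snap

def clone_volume(snap: str, new_dataset: str) -> None:
    """A writable copy of a snapshot is a ZFS clone, not a new VHD child."""
    subprocess.run(["zfs", "clone", snap, new_dataset], check=True)

# Example: snapshot_volume("tank/vdi-example", "before-upgrade")
```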
-
@olivierlambert said in SMAPIv3 - Feedback & Bug reports:
It's a bit more subtle than that. SMAPIv3 decouples the volume from the data path. It means you can store your volumes any way you like (with a regular SMAPIv1-style driver, for example) BUT also choose the datapath (tapdisk with VHD or raw is the only choice in v1).
With v3, you could use any other datapath, like qemu-dp and other future solutions.
Since VHD isn't mandatory for snapshots and the like (as long as you implement a way to do it yourself), you can delegate some operations to the storage itself.
Here is an example with ZFS-ng driver: https://xcp-ng.org/blog/2022/09/23/zfs-ng-an-intro-on-smapiv3/
In short, we use tapdisk in raw mode, and let ZFS handle the snapshots and so on.
The first driver will be available in XCP-ng 8.3, though some features are still missing (no migration path from v1 to v3, no storage migration, and no backup). We are prioritizing a way for XO to back up this first implementation. That way, you could back up a SMAPIv1-based VM disk and restore it on SMAPIv3 ZFS-ng, providing a "cold/warm" migration to it.
Alright!
So a clean installation of 8.3 with SMAPIv3 would probably be the best way to test it once 8.3 is released.
If you were to summarize the 3 biggest differences (positive or negative) between SMAPIv1 and v3, what would they be? I'm thinking one would be this migration limitation, but other than that?
-
The differences will depend on the "when". In the end, the goal is to keep the flexibility of v1 (i.e. live storage motion between any kind of storage repository, CBT/diff capabilities, etc.) without any of its current limitations (no 2 TiB limit, potential new/other datapaths).
To me, the best thing about SMAPIv3 is the flexibility (which is a challenge: it has to appear seamless to the user regardless of the storage underneath). But this flexibility could offer fast paths and offloading to devices.
E.g. right now, SMAPIv1 is ultra flexible, but the entire concept is managed by the dom0 (we have to VHD everything, coalesce chains, garbage collect snapshots, etc.).
With SMAPIv3, you can do the same, but you can also, in some cases, delegate to specific hardware. Let's take an example: a Pure Storage flash array. Those arrays have an API, so you could have a specific driver talking to the array to do snapshots and so on. So no more coalesce to deal with on XCP-ng; the storage does it for you. That's just an example, but it gives a degree of freedom to provide many different capabilities, some more "native" to a given storage tech (see the hypothetical sketch below).
The downside is that it's up to us to develop a way that works universally when you want to move from one storage to another, and to export your data too.
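(As a thought experiment only, here's the kind of driver shape this enables. Everything below, including the class, endpoint and payload, is invented for illustration; a real array's API will differ.)

```python
# Purely hypothetical sketch of the "offload to the array" idea: the volume
# plugin translates snapshot requests into calls to the array's own API,
# so dom0 never coalesces anything. Endpoint and payload are invented.
import json
import urllib.request

class ArrayBackedVolumePlugin:
    def __init__(self, array_url: str, token: str):
        self.array_url = array_url
        self.token = token

    def snapshot(self, volume_name: str, suffix: str) -> None:
        # The array creates the snapshot natively; no VHD chain and
        # no coalesce job in dom0 afterwards.
        req = urllib.request.Request(
            f"{self.array_url}/api/volume-snapshots",  # hypothetical endpoint
            data=json.dumps({"source": volume_name, "suffix": suffix}).encode(),
            headers={"Authorization": f"Bearer {self.token}",
                     "Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            resp.read()
```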
-
Sounds like there will be a lot to think about!
I'm just happy this is finally happening; it will be a huge improvement for everyone, including new users who haven't had to struggle with the coalesce process in the past.
-
@olivierlambert As someone who only uses ZFS for VM storage on all of their XCP-ng hosts, this makes me very happy.
-
@swivvle said in SMAPIv3 - Feedback & Bug reports:
@olivierlambert As someone who only uses ZFS for VM storage on all of their XCP-ng hosts, this makes me very happy.
Yea I totally agree, this sounds promising.
We've used NexentaStor and FreeNAS/TrueNAS as backends for our XCP-ng hosts for many, many years, and even arrays with spinning rust backed by RAM and SSD caches are pretty fast. Being able to snapshot the zvol regularly is also a huge plus in case of a ransomware attack and for disaster recovery, since those snapshots can be sent to another machine (see the sketch below).
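(For reference, a minimal sketch of that replication flow, assuming SSH access to the target machine; dataset, snapshot and host names are made up:)

```python
# Minimal sketch: stream a zvol snapshot to another machine with
# zfs send/recv over ssh. All names are made up for illustration.
import subprocess

SNAP = "tank/vdi-example@daily-2024-01-01"

send = subprocess.Popen(["zfs", "send", SNAP], stdout=subprocess.PIPE)
subprocess.run(
    ["ssh", "backup-host", "zfs", "recv", "-F", "backup/vdi-example"],
    stdin=send.stdout,
    check=True,
)
send.stdout.close()
if send.wait() != 0:
    raise RuntimeError("zfs send failed")
```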
-
@olivierlambert @nikade I'm all local ZFS on the hypervisor, no external storage. With what is available now, I definitely prefer VHDs sitting on a ZFS dataset with XCP-ng versus LVM on top of a zvol. Super stable and easy to manage: add more space with zpool add, etc. I wouldn't use the new v3 without a snapshot and backup option, but no migration would be OK for my use case, as long as a VM copy would still work.
-
Snapshots will work out of the box; it's a raw datapath on top of zvols (so no VHD size limitation). Backup will come after that, and migration will be last.
-
I read through the intro article, but I'm still a bit unclear on how this will work. Right now I have local ext storage and ZFS storage shared over NFS. What would I have to do in order to use SMAPIv3?