First SMAPIv3 driver is available in preview
-
@olivierlambert
I meant the source for this package: xcp-ng-xapi-storage-volume-zfsvol, so that we can see how this new driver is implemented.
-
That's inside the repo I posted
-
Has anyone tried a backup using the new driver? I created a new test pool with one of my previous hosts and made SMAPIv3 ZFS storage. I can create a VM just fine, but when I try and add it to my existing backup job, it keeps erroring out with "stream has ended with not enough data (actual: 485, expected: 512)"
Is this expected?
-
You can only do full backup for now, not incremental.
-
@olivierlambert Since it's the first backup, it should be full, correct? Does Delta backup not work at all even if force full is enabled?
-
I mean the backup feature: it only works with XVA underneath (i.e. the full backup feature, which does a full backup every time).
-
I've started using the SMAPIv3 driver too. It's working well so far. I'm keeping my VM boot disks on `md` raid1, and using a zfs `mirror` via SMAPIv3 for large data disks.
I have a question about backups... Is it safe to use `syncoid` to directly synchronize the ZFS volumes to an external backup? `syncoid` creates a snapshot at the start of the send process. But I also have rolling snapshots configured through Xen-Orchestra. Will the `syncoid` snapshot mess up Xen-Orchestra?
If this isn't safe or isn't a good idea, I'll just use rsync to back up the filesystem contents inside the VM that the volume is mounted to...
-
On my side, I have no idea, because I never used `syncoid`. Have you asked their dev about this?
-
If I understand correctly, I would rephrase the question this way:
Does Xen-Orchestra name the snapshots in a way which is unique to Xen-Orchestra, and does XOA know which snapshots belong to it, or does it use the latest snapshots no matter how they are named?
@hsnyder: I don't think you can simply use ZFS snapshots without Xen snapshots, because I don't think they will be crash-consistent.
If syncoid is similar to zrepl, you have to check that it doesn't prune the ZFS snapshots created by XOA.
-
Question for @yann probably then
-
@rfx77 Thanks for clarifying my question, your reading of it was correct.
I've just realized that `syncoid` has an option, `--no-sync-snap`, which I think avoids creating a dedicated snapshot for the purpose of the transfer, and instead just transfers over the pre-existing snapshots. If that's indeed what it does, then this solves all potential problems, because the existing snapshots are taken by Xen-Orchestra. I'll do a test to confirm this is indeed the behavior and then will reply again.
-
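As a sketch of the approach described above, a replication command relying only on pre-existing snapshots might look like this (the dataset, path, and host names here are purely hypothetical; check `syncoid`'s documentation before relying on it):

```shell
# Sketch only: replicate a ZFS volume to a backup host using only the
# pre-existing snapshots (e.g. those created by Xen-Orchestra), without
# letting syncoid create its own sync snapshot at the start of the send.
# "tank/xcp-sr/vol0" and "backuphost" are hypothetical names.
syncoid --no-sync-snap tank/xcp-sr/vol0 root@backuphost:backup/xcp-sr/vol0
```
-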
If I understand the question correctly, the requirement is that the snapshot naming conventions used by ZFS-vol and by `syncoid` don't collide.
What convention is `syncoid` using? The current ZFS-vol driver just assigns a unique integer name to each volume/snapshot, and there would be an error if it attempted to create a snapshot with an integer name that another tool had already created on its own.
-
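To check for such collisions in practice, one could list the existing snapshot names on the SR's dataset before pointing another tool at it (a sketch; "tank/xcp-sr" is a hypothetical dataset name):

```shell
# Sketch only: list the snapshots present on a (hypothetical) SR dataset,
# sorted by creation time, to see the naming convention in use before
# letting syncoid or another tool create snapshots of its own.
zfs list -t snapshot -o name,creation -s creation tank/xcp-sr
```
-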
@hsnyder Hi!
I would let syncoid do a snapshot, check the name, and see if there could be any potential naming conflict. If that's not the case, I would keep it as it was.
You can also check whether syncoid keeps the snapshots on the target.
Anyhow, I would recommend zrepl for your tasks. It's the tool used by nearly everyone who does ZFS replication. We are using it extensively for many hub-spoke sync architectures.
-
@rfx77 thanks for the recommendations. I looked into zrepl and it seems like a good solution as well. However, since I'm using this new zfs beta driver in production, I've decided I'm going to do the backup at the VM filesystem level, i.e. with rsync, instead of at the ZFS level. I figure that strategy is slightly safer in the event of bugs with the driver. I know that's debatable - it would depend on the bug, but this approach feels safer to me.
-
Hello @olivierlambert ,
I am joining this topic as I have a few questions about SMAPIv3:
-
Will it allow provisioning of VDIs larger than 2TB?
-
Will it enable thin provisioning on iSCSI SRs?
Currently, the blockers I encounter are related to my iSCSI storage. This is a major differentiating factor compared to other vendors, and resolving these blockers would significantly increase your market share.
Thanks !
-
@still_at_work said in First SMAPIv3 driver is available in preview:
Hello @olivierlambert ,
I am joining this topic as I have a few questions about SMAPIv3:
-
Will it allow provisioning of VDIs larger than 2TB?
-
Will it enable thin provisioning on iSCSI SRs?
Currently, the blockers I encounter are related to my iSCSI storage. This is a major differentiating factor compared to other vendors, and resolving these blockers would significantly increase your market share.
Thanks !
What blockers regarding iSCSI storage? Let me guess: thin provisioning and the 2TB VDI size limit.
-
@still_at_work said in First SMAPIv3 driver is available in preview:
Hello @olivierlambert ,
I am joining this topic as I have a few questions about SMAPIv3:
-
Will it allow provisioning of VDIs larger than 2TB?
-
Will it enable thin provisioning on iSCSI SRs?
Currently, the blockers I encounter are related to my iSCSI storage. This is a major differentiating factor compared to other vendors, and resolving these blockers would significantly increase your market share.
Thanks !
@still_at_work The size limit of the VDI is due to the file format used for these, which is VHD (https://en.wikipedia.org/wiki/VHD_(file_format)). This format can't support more than 2TB; the issue is known and being dealt with. It will likely result in a change to, or addition of, a new VDI format, most likely qcow2, unless the software necessary for the VHDX format is fully open sourced and software is created for Xen that enables creating, reading, writing and using that format.
It's not a limitation of iSCSI as it also emerges with both NFS and SMB based connections.
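As an illustration of the format difference: qcow2 has no such 2TB ceiling. Assuming `qemu-img` is available, an image larger than 2TB can be created and inspected like this (the file name is arbitrary):

```shell
# Sketch only: create a 4 TiB thin-provisioned qcow2 image, a size the
# VHD format cannot represent (VHD tops out at roughly 2 TiB).
qemu-img create -f qcow2 big-disk.qcow2 4T
# Show the virtual size and format; the file on disk stays small until used.
qemu-img info big-disk.qcow2
```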
-
@john-c You're right, thanks for the clarification.
However, thin provisioning on iSCSI is a real blocker for me, and I'm sure I'm not alone.
Will SMAPIv3 enable thin provisioning on iSCSI SRs?
-
@john-c As well as FC. Basically all shared storage that is production-ready.
What are the up/downsides of qcow2 vs. VHDX?