Backups with qcow2 enabled
-
@acebmxer I have a new case of managing to force the "fell back to full" error...
I'll create a new topic for this. In the meantime, if you can, do a toolstack restart on your pool when no tasks are ongoing
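For reference, a minimal sketch of that restart on an XCP-ng host (assuming root SSH access; `xe-toolstack-restart` is the standard helper shipped with the host):

```shell
# Check that no tasks are still pending on the pool before restarting:
xe task-list params=name-label,status

# If nothing is running, restart the toolstack (XAPI and friends);
# running VMs are not affected, only the management stack restarts:
xe-toolstack-restart
```

Repeat on each host in the pool, and make sure any XO backup jobs are paused while you do it.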
your backups with NBD could be better (spoiler alert: iptables rules...)
-
Both full and delta passed...
Full - 2026-04-17T11_32_42.906Z - backup NG.txt
Delta - 2026-04-17T13_26_54.973Z - backup NG.txt
I just restarted the toolstack on both the pool master and the second host, re-edited the backup job to remove the "purge snapshot data" CBT option, and re-ran a delta backup, which still fell back to a full backup.
-
@acebmxer at the bottom of the pool's Advanced tab, is BACKUP NETWORK set to the NBD-enabled network accessible by both hosts and XOA?
-
That was already configured...


I think this is the issue....
The Network tab under the pool.

Will run another delta once the current one finishes.
Edit - update
Could a warning be shown if NBD is not enabled at the pool level? Or could the error be made clearer?
I enabled NBD at the pool level and ran another delta - 2026-04-17T14_40_39.535Z - backup NG.txt
I then re-enabled the purge snapshot option in the backup job and ran another delta. 2026-04-17T15_00_40.358Z - backup NG.txt
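For anyone hitting the same thing from the CLI instead of the XO UI, enabling NBD on a network is done by adding the `nbd` purpose to it (a sketch; `<network-uuid>` is whatever your backup network's UUID is):

```shell
# Find the UUID of the network you want to use for NBD backups:
xe network-list params=uuid,name-label

# Add the "nbd" purpose to that network (TLS-secured NBD connections):
xe network-param-add uuid=<network-uuid> param-name=purpose param-key=nbd

# Verify the purpose was set:
xe network-param-get uuid=<network-uuid> param-name=purpose
```

The network must be reachable by XOA and every host in the pool, which is exactly what the BACKUP NETWORK setting above is checking for.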


-
@acebmxer so, NBD it was...
Holy moly, you have some good network performance!
What kind of SR at the source? And what remote at the destination?
What about the PIFs? -
Just a little old TrueNAS box running on an AMD 5900X with bonded 10Gb NICs, and a Unifi Aggregation switch with 10Gb links to the hosts.
Storage is five 8TB Toshiba drives plus two 1TB NVMe drives, one for cache and one for the log vdev.
The backup device is a DS1819+ with four 12TB Seagate Exos drives and a 10Gb link.

-
@acebmxer NFS remotes on the DS1819+?
We have iSCSI SRs (25Gb Mellanox, 6 PIFs per host, to a 25Gb MSA 2062 dual-controller SAN).
Our remotes are iSCSI OS-mounted volumes on MSA SANs, presented as S3 (MinIO VMs),
using XO proxies to offload backups from XOA. We max out at 150/200 Mb/s during backups.

But we are on VHD VDIs, so I'm asking myself whether the added backup performance you're seeing could be due to the QCOW2 format on the source SR?
Will have to try VDIs on such an SR to see the difference. -
@Pilow Yes, NFS for both VM storage and backup storage.
All VMs are now on qcow2 except for the Windows VM, which was VHD. However, I just migrated it over to qcow2. The NICs in all systems are Intel 10Gb, either X520 or X540.
Edit - Sorry, I missed your question about VHD vs qcow2 performance. I would say it's about equal. I didn't run any benchmarks for comparison (probably should have), but I haven't seen any major slowness other than GC issues. (See latest post.)
-
So the progress bar seems to be working now on export. But I am also noticing that garbage collection seems to be running quite often, to the point that I feel it's slowing down the import for the health check.


-
The next thing I am noticing is that garbage collection is not able to coalesce VDIs for VMs that are running. GC keeps retrying every 30-45 seconds, with about 30 seconds of run time each attempt. The "VDIs to coalesce" count keeps increasing unless a VM is powered off and given enough time for garbage collection to actually run.
Because garbage collection retries so often, importing a VDI for the health check takes longer.
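If it helps anyone debugging the same symptom, a rough way to watch what the storage manager's garbage collector is doing on an XCP-ng host (assuming the usual `/var/log/SMlog` location) is:

```shell
# Follow the storage-manager log and filter for coalesce / GC activity
# on the host that owns the SR:
tail -f /var/log/SMlog | grep -i -E "coalesce|gc"
```

Repeated "coalesce" entries that never complete for a running VM's chain would match the behaviour described above.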