Using XOA "Disaster Recovery" backup method can be a lot faster than normal offline migration.
One time I did it, it took approx 10 minutes instead of 2 hours...
Using XOA "Disaster Recovery" backup method can be a lot faster than normal offline migration.
One time I did it, it took approx 10 minutes instead of 2 hours...
@DustinB said in Delta Backups:
@IanMcLachlan Are you trying to run multiple types of backups in the same job? Can you show us your "Backup Jobs" overview page?
I think he is referring to the full backup interval option: https://docs.xen-orchestra.com/incremental_backups#key-backup-interval
Enabling the various C-states and frequency scaling in the BIOS/firmware can help reduce power consumption. There is some latency cost, since it takes longer for the CPU to wake from the deeper states.
Linux also has CPU frequency governors, but I am not sure how the Xen kernel handles this. Remember that dom0 is a VM under Xen, so things aren't as simple as with plain bare-metal Linux.
There's some information here about this:
https://wiki.xenproject.org/wiki/Xen_power_management
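If I remember correctly, on Xen the hypervisor owns the P-/C-state policy, and you inspect or change it with the xenpm tool from dom0. Something like this should work (which governors are available depends on the hardware and hypervisor):

```
# show the current frequency-scaling parameters and available C-states
xenpm get-cpufreq-para
xenpm get-cpuidle-states

# switch the scaling governor, e.g. to ondemand
xenpm set-scaling-governor ondemand
```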
@TeddyAstie That is interesting. I had a look. The default seems to be cubic, but bbr is available using modprobe tcp_bbr. I also wonder if different queuing disciplines (tc qdisc) can help, for example mqprio, which spreads packets across the available NIC hardware queues?
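For reference, this is roughly what I mean, assuming the NIC is eth0 and dom0 ships the relevant modules (untested on XCP-ng):

```
# load and enable BBR congestion control
modprobe tcp_bbr
sysctl -w net.ipv4.tcp_congestion_control=bbr

# inspect the current qdisc; plain mq spreads flows across the HW queues
# (mqprio additionally needs a traffic-class-to-queue mapping)
tc qdisc show dev eth0
tc qdisc replace dev eth0 root mq
```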
@olivierlambert said in Migrating an offline VM disk between two local SRs is slow:
80/100MiB/s for one storage migration is already pretty decent. You might go faster by migrating more disks at once.
I'm not sure I understand what difference you are referring to? It has always been in that ballpark, per disk.
This is not over a network, only between local ext4 SRs on the same server. I tried the same migration using XCP-ng Center and it is at the moment twice as fast:
Can't really see any difference though; it is the same sparse_dd and NBD connection. Perhaps it's a fragmentation issue? Though doing a plain copy of the same VHD file gives close to 500 MB/s.
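To rule out the disks themselves, here is a rough sequential copy test directly between the two SR mount points (the UUIDs/paths below are placeholders; dom0's dd is old, so I left out status=progress):

```
# copy a large VHD between the two local SRs, bypassing the page cache
time dd if=/run/sr-mount/<src-sr-uuid>/<vdi-uuid>.vhd \
        of=/run/sr-mount/<dst-sr-uuid>/ddtest.vhd \
        bs=4M oflag=direct
```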
@DustinB said in Migrating an offline VM disk between two local SRs is slow:
Separate question, why are you opting to use RAID 0, purely for the performance gain?
Yes, performance for bulk/temp data stuff.
@DustinB, no, these are local disks, not networked disks. I used to get around 400-500 MB/s or so for a plain migration between the two SRs.
@jebrown you could export XVA or OVA copies, but those are not exactly the same thing as backups.
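For example, from dom0 something like this should give you an XVA that you can re-import later (the name and path are examples; the VM should be halted, or export a snapshot instead):

```
# full export of a halted VM to a single XVA file
xe vm-export vm=<vm-uuid-or-name-label> filename=/mnt/backup/myvm.xva
```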
Hi!
I had to migrate a VM from one local SR (SSD) to another SR (4x HDD HW-RAID0 with cache) and it is very slow. It is not often I do this, but I think that in the past this migration was a lot faster.
When I look in iotop I can see roughly 80-100 MiB/s.
I think the issue is that sparse_dd connects to localhost (10.12.9.2) over NBD/IP, and that this is what makes it slow?
/usr/libexec/xapi/sparse_dd -machine -src /dev/sm/backend/55dd0f16-4caf-xxxxx2/46e447a5-xxxx -dest http://10.12.9.2/services/SM/nbd/a761cf8a-c3aa-6431-7fee-xxxx -size 64424509440 -good-ciphersuites ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES256-GCM-SHA384:AES256-SHA256:AES128-SHA256 -prezeroed
I know I have migrated disks on this server before at multiple hundreds of MB/s, so I am curious as to what the difference is.
This is XOA stable channel on XCP-ng 8.2.
My wishlist for a new XCP-ng:
@Tristis-Oris We've had the same problem, so are not using CBT for now.
@mauzilla I do not think NUMA is exposed to the guests, so they will only see the number of cores assigned, i.e. you can migrate them just fine.
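If you want to double-check from inside a Linux guest, something like this should show the topology the VM actually sees:

```
# inside the guest: sockets, cores and NUMA nodes as presented to the VM
lscpu | grep -Ei 'numa|socket|core'
```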
It's a good discovery that having XOA outside the pool can make the backup performance much better.
How is the problem solving going for the root cause? We too have quite poor network performance and would really like to see the end of this. Can we get a summary of the actions taken so far and what the prognosis is for a solution?
Did anyone try plain Xen on a new 6.x kernel to see if the networking is the same there?
Check that XO and XCP-ng use an NTP server.
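Something like this in dom0 should tell you whether time is actually in sync (depending on the version/setup, either chronyd or ntpd is in use, so try both):

```
# quick time-sync check in dom0; one of the two should respond
chronyc tracking 2>/dev/null || ntpstat
date
```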
What are your needs for a backup? A full bare-metal recovery of the server/VM, or just some of the files on it?
If XCP-ng/XOA backups are too slow, why not move the main data onto a second disk mounted over iSCSI inside Windows? You'd still need to manage backups for it somehow, though.
You mentioned open source. One option is https://www.urbackup.org/. We have been looking at it as a replacement for Acronis for bare-metal machines, but have not yet made a decision.
If you just need to keep the files, you could easily make a script that creates a VSS snapshot and runs robocopy to some remote destination. I do this on several servers, and then I manage retention on the remote location separately.
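As a rough sketch of what I mean (untested as written; the drive letters, alias, and share path are examples), using diskshadow to expose a VSS snapshot and robocopy to copy from it:

```
rem snap.dsh contains:
rem   set context persistent nowriters
rem   add volume D: alias DataSnap
rem   create
rem   expose %DataSnap% Y:
diskshadow /s snap.dsh

rem copy the point-in-time view of D: to the remote destination
robocopy Y:\ \\backuphost\backups\server1 /MIR /R:2 /W:5

rem cleanup.dsh contains: delete shadows exposed Y:
diskshadow /s cleanup.dsh
```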
@Rhodderz said in CBT: the thread to centralize your feedback:
Is it possible to force a clean fresh start for the backups similar to Veeam "Active Full"?
Perhaps delete the backup snapshots for each VM; when the backup job next starts, it should then run a full backup.
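From the CLI that would be something like this (the UUIDs are placeholders; double-check you are not removing snapshots you still need):

```
# list the snapshots belonging to a VM
xe snapshot-list snapshot-of=<vm-uuid> params=uuid,name-label

# remove a snapshot together with its disks
xe snapshot-uninstall snapshot-uuid=<snapshot-uuid> force=true
```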
Hi,
It seems that it is possible to create custom tags for an SR, but the dropdown list does not allow selecting previously created SR tags. Perhaps the dropdown filters on VM tags instead of SR tags? In any case, I would like these tags to be searchable from this list.
I haven't seen it in XOA, but in XCP-ng Center, this information is available:
@msupport we use the AnywhereUSB for this reason. We haven't tested the performance with USB disks, but it works well for all our license keys.