Servers to be backed up (all in OVH):
20x XCP-ng 8.0 hosts, 1-2 Gbit NICs.
SATA hard drives. Not the fastest in the world, but they can usually manage 20-30 MB/s during live migrations or VM transfers.
Backup server:
XCP-ng 8.0, 10 Gbit NIC.
12x SAS hard drives arranged in RAID 0. XCP-ng and the VMs run from a single SSD.
VMs on the server:
XOA
CentOS 7 server with the RAID array mounted directly; XOA accesses it over NFS (an illustrative export is below).
XOA and the CentOS 7 server communicate over a network internal to the XCP-ng host.
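For completeness, the export on the CentOS 7 side is along these lines; the path and subnet here are placeholders rather than my real values. XOA then just points at it as a standard NFS remote.

    # /etc/exports on the CentOS 7 VM - path and subnet are placeholders
    /mnt/raid    <internal-subnet>(rw,sync,no_root_squash)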
Testing the RAID array locally on the NFS server with dd gives speeds of around 600-800 MB/s for both reads and writes (rough commands below).
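Roughly what the dd test looked like; the mount point is illustrative, and the direct flags bypass the page cache so the numbers reflect the disks rather than RAM:

    # ~10 GB write test, bypassing the page cache (mount point illustrative)
    dd if=/dev/zero of=/mnt/raid/ddtest bs=1M count=10240 oflag=direct
    # matching read test
    dd if=/mnt/raid/ddtest of=/dev/null bs=1M iflag=direct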
I've tested backups by setting up a separate backup job on 10 of the XCP-ng servers; none of them are in a pool, all separate installs. For the first few machines, the amount of data received by XOA and passed on to the NFS server goes up, but then it maxes out at around 70 MB/s. Adding further machines doesn't increase that total; it just reduces the individual speed of each backup (rough arithmetic below).
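Rough arithmetic on what I'd expect versus what I actually see:

    10 hosts x 20-30 MB/s each  = 200-300 MB/s aggregate, in theory
    observed plateau            = ~70 MB/s total
    70 MB/s / 10 jobs           = ~7 MB/s per backup once all ten run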
The network input to XOA is a fairly constant 70 MB/s (give or take), but the output to the NFS server comes in pulses maxing out at around 200 MB/s.
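For anyone wanting to watch the same pattern, per-second stats inside the XOA VM show it clearly, e.g. (assuming sysstat and nfs-utils are available in the VM):

    sar -n DEV 1     # per-interface rx/tx throughput, one-second samples
    nfsiostat 1      # per-NFS-mount throughput on the client side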
I've done some network testing: running simultaneous iperf from 5 servers, I managed to max out the 10 Gbit NIC, so network connectivity into the backup server isn't in question (the commands are sketched below). I wasn't able to test connectivity into XOA itself, so that could still be a bottleneck; it does seem busy CPU-wise.
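The iperf runs were shaped roughly like this; the address is a placeholder:

    # on the backup server
    iperf -s
    # on each of the 5 source hosts, started simultaneously
    iperf -c <backup-server-ip> -t 30

Running the same iperf -s inside the XOA VM would be the obvious way to test the path into XOA itself; I just haven't managed that yet.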
Any advice as to where to look and what to try? The backup performance isn't bad per se; it just feels like it could be better given the hardware.
There's always the possibility I've made a hideous multiplication by 10 error here somewhere.
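For the record, the bits/bytes conversion I keep re-checking:

    70 MB/s x 8   = 560 Mbit/s  - well under the 10 Gbit NIC
    10 Gbit/s / 8 = 1250 MB/s   - theoretical ceiling of the NIC
    600-800 MB/s                - the dd figures for the array itself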