Maybe you saw the same thing I did - I also had major issues with backups. Then in the summer of 2019 I upgraded our XOA (yes, we have the paid version) and everything was going 3-4 times faster. There must have been some major improvement in the XOA code.
So is there some official info that if you run XO from source your backup speeds will be as low as 3-4 MB/s? Or what?
Sorry if I was unclear - XOA is built from the same sources so there is no difference in that aspect.
That said, there was a big performance issue regarding backups, which has since been resolved.
I am using XO from sources in my "private" infrastructure and I can assure you there are no intentional "limits".
You're very welcome. My solution to the similar problem I'd had was to set up a couple of internal systems as NTP servers, so that I always had something with the right time and static IP addresses, and then I pointed everything needing NTP at them.
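If it helps, here's a minimal sketch of what the server side looks like if you use chrony (just one option; other NTP daemons work similarly, and the upstream pool and subnet below are placeholders for your own):

```
# /etc/chrony.conf on each internal NTP server (example values)

# Sync from public upstream servers while they're reachable
pool pool.ntp.org iburst

# Allow clients on the local subnet to query this server
allow 192.168.1.0/24

# If upstream is unreachable, keep serving our own clock
# at a deliberately poor stratum so clients still sync
local stratum 10
```

Then every client just gets a `server <internal-ntp-host> iburst` line pointing at those boxes.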
@alexanderk @olivierlambert Sorry to have not responded sooner to your question. It has been a very long, slow slog so far and I haven't been able to devote as much time as I'd like to working on this. Here's what I've done so far:

Based on Andrew Cooper's recommendation, I installed a fully patched Windows Server 2008 R2 VM on Xen. (Hyper-V was initially released with Server 2008, so this is almost as far back as you can go.) Using the current unmodified Xen source code, the VM will permit Hyper-V to be enabled in the Windows Server 2008 R2 guest, but, as with newer versions of Windows, once you perform the finishing reboot, Hyper-V is not actually active. Adding the two recommended source-code patches, recompiling, and performing the same test causes the VM to hang following the enablement of Hyper-V. I know that I need to set up a serial console for the VM in order to view any logging that might provide a clue as to what's failing during the boot, but I haven't worked that out just yet.
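For anyone attempting the same thing, here's roughly what I understand the xl side of the serial-console setup to involve (the domain name is just a placeholder):

```
# In the guest's xl config file, expose an emulated serial port
# that the guest OS can log to:
serial = 'pty'

# After starting the domain, attach to that serial console:
xl console -t serial ws2008r2

# Hypervisor-level messages can be checked separately with:
xl dmesg
```

Getting Windows itself to write boot logging to that serial port is a separate exercise I still need to work through.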
I've also spent considerable time reading through the Xen development mailing-list posts on the history of nested virtualization in Xen. One very significant takeaway from that reading is that nested virtualization on Xen was initially developed by an AMD developer. Development of the nested-virtualization feature set for Intel came later, after the AMD-focused design die had been cast. As far as I can tell, given that I'm running Server 2008 R2, this never worked on Intel. (Maybe it did on an older Intel processor, but I am currently working with Skylake i7-6700s, so I have no way to test older hardware.) Unfortunately, I also don't have appropriate AMD hardware on which to perform the same test to see whether or not it might work on AMD.
On the Microsoft Hyper-V side, it seems as though the opposite evolution happened. Nested virtualization was developed on Intel first, then (very recently) AMD. This makes me suspect that it doesn't work on AMD either. In other words, I don't know that nested virtualization of Windows on Xen ever worked such that Hyper-V was actually active in the guest. I would be delighted to have somebody prove me wrong.
Just tried updating our pool, and after updating the master, a VM (Ubuntu) failed to migrate and then wouldn't start again (Missing VDI error). The only solution was to roll the master back to 7.4, after which everything worked again.
I ran into a similar situation when I upgraded to 7.5 this week. Did you check whether there was a non-existent ISO mounted in the virtual CD drive of the VM in question?
Thank you, that was my problem. I set the CD drive to empty and everything works now.
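For anyone else hitting this, the same fix can be applied from the command line, something like the following (the VM name is a placeholder):

```
# Find the VM in question
xe vm-list name-label="my-ubuntu-vm"

# Eject whatever ISO is still attached to its virtual CD drive
xe vm-cd-eject uuid=<vm-uuid>
```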
@stormi dd stands for "disk dump" and does exactly that: it copies a stream of data.
fio, however, can be configured for precise workloads: read/write mixes, parallel jobs, etc.
So the former will only give you sequential streaming benchmarks, which almost nobody cares about.
The latter can simulate real-world (VM, database, ...) workloads, where (controller) caches and non-magnetic storage (flash, Optane, MRAM, ...) make the real difference.
Also, use a large amount of data, since caches can skew results for small datasets dramatically. Don't get me wrong: we need them and they can make huge differences, but as long as your benchmark fully fits into them, it gives you nonsense/fake results. Also, (consumer) SSDs start throttling after somewhere between tens and a few hundred GB of written data: their caches fill up and they 'overheat'.
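For illustration, a fio run along those lines might look like this (the parameters and path are just a starting point to tune for your environment, not gospel):

```
# 70/30 random read/write mix, 4 parallel jobs, a dataset big
# enough to blow through most caches, and direct I/O so the
# page cache doesn't flatter the numbers:
fio --name=randrw-test \
    --filename=/mnt/bench/testfile \
    --rw=randrw --rwmixread=70 \
    --bs=4k --ioengine=libaio --iodepth=32 \
    --numjobs=4 --size=20G \
    --direct=1 --group_reporting

# Compare that with the plain sequential streaming test dd gives you:
dd if=/dev/zero of=/mnt/bench/testfile bs=1M count=20480 oflag=direct
```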
You can spend days on benchmarks and how best to run them. 😉
Actually, on any recent enough Linux system (i.e. not one that's many years old), the PV drivers are included directly in the kernel. Unless, maybe, a very specific distro decides that they don't want Xen support in their kernel.
So the tools you install on the VMs are merely an agent to make the VM more cooperative with the hypervisor, but they don't affect performance.
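If you want to check this on a given guest, something along these lines should show whether the Xen frontend drivers are present:

```
# Check the kernel config for the Xen PV frontend drivers
# (=y means built in, =m means built as a module)
grep -E 'CONFIG_XEN_(BLKDEV|NETDEV)_FRONTEND' /boot/config-$(uname -r)

# Or look for active Xen devices on a running guest
ls /sys/bus/xen/devices
```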
The situation is different on Windows systems where you need to install PV drivers to achieve better performance.