Maybe you saw the same thing I did - I also had major issues with backups. Then in the summer of 2019 I upgraded our XOA (yes, we have the paid version) and everything ran 3-4x faster. There must have been some major improvement in the XOA code.
So is there any official info that if you run XO from source, your backup speeds will be as low as 3-4 MB/s? Or what?
Sorry if I was unclear - XOA is built from the same sources, so there is no difference in that respect.
That said, there was a big performance issue regarding backups, which has since been resolved.
I am using XO from the sources in my "private" infrastructure and I can assure you there are no intentional "limits".
You're very welcome. My solution to a similar problem was to set up a couple of internal systems as NTP servers, so that I always had something with the correct time and static IP addresses, and then pointed everything needing NTP at them.
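In case it helps, here's a minimal sketch of what that looks like if you use chrony (the daemon choice, the upstream pool, and the addresses are placeholders - adjust to your network):

```
# /etc/chrony.conf on an internal NTP server (placeholder addresses)

# Sync from public pool servers upstream when reachable
pool pool.ntp.org iburst

# Keep serving the local clock if upstream is unreachable,
# marked as a low-priority stratum-10 fallback
local stratum 10

# Allow clients on the internal network to query this server
allow 192.168.0.0/16
```

Clients then just need a `server <internal-ntp-ip> iburst` line pointing at the server's static IP.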
I gave my latest "trick" another run:
Did a full copy of the VM within XO to the other host. While that job ran, I started the delta backup, which finished OK. After the copy was done I deleted the copy and saw - as last time - that the host was coalescing. After it finished, the SR's advanced tab was empty again and stayed empty.
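If anyone else wants to watch the coalescing from the host side, a rough sketch (the SR UUID is a placeholder):

```
# Kick off a scan so the GC notices the deleted copy's VHDs
xe sr-scan uuid=<sr-uuid>

# Follow the storage manager log for coalesce activity
tail -f /var/log/SMlog | grep -i coalesce
```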
Just tried updating our pool, and after updating the master a VM (Ubuntu) migration failed and then the VM wouldn't start (Missing VDI error). The only solution was to roll the master back to 7.4, and everything is working again.
I ran into a similar situation when I upgraded to 7.5 this week. Did you check to see if there was a non-existent ISO mounted in the CD-ROM drive of the VM in question?
Thank you, that was my problem. I set the CD drive to empty and everything works.
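For anyone who prefers doing the same fix from a host console, roughly (the VM name is a placeholder):

```
# Check what's mounted in the VM's virtual CD drive
xe vbd-list vm-name-label=<vm-name> type=CD params=vdi-name-label,empty

# Eject the stale ISO so the VM can start/migrate again
xe vm-cd-eject vm=<vm-name>
```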
@stormi dd is commonly read as "disk dump" and does exactly that: copies a stream of data.
fio, however, can be configured for precise workloads: read/write mixes, parallel jobs, etc.
So the former only gives you streaming benchmarks, which almost nobody cares about.
The latter can simulate real-world (VM/database...) workloads, where (controller) caches and non-magnetic storage (flash, Optane, MRAM...) make the real difference.
Also, use a large amount of data, since caches can skew results for small datasets dramatically. Don't get me wrong: we need them and they can make huge differences, but as long as your benchmark fits entirely into them, you get nonsense/fake results. Also, (consumer) SSDs start throttling after some tens to at most a few hundred GB of data written; their caches fill up and they 'overheat'.
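To make that concrete, here's a sketch of a fio run simulating a mixed VM-style workload; the 70/30 mix, block size, and 50G dataset are illustrative values, not recommendations:

```
# 70/30 random read/write mix, 4k blocks, direct I/O to bypass the page cache,
# on a dataset large enough to defeat most caches
fio --name=vm-sim --rw=randrw --rwmixread=70 --bs=4k \
    --ioengine=libaio --iodepth=32 --numjobs=4 \
    --size=50G --runtime=300 --time_based \
    --direct=1 --group_reporting
```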
You can spend days on benchmarks and how to do what. 😉
I have good experience with WiBu Key dongles and a "Matrix USB-Key" (also license dongle) but couldn't get an Aladdin HASP working.
I can pass it through (it's visible and attached), but the VM doesn't show it in Device Manager (Windows 10 1909). I've given up for now and will probably use a network USB thingie from SEH - I already have 2 of their devices running in different environments and they work flawlessly.
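For reference, the passthrough itself can be set up with xe like this (UUIDs are placeholders); in my case this part succeeded, it's only the in-guest detection of the HASP that failed:

```
# List the physical USB devices the host sees
xe pusb-list

# Enable passthrough for the dongle
xe pusb-param-set uuid=<pusb-uuid> passthrough-enabled=true

# Attach it to the VM via its USB group
xe vusb-create vm-uuid=<vm-uuid> usb-group-uuid=<usb-group-uuid>
```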
CephFS is working nicely, but the update deleted my previous secret in /etc, and I had to reinstall the extra packages, recreate the SR, and then obviously move the virtual disks back across and refresh.
Were you not able to attach the pre-existing CephFS SR? Depending on your answer, I'll take a look at the documentation or the driver.
No luck. I ended up with a load of orphaned disks with no name or description, just a UUID, so it was easier to restore the backups.
I guess this is because the test driver presented the CephFS storage as an NFS type, so I had to forget it and then re-attach it as a CephFS type, which I guess it didn't like! But it's all correct now, so I guess this was just a one-off from moving off the test driver.
Anyway all sorted now and back up and running with no CephFS issues! 🙂
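In case it saves someone else the same dance, recreating the SR looked roughly like this for me; the package and device-config names are from memory of the XCP-ng CephFS docs, so double-check them against the current documentation:

```
# Reinstall the ceph client tools the update removed
# (package/repo names may differ per XCP-ng version)
yum install ceph-common

# Recreate the secret file the driver mounts with
# (contents = your ceph client key)
vi /etc/ceph/admin.secret

# Recreate the SR with the proper cephfs type instead of nfs
xe sr-create type=cephfs name-label=CephFS \
    device-config:server=<mon-ip> \
    device-config:serverpath=/xcpsr \
    device-config:options=name=admin,secretfile=/etc/ceph/admin.secret
```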