Citrix Hypervisor 8.0 landed
@maxcuttins did you really measure 10 and 100 MB after I said 1 G is not enough for accurate results?
Usually you set the block size to something useful, like 1M, and set count to e.g. 10000.
Of course you can vary the block size a bit to test, but that's usually between about 64k and maybe 4M.
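A minimal sketch of such a dd streaming-write run, assuming Linux. The file path is illustrative, and count=16 is used only so the example finishes quickly; for a real benchmark use something like count=10000 (about 10 GiB) so caches don't dominate the result:

```shell
# Hypothetical streaming-write benchmark: 1M blocks, 16 MiB total.
# conv=fdatasync forces the data to disk before dd reports throughput,
# otherwise you mostly measure the page cache.
dd if=/dev/zero of=/tmp/dd_bench.bin bs=1M count=16 conv=fdatasync
```

dd prints the elapsed time and throughput on stderr when it finishes.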
Ah, did you intend 10G instead of 10M?
First rule of all benchmarks: The longer and more often they run, the more precise they are.
If we take 1G as the baseline, why would I drop down to 10 or 100 M? That doesn't make any sense at all.
I heard in several places over this forum that fio would be a better benchmark than dd. Does it apply here too?
It's always better than dd, because it's closer to a real load.
@stormi dd (commonly read as "disk dump") does exactly that: it copies a stream of data.
Fio, however, can be configured for precise workloads: read/write mixes, parallel jobs, etc.
So the former only gives you streaming (sequential) benchmarks, which almost nobody cares about.
The latter can simulate real-world (VM/database...) workloads, where (controller) caches and non-magnetic storage (Flash, Optane, MRAM...) make the real difference.
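As a rough sketch, a fio job for such a mixed VM/database-style load could look like this; the job name and the exact sizes/mixes are illustrative assumptions, not a recommended standard:

```
; randrw.fio - hypothetical mixed random read/write job
[global]
ioengine=libaio      ; asynchronous I/O on Linux
direct=1             ; bypass the page cache so the device is measured
bs=4k                ; small blocks, typical for database-style I/O
size=10g             ; large enough that caches can't hold everything
runtime=300          ; run for 5 minutes...
time_based           ; ...regardless of how much data gets written

[vm-like-load]
rw=randrw            ; mixed random reads and writes
rwmixread=70         ; 70% reads / 30% writes
iodepth=32           ; queue depth per job
numjobs=4            ; four parallel workers
```

Run it with `fio randrw.fio`; fio then reports IOPS, bandwidth, and latency percentiles per job.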
Also use a large amount of data, since caches can distort small runs extremely. Don't get me wrong: we need them and they can make huge differences, but as long as your benchmark fully fits into them, it gives you nonsense/fake results. Also, (consumer) SSDs start throttling after somewhere between tens and a few hundred GB of data written: their caches fill up and they 'overheat'.
You can spend days on benchmarks and how to do what.