XOSTOR Performance
-
Doing our trial setup before we purchase... and I am getting performance numbers that are lower than I expected.
Setup is 6 Dell R740xd chassis, each with 23 4TB Samsung EVO SSDs assigned to XOSTOR. Network setup is 2x 10Gb ports in an SLB bond from each host.
Running TrueNAS with a ZFS setup on this hardware, I normally see numbers around 885 MiB/s for fio write tests (10 GB with 1 worker) from a VMware VM mounting the storage over iSCSI.
With XOSTOR I am seeing 170 MiB/s from a VM running on the same host that XOSTOR shows as "In Use".
Wondering if these numbers are expected... where to go for "tuning" etc...
Thanks for any info in advance!
-
Hi,
How are you testing performance exactly? Remember the SMAPIv1 per-disk bottleneck: you need to test multiple disks (or VMs) at once to really push the XOSTOR storage to its limits.
-
@olivierlambert I created a 100 GB drive on my XOSTOR SR, mounted it in a VM on /mnt and ran:
fio --name=/mnt/a --direct=1 --bs=1M --iodepth=32 --ioengine=libaio --rw=write --size=10g --numjobs=1
Which basically says to write a 10 GB file as fast as it can.
-
Yeah, so you have a bottleneck. Create 4 disks in the same VM, do a RAID0 with them, and run your benchmark again.
Or create 4 VMs with one disk each and bench them all at once.
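If you'd rather skip the RAID step, here is a rough sketch of benching 4 raw disks at once from a single VM. The device names /dev/xvdb through /dev/xvde are just an assumption, adjust them to whatever the VM actually sees, and note this writes directly to the raw disks, so only do it on freshly created, empty test disks:
fio --direct=1 --bs=1M --iodepth=32 --ioengine=libaio --rw=write --size=10g --group_reporting \
    --name=d1 --filename=/dev/xvdb \
    --name=d2 --filename=/dev/xvdc \
    --name=d3 --filename=/dev/xvdd \
    --name=d4 --filename=/dev/xvde
Each --name starts a separate job on its own disk, so every virtual disk gets its own I/O path, and --group_reporting prints the combined bandwidth across all four jobs, which is the number to compare against your 885 MiB/s baseline.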
-
@olivierlambert Well... that actually went both worse and better: 68.8 MB/s for the first test, but then 266 MiB/s on the second run once the volumes were no longer "thin".
This is still only about 1/3 of the "raw" performance, so... it's not a deal breaker, but man... having to build fake RAID devices and still coming in that slow, in comparison, is rough. I've seen some forum posts with waaaaaay better speeds... I'll need to do some searching and see if they have any "magic" for me.
My methodology was: created 4 drives on XOSTOR assigned to the VM. Then in the VM I did:
mdadm --create --verbose /dev/md0 --level=0 --raid-devices=4 /dev/xvdb /dev/xvdc /dev/xvde /dev/xvdf
mkfs.xfs /dev/md0
mount /dev/md0 /mnt
fio --name=/mnt/a --direct=1 --bs=1M --iodepth=32 --ioengine=libaio --rw=write --size=10g --numjobs=1
-
You should try with larger blocks and/or a bigger iodepth (64 or 128). I reached higher numbers in my small homelab.
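For example, something along these lines, reusing your earlier test file, just with bigger blocks and a deeper queue (values are only a starting point, adjust to taste):
fio --name=/mnt/a --direct=1 --bs=4M --iodepth=128 --ioengine=libaio --rw=write --size=10g --numjobs=1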
Also, read speed should be near local SR performance (since it doesn't need to read from another node, only locally).
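A quick way to check that is the same command switched to reads (it should reuse the 10 GB file left behind by the write test if it is still on /mnt):
fio --name=/mnt/a --direct=1 --bs=1M --iodepth=32 --ioengine=libaio --rw=read --size=10g --numjobs=1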
-
@olivierlambert iodepth didn't change it much...
Read speeds are good, I'm seeing 1,113 MiB/s on both the RAID0 and the single drive... so does SMAPIv1 have a limiting factor only on writes?
-
Not only, but that's where it's most visible.