Latest posts made by vaewyn
-
RE: XOSTOR Performance
@olivierlambert iodepth didn't change it much...
Read speeds are good; I'm seeing 1,113 MiB/s on both the RAID 0 and the single drive... so does SMAPIv1 have a limiting factor only on writes?
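For reference, the read numbers came from the same fio invocation flipped to reads (a sketch, assuming otherwise identical parameters):
fio --name=/mnt/a --direct=1 --bs=1M --iodepth=32 --ioengine=libaio --rw=read --size=10g --numjobs=1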
-
RE: XOSTOR Performance
@olivierlambert Well... that actually went both worse and better: 68.8 MB/s for the first test... but then 266 MiB/s on the second run, once the volume was no longer "thin".
This is still only 1/3 of "raw" performance, so... it's not a deal breaker, but man... having to build fake RAID devices and still coming in that slow, in comparison, is rough. I've seen some forum posts with waaaaaay better speeds... I'll need to do some searching and see if they have any "magic" for me.
My methodology: I created 4 drives on XOSTOR and assigned them to the VM. Then in the VM I did:
mdadm --create --verbose /dev/md0 --level=0 --raid-devices=4 /dev/xvdb /dev/xvdc /dev/xvde /dev/xvdf
mkfs.xfs /dev/md0
mount /dev/md0 /mnt
fio --name=/mnt/a --direct=1 --bs=1M --iodepth=32 --ioengine=libaio --rw=write --size=10g --numjobs=1
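For anyone reproducing this, it's worth confirming the array assembled cleanly before running fio (standard mdadm checks, nothing XOSTOR-specific):
cat /proc/mdstat
mdadm --detail /dev/md0
-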
RE: XOSTOR Performance
@olivierlambert I created a 100 GB drive on my XOSTOR... mounted it in a VM on /mnt and ran:
fio --name=/mnt/a --direct=1 --bs=1M --iodepth=32 --ioengine=libaio --rw=write --size=10g --numjobs=1
Which basically says: write a 10 GB file as fast as you can.
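If anyone wants to separate raw bandwidth from per-write latency, a small-block random-write variant of the same command works (a sketch; only bs, rw, and size changed from the above):
fio --name=/mnt/a --direct=1 --bs=4k --iodepth=32 --ioengine=libaio --rw=randwrite --size=2g --numjobs=1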
-
XOSTOR Performance
Doing our trial setup before we purchase... and I am getting performance numbers that are lower than I expected.
Setup is 6 Dell R740xd chassis, each with 23 x 4 TB Samsung EVO SSDs assigned to XOSTOR. Network setup is two 10 Gb ports in an SLB bond from each host.
Running TrueNAS on them with a ZFS setup, I normally see numbers around 885 MiB/s for fio write tests (10 GB with 1 worker) from a VM on VMware, mounting the iSCSI from them.
With XOSTOR I am seeing 170 MiB/s from a VM running on the same host that XOSTOR indicates as "In Use".
Wondering if these numbers are expected... and where to go for tuning, etc.
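One variable worth ruling out is the replication network itself; a plain iperf3 run between two hosts shows what the bond actually delivers (standard tool, assuming it's installed on both ends; IP is a placeholder):
# on host A
iperf3 -s
# on host B
iperf3 -c <host-A-ip> -t 30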
Thanks for any info in advance!
-
Newbie questions
#1 Why can't you have rolling upgrades when there is an XOSTOR on the pool?
#2 Will that limitation go away anytime soon?
#3 I'm guessing that doing the migrate out/maintenance/upgrade/down/up/back in dance manually still works just fine? (sketch of what I mean after this list)
#4 "Only show Networks that meet XOSTOR requirements" ... what are those requirements as I am only seeing the management network when others "should" be available.(figured this one out on my own)Thanks!
-
RE: XOSTOR hyperconvergence preview
For those who might run across my questions here... there is a nice blog post from LINBIT on how to span availability zones correctly to keep your data redundancy up:
https://linbit.com/blog/multi-az-replication-using-automatic-placement-rules-in-linstor/
So the CLI is doable; a GUI would be nice in the future.
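The gist, as a rough sketch (node names, the aux property key, and the resource group name are all mine, and the exact option spelling may vary by linstor-client version, so check --help first):
# tag each node with the data center it lives in
linstor node set-property --aux xcp-host1 datacenter dc-a
linstor node set-property --aux xcp-host4 datacenter dc-b
# then have auto-placement spread replicas across different values of that property
linstor resource-group modify my-group --replicas-on-different datacenter
# newer clients also have --x-replicas-on-different ("at most N replicas per value"),
# which is the better fit for 3 copies across 2 DCs -- verify with:
# linstor resource-group modify --help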
-
RE: XOSTOR hyperconvergence preview
@olivierlambert Correct... these DCs are across a campus on private fiber, so single-digit milliseconds worst case. We've historically had VMware keep 3 data copies and make sure at least one is in a separate DC... that way, when a DC is lost, the HA VMs can restart on the remaining host pool successfully because they still have their storage available.
-
RE: XOSTOR hyperconvergence preview
@olivierlambert I've understood that part... what I am wondering is: if I have 3 hosts in one data center and 3 hosts in another, and I have asked for redundancy of 3 copies, is there a way to ensure all three copies are never in the same data center at the same time?
-
RE: XOSTOR hyperconvergence preview
With the integration you are doing, is there provision to designate racks/sites/data centers/etc. so that, at some level, replicas can be kept off hosts in the same physical risk space(s)?
-
RE: XOSTOR hyperconvergence preview
Are there any rough estimates for a timeline on paid support being available? We're looking at ditching VMware, and my company requires professional support availability. For virtualization I can see it's available, but I also need storage that is at least mostly at parity with the vSAN I have. Thanks to you all! Love these projects!