Home lab: 3x AMD 5600G, 10GbE, 64GB RAM, 1TB SSD, 1TB NVMe
-
Nice! Do you have any pictures?
-
@olivierlambert It's just 3 small, cheap black CXC2 towers - nothing special - all LEDs are disabled, so they are not interesting to look at (as you can see)
-
I can see the XO UI \o/
-
@olivierlambert that was the point
Regarding the XOSTOR storage speed, a simple hdparm test shows some pretty decent sequential read speeds.
[root@rocky9-template ~]# hdparm -t /dev/xvda
/dev/xvda:
 Timing buffered disk reads: 8530 MB in 3.00 seconds = 2843.07 MB/sec
[root@rocky9-template ~]# hdparm -t /dev/xvda
/dev/xvda:
 Timing buffered disk reads: 8390 MB in 3.00 seconds = 2796.60 MB/sec
[root@rocky9-template ~]# hdparm -t /dev/xvda
/dev/xvda:
 Timing buffered disk reads: 8378 MB in 3.00 seconds = 2792.00 MB/sec
[root@rocky9-template ~]#
-
Reads will be limited by the PV drivers anyway, because reading is only local, hence the great performance
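To check that on your own pool (a rough sketch; command availability depends on the XOSTOR/LINSTOR packages on the host): on the XCP-ng host currently running the VM, a DRBD resource that is diskful and UpToDate locally serves reads from the local disk, while a diskless one has to go over the network.

# In the Dom0 of the host running the VM; resource names are specific to your pool
drbdadm status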
-
@olivierlambert did a few fio tests on a small Rocky Linux 9 VM
$ fio --name=randwrite --ioengine=libaio --iodepth=1 --rw=randwrite --bs=4k --direct=0 --size=512M --numjobs=2 --runtime=240 --group_reporting
<snip>
Run status group 0 (all jobs):
  WRITE: bw=177MiB/s (185MB/s), 177MiB/s-177MiB/s (185MB/s-185MB/s), io=1024MiB (1074MB), run=5799-5799msec

Disk stats (read/write):
    dm-0: ios=0/50159, merge=0/0, ticks=0/61513, in_queue=61513, util=97.02%, aggrios=0/56362, aggrmerge=0/3, aggrticks=0/63740, aggrin_queue=63739, aggrutil=97.37%
  xvda: ios=0/56362, merge=0/3, ticks=0/63740, in_queue=63739, util=97.37%
$ fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=random_read_write.fio --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
<snip>
Run status group 0 (all jobs):
   READ: bw=336MiB/s (353MB/s), 336MiB/s-336MiB/s (353MB/s-353MB/s), io=3070MiB (3219MB), run=9132-9132msec
  WRITE: bw=112MiB/s (118MB/s), 112MiB/s-112MiB/s (118MB/s-118MB/s), io=1026MiB (1076MB), run=9132-9132msec

Disk stats (read/write):
    dm-0: ios=779737/260595, merge=0/0, ticks=356376/197810, in_queue=554186, util=98.97%, aggrios=784620/262516, aggrmerge=1324/170, aggrticks=357389/198820, aggrin_queue=556209, aggrutil=98.61%
  xvda: ios=784620/262516, merge=1324/170, ticks=357389/198820, in_queue=556209, util=98.61%
The VM is snappy and responsive, and it was compiling fio from GitHub in the background, as I didn't think to check whether it was in the repos for a quicker install.
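Side note: fio should also be available straight from the Rocky 9 repos (untested here, since I only realised afterwards), which is quicker than building it from GitHub:

# Install fio from the distribution repositories instead of compiling it
sudo dnf install -y fio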
-
@bbruun What a great setup - I'm a bit envious.
But I'm loving my slower XCP-ng setup as well!
-
@bbruun I know this topic is older, but I wanted to check whether you are still running this setup. If so, how is it going? I'm thinking of moving my setup over to XOSTOR.
-
@scot1297tutaio Yes, it is still going strong. I'm investigating which SSD adapter to buy to get more space, as XOSTOR plus backup cleanup causes some backups to fail due to lack of free disk space (used by backup snapshots).
I've not yet found out whether it is the backup cleanup or XOSTOR that is the source.
-
@bbruun Well, I got XOSTOR going between my three nodes. It's been running a little over a week now and it is great. Everything is snappy inside the VMs, and creation is fast too.
The only thing I noticed is that my backups are super slow for the VMs on the XOSTOR storage. When a VM is on local storage (not XOSTOR, just another SR) or on NFS, backup speeds are what I am used to.
Did you notice a difference in backup speed?
Thanks
-
Backup speed (throughput) shouldn't be slow, but the time to make and mount the snapshot to export the data is a lot longer on XOSTOR than on any other SR (it's mostly metadata operations, like creating a drive, a snapshot, etc.). That's mostly because of the nature of the coordination needed (and resource creation) under the hood. If your disks aren't huge, you might spend more time taking the snapshot, mounting it to the Dom0 and connecting it to XO (which are the backup steps) than actually transferring the data.
But as soon as the operation is done (e.g. booting the VM), you should get "regular" SR speed (I mean for the real "usage" itself).
We have some optimizations coming in the next few weeks to reduce that metadata operation time.
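If you want to see where the time goes yourself, something like this separates the metadata-heavy snapshot step from the actual data transfer (a rough sketch; the VM name, snapshot UUID and export path are placeholders):

# Metadata-heavy step: taking the snapshot (this is the part that is slower on XOSTOR)
time xe vm-snapshot vm=<vm-name-or-uuid> new-name-label=backup-timing-test

# Data transfer step: exporting that snapshot; throughput here should be "normal"
time xe snapshot-export-to-template snapshot-uuid=<snapshot-uuid> filename=/mnt/backup/backup-timing-test.xva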
-
@scot1297tutaio I was about to write something similar to what @olivierlambert wrote above, though without being able to promise any improvements.
But as stated, XOSTOR here is 3+ disks that are mirrored, so when a backup snapshot is made it is written to all 3 disks at the same time, which takes extra time, hence the backup is a bit slower.
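You can see the replicas with the LINSTOR tooling in the Dom0 (a sketch from memory; run it where the LINSTOR controller is active, and the resource names will be specific to your pool):

# One line per node per resource: a 3-way mirrored volume shows up as 3 diskful replicas,
# and a write has to reach all of them before it completes
linstor resource list
linstor volume list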
I went from a backup time of roughly 20 minutes for approx. 20 VMs to just over 1 hour, backing up to an external server over a 10Gbit network (but onto spinning disks). But I don't really care, as the VMs are fully functional during the backup, so I'm not feeling any degradation while it runs; it just takes longer.
I'm working on the Salt Stack/Salt Cloud 'xen' integration to make the transition from VMware to XCP-ng easier, but for some reason, when cloning a new VM, especially onto XOSTOR, there is a similar write limit, which some people in forum posts describe as a built-in speed quota limit... so it is not just backups that are slow.
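For reference, the cloning I'm timing is just the normal Salt Cloud workflow (the profile and VM names below are placeholders for my own config):

# Clone a new VM from a Salt Cloud profile that targets the XOSTOR SR, and time it
time salt-cloud -p rocky9-xostor test-vm-01

# Destroy the test VM again
salt-cloud -d test-vm-01 -y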
Anyhow, I'm really looking forward to XOSTOR's future. I think I would prefer Ceph as the shared storage system instead, as it can scale and be live-migrated to other servers with a lot less hassle than trying to extend XOSTOR to other servers to replace them as they age. But that is a whole other scenario (a welcome one).
I mostly use XOSTOR because I professionally work only with HA systems (of course we have singleton VMs, but all hardware is redundant etc., albeit built on old-tech SANs), and I can't think of running anything else personally.
On a very positive note, we had a power glitch here the other day and my XCP-ng cluster servers lost power (I don't have a UPS at home), but the 3 servers booted when power came back and there has been nothing in the logs about any problems from XOSTOR or XCP-ng (except me not having set "Auto power on" on 2 VMs... my fault).
-
Also, a 3-way mirror is really slower than a 2-way one. Most of the time 2 is enough, especially if you have backups!
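The replica count is chosen when the XOSTOR SR is created, roughly like this if I recall the beta instructions correctly (group name, host UUID and provisioning mode depend on your install):

# Create the LINSTOR/XOSTOR SR with 2-way replication instead of 3
xe sr-create type=linstor name-label=XOSTOR host-uuid=<master-uuid> shared=true \
  device-config:group-name=linstor_group/thin_device \
  device-config:redundancy=2 \
  device-config:provisioning=thin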
-
@olivierlambert absolutely true.
I have 3 servers at home, each with 1 NVMe disk, so... I chose 3, also because 3 is the usual minimum for fault tolerance and recovery in a cluster... and it gives the VM direct disk access instead of running diskless on one of the servers during/after migration, e.g. after applying updates to the cluster. I don't mind testing things, and since I'm new to XCP-ng (not Xen) and enjoy tinkering - I'm a Linux systems administrator by trade - I'm enjoying it, even though the version I'm running is a beta. I don't have the money for my 3 nodes to run the full paid version of either XCP-ng or XOSTOR... so I'm tinkering away, even with the limits.
So far though, home vs enterprise, I'm not really seeing anything special besides FC SAN usage on VMware vs what I have available at home as a main difference for switching, though I think Ceph is the better way to go than XOSTOR for enterprises. The reason I chose XCP-ng over Proxmox to run at home was the general enterprise feel and my prior Xen knowledge, and that it is a type-1 hypervisor whereas Proxmox is type-2... and the GUI feels better, less cluttered and a lot more responsive in XCP-ng.
-
@bbruun @olivierlambert Thanks for the information on the backups. It all makes sense to me. I was going to say that during backups everything still runs great, it just takes a bit longer, but now I understand why.
I am also using a three-node cluster with NVMe drives in each node. I am doing three replicas too instead of two. I am coming from Ceph storage and had always used a minimum of three replicas. But maybe I will try just two to see how it runs!
Also, this is just running all my stuff inside my house, and it is solid. I do wish there was a home license for XOSTOR so that I don't have to use the CLI for everything related to it. I do enough CLI stuff in my day job, and at home most of the time I just want a GUI to make things super simple.
Thanks again for an awesome product with XOSTOR!