XCP-ng

    Posts

    • RE: Home lab 3 AMD 5600G 10gbe/64GB/1GB SSD/1GB NVMe

      @olivierlambert absolutely true.
      I have 3 servers at home, each with 1 NVMe disk, so I chose 3. Three is also the usual minimum for fault tolerance and recovery in a cluster, and it gives VMs direct disk access instead of running diskless on one of the servers during/after migration, e.g. after applying updates to the cluster.
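      The "three is the usual minimum" rule comes from majority quorum: a cluster needs more than half of its nodes alive to keep operating safely. A quick sketch of the arithmetic (my own illustration, not anything XOSTOR-specific):

      ```python
      def quorum(n: int) -> int:
          """Smallest strict majority of an n-node cluster."""
          return n // 2 + 1

      def tolerated_failures(n: int) -> int:
          """How many nodes can fail while a majority survives."""
          return n - quorum(n)

      # 3 nodes is the smallest cluster that survives losing a node:
      for n in (1, 2, 3, 5):
          print(n, tolerated_failures(n))  # 1->0, 2->0, 3->1, 5->2
      ```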

      I don't mind testing things. I'm new to XCP-ng (though not to Xen), and as a Linux Systems Administrator by trade I enjoy tinkering, even though the version I'm running is a beta. I don't have the money to run the full paid version of either XCP-ng or XOSTOR on my 3 nodes ... so I'm tinkering away even with the limits 🙂

      So far, comparing home vs enterprise use, I'm not really seeing anything special besides FC SAN usage on VMware vs what I have available at home as a main difference for switching, though I think Ceph is the better way to go than XOSTOR for enterprises. I chose XCP-ng over Proxmox at home because of its general enterprise feel, my prior Xen knowledge, and Xen being a type-1 hypervisor. The GUI also feels better, less cluttered, and a lot more responsive in XCP-ng.

      posted in Share your setup!
      bbruun
    • RE: Home lab 3 AMD 5600G 10gbe/64GB/1GB SSD/1GB NVMe

      @scot1297tutaio I was about to write something similar to @olivierlambert below, though without the ability to make improvement promises 😉

      But as stated, XOSTOR is 3+ mirrored disks, so when a backup snapshot is made it is written to all 3 disks at the same time, which takes extra time; hence the backup is a bit slower.
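      The slowdown is roughly what you'd expect from synchronous mirroring: a write only completes once it has landed on every replica, so the slowest path sets the pace. A back-of-envelope model (my own simplification, not XOSTOR/DRBD internals):

      ```python
      # Assumption: local disk takes one copy; the remaining (replicas - 1)
      # copies share the network link, so the link is divided among them.
      def effective_write_bw(disk_bw_mbs: float, net_bw_mbs: float,
                             replicas: int = 3) -> float:
          """Effective MB/s for a synchronous n-way mirror."""
          return min(disk_bw_mbs, net_bw_mbs / (replicas - 1))

      # e.g. an NVMe at 2000 MB/s behind a 10GbE link (~1180 MB/s usable)
      # shared by two remote replicas caps writes near 590 MB/s:
      print(effective_write_bw(2000, 1180))  # 590.0
      ```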

      I went from a roughly 20 min backup time for approx 20 VMs to just over 1 hour, backing up to an external server over a 10 Gbit network (but onto spinning disks). I don't really mind, though, as the VMs are fully functional during the backup, so I'm not feeling any degradation while it runs; it just takes longer.

      I'm working on the Salt Stack/Salt Cloud 'xen' integration to make the transition from VMware to XCP-ng easier, but for some reason, when cloning a new VM, especially onto XOSTOR, there is a similar write limit, which some forum posts describe as a built-in speed quota ... so it is not just backup that is slow.
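      For context, a Salt Cloud setup for the 'xen' driver is just a provider/profile pair. This is a minimal sketch; the hostnames, credentials, and template name below are assumptions for illustration, and the exact supported keys should be checked against the Salt Cloud Xen driver docs.

      ```yaml
      # /etc/salt/cloud.providers.d/xen.conf
      # Hypothetical values - point url at your pool master.
      my-xen:
        driver: xen
        url: https://xcp-master.example.lan
        user: root
        password: changeme

      # /etc/salt/cloud.profiles.d/xen.conf
      # 'rocky9-template' is an assumed template name; 'clone: True'
      # makes salt-cloud clone it rather than install from scratch.
      rocky9-vm:
        provider: my-xen
        image: rocky9-template
        clone: True
      ```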

      Anyhow, I'm really looking forward to XOSTOR's future. I think I would prefer Ceph as the shared storage system instead, as it can scale and be live-migrated to other servers with a lot less hassle than trying to extend XOSTOR to new servers as the old ones age. But that is a whole other scenario (a welcome one).

      I mostly use XOSTOR because I professionally work with only HA systems (of course we have singleton VMs, but all hardware is redundant etc., albeit on old-tech SANs), and I can't imagine personally running anything else.

      On a very positive note, we had a power glitch here the other day and my XCP-ng cluster servers lost power (I don't have a UPS at home), but the 3 servers booted when power came back and there has been nothing in the logs about any problems from XOSTOR or XCP-ng (except me not having set "Auto power on" on 2 VMs ... my fault).

      posted in Share your setup!
      bbruun
    • RE: Home lab 3 AMD 5600G 10gbe/64GB/1GB SSD/1GB NVMe

      @scot1297tutaio Yes, it is still going strong. I'm investigating which SSD adapter to buy to get more space, as XOSTOR and backup cleanup cause some backups to fail due to lack of free disk space, which is used up by backup snapshots.
      I've not yet found out whether the backup cleanup or XOSTOR is the source.

      posted in Share your setup!
      bbruun
    • RE: Home lab 3 AMD 5600G 10gbe/64GB/1GB SSD/1GB NVMe

      @olivierlambert I did a few fio tests on a small Rocky Linux 9 VM:

      $ fio --name=randwrite --ioengine=libaio --iodepth=1 --rw=randwrite --bs=4k --direct=0 --size=512M --numjobs=2 --runtime=240 --group_reporting
      <snip>
      Run status group 0 (all jobs):
        WRITE: bw=177MiB/s (185MB/s), 177MiB/s-177MiB/s (185MB/s-185MB/s), io=1024MiB (1074MB), run=5799-5799msec
      
      Disk stats (read/write):
          dm-0: ios=0/50159, merge=0/0, ticks=0/61513, in_queue=61513, util=97.02%, aggrios=0/56362, aggrmerge=0/3, aggrticks=0/63740, aggrin_queue=63739, aggrutil=97.37%
        xvda: ios=0/56362, merge=0/3, ticks=0/63740, in_queue=63739, util=97.37%
      
      $ fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=random_read_write.fio --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
      <snip>
      Run status group 0 (all jobs):
         READ: bw=336MiB/s (353MB/s), 336MiB/s-336MiB/s (353MB/s-353MB/s), io=3070MiB (3219MB), run=9132-9132msec
        WRITE: bw=112MiB/s (118MB/s), 112MiB/s-112MiB/s (118MB/s-118MB/s), io=1026MiB (1076MB), run=9132-9132msec
      
      Disk stats (read/write):
          dm-0: ios=779737/260595, merge=0/0, ticks=356376/197810, in_queue=554186, util=98.97%, aggrios=784620/262516, aggrmerge=1324/170, aggrticks=357389/198820, aggrin_queue=556209, aggrutil=98.61%
        xvda: ios=784620/262516, merge=1324/170, ticks=357389/198820, in_queue=556209, util=98.61%
      

      The VM is snappy and responsive, and was compiling fio from GitHub in the background, as I didn't think of it being in the repo for a quicker install 😄
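      One unit note when reading those numbers: fio prints both MiB/s (binary) and MB/s (decimal), which is why 177 MiB/s and 185 MB/s appear side by side for the same run. The conversion:

      ```python
      def mib_to_mb(mib_per_s: float) -> float:
          """Convert binary MiB/s to decimal MB/s (factor ~1.0486)."""
          return mib_per_s * 1024**2 / 1000**2

      # fio's 185 MB/s line comes from the unrounded MiB/s value:
      print(round(mib_to_mb(177), 1))  # 185.6
      ```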

      posted in Share your setup!
      bbruun
    • RE: Home lab 3 AMD 5600G 10gbe/64GB/1GB SSD/1GB NVMe

      @olivierlambert that was the point 🙂

      Regarding XOSTOR storage speed, a simple hdparm test shows some pretty OK sequential read speeds.

      [root@rocky9-template ~]# hdparm -t /dev/xvda
      
      /dev/xvda:
       Timing buffered disk reads: 8530 MB in  3.00 seconds = 2843.07 MB/sec
      [root@rocky9-template ~]# hdparm -t /dev/xvda
      
      /dev/xvda:
       Timing buffered disk reads: 8390 MB in  3.00 seconds = 2796.60 MB/sec
      [root@rocky9-template ~]# hdparm -t /dev/xvda
      
      /dev/xvda:
       Timing buffered disk reads: 8378 MB in  3.00 seconds = 2792.00 MB/sec
      [root@rocky9-template ~]#
      
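      Averaging the three runs above (just arithmetic on the numbers hdparm printed):

      ```python
      # The three 'Timing buffered disk reads' results above, in MB/sec:
      runs = [2843.07, 2796.60, 2792.00]
      avg = sum(runs) / len(runs)
      print(round(avg, 1))  # 2810.6 MB/sec sustained sequential read
      ```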
      posted in Share your setup!
      bbruun
    • RE: Home lab 3 AMD 5600G 10gbe/64GB/1GB SSD/1GB NVMe

      @olivierlambert It's just 3 small, cheap, black CXC2 towers - nothing special - and all LEDs are disabled, so they are not interesting at all (as you can see).
      xcp-ng-homelab.jpeg

      posted in Share your setup!
      bbruun
    • Home lab 3 AMD 5600G 10gbe/64GB/1GB SSD/1GB NVMe

      My home lab, after running XCP-ng on 3 Udoo x86 Ultra "SBC"s, has just been updated to 3 small gamer towers.

      The cluster consists of 3 servers with identical specs:

      • AMD Ryzen 5 5600G with Radeon Graphics
      • ASRock B550 Phantom Gaming 4 motherboard (cheap)
      • 2x32 GB 3600 sticks per PC, giving me 192 GB RAM in total
      • 1 TB SSD as boot drive per host
      • 1 TB NVMe as XOSTOR drive per host
      • 1 QNAP 10GbE NIC

      Current XCP-ng is 8.2.1 with XO CE.

      The RAM is set to 3200 in the XMP profile; otherwise I get tap device errors in the VMs (aka block I/O errors).
      The 1 TB SSD is only used for XCP-ng boot (possibly later for a few non-critical VMs that I can live without in case of a host crash).
      The 1 TB NVMe is set up as thin-provisioned XOSTOR (https://xcp-ng.org/forum/topic/5361/xostor-hyperconvergence-preview) - it does not seem slow as such, and coming from a 1GbE network and low-performing SSDs (<1 GB/s), I have no speed issues ... as of yet. Time will tell 🙂

      Avg send and receive speed is 1.07 GBytes/s according to iperf3, so life is good compared to the now old and retired Udoo x86 Ultra setup 🙂
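      For scale, that iperf3 figure is close to 10GbE line rate (assuming iperf3's GBytes are binary units, as it normally reports):

      ```python
      # 1.07 GBytes/s from iperf3, assumed binary (2**30 bytes):
      gbyte_per_s = 1.07
      gbit_per_s = gbyte_per_s * 2**30 * 8 / 1e9
      print(round(gbit_per_s, 2))  # 9.19 Gbit/s on a 10GbE link
      ```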

      Loving it

      posted in Share your setup!
      bbruun