XCP-ng

    Suggestions for new servers

• dariosplit

I gave up on the hardware controller because, in case of failure, I would need to keep a spare controller on hand for data recovery.

• planedrop (Top contributor) @dariosplit

@dariosplit While that is true, it's still safer to do that than software RAID, at least in this setup. In my experience hardware RAID controller failures are super rare; I've literally never come across one that wasn't 10+ years old.

• dariosplit @planedrop

@planedrop I also don't see the point of installing XCP-ng on SSDs in RAID1 behind a HW controller. When and if it breaks, just install it again.
I also prefer an SR on local storage, using a single NVMe drive with continuous replication to another.
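
Roughly, the local-SR part of that setup looks like this with xe (the host UUID and device path here are just placeholders, and type=ext is only one option):

    xe host-list
    xe sr-create host-uuid=<host-uuid> name-label="Local NVMe SR" \
        type=ext content-type=user shared=false \
        device-config:device=/dev/nvme0n1

The continuous replication to the second drive is then just a scheduled backup job in Xen Orchestra.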

• planedrop (Top contributor) @dariosplit

@dariosplit I mean, it would still be best to avoid breaking it in the first place. It's not always as simple as "reinstall and you're good to go"; things can get damaged if an SSD dies during use, and you could end up needing a restore from backup. I'd just avoid that in production.

• xxbiohazrdxx @planedrop

I have to say, the recommendation to use hardware-based RAID is somewhat shocking. You're throwing away massive amounts of performance by funneling potentially hundreds of lanes' worth of NVMe disks into a single controller with a measly 16 lanes. SDS is the future, whether that be through something like md or ZFS.

• planedrop (Top contributor) @xxbiohazrdxx

                @xxbiohazrdxx Sure, but why do you need a massive amount of storage on a local hypervisor? That's when it's time for a SAN.

• DustinB @planedrop

@planedrop Well, more importantly, with hardware RAID you often get blind-swap and hot-swap capabilities.

With software RAID, you need to prepare the system by telling md to fail and remove the disk, and then add its replacement to the array.

Software RAID is very good, but it has its drawbacks.
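
For anyone who hasn't done it, the md side of a replacement looks roughly like this (the array and device names are placeholders, adjust to your setup):

    # mark the failed member and pull it out of the array
    mdadm --manage /dev/md0 --fail /dev/sdb1
    mdadm --manage /dev/md0 --remove /dev/sdb1
    # swap the physical disk, then add the replacement and watch the rebuild
    mdadm --manage /dev/md0 --add /dev/sdc1
    cat /proc/mdstat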

• planedrop (Top contributor) @DustinB

@DustinB This is also a good point; they both have their ups and downs. I do think hardware RAID will eventually die off completely, but that's still a ways off. Software RAID is certainly what I would call superior overall, but hardware RAID still has good use cases for the time being.

• Paolo

                      I've done some benchmarks with my new servers and want to share results with you.

                      Server:

                      • CPU: 2 x Intel Gold 5317
                      • RAM: 512GB DDR4-3200
                      • XCP-ng: 8.3-beta2

                      fio parameters common to all tests:
                      --direct=1 --rw=randwrite --filename=/mnt/md0/test.io --size=50G --ioengine=libaio --iodepth=64 --time_based --numjobs=4 --bs=32K --runtime=60 --eta-newline=10
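
Put together, that comes out to an invocation along these lines (the --name is only an added job label, which fio requires; the rest is the parameter list above):

    fio --name=randwrite-test --direct=1 --rw=randwrite --filename=/mnt/md0/test.io \
        --size=50G --ioengine=libaio --iodepth=64 --time_based --numjobs=4 \
        --bs=32K --runtime=60 --eta-newline=10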

                      VM Debian: 4 vCPU, 4 GB Memory, tools installed
                      VM Windows: 2/4 vCPU, 4 GB Memory, tools installed

                      Results:

• First line: throughput/bandwidth (MiB/s)
• Second line: IOPS, in kIOPS (Linux only)

[Performances.png: benchmark results]

                      Considerations:

• On bare metal I get full disk performance: approx. double the read speed thanks to RAID1.
• In a VM, bandwidth and IOPS are approx. 20% of the bare-metal values.
• In a VM, the bottleneck is the tapdisk process (CPU at 100%), which can handle approx. 1900 MB/s.
• BartLanz @Paolo

Just wanted to say thank you for this thread. I am in the midst of evaluating ESXi alternatives and am trying Proxmox as well as XCP-ng.

I just installed XCP-ng this morning and have been trying to figure out whether I should be doing hardware RAID vs. ZFS RAID. This thread provided the answer I needed: hardware RAID on my Dell PowerEdge servers is easy and reliable.

                        Thank you again!
