
VM performance: VMware 8 vs XCP-ng 8.2.1

nikade (Top contributor):

Hello guys,

We've been playing with the VMware import tool in XO, and after a lot of fighting we finally have it working.
First, we had some weird issues when XO was not in the same VLAN as the VMware source host and the XCP-ng destination host, so we had to redesign our lab to make that work.
Then there was a strange problem where XO never actually started the import; sometimes there was a brief error, and sometimes just nothing. Clicking Import again actually started the import.

Now one of my colleagues mentioned that the VMs feel a bit sluggish. I never noticed anything myself, but he mainly runs Windows and I mainly run Linux, so maybe that's why I didn't notice.

He is sure this is due to disk performance, so we booted up a VM on our VMware host that we had successfully migrated to XCP-ng (both hosts have a Samsung PM893 960 GB SSD for the SR/datastore, the OS sits on an NVMe drive, a single Xeon E3-1225 v5 @ 3.30GHz, and 64 GB DDR3 RAM).

This VM runs Windows Server 2022; on VMware we have the latest 8.0 VMware Tools installed, and on XCP-ng we have Citrix VM Tools 9.3.1 installed. The VM has 2 vCPUs, 4 GB RAM, and a 50 GB disk. This is bench32 on VMware:

[screenshot: bench32 results on VMware]

And this is on XCP-ng:

[screenshot: bench32 results on XCP-ng]

The difference is not huge, but enough to worry us a bit. I know about the SMAPIv1 limitations and history, and I also know SMAPIv3 is in progress, but is anyone else seeing this, or am I alone?

archw:

I did disk speed tests on about thirty VMs as I moved from ESXi 8 to XCP-ng (both 8.2.1 and 8.3). In about 60% of the tests ESXi was faster, and in 40% XCP-ng was faster. I used CrystalDiskMark in my tests. When the test disk was a 1 GB disk, ESXi was a lot faster, but when I changed the test disks to 8 GB the results were split and there was not much of a difference between winner and loser.

Andrew (Top contributor) @nikade:

@nikade Here are some results I get from Windows 10:

HP DL360p G8, E5-2680v2, and 10 Gb Ethernet with the SR on NFS (TrueNAS):
[screenshot: ATTO results, NFS SR]

Asus PN63, i7-11370H, and local NVMe (RAID 1) with the SR on EXT4:
[screenshot: ATTO results, local NVMe SR]

Theoi-Meteoroi:

I would take a look at the disk scheduler in use (lsblk -t) and perhaps change it to deadline; a quick sketch of how to check and switch it follows below. CFQ is kind of a dog and a waste of time with an SSD.
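
A minimal sketch of checking and switching the scheduler at runtime; the device name sda is an assumption, so check yours with lsblk first (older kernels offer noop/cfq/deadline, newer multi-queue kernels offer mq-deadline/none):

    # Show available schedulers for the device; the bracketed one is active
    cat /sys/block/sda/queue/scheduler

    # Switch to deadline (mq-deadline on newer kernels); not persistent across reboots
    echo deadline > /sys/block/sda/queue/scheduler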

KPS (Top contributor) @Theoi-Meteoroi:

I think this is the limitation of the (single-threaded) tapdisk. In my tests, XenServer and XCP-ng were always slower than vSphere on fast storage, but they were able to scale well with more VMs; the sketch below is one way to check that yourself.
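
One way to see the scaling behaviour, as a rough sketch assuming a Linux guest with fio installed (the file path and sizes are arbitrary): run the same job in one VM, then in four VMs at once, and compare the single-VM result against the aggregate:

    # Random 4k reads, direct I/O, 60 seconds; run in 1 VM, then in 4 VMs in parallel
    fio --name=randread --filename=/root/fio.test --size=8g \
        --ioengine=libaio --direct=1 --rw=randread --bs=4k \
        --iodepth=32 --numjobs=4 --runtime=60 --time_based \
        --group_reporting

If tapdisk is the per-disk bottleneck, each VM should land near the same number on its own, while the aggregate across VMs keeps growing.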

nikade (Top contributor) @KPS:

Thanks everyone for your replies. I'll look into the scheduler, but it might just be that SMAPIv1 is the bottleneck here.

lawrencesystems (Ambassador) @nikade:

@nikade
Something to consider: because of the way XCP-ng isolates each disk of each VM for better security, there can be some performance overhead. But because it's per VM (unless your use case is running only a single VM), this is less of an issue, as most people run many VMs.

olivierlambert (Vates 🪐 Co-Founder & CEO):

You can redo the bench with 4 virtual disks in RAID 0 and try again; that will give a more representative real-world value (when you have many VMs and disks). A sketch of the setup follows below.
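
A minimal sketch of that setup, assuming a Linux guest with four extra virtual disks attached (the xvdb..xvde device names are assumptions, so check with lsblk; on a Windows guest like the one benchmarked above you would stripe with Storage Spaces instead):

    # Stripe four virtual disks into one RAID 0 array
    mdadm --create /dev/md0 --level=0 --raid-devices=4 \
          /dev/xvdb /dev/xvdc /dev/xvdd /dev/xvde

    # Format, mount, and point the benchmark at the mount point
    mkfs.ext4 /dev/md0
    mkdir -p /mnt/bench
    mount /dev/md0 /mnt/bench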
