XCP-ng

    Poor VM performance after migrating from VMware to XCP-ng

    • markds @Danp

      @Danp

      No luck... still

      • I created a full clone of the migrated VM
      • Ensured the clone was shut down.
      • Created a new VM via the template (no disks, attached a CD-ROM, same MAC as the clone)
      • Edited the new VM
        - removed the reference to the CD-ROM
        - attached the cloned disk using the UUID reference (which is a cool feature; a rough CLI sketch follows this list)
        - changed the boot order so that the hard drive was first
      • ran our iptables script and hit the same issue (2 min 7 s), so I suspect the underlying issue still exists...
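
      For anyone following along, attaching an existing disk to a new VM by UUID can roughly be done from the host with the xe CLI along these lines (the UUIDs are placeholders, and the device/bootable values may need adjusting for your setup):

      # Find the UUID of the cloned VM's disk (VDI)
      xe vdi-list params=uuid,name-label

      # Attach that VDI to the new VM as its first, bootable disk
      xe vbd-create vm-uuid=<new-vm-uuid> vdi-uuid=<vdi-uuid> device=0 bootable=true type=Disk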

      How do I confirm that the Xen network drivers are in use?
      lsmod | grep xen
      xenfs 17639 1
      xen_netfront 22032 0
      xen_blkfront 17478 5

      Does that not indicate that, whilst the xen_netfront module is loaded, no device is using it?
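
      One way to cross-check which driver each interface is actually bound to (assuming the usual ethN naming) is to look at sysfs:

      # Show the kernel driver bound to each network interface
      for nic in /sys/class/net/eth*; do
          echo "$nic -> $(basename "$(readlink -f "$nic"/device/driver)")"
      done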

      • Danp Pro Support Team @markds

        @markds Looks that way. What output do you get for sudo dmesg | grep xen?

        • markds @Danp

          @Danp

          [ 0.824078] xen/balloon: Initialising balloon driver.
          [ 0.825817] xen-balloon: Initialising balloon driver.
          [ 0.848060] Switching to clocksource xen
          [ 0.861416] xen: --> pirq=16 -> irq=8 (gsi=8)
          [ 0.861486] xen: --> pirq=17 -> irq=12 (gsi=12)
          [ 0.861531] xen: --> pirq=18 -> irq=1 (gsi=1)
          [ 0.861580] xen: --> pirq=19 -> irq=6 (gsi=6)
          [ 0.861632] xen: --> pirq=20 -> irq=4 (gsi=4)
          [ 0.861692] xen: --> pirq=21 -> irq=7 (gsi=7)
          [ 0.916200] xen: --> pirq=22 -> irq=23 (gsi=23)
          [ 1.135771] xen: --> pirq=23 -> irq=28 (gsi=28)
          [ 1.936447] vbd vbd-5696: 19 xenbus_dev_probe on device/vbd/5696

          • markds @Danp

            @Danp

            Actually, maybe it's a red herring...

            ethtool -i eth3
            driver: vif
            version:
            firmware-version:
            bus-info: vif-2

            Also, I tested using iperf:
            vm <-> vm: 9.78 Gbits/sec (on the same xcp-ng host)
            vm <-> ESXi: 0.97 Gbits/sec (the network is 1 Gbit)
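
            For reference, the tests were along these lines (the address is a placeholder):

            # On the receiving VM
            iperf -s

            # On the sending VM, a 10-second TCP test towards the receiver
            iperf -c <receiver-ip> -t 10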

            As a final test I ran:
            rmmod xen_netfront
            and, as expected, the VMs lost network access.

            So maybe the issue is elsewhere

            • olivierlambert Vates 🪐 Co-Founder CEO

              Can you reproduce the issue by creating a new Linux VM from scratch on XCP-ng (i.e. installing the OS, etc.)? This will help us understand whether it's a setup issue or a VM-specific issue.

              • markds @Danp

                @Danp
                I should also correct my previous answer...

                Looks like some of the dmesg entries are in a different case...

                dmesg | grep -i xen
                [ 0.000000] DMI: Xen HVM domU, BIOS 4.13 04/11/2024
                [ 0.000000] Hypervisor detected: Xen HVM
                [ 0.000000] Xen version 4.13.
                [ 0.000000] Xen Platform PCI: I/O protocol version 1
                [ 0.000000] Netfront and the Xen platform PCI driver have been compiled for this kernel: unplug emulated NICs.
                [ 0.000000] Blkfront and the Xen platform PCI driver have been compiled for this kernel: unplug emulated disks.
                [ 0.000000] ACPI: RSDP 0x00000000000EA020 00024 (v02 Xen)
                [ 0.000000] ACPI: XSDT 0x00000000FC00A7C0 00044 (v01 Xen HVM 00000000 HVML 00000000)
                [ 0.000000] ACPI: FACP 0x00000000FC00A370 000F4 (v04 Xen HVM 00000000 HVML 00000000)
                [ 0.000000] ACPI: DSDT 0x00000000FC001040 092A3 (v02 Xen HVM 00000000 INTL 20160527)
                [ 0.000000] ACPI: APIC 0x00000000FC00A470 00260 (v02 Xen HVM 00000000 HVML 00000000)
                [ 0.000000] ACPI: HPET 0x00000000FC00A750 00038 (v01 Xen HVM 00000000 HVML 00000000)
                [ 0.000000] ACPI: WAET 0x00000000FC00A790 00028 (v01 Xen HVM 00000000 HVML 00000000)
                [ 0.000000] Booting paravirtualized kernel on Xen HVM
                [ 0.000000] Xen HVM callback vector for event delivery is enabled
                [ 0.113732] Xen: using vcpuop timer interface
                [ 0.113741] installing Xen timer for CPU 0
                [ 0.390521] installing Xen timer for CPU 1
                [ 0.492037] installing Xen timer for CPU 2
                [ 0.590329] installing Xen timer for CPU 3
                [ 0.918283] xen/balloon: Initialising balloon driver.
                [ 0.920100] xen-balloon: Initialising balloon driver.
                [ 0.952068] Switching to clocksource xen
                [ 0.967516] xen: --> pirq=16 -> irq=8 (gsi=8)
                [ 0.967593] xen: --> pirq=17 -> irq=12 (gsi=12)
                [ 0.967645] xen: --> pirq=18 -> irq=1 (gsi=1)
                [ 0.967696] xen: --> pirq=19 -> irq=6 (gsi=6)
                [ 0.967755] xen: --> pirq=20 -> irq=4 (gsi=4)
                [ 0.967818] xen: --> pirq=21 -> irq=7 (gsi=7)
                [ 1.025049] xen: --> pirq=22 -> irq=23 (gsi=23)
                [ 1.228952] xen: --> pirq=23 -> irq=28 (gsi=28)
                [ 1.571668] XENBUS: Device with no driver: device/vbd/5696
                [ 1.573958] XENBUS: Device with no driver: device/vbd/768
                [ 1.577690] XENBUS: Device with no driver: device/vif/0
                [ 1.579893] XENBUS: Device with no driver: device/vif/1
                [ 1.584074] XENBUS: Device with no driver: device/vif/2
                [ 1.588143] XENBUS: Device with no driver: device/vif/3
                [ 2.040651] vbd vbd-5696: 19 xenbus_dev_probe on device/vbd/5696
                [ 2.152854] Initialising Xen virtual ethernet driver.

                • markds @olivierlambert

                  @olivierlambert
                  So I copied the firewall rules script over to another Linux VM (running the current Debian version) and it worked properly (finished in 5 seconds).

                  It would seem that all the VMs that are not performing well are based on legacy versions of Debian.

                  I would have thought a kernel version of 3.2.102 would have been recent enough to get decent performance.

                  Do you know of any issues where Xen performs badly for older Debian versions?

                  • olivierlambert Vates 🪐 Co-Founder CEO

                    I would have thought a kernel version of 3.2.102 would have been recent enough to get decent performance.

                    😬 Kernel 3.2 came out in 2012 and has been EOL since 2018. That's very old and insecure (I doubt anyone has backported any security patches since then). That's 6 years' worth of CVEs.

                    I don't know if PV mode would be better. Is it a 32-bit or 64-bit Debian?
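
                    You can check quickly in the guest with something like:

                    # Kernel architecture: x86_64 means 64-bit, i686/i586 means 32-bit
                    uname -m

                    # Architecture the Debian userland was built for (amd64 or i386)
                    dpkg --print-architecture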

                    • rtjdamen @markds

                      @markds We have seen a major improvement when booting a VM in UEFI. For some reason, VMs created on VMware with BIOS are terribly slow on XCP-ng; switching to UEFI made them run normally again.
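
                      If you want to try it, the firmware mode can be switched from the host CLI with something along these lines (the VM must be halted, the guest must be able to boot via UEFI, and the UUID is a placeholder):

                      # Switch the VM's firmware from BIOS to UEFI
                      xe vm-param-set uuid=<vm-uuid> HVM-boot-params:firmware=uefi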

                      • Tristis Oris Top contributor @rtjdamen

                        @rtjdamen Is switching from BIOS to UEFI possible? The VM can't start up after that.

                        • rtjdamen @Tristis Oris

                          @Tristis-Oris You need to change the boot records for this. On Windows we use a specific tool that converts the MBR disk to a GPT disk. I am not sure how the process works on Linux, but I know it can be done there as well.
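
                          From what I understand, a rough outline on Linux would be something like the following (take a full backup first; this assumes /dev/xvda is the boot disk and that there is room for an EFI System Partition):

                          # gdisk converts the MBR partition table to GPT when changes are written
                          gdisk /dev/xvda

                          # After creating and mounting an EFI System Partition at /boot/efi,
                          # install the EFI variant of GRUB and regenerate its config
                          apt-get install grub-efi-amd64
                          grub-install --target=x86_64-efi --efi-directory=/boot/efi
                          update-grub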
