
    XCP-ng 8.0.0 Beta now available!

    • ruskofd

      Just updated my homelab server from XCP-ng 7.6 to XCP-ng 8.0 Beta, so far so good. I also tested the new experimental UEFI mode with a Windows VM, and it seems good too.

      I also tested the new XOA deployment through the Web interface of my host, perfect!

      We'll see how it goes over the coming week 😉
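
      For anyone else who wants to try the UEFI mode, this is roughly the switch I used (a minimal sketch with the xe CLI; treat the parameter name as an assumption and check the 8.0 release notes):

      # switch an HVM guest from BIOS to the experimental UEFI firmware
      xe vm-param-set uuid=<VM UUID> HVM-boot-params:firmware=uefi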

      • peder

        It does NOT work to migrate a paravirtualized (PV) CentOS 6 machine or a PVHVM CentOS 7 machine between two "servers" with Core i3-3110M CPUs on 8.0 beta.
        CentOS 6 throws "xenopsd, error from emu-manager: Invalid argument" and CentOS 7 throws "xenopsd, error from emu-manager: xenguest Invalid argument".

        It works on the exact same hardware in 7.6, so that seems to be a new "unsupported old CPU" limitation, unless it's a proper bug in 8.0 beta.

        I can migrate a Fedora 28 (HVM) VM on that hardware in 8.0 beta, so it appears to depend on which virtualization method the machine uses.
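
        In case anyone wants to compare modes on their own VMs, this is one way to tell them apart (a sketch with the standard xe CLI; as far as I know, empty output means PV):

        # empty output = PV guest; "BIOS order" = HVM/PVHVM guest
        xe vm-param-get uuid=<VM UUID> param-name=HVM-boot-policy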

        • stormi Vates 🪐 XCP-ng Team @peder

          @peder Thanks for testing. It indeed confirms our recent findings related to PV guests! We're working on it and will post here once it's fixed.

          • peder @stormi

            @stormi Nice to hear, thanks!

            • ronan-a Vates 🪐 XCP-ng Team

              @peder Fixed! The fix will be available (as soon as possible) in a future xcp-emu-manager package.

              • stormi Vates 🪐 XCP-ng Team

                I have updated https://github.com/xcp-ng/xcp/wiki/Test-XCP with lots of new tests for those who need ideas 🙂

                • s_mcleod

                  Just FYI: I have run CPU and PGBench benchmarks on XCP-ng 8 beta 1, with hyperthreading both enabled and disabled, running two identical VMs under different types of low, medium and heavy CPU load (a sample Sysbench invocation is sketched after the summary below).

                  Results are available here: https://github.com/sammcj/benchmark_results/tree/master/xcpng/8/hyperthreading_impact

                  TL;DR:

                  • Significant performance decrease (38.7725%) when running multithreaded Sysbench CPU benchmarks in parallel on two VMs with hyperthreading disabled.

                  • Significant performance decrease (16.96%) when running PGBench benchmarks under 'normal' load in parallel on two VMs with hyperthreading disabled.

                  • No significant performance decrease when running Phoronix Test Suite's Pybench and OpenSSL benchmarks in parallel on two VMs with hyperthreading disabled.
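
                  For context, a multithreaded Sysbench CPU run of the kind summarised above looks roughly like this (illustrative parameters only; the exact invocations are in the linked results):

                  # sysbench 1.x syntax: CPU prime benchmark across all vCPUs
                  sysbench cpu --threads=$(nproc) --cpu-max-prime=20000 run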

                  • stormi Vates 🪐 XCP-ng Team

                    yum update will now install the latest xcp-emu-manager, which fixes PV guest migration and brings better debug traces in case the emu-manager binary crashes. We'd be interested to hear if anyone still manages to make a migration fail.
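
                    To pick up the fix on each host (a minimal sketch; plain yum in dom0):

                    # update all XCP-ng packages, including xcp-emu-manager
                    yum update
                    # confirm which version is installed
                    rpm -q xcp-emu-manager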

                    Testing ideas still at https://github.com/xcp-ng/xcp/wiki/Test-XCP

                    • MajorTom @s_mcleod

                      @s_mcleod Hi, I'd like to run some basic benchmarks (still on 7.6, not 8.0.0) to compare a host before and after disabling SMT (hyper-threading).
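
                      For the SMT part, this is how I plan to disable it (a sketch, assuming the xen-cmdline helper shipped in dom0 and a Xen that understands smt=false; a host reboot is needed afterwards):

                      # add smt=false to the Xen boot command line, then reboot
                      /opt/xensource/libexec/xen-cmdline --set-xen smt=false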

                      I thought I'd use some hints from your document at https://github.com/sammcj/benchmark_results/tree/master/xcpng/8/hyperthreading_impact

                      But the "Test 2 - Sysbench Multithreaded Prime Benchmark" link (https://github.com/sammcj/benchmark_results/blob/master/xcpng/8/hyperthreading_impact/hyperthreading_impact/test_2_sysbench_prime.md) returns "404 page not found".

                      Maybe you'd want to correct the link? Thank you!

                      • peder @stormi

                        @stormi I just managed to make a migration fail using xcp-emu-manager-1.1.1-1 and xcp-ng-generic-lib-1.1.1-1 🙂

                        I have a PVHVM guest (CentOS 7) with static memory limits of 128M/2G and dynamic limits of 1G/1G, and the migration fails at about 20% with a "xenguest invalid argument" error.
                        It works if I set static max and dynamic max to the same value.

                        Migration of a PVHVM Fedora 28 guest with static 1G/2G and dynamic 1G/1G works, so it's possible the 128M static min is part of the problem in the CentOS case.

                        A PV CentOS 6 guest with static 512M/2G and dynamic 1G/1G also works.
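
                        For anyone trying to reproduce, the failing limits can be set like this (a sketch with the xe CLI; I believe it accepts MiB/GiB suffixes, otherwise pass plain bytes):

                        # failing combination: static 128M/2G, dynamic 1G/1G
                        xe vm-memory-limits-set uuid=<VM UUID> \
                          static-min=128MiB static-max=2GiB \
                          dynamic-min=1GiB dynamic-max=1GiB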

                        • stormi Vates 🪐 XCP-ng Team @peder

                          @peder Thanks! Could you make it fail once again, produce a bug status report on both hosts with xen-bugtool -y, and then either send the produced tarballs to the project contact address or upload them somewhere temporarily for us to download?
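
                          Something like this on each host (a quick sketch; the output path may vary with the version):

                          # collect a full status report non-interactively
                          xen-bugtool -y
                          # the tarball usually ends up under /var/opt/xen/bug-report/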

                          • olivierlambert Vates 🪐 Co-Founder & CEO

                            I just gave it a try here; I can't reproduce it with the same guest OS and memory settings.

                            Are you also doing Xen Storage Motion (i.e. migrating the disks along with the VM)?
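
                            For clarity, that's the difference between these two invocations (a rough sketch; parameter names from the xe CLI as I remember them, so double-check with xe help vm-migrate):

                            # plain live migration within a pool (storage stays put)
                            xe vm-migrate uuid=<VM UUID> host-uuid=<DEST HOST UUID> live=true
                            # storage motion: the disks are copied to the destination SR too
                            xe vm-migrate uuid=<VM UUID> remote-master=<DEST MASTER IP> \
                              remote-username=root remote-password=<PASSWORD> \
                              destination-sr-uuid=<DEST SR UUID> live=true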

                            • peder @stormi

                              @stormi I've placed the tarballs here: https://student.oedu.se/~peder/xcp-ng/
                              I changed the static min to 512M to match the Fedora case, but it still failed.

                              Olivier, I'm not using Xen Storage Motion, but I am using two old Lenovo L430 ThinkPads as "servers", so that could be part of the problem.

                              I'll install a new CentOS 7 guest and see if the problem persists.

                              • peder @olivierlambert

                                @olivierlambert The VM that fails migration seems to have been created in XCP-ng 7.6 using the "Other Media" template.
                                I made a new VM in XCP-ng 8 using the CentOS 7 template, and I can migrate that just fine with a larger static max.

                                • olivierlambert Vates 🪐 Co-Founder & CEO

                                  Can you provide the full VM record of the problematic one? With:

                                    xe vm-param-list uuid=<YOUR FAILING VM UUID>

                                  Also the same for the one that now works, so I can compare.
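
                                  For example, capturing both records to files makes them easy to diff (file names are just placeholders):

                                    xe vm-param-list uuid=<YOUR FAILING VM UUID> > failing-vm.txt
                                    xe vm-param-list uuid=<YOUR WORKING VM UUID> > working-vm.txt
                                    diff failing-vm.txt working-vm.txt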

                                  • peder @olivierlambert

                                    @olivierlambert Sure.
                                    I've put the param-list logs as well as the VMs (sans the disks) on https://student.oedu.se/~peder/xcp-ng/

                                    • olivierlambert Vates 🪐 Co-Founder & CEO

                                      So I imported your VM metadata into my lab pool, attached it to a VM disk with Debian (so we don't really care about the OS), and it worked 😆

                                      I also attached a disk from a previously working CentOS 7 VM; same thing: the migration worked 🐹

                                      • peder

                                        Weird.

                                        Maybe it's due to the hardware I'm using. I only have 8 GB of RAM in the servers, but the migration fails even when I'm not running any other VMs.
                                        And since it works when static max = dynamic max, it shouldn't be a RAM issue either.

                                        Unless the migration for some reason tries to allocate 2-3 times the amount of RAM when static and dynamic max differ.
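
                                        If someone wants to rule RAM out, the free memory on the destination host can be checked just before migrating (a sketch; if I recall correctly, the command reports bytes):

                                        xe host-compute-free-memory uuid=<DEST HOST UUID>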

                                        • s_mcleod @MajorTom

                                          @MajorTom Sorry about that, I've been (and still am) AFK:

                                          Corrected link: https://github.com/sammcj/benchmark_results/blob/master/xcpng/8/hyperthreading_impact/test_2_sysbench_prime.md

                                          • AllooTikeeChaat

                                            Folks... I was at a Citrix event today, and the morning featured a talk by LoginVSI on the impact of the latest Intel CPU security patches on EUC workloads and server scalability across the hypervisors. According to the presenter, all the hypervisors they tested (Hyper-V, VMware ESX and XenServer) were seeing an approximately 20-25% decrease in performance on Intel-based systems. What I found interesting was that VMware has developed a new scheduler, SCAv2, to help reduce the impact, although no numbers were mentioned. There was no mention of anything of the sort for XenServer or Hyper-V.

                                            https://blogs.vmware.com/performance/2019/05/new-scheduler-option-for-vsphere-6-7-u2.html

                                            According to that blog post, the impact using SCAv2 was reduced to 11%.
