XCP-ng


    XCP-ng 8.0.0 Beta now available!

    News
    • MajorTom @s_mcleod

      @s_mcleod Hi, I'd like to do some basic benchmarks (though not on 8.0.0, but still on 7.6) to compare a host before and after disabling SMT (hyper-threading).
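
      For reference, here's roughly how I plan to toggle it from dom0 (a sketch; it assumes the Xen smt= boot option is honoured by 7.6's patched Xen, which is worth verifying first):

        # Disable SMT at the hypervisor level, then reboot for it to take effect
        /opt/xensource/libexec/xen-cmdline --set-xen smt=false
        reboot

        # Re-enable it afterwards the same way
        /opt/xensource/libexec/xen-cmdline --set-xen smt=true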

      I thought I'd use some hints from your document at https://github.com/sammcj/benchmark_results/tree/master/xcpng/8/hyperthreading_impact

      But the "Test 2 - Sysbench Multithreaded Prime Benchmark" link (https://github.com/sammcj/benchmark_results/blob/master/xcpng/8/hyperthreading_impact/hyperthreading_impact/test_2_sysbench_prime.md) returns "404 page not found".

      Maybe you'd want to correct the link? Thank you!

      • peder @stormi

        @stormi I just managed to make migration fail using xcp-emu-manager-1.1.1-1 and xcp-ng-generic-lib-1.1.1-1 🙂

        I have a PVHVM guest (CentOS 7) with static memory limits of 128M/2G and dynamic limits of 1G/1G, and the migration fails at about 20% with a "xenguest invalid argument" error.
        It works if I set static max and dynamic max to the same value.

        Migration of a PVHVM Fedora 28 guest with static 1G/2G and dynamic 1G/1G works, so it's possible the 128M static min is part of the problem in the CentOS case.

        A PV CentOS 6 guest with static 512M/2G and dynamic 1G/1G also works.
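
        In case anyone wants to reproduce the failing layout, it can be set with something like this (a sketch; the UUID is a placeholder):

          # Recreate the failing case: static 128M/2G, dynamic 1G/1G
          xe vm-memory-limits-set uuid=<VM UUID> \
            static-min=128MiB dynamic-min=1GiB dynamic-max=1GiB static-max=2GiB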

        • stormi Vates 🪐 XCP-ng Team 🚀 @peder

          @peder Thanks! Could you make it fail once again, produce a bug status report on both hosts with xen-bugtool -y, and then send the produced tarballs to the project contact address, or upload them somewhere temporarily for us to download?

          • olivierlambert Vates 🪐 Co-Founder 🦸 CEO 🧑‍💼

            I just gave it a try here; I can't reproduce it with the same guest OS and memory settings.

            Are you also doing Xen Storage motion?

            • peder @stormi

              @stormi I've placed the tarballs here https://student.oedu.se/~peder/xcp-ng/
              I changed the static min to 512M, to match the Fedora case, but it still failed.

              Olivier, I'm not using Xen Storage motion, but I am using two old Lenovo L430 ThinkPads as "servers", so that could be part of the problem.

              I'll install a new C7 guest and see if the problem persists.

              • peder @olivierlambert

                @olivierlambert The VM that fails migration seems to have been created in XCP-ng 7.6 using the "Other Media" template.
                I made a new VM in XCP-ng 8 using the CentOS 7 template, and I can migrate that just fine with a larger static max.

                • olivierlambert Vates 🪐 Co-Founder 🦸 CEO 🧑‍💼

                  Can you provide the full VM record of the problematic one, with:

                  xe vm-param-list uuid=<YOUR FAILING VM UUID>

                  Also the same for the one that now works, so I can compare.
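
                  For instance (a sketch, with placeholder UUIDs), capture both records to files so they can be diffed:

                    xe vm-param-list uuid=<FAILING VM UUID> > failing-vm.txt
                    xe vm-param-list uuid=<WORKING VM UUID> > working-vm.txt
                    diff failing-vm.txt working-vm.txt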

                  • peder @olivierlambert

                    @olivierlambert Sure.
                    I've put the param-list logs as well as the VMs (sans the disk) on https://student.oedu.se/~peder/xcp-ng/

                    • olivierlambert Vates 🪐 Co-Founder 🦸 CEO 🧑‍💼

                      So I imported your VM metadata into my lab pool, attached a VM disk with Debian on it (so we don't really care about the OS), and it worked 😆

                      I also attached a disk of a previously working CentOS 7 VM, same thing: migration worked 🐹

                      • peder

                        Weird.

                        Maybe it's due to the hardware I'm using. I only have 8 GB RAM in the servers, but the migration fails even if I'm not running any other VM.
                        And since it works when static max = dynamic max, it shouldn't be a RAM issue either.

                        Unless the migration for some reason tries to allocate 2-3 times the amount of RAM when static and dynamic max differ.
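
                        One way to check that theory (a sketch) would be to watch free hypervisor memory on the destination host from dom0 while the migration runs:

                          # free_memory is reported in MiB
                          watch -n 1 'xl info | grep free_memory'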

                        • s_mcleod @MajorTom

                          @MajorTom Sorry about that, I've been (and still am) AFK:

                          Corrected link: https://github.com/sammcj/benchmark_results/blob/master/xcpng/8/hyperthreading_impact/test_2_sysbench_prime.md

                          • AllooTikeeChaat

                            Folks... I was at a Citrix event today, and the morning session was a talk by LoginVSI on the impact of the latest security patches for Intel CPUs on EUC workloads and on server scalability across the hypervisors. According to the presenter, all the hypervisors they were testing (Hyper-V, VMware ESX and XenServer) were seeing an approximately 20-25% decrease in performance on Intel-based systems. What I found interesting was that VMware has developed a new scheduler, SCAv2, to help reduce the impact, although no numbers were mentioned. There was no mention of anything of the sort for XenServer or Hyper-V.

                            https://blogs.vmware.com/performance/2019/05/new-scheduler-option-for-vsphere-6-7-u2.html

                            According to that blog post, the impact using SCAv2 was reduced to 11%.

                            • olivierlambert Vates 🪐 Co-Founder 🦸 CEO 🧑‍💼

                              This will come in Xen. In fact, work has already started. It's called "core scheduling", and it's the preliminary work before sync scheduling, which is a perfect solution (better performance AND better security) against the Intel CPU flaws.

                              Source: https://lists.xenproject.org/archives/html/xen-devel/2019-05/msg00370.html
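
                              For the curious, the patch series exposes this as a Xen boot parameter. Once it's merged and backported, enabling it might look something like this (a sketch based on the option name in the series, which could still change before release):

                                # Hypothetical until core scheduling actually ships in XCP-ng's Xen
                                /opt/xensource/libexec/xen-cmdline --set-xen sched-gran=core
                                reboot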

                              • AllooTikeeChaat @olivierlambert

                                @olivierlambert

                                Good to know that they're already working on it for Xen. I'm surprised that none of the Citrix bods at the event mentioned it. It looks like the best way to get around the massive performance hit without switching to AMD!

                                • olivierlambert Vates 🪐 Co-Founder 🦸 CEO 🧑‍💼 @AllooTikeeChaat

                                  @AllooTikeeChaat I'm always stunned by the fact that Citrix's communication on Xen is equal to /dev/null, despite all the great efforts made by the Xen devs (half of them are from Citrix!)

                                  So as I said, it's not there yet, but it's heading in that direction pretty well. I would expect this stuff to be backported in the future, because it's really powerful!

                                  • dariosplit

                                    When will the final version of XCP-ng 8.0.0 be available?

                                    • olivierlambert Vates 🪐 Co-Founder 🦸 CEO 🧑‍💼

                                      This will depend a bit on community feedback: more feedback means faster release 🙂

                                      • s_mcleod @olivierlambert

                                        @olivierlambert I'm sorry I'm not more available to bring more testing to the table this week; I am AFK.

                                        • olivierlambert Vates 🪐 Co-Founder 🦸 CEO 🧑‍💼

                                          No worries mate! To give a more precise answer, we might target an RC for the end of the month, maybe sooner, IDK yet.

                                          Then, if no big issue is spotted with the RC, the release will come soon after 🙂

                                          • peder

                                            Connecting an iSCSI SR in xsconsole fails if there's a discovery password.
                                            It works if I disable the authentication.

                                            It works in XenOrchestra if I attach it as a new SR (with auth) AND have the same discovery username and password in the ACL of the IQN.
                                            That still doesn't work in xsconsole.

                                            My iSCSI server uses targetcli-fb/LIO.
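
                                            For comparison, this is roughly the CLI equivalent of the attach that works from XenOrchestra (a sketch; the target address, IQN, SCSI ID and credentials are placeholders):

                                              # Create a shared iSCSI SR with CHAP credentials
                                              xe sr-create name-label=iscsi-sr shared=true type=lvmoiscsi \
                                                device-config:target=192.0.2.10 \
                                                device-config:targetIQN=iqn.2019-05.example:target1 \
                                                device-config:SCSIid=<SCSI ID> \
                                                device-config:chapuser=<user> \
                                                device-config:chappassword=<password>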
