XCP-ng 8.0.0 Beta now available!
-
Yup, but before (0.8) it wasn't good for performance at all, due to cache pollution (no O_DIRECT support).
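A quick way to check whether a given storage stack accepts O_DIRECT at all is dd: with oflag=direct the write bypasses the page cache, and it fails with "Invalid argument" when the underlying filesystem rejects the flag (the file name is just an example):
dd if=/dev/zero of=./direct-test bs=1M count=256 oflag=direct
-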
Any chance to get a newer lsblk that supports JSON output?
It would be great for plugins and would make parsing the output much easier. Currently installed on XCP-ng 8 beta: util-linux-2.23.2-52.el7_5.1.x86_64
(something later than v2.27?)
https://git.devuan.org/CenturionDan/util-linux/commit/4a102a4871fdb415f4de5af9ffb7a2fb8926b5d1
... ah, forget it, I see, CentOS has been shipping the old versions for a long time ...
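In the meantime, lsblk's -P/--pairs mode is scriptable even with this old util-linux; a rough sketch, with an arbitrary column list:
lsblk -P -o NAME,SIZE,TYPE,MOUNTPOINT
# prints parse-friendly lines like: NAME="sda" SIZE="64G" TYPE="disk" MOUNTPOINT=""
# on util-linux >= 2.27 the JSON equivalent would be: lsblk --json -o NAME,SIZE,TYPE,MOUNTPOINT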
-
@cocoon said in XCP-ng 8.0.0 Beta now available!:
... ah, forget it, I see, CentOS has been shipping the old versions for a long time ...
Yeah, the chances that we'd change the version of such a low-level package just for added functionality are very low.
-
@stormi Yes, and I totally understand that ... I just thought at first that since CentOS 7.5 is new, there must be something newer ... but no.
-
Hi,
Is it possible to install XCP-ng inside XCP-ng, just for tests?
-
Sure, you just need to enable Nested Virtualization when you create your VM, and that's it.
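From the CLI, a rough sketch (the UUID is hypothetical and the VM must be halted; exp-nested-hvm is the experimental platform key to the best of my knowledge, so treat the exact key name as an assumption that may vary between versions):
xe vm-param-set uuid=<vm-uuid> platform:exp-nested-hvm=true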
-
Just updated my homelab server from XCP-ng 7.6 to XCP-ng 8.0 Beta; so far so good. I also tested the new experimental UEFI mode with a Windows VM, which seems good too.
I also tested the new XOA deployment through the Web interface of my host: perfect!
We'll see how it goes during the following week.
-
Migrating a paravirtualized (PV) CentOS 6 machine or a PVHVM CentOS 7 between two "servers" with Core i3-3110M CPUs does NOT work in 8.0 beta.
CentOS 6 throws "xenopsd, error from emu-manager: Invalid argument" and CentOS 7 "xenopsd, error from emu-manager: xenguest Invalid argument".
It works on the exact same hardware in 7.6, so it seems to be a new "unsupported old CPU" limitation, unless it's a proper bug in 8.0b.
I can migrate a Fedora 28 (HVM) on that hardware in 8.0b, so it appears to depend on which virtualization method the machine uses.
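For reference, the kind of intra-pool migration involved, via the xe CLI (names hypothetical):
xe vm-migrate vm=<vm-name-label> host=<destination-host>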
-
@peder Thanks for testing. Indeed, it confirms our recent findings related to PV guests! We're working on it and will post here once it's fixed.
-
@stormi Nice to hear, thanks!
-
@peder Fixed! This fix will be available (as soon as possible) in a future xcp-emu-manager package.
-
I have updated https://github.com/xcp-ng/xcp/wiki/Test-XCP with lots of new tests for those who need ideas
-
Just FYI - I have performed CPU and PGBench benchmarks on XCP-ng 8 beta 1, both with Hyperthreading enabled and disabled when running two identical VMs under different types of low, medium and heavy CPU load.
Results are available here: https://github.com/sammcj/benchmark_results/tree/master/xcpng/8/hyperthreading_impact
TL;DR:
- Significant performance decrease (38.7725%) when running multithreaded Sysbench CPU benchmarks in parallel on two VMs when hyperthreading is disabled.
- Significant performance decrease (16.96%) when running PGBench 'normal' load benchmarks in parallel on two VMs when hyperthreading is disabled.
- No significant performance decrease when running Phoronix Test Suite's Pybench and OpenSSL benchmarks in parallel on two VMs when hyperthreading is disabled.
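For anyone wanting to reproduce, this is roughly the shape of the commands involved; the thread counts and durations here are assumptions, see the linked repo for the actual methodology:
# CPU-bound prime computation across 8 threads (sysbench 1.x syntax)
sysbench cpu --threads=8 --cpu-max-prime=20000 run
# TPC-B-like PostgreSQL load: 8 clients, 4 worker threads, 60 seconds (after pgbench -i bench)
pgbench -c 8 -j 4 -T 60 bench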
-
yum update
will now install the latest xcp-emu-manager, which fixes PV guest migration and brings better debug traces in case the emu-manager binary crashes. We'd be interested if anyone manages to make a migration fail.
Testing ideas still at https://github.com/xcp-ng/xcp/wiki/Test-XCP
-
@s_mcleod Hi, I'd like to do some basic benchmarks (on 7.6 still, though, not 8.0.0) to compare a host before and after disabling SMT (hyper-threading).
I thought I'd use some hints from your document at https://github.com/sammcj/benchmark_results/tree/master/xcpng/8/hyperthreading_impact
But the "Test 2 - Sysbench Multithreaded Prime Benchmark" link (https://github.com/sammcj/benchmark_results/blob/master/xcpng/8/hyperthreading_impact/hyperthreading_impact/test_2_sysbench_prime.md) returns "404 page not found".
Maybe you'd want to correct the link? Thank you!
-
@stormi I just managed to make migration fail using xcp-emu-manager-1.1.1-1 and xcp-ng-generic-lib-1.1.1-1.
I have a PVHVM guest (CentOS 7) with static memory limits = 128M/2G and dynamic = 1G/1G, and the migration fails after about 20% with "xenguest invalid argument".
It works if I set static max and dynamic max to the same value.
Migration of a PVHVM Fedora 28 with static 1G/2G and dynamic 1G/1G works, so it's possible the 128M static min is part of the problem in the CentOS case.
A PV CentOS 6 with static = 512M/2G and dynamic = 1G/1G also works.
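For the record, the workaround looks roughly like this via the xe CLI (UUID hypothetical; I believe xe accepts MiB/GiB suffixes, but treat that as an assumption):
# align static max with dynamic max so the migration succeeds
xe vm-memory-limits-set uuid=<vm-uuid> static-min=512MiB dynamic-min=1GiB dynamic-max=1GiB static-max=1GiB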
-
@peder Thanks! Could you make it fail once again, then produce a bug status report on both hosts with
xen-bugtool -y
and send the produced tarballs to the project contact address, or upload them somewhere temporarily for us to download?
-
I just gave it a try here; I can't reproduce with the same guest OS and memory settings.
Are you also doing Xen Storage motion?
-
@stormi I've placed the tarballs here: https://student.oedu.se/~peder/xcp-ng/
I changed the static min to 512M, to match the Fedora case, but it still failed.
Olivier, I'm not using Xen Storage motion, but I am using two old Lenovo L430 ThinkPads as "servers", so that could be part of the problem.
I'll install a new CentOS 7 guest and see if the problem persists.