Your most loved* basic client XCP-ng Center 8.0.0 is now released:
https://github.com/xcp-ng/xenadmin/releases/tag/v.8.0.0
*of course XenOrchestra is the most loved advanced client
Hello everyone,
To avoid more and more confusion about the pieces of the XCP-ng universe, I'll try to describe what is what.
The Hypervisor, the Host, the Virtualisation Server: XCP-ng
This is the thing you install on bare hardware; it consists of:
It's exactly the same concept as CITRIX XenServer or CITRIX Hypervisor. XCP-ng is derived from them; something like a fork, but not as hard a fork.
The basic management software, the Windows client, the thing with the tree: XCP-ng Center
It's derived from CITRIX XenCenter.
The advanced management software, the web client, the backup solution and more: XenOrchestra
It's not derived from anything (as far as I know).
The ready-to-use XenOrchestra, the appliance: XOA
The company behind it all: Vates
Olivier Lambert and his company Vates started all the jazz. They provide pro support for XCP-ng (the host) and XenOrchestra. They help to sign XCP-ng Center and the XCP-ng Windows Client Tools (and drivers) with a proper EV certificate.
Happy Easter!
I hope you are all well and in good condition.
https://github.com/xcp-ng/xenadmin/releases/tag/v20.04.00.32
Hello Community!
I hope you all had a good start to the year 2020!
I decided to change the way I release XCP-ng Center due to the lack of other people participating in development. In the future I will only release builds based on the master branch of CITRIX XenCenter (https://github.com/xenserver/xenadmin).
Pros:
Cons:
With this change the version scheme will also change. I think I'll use the YEAR.MONTH scheme, so the next release will be "XCP-ng Center 2020.01".
The first step in this direction is the pre-release of the current nightly build 99.99.99.27: https://github.com/xcp-ng/xenadmin/releases/tag/v99.99.99.27
The "quick" release of XCP-ng Center 20.04.01 for compatibility with XCP-ng 8.2 ist alive:
https://github.com/xcp-ng/xenadmin/releases/tag/v20.04.01.33
@stormi Heyho! I'll try to find some time. But first I'll set up an XCP-ng 8.2 host
If you have problems installing the Citrix(TM) or XCP-ng Windows Guest Tools, please try the first working version of this tool:
https://schulzalex.de/builds/xcp-ng/XCP-ng-Windows-Guest-Tools-Cleaner_alpha_1.0.0.0.zip
Hello everyone!
I started to build a little tool that should help find problems before installing/updating the Guest Tools in a Windows VM. For now it's only a first start, but it could grow into a very useful tool.
Any constructive help or hint is very welcome!
Later (that's the plan) it should become part of a new Guest Tools installer.
Issue: https://github.com/xcp-ng/xcp/issues/152
GIT repository: https://github.com/xcp-ng/win-installer-ng
[02:27 xen19 ~]# uname -a
Linux xen19 4.19.142 #1 SMP Tue Nov 3 11:27:36 CET 2020 x86_64 x86_64 x86_64 GNU/Linux
[02:30 xen19 ~]# yum list installed | grep kernel
kernel.x86_64 4.19.19-7.0.9.1.xcpng8.2 @xcp-ng-updates
kernel-alt.x86_64 4.19.142-1.xcpng8.2 @xcp-ng-base
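If anyone wants to run the same check on their own host, these two commands are all it takes (rpm -q gives the same answer as yum list here):

uname -r                  # the kernel dom0 is actually running
rpm -q kernel kernel-alt  # the kernel packages that are installed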
memory graph so far:
looking very good!!!
@r1 our complete setup:
[FreeNAS NFS] <----shared-storage----> [Pool of 2 servers (xen22 + xen23)] ----XAPI---> [XO from sources] -----remote----> [FreeNAS NFS]
All [servers] are real hardware servers, no VMs involved.
Same chain of servers for xen19, except there are more pool members (and VMs).
@stormi I'm not completely sure if I did the ISO upgrade or not...
But it's a good idea to reinstall the pool master from scratch...
Good news from the kernel-alt (server xen19): No RAM leaks so far
@stormi on server xen19 I think so, on server xen22 I'm not sure
I looked more closely at my memory graphs and saw that the memory baseline increases every night:
"bump" every day:
closer look in week 53:
Dec 31 - Jan 01:
Our backups run from 18:00 until 3 or 4 in the morning (including coalesce).
--> maybe the heavy I/O load leads to memory leaks "somewhere"?
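In case it helps others to reproduce the graphs: here is a minimal sketch of the kind of logger I'd use to correlate dom0 memory with the backup window (the log path is just an example):

#!/bin/sh
# sample dom0 memory every 5 minutes; /var/log/dom0-mem.log is just an example path
while true; do
    printf '%s %s\n' "$(date '+%F %T')" \
        "$(grep -E '^(MemFree|Slab):' /proc/meminfo | tr '\n' ' ')" >> /var/log/dom0-mem.log
    sleep 300
done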
@r1 yes, every "Installed" line in yum.log is an upgrade of XCP-ng.
Problems started with XCP-ng 8.x
@r1 in general we do not restart many of our VMs; it's all very static, only manually operated
xen19 is now rebooted (we need it in production) with kernel-alt - highest id is currently 4
xen22 (pool master of another affected pool) - highest id is currently 30
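(For context: if I read the ids right, they are the Xen domain ids, which increase with every VM start. You can check them in dom0 with:

list_domains   # dom0 helper on XCP-ng/XenServer; prints the id of every running domain
xl list        # alternative view with names, memory and vcpus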
memory graphs of xen22
yum.log of xen22 (here, too, the problem started after installing kernel-4.19.19-6.0.10.1.xcpng8.1.x86_64)
yum.log.5.gz:Dec 19 00:52:47 Updated: kernel-4.4.52-4.0.12.x86_64
yum.log.3.gz:Nov 08 10:07:40 Updated: kernel-4.4.52-4.0.13.x86_64
yum.log.1:Apr 10 20:31:01 Installed: kernel-4.19.19-6.0.10.1.xcpng8.1.x86_64
yum.log.1:Aug 31 23:10:50 Updated: kernel-4.19.19-6.0.11.1.xcpng8.1.x86_64
yum.log.1:Dec 11 18:00:54 Updated: kernel-4.19.19-6.0.12.1.xcpng8.1.x86_64
yum.log.1:Dec 19 12:52:00 Updated: kernel-4.19.19-6.0.13.1.xcpng8.1.x86_64
yum.log.1:Dec 19 12:54:13 Updated: kernel-4.19.19-7.0.9.1.xcpng8.2.x86_64
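For reference, this kernel history comes straight out of the rotated yum logs; zgrep reads the .gz archives too, so one command covers them all:

zgrep -E 'Installed: kernel|Updated: kernel' /var/log/yum.log*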
I looked in my yum.log on this server (xen19):
our problems started exactly with "Apr 10 18:10:29 Installed: kernel-4.19.19-6.0.10.1.xcpng8.1.x86_64"
yum.log.4.gz:Oct 03 17:35:54 Installed: kernel-4.4.52-4.0.7.1.x86_64
yum.log.4.gz:Nov 20 18:29:29 Updated: kernel-4.4.52-4.0.12.x86_64
yum.log.2.gz:Oct 10 20:19:31 Updated: kernel-4.4.52-4.0.13.x86_64
yum.log.1:Apr 10 18:10:29 Installed: kernel-4.19.19-6.0.10.1.xcpng8.1.x86_64
yum.log.1:Jul 07 17:46:34 Updated: kernel-4.19.19-6.0.11.1.xcpng8.1.x86_64
yum.log.1:Dec 10 17:59:07 Updated: kernel-4.19.19-6.0.12.1.xcpng8.1.x86_64
yum.log.1:Dec 19 13:53:39 Updated: kernel-4.19.19-6.0.13.1.xcpng8.1.x86_64
yum.log.1:Dec 19 13:55:20 Updated: kernel-4.19.19-7.0.9.1.xcpng8.2.x86_64
yum.log:Jan 18 17:35:07 Installed: kernel-alt-4.19.142-1.xcpng8.2.x86_64
I noticed in my monitoring graphs that since we have had this issue, swap is not used like it was before:
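(A quick way to snapshot the current swap situation in dom0, for anyone who wants to compare:

free -m                       # the Swap: line shows total/used/free in MiB
cat /proc/sys/vm/swappiness   # how willing the kernel is to swap; the Linux default is 60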
We also have an issue with growing control domain memory:
Today I'm installing the alternate kernel on one of our pool masters to see if that resolves our issue.
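For anyone who wants to try the same: on XCP-ng 8.x the alternate kernel is a normal package from the standard repos, so the install is roughly this (reboot afterwards and pick the alt kernel entry; check the XCP-ng docs if the boot entry does not show up):

yum install kernel-alt   # installs next to the stock kernel, both stay installed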