Based on Citrix's hotfix XS82E030, here's a bugfix kernel update. I don't think it will change much for most hosts, except in some specific cases.
Previous kernel updates (which fixed network performance issues for FreeBSD and sometimes other VMs) may have reduced performance in some situations, according to Citrix. Based on the patches, it looks like it's related to IRQ affinity and cross-domain networking. Here's the patch: https://github.com/xcp-ng-rpms/kernel/blob/master/SOURCES/0001-xen-events-fix-setting-irq-affinity.patch
Tools that need to make the ioperm syscall, such as Supermicro Update Manager (SUM), were crashing in dom0. This update should fix that.
An additional dependency was added to the perf RPM (not installed by default) so that it can produce backtraces when you run it on binaries in dom0.
A patch fixing CVE-2021-29154 was added. It's not considered a security update because it does not fix an exploitable vulnerability; it's extra defence in depth.
How to update (XCP-ng 8.2 only)
yum update kernel --enablerepo=xcp-ng-testing
Version that should be installed: 4.19.19-126.96.36.199.xcpng8.2
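To check whether a host is already on the expected build, you can compare version strings with `sort -V`. A minimal sketch; the "current" value below is hard-coded as an example of an older build, and on a real host you would query the RPM instead:

```shell
expected="4.19.19-126.96.36.199.xcpng8.2"
# Example of an older build; on a real dom0 use something like:
#   current="$(rpm -q --qf '%{VERSION}-%{RELEASE}' kernel)"
current="4.19.19-7.0.15.1.xcpng8.2"

if [ "$current" = "$expected" ]; then
  status="up to date"
elif [ "$(printf '%s\n' "$expected" "$current" | sort -V | head -n1)" = "$current" ]; then
  # current sorts before expected -> older build
  status="update needed"
else
  status="newer than expected"
fi
echo "$status"
```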
What to test
Installation of the update, normal use, no obvious regressions...
Plus the changes described above if you're in a situation that allows it.
Test window before release
None defined at the moment. Since this is not a security update, I'll wait for more updates to be ready before officially pushing the next update train. But feedback is always useful as soon as it can be provided.
A new ISO is available here to test IPv6 in dom0.
aligned patches with Citrix upstream merges
fixes regarding tunnels, SR-IOV networks, VLANs and bonds when created from an IPv6 PIF
netinstall support over IPv6 (netinstall repo info coming soon)
NFS support in an IPv6 Pool
Do not use this in production: this is an experimental feature!
What to test?
upgrade from an already configured IPv6 host
netinstall, once a repo is available
what you'd normally do with an XCP-ng host
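When checking what the installer configured in dom0, one quick sanity check is that a global-scope IPv6 address is actually present. A small sketch; the sample text is inlined (with a documentation-prefix address) so the snippet is self-contained, but on a real host you'd pipe `ip -6 addr show` instead:

```shell
# Pull global-scope IPv6 addresses out of `ip -6 addr`-style output.
# 2001:db8::/32 is the reserved documentation prefix, used here as a stand-in.
sample='    inet6 2001:db8:1::10/64 scope global
    inet6 fe80::1/64 scope link'
addr="$(printf '%s\n' "$sample" | awk '$1 == "inet6" && /scope global/ {print $2}')"
echo "$addr"
```

If the only addresses shown are `fe80::` link-local ones, the IPv6 configuration didn't take.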
For fresh installs, do not forget to add the IPv6 RPM repo for future updates:
@ricardowec I completely second @olivierlambert here. Although there is potential for security issues with an OS that loses support, a hypervisor should be treated with extreme caution anyway. I personally never expose a hypervisor in any capacity directly to the internet (for inbound traffic anyway 🙂 ). This practice, alongside using very specific and often highly customised software packages, should keep everything running smoothly.
On top of all this, the underlying kernel, storage systems and such are all custom, so there is nothing really to worry about 🙂
I will be completely open and admit that I am very new to XCP-ng, but the team seems to be really responsive if you have an issue. I have never seen such fast replies, even from an unnamed big-brand competitor who charges us $$$ for support.
@hoerup Hi, I can't remember too much to be honest.
I created a Debian 10 VM with a 20GB disk, then set up all the stuff that needed to be common for the Ceph pool, pretty much following the Ceph documentation using the cephadm method - so Docker etc. This would be my 'Ceph admin VM'.
Once that was all sorted I cloned the VM 3 times for my actual Ceph pool and changed the hostname, static IP etc. I've got 3 hosts with Supermicro boards that have two SATA controllers on board, so on each one I passed through one of the controllers to the Ceph VM and then just deployed Ceph and followed the documentation. The only issues I ran into and any other tips are in the other post I linked to. Now Ceph is all containerised it all seems a bit too easy! Hope they're not my famous last words!! 😖 It does like a lot of RAM, so I've reduced the OSD limits down a bit and it's fine for me.
CephFS is working nicely, but the update deleted my previous secret in /etc, and I had to reinstall the extra packages, recreate the SR, and then obviously move the virtual disks back across and refresh.
Were you not able to attach the pre-existing CephFS SR? Depending on your answer, I'll take a look at the documentation or the driver.
No luck. I ended up with a load of orphaned disks with no name or description, just a UUID, so it was easier to restore the backups.
I guess this is because the test driver had the CephFS storage as an NFS type, so I had to forget it and then re-attach it as a CephFS type, which I guess it didn't like! But it's all correct now, so I guess this was just a one-off moving from the test driver.
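In case it helps anyone else doing the same migration, the forget/re-attach step can be sketched like this. It's a dry run (the commands are echoed, not executed); the UUID, server and path are hypothetical placeholders, and the device-config keys are my assumption from the CephFS SR docs, so double-check them against your driver version:

```shell
# Dry run: XE expands to `echo xe`, so commands are only printed.
# On a real host set XE="xe" and substitute your own values.
XE="echo xe"
SR_UUID="00000000-0000-0000-0000-000000000000"  # placeholder UUID

# 1. Forget the SR created with the old test driver (NFS type); this
#    detaches it from the pool without touching the data on CephFS.
$XE sr-forget uuid="$SR_UUID"

# 2. Re-attach the same storage with the proper CephFS type.
#    device-config keys are assumptions based on the XCP-ng CephFS docs.
$XE sr-create name-label="CephFS SR" type=cephfs shared=true \
  device-config:server=ceph.example.lan \
  device-config:serverpath=/xcp-sr
```

Note that forgetting (rather than destroying) the SR is what leaves the data intact, but as above, the re-attached VDIs may come back with no name or description.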
Anyway all sorted now and back up and running with no CephFS issues! 🙂