Refreshed XCP-ng 8.2.0 ISOs: 8.2.0-2 - testing
Testing and feedback thread for the refreshed 8.2 ISOs.
It's not a new release of XCP-ng or a service pack or anything. It's just about refreshed installation ISO images.
We are going to release updated installation images for XCP-ng 8.2. No precise date yet, but the plan is roughly: a few weeks after the Guest Secure Boot feature is released.
We already built a test installation ISO that passed the first basic internal tests, so you have an opportunity to test it, help detect any obvious (or not-so-obvious) regressions we may have missed, and give feedback.
Download it from here: https://nextcloud.vates.fr/index.php/s/G4rHRQJJieQpRQB
What changed from the previous XCP-ng 8.2 installation ISO
- Includes all the updates released up to now
- Includes testing versions of some packages
- Installer changes:
- Fixed installer being stuck after creating a Software RAID for the system
- Fixed the restore function failing to restore the bootloader in case of software RAID
- Removed the "2G RAM max" boot entry, which does not seem necessary anymore to boot computers with Ryzen APUs
- In BIOS mode, added a mention of the menuboot option, which few people knew about and which is very useful when you want to edit a boot entry manually before booting.
What to test
The things from the list above plus normal usage.
I'll use this one to test the 8.2-to-8.2 upgrade we talked about on the software RAID thread.
Cannot test the bugfixes on my system, but I did an ISO update of my two-host playlab (Dell OptiPlex 9010, XCP-ng 8.2.0 fully patched) with the updated installation image. The update process went well, but these are pretty plain and identical systems, with XCP01 being master and XCP02 being slave. The only difference between the hosts was that I had to confirm the NIC setup steps on the slave XCP02 but not on the master XCP01 (which was updated and rebooted first). Is that expected?
Apart from the update process, creating, starting, stopping and deleting (Linux) VMs works as usual, as does taking snapshots and doing VM and storage migrations. Will do a fresh install with the updated installation image in the coming days if time permits.
Did a complete wipe of my two-host playlab today and a fresh install with the updated installation image. Still cannot help with the bug fixes, but installation went as usual and there is nothing special I can report. Did a XOA quick deploy, created my pool of two hosts, connected NFS SRs (ISO and data), imported some VMs (OVA files) and moved them around a bit. Also created a new Debian 10 VM from scratch running XO installed from the 3rd-party script, and restored Windows and Linux VMs from backup. Looks good. Let me know if I can test anything special with my setup.
Nothing weird in the installer? Thanks again @gskger for your tests!
@olivierlambert No, everything looked quite normal. Is there anything I should pay special attention to?
I extended my test today by creating a softraid boot device with two identical SSDs for an XCP-ng clean install. The installer guided me through the install and softraid creation process without any hiccups or complaints. Did a XOA quick deploy after that for my now single-host pool (since I used the other host's boot SSD for the RAID), connected to the NFS SRs (ISO and data) and did some shutdowns/reboots of the host. Even disconnected one SSD of the softraid while the host/RAID was online (not recommended, but hey, it is a playlab), but XCP-ng and some running Linux VMs just kept going. Adding the disconnected SSD back into the RAID from the CLI was unspectacular (State is clean again). The only thing that somehow surprised me is that there was no information or warning about the degraded RAID status in XOA (or I missed it). How would I know (from XOA) that the boot device RAID is degraded?
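For reference, the re-add from the CLI described above can be sketched like this (a rough sketch only: the device names /dev/md127 and /dev/sdb are examples, not taken from the original post; always check cat /proc/mdstat first to identify the right array and disk):

```shell
# Sketch: re-add a disk that dropped out of the boot mirror.
# MDADM defaults to the real binary; set MDADM="echo mdadm" for a dry run.
MDADM="${MDADM:-mdadm}"

readd_disk() {
    array="$1"   # e.g. /dev/md127, the boot RAID
    disk="$2"    # e.g. /dev/sdb, the SSD that came back
    $MDADM --manage "$array" --add "$disk"   # kicks off a resync
    $MDADM --detail "$array"                 # State should return to "clean" once the resync finishes
}
```

On a real host you would run readd_disk /dev/md127 /dev/sdb and watch cat /proc/mdstat for the resync progress.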
No, just asking.
About the degraded RAID: it's in our backlog. Right now there's no API call to check the state (we have a PR open, IIRC).
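In the meantime, a degraded mirror can be spotted from dom0 by reading /proc/mdstat directly; this is plain Linux md behavior, not an XCP-ng API. A minimal sketch (the sample output below is illustrative):

```shell
# Sketch: detect a degraded md array by parsing /proc/mdstat.
# A healthy 2-disk mirror shows "[UU]"; a degraded one shows "[U_]" or "[_U]".
# This sample is illustrative; on a real host read /proc/mdstat itself.
mdstat_sample='md127 : active raid1 sda[0]
      52395008 blocks super 1.0 [2/1] [U_]'

check_mdstat() {
    # an underscore inside the status brackets means a missing member
    if echo "$1" | grep -q '\[U*_U*\]'; then
        echo "RAID degraded"
    else
        echo "RAID ok"
    fi
}

check_mdstat "$mdstat_sample"   # prints "RAID degraded" for the sample above
```

A cron job wrapping this check is one way to get an alert until XOA shows the state natively.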
@stormi So far, I have tested a fresh install using software RAID mirror creation. Works fine. Also noticed the new EFI boot kludge to correct the missing bootloader on Dell and other faulty UEFI firmware. (I used to always add the /boot/efi/EFI/boot/bootx64.efi file to correct this, since it also occurs on my ASUS-motherboard machine.) That works well. However, the newly refreshed secureboot-certs install default default default latest command is not working: the Python requests module is not found. BTW, I think the default option, where the command is just secureboot-certs install, should be equivalent to adding the default default default latest parameters, @beshleman. I'll continue to test and report back later.
@xcp-ng-justgreat Thank you for the update/testing. Looks like our test machines already had python-requests-2.6.0-10.el7.noarch installed, and we need to add it as a dependency to the RPM. I'll float the idea of "no args" == "default default default latest"; I am all for reducing the amount of typing necessary. Thanks again for the feedback.
@beshleman So, after yum --enablerepo=base install python-requests on each of my hosts, secureboot-certs install default default default latest works perfectly. (Cool that it installs the certs on each host in the pool with one invocation from any pool host.) Interesting that it doesn't install the three files to /var/lib/uefistored until you Secure Boot a VM on each host. I went looking for them and was initially confused because they were only written to the pool DB. Serves me right for looking under the hood! Looks like XCP-ng Secure Boot is ready for prime time. Great job!
@xcp-ng-justgreat Thanks! Your feedback helped us make it better and again, it is much appreciated.
@beshleman I tried the latest testing update @stormi published with the updated Secure Boot support, and it does indeed work properly, including allowing installation of Windows update KB4535680 on Server 2019 as previously cited. Also, a big thank you for adding the default parameter values to the improved secureboot-certs install command. Less is more. Very nice!
I'm having a different problem with MD RAID installs....
Two issues (related). This is for Linux Software MD RAID only (not hardware) when using the new testing installer.
If there is an MD RAID already set up on different drives (i.e. existing data storage drives), it gets in the way of creating a new boot/install MD RAID. The installer just can't create the RAID setup I want to use for the install. Yes, I can manually use the console to disable/remove the RAID device that's in the way.
Also, if I create an MD RAID for the install, the installer won't let me create a second RAID setup to use for data storage. Yes, I can manually create the second RAID from the console during or after the install.
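The console workaround for the first issue can be sketched as follows (device names are examples, not from the original post; this destroys the array's metadata, so double-check with cat /proc/mdstat and mdadm --detail before touching anything):

```shell
# Sketch: clear a pre-existing MD array that the installer trips over.
# MDADM defaults to the real binary; set MDADM="echo mdadm" for a dry run.
MDADM="${MDADM:-mdadm}"

clear_stale_raid() {
    array="$1"          # e.g. /dev/md126, the array in the way
    shift               # remaining args: its member devices, e.g. /dev/sdc /dev/sdd
    $MDADM --stop "$array"          # disassemble the array
    $MDADM --zero-superblock "$@"   # wipe member superblocks so it is not re-detected
}
```

For example, clear_stale_raid /dev/md126 /dev/sdc /dev/sdd would stop md126 and wipe the RAID metadata on both members, after which the installer no longer sees the old array.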
@andrew Yes indeed, those are known shortcomings:
- The installer does not wipe previous RAID setups.
- It does not offer to create another RAID for data storage. There's an extensive guide about this at https://xcp-ng.org/docs/guides.html#software-raid-storage-repository
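As a rough sketch of what the linked guide covers (device names, the SR label and the host UUID are example values, not from the guide itself): create the data array manually, then register it as an LVM SR with xe.

```shell
# Sketch: build a second MD mirror for data and turn it into an SR.
# Set MDADM="echo mdadm" and XE="echo xe" for a dry run.
MDADM="${MDADM:-mdadm}"
XE="${XE:-xe}"

create_data_raid_sr() {
    host_uuid="$1"   # find it with: xe host-list
    # example members /dev/sdc and /dev/sdd; md0 avoids the boot array's md127
    $MDADM --create /dev/md0 --level=1 --raid-devices=2 /dev/sdc /dev/sdd
    $XE sr-create host-uuid="$host_uuid" content-type=user type=lvm \
        shared=false name-label="RAID SR" device-config:device=/dev/md0
}
```

Run from the host console during or after the install; the new "RAID SR" then shows up in XOA like any other local SR.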