XCP-ng 8.2.0 beta now available!
And I expect that XCP-ng 8.2.0 will be perfectly stable well before we release it officially. The main missing piece is fixes to UEFI VM support.
I just pushed xcp-ng-pv-tools-8.2.0-1.xcpng8.2.noarch.rpm to the 8.2 base repository. Beta testers can install it with a simple yum update. No reboot required. Please check that:
- guest tools attached to existing VMs are detached during the update
- the new guest tools ISO is available for attaching to VMs after the update
- the updated README.txt looks fine (updated links to the docs, and added a FreeBSD section)
- tools can be installed to Linux and FreeBSD VMs
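If you want to script the "attach the new ISO to VMs" check across many VMs, a minimal sketch could look like this; the helper function, the guest-tools.iso name-label, and the exact xe output format are my assumptions, not part of the official procedure, so adjust them to what your host actually prints:

```shell
# Hypothetical helper: extract VM UUIDs from `xe vm-list params=uuid` output,
# which prints lines like "uuid ( RO)           : <uuid>".
list_vm_uuids() {
  awk -F': ' '/^uuid/ { print $2 }'
}
# On a host, the new guest tools ISO could then be inserted into every
# running VM with something like:
#   xe vm-list power-state=running params=uuid | list_vm_uuids | \
#     while read -r u; do xe vm-cd-insert cd-name="guest-tools.iso" uuid="$u"; done
```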
@demanzke With Xen Orchestra, the preferred client for XCP-ng, you'll see the Ubuntu logo and the VM's IP address.
The new implementation of UEFI support for VMs just landed through the last update to the beta.
yum update will install uefistored-0.2.1-1.xcpng8.2.x86_64; then a simple xe-toolstack-restart will take it into account.
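Before restarting the toolstack, it may be worth confirming the package really did update; here is a small sketch, assuming the package name from the post (the version_ge helper is mine, not part of XCP-ng, and relies on GNU sort -V):

```shell
# Hypothetical check: version_ge A B succeeds when A >= B in sort -V ordering
# (the smallest of the two versions sorts first, so the minimum must be B).
version_ge() {
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n 1)" = "$2" ]
}
# On a beta host, something like:
#   v=$(rpm -q --qf '%{VERSION}' uefistored)
#   version_ge "$v" 0.2.1 && xe-toolstack-restart
```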
Dedicated thread for tests, feedback, debug and discussion: https://xcp-ng.org/forum/post/32335
ChuckNorrison last edited by ChuckNorrison
Upgrade finally done, works flawlessly. Citrix Hypervisor 8.2 -> XCP-ng 8.2 beta.
I am using Ubuntu 20.04 Server VMs and a single Windows 10 VM (UEFI). NFS storage is used for backups and an SMB ISO library.
- Patch application successful
- Several Host restarts successful
- Create VM successful
- USB passthrough of an APC-1400 UPS to Ubuntu 20.04 successful
- Dynamic Memory settings successful
You make me happy
gskger last edited by gskger
Upgraded a three host homelab from XCP-ng 8.1 fully patched to XCP-ng 8.2 beta.
Things done/tested successfully:
- Upgrade process (via ISO)
- Copy/Move of VMs (even cross pool)
- Create/Change VMs (Linux, Windows (but not UEFI))
- Add/remove server to XO from source
- Create/Change networks
- Fresh install process (via ISO)
- Basic storage benchmark on dom0 and Debian VM using
- Add, Remove, Re-add NFS shares (ISO and storage)
This weekend I'll try to do some backup/restore tests (if time permits).
Nice work XCP-ng rocks
[edit: some more tests]
Guest tools ISO: Citrix no longer provides a guest tools ISO in Citrix Hypervisor; they replaced it with downloads from their website. We chose to retain the feature and still provide a guest tools ISO that you can mount to your VMs. Many thanks to the XAPI developers, who kept the related source code in the XAPI project for us rather than deleting it.
I think that the best solution would be to have them packaged by distros. It will also make installs much easier.
S.Pam last edited by
I agree here. Package the xcp-ng tools for the main distros out there: Fedora, CentOS, Ubuntu, Debian, openSUSE, FreeBSD, etc. Not sure about Windows, though. Perhaps an ISO is the easiest way to distribute the tools?
The tools are already packaged for many of the distros you mentioned. However, this is work that takes time and depends on the willingness of contributors from each distro, so a guest tools ISO remains useful in my view.
To everyone: don't forget to update your XCP-ng 8.2 beta hosts from time to time (yum update).
Recent updates include Xen and Linux kernel security updates, the latest uefistored that fixes UEFI support for Windows Server 2016's installer, updated Intel e1000e drivers to support more devices, updated ZFS packages to version 0.8.5, and more.
Updates done in my home lab and everything is working great so far
By the way, this thread is now superseded by the RC thread: https://xcp-ng.org/forum/topic/3769/xcp-ng-8-2-0-rc-now-available
@jmccoy555 Could I persuade you to make a post describing your setup and summarizing any findings?
@hoerup Hi, I can't remember too much to be honest.
I created a Debian 10 VM with a 20 GB disk and set up all the stuff that needed to be common for the Ceph pool, pretty much following the Ceph documentation using the cephadm method (so Docker etc.). This would be my 'Ceph admin VM'.
Once that was all sorted, I cloned the VM three times for my actual Ceph pool and changed the hostname, static IP, etc. I've got three hosts with Supermicro boards that have two SATA controllers on board, so on each one I passed through one of the controllers to the Ceph VM and then just deployed Ceph, following the documentation. The only issues I ran into, and any other tips, are in the other post I linked to. Now that Ceph is all containerised, it all seems a bit too easy! Hope those aren't my famous last words!! It does like a lot of RAM, so I've reduced the OSD limits a bit and it's fine for me.
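For anyone wanting to reproduce this, the clone-and-passthrough steps might be sketched as below; every VM name, subnet, and PCI address here is invented for illustration, and the xe commands are only shown in comments since they need a real pool to run against:

```shell
# Hypothetical naming scheme for the three cloned Ceph VMs (invented subnet).
make_ceph_hosts() {
  for i in 1 2 3; do
    printf 'ceph%d 192.168.0.%d\n' "$i" $((10 + i))
  done
}
# Each clone would then be created and handed its SATA controller roughly as:
#   xe vm-clone uuid=<admin-vm-uuid> new-name-label=ceph1
#   xe vm-param-set uuid=<clone-uuid> other-config:pci=0/0000:03:00.0
```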