@stormi Nope... my mistake. Now Ubuntu 20.04 and Windows Server 2016 boot with UEFI Secure Boot enabled.
# secureboot-certs install
No arguments provided to command install, default arguments will be used:
- PK: default
- KEK: default
- db: default
- dbx: latest
Successfully installed certificates to the XAPI DB for pool.
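In case it helps others: once the certificates are in the pool, Secure Boot is switched on per VM. A minimal sketch with placeholder UUIDs (platform:secureboot is the documented key for XCP-ng 8.2; the uefi-certificates pool field is my reading of the XAPI schema, so double-check on your version):

# xe vm-param-set uuid=<vm-uuid> platform:secureboot=true
# xe pool-param-get uuid=<pool-uuid> param-name=uefi-certificates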
Actually, on any recent enough Linux system (i.e. not many years old), the PV drivers are directly included in the kernel. Unless, maybe, a very specific distro decides that they don't want Xen support in their kernel.
So the tools you install on the VMs are merely an agent to make the VM more cooperative with the hypervisor, but they don't affect performance.
The situation is different on Windows systems where you need to install PV drivers to achieve better performance.
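To confirm this from inside a Linux guest, you can check that the PV front-end drivers are in the running kernel. A quick sketch (exact module and config names can vary between distros, and drivers built directly into the kernel won't show up in lsmod, hence the config check):

# lsmod | grep xen
# grep -E 'CONFIG_XEN_(BLKDEV|NETDEV)_FRONTEND' /boot/config-$(uname -r)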
CephFS is working nicely, but the update deleted my previous secret in /etc, and I had to reinstall the extra packages, recreate the SR, and then obviously move the virtual disks back across and refresh.
Were you not able to attach the pre-existing SR on CephFS? Depending on your answer, I'll take a look at the documentation or the driver.
No luck. I ended up with a load of orphaned disks with no name or description, just a UUID, so it was easier to restore the backups.
I guess this is because the test driver had the CephFS storage as an NFS type, so I had to forget the SR and then re-attach it as a CephFS type, which I guess it didn't like! But it's all correct now, so I guess this was just a one-off moving from the test driver.
Anyway all sorted now and back up and running with no CephFS issues! 🙂
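For anyone else moving off the test driver, the forget/recreate cycle described above looks roughly like this, going by the XCP-ng CephFS SR docs. Treat it as a sketch: the UUIDs, monitor address and share path are placeholders, and the secret file is the one the docs mention (the one the update wiped for me):

# xe pbd-unplug uuid=<pbd-uuid>
# xe sr-forget uuid=<old-sr-uuid>
# echo '<cephfs-secret>' > /etc/ceph/admin.secret
# xe sr-create type=cephfs name-label=CephFS-SR device-config:server=<mon-ip> device-config:serverpath=/xcpsr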
Initial brief test seems ok 👍
Will see if I can do more of the tests later...
Updated from 8.1 via yum, which caused Windows 10 and Windows Server 2019 to hang on the Tiano logo.
Interestingly, a Debian 10 UEFI VM worked fine...
After the update to uefistored, both Windows VMs started in recovery and did whatever it is Windows does besides spinning dots on your screen 🤔
After a reboot, both Windows 10 2004 and Windows Server 2019 booted just fine 👍
@hoerup Hi, I can't remember too much to be honest.
I created a Debian 10 VM with a 20GB disk and set up all the stuff that needed to be common for the Ceph pool, pretty much following the Ceph documentation using the cephadm method (so Docker etc.). This would be my 'Ceph admin VM'.
Once that was all sorted I cloned the VM 3 times for my actual Ceph pool and changed the hostname, static IP etc. I've got 3 hosts with Supermicro boards that have two SATA controllers on board, so on each one I passed through one of the controllers to the Ceph VM and then just deployed Ceph following the documentation (see the sketch below). The only issues I ran into, and any other tips, are in the other post I linked to. Now that Ceph is all containerised it all seems a bit too easy! Hope they're not my famous last words!! 😖 It does like a lot of RAM, so I've reduced the OSD limits down a bit and it's fine for me.
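In case it helps anyone reproduce the setup, the controller passthrough and bootstrap boil down to something like the following. This is a sketch, not a recipe: the PCI address, VM UUID, monitor IP and memory target are all placeholders, and other-config:pci is the documented XCP-ng way to hand a PCI device to a VM:

# xe vm-param-set uuid=<ceph-vm-uuid> other-config:pci=0/0000:04:00.0
# ./cephadm bootstrap --mon-ip <ceph-vm-ip>
# ceph config set osd osd_memory_target 2147483648

The last line is how I'd cap OSD RAM usage (here to 2 GiB per OSD); tune it to your hosts.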
For the tests we had a need for a week ago, it's now fine. However, I'll probably update this thread once we have an ISO image that can be tested by users who have such hardware. Note: it's not about using the GPU itself, but simply about making sure that the hypervisor works well with the change we made: replacing gpumon, a tool not built by us and whose absence would make XCP-ng 8.1 unbootable (as we sadly found out after the release), with a dummy one built by us.
Now it's time for the tests I was talking about earlier. XCP-ng 8.2 beta is now available with our dummy gpumon, and we need users who have NVIDIA GPUs to test it and give us feedback. There may be situations we haven't tested where our dummy gpumon is not enough to make XAPI happy, despite the fact that we don't support NVIDIA vGPUs (proprietary software from Citrix is required for that feature).
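If you do test, a quick sanity check after updating would be to confirm that XAPI starts and still enumerates the GPU, then grep the XAPI log for gpumon complaints. xe pgpu-list is a standard xe command and /var/log/xensource.log is the usual XAPI log on XCP-ng, but take this as a suggestion rather than an official test plan:

# xe pgpu-list
# grep -i gpumon /var/log/xensource.log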