The vGPU unlock is far more complicated. Also, it's very likely that it won't be a viable solution in the long run: I'm pretty sure Nvidia will move the authorized features into the card firmware for the next generation.
@ricardowec I completely second @olivierlambert here. Although there is potential for security issues with an OS that loses support, a hypervisor should be treated with extreme caution anyway. I personally never expose a hypervisor in any capacity directly to the internet (for inbound traffic anyway 🙂 ). This practice, alongside using very specific and often highly customised software packages, should keep everything running smoothly.
On top of all this, the underlying kernel, storage systems and such are all custom, so there is nothing really to worry about 🙂
I will be completely open when I admit that I am very new to XCP-NG, but the team seems to be really responsive if you have an issue. I have never seen such fast replies, even from an unnamed big-brand competitor who charges us $$$ for support.
@hoerup Hi, I can't remember too much to be honest.
I created a Debian 10 VM with 20GB disk, set up all the stuff that needed to be common for the Ceph pool pretty much following the Ceph documentation using the cephadm method - so Docker etc. This would be my 'Ceph admin VM'
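For anyone wanting to follow along, the bootstrap on the admin VM looked roughly like this. This is only a sketch from memory based on the Ceph cephadm docs of that era (Octopus); the monitor IP is a placeholder, not my actual address:

```shell
# On the Debian 10 'Ceph admin VM': cephadm needs a container runtime and LVM tools
apt-get install -y docker.io lvm2

# Fetch the standalone cephadm script (Octopus branch, per the docs at the time)
curl --silent --remote-name --location \
  https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm
chmod +x cephadm

# Bootstrap the first monitor/manager on this VM
# (192.168.0.10 is a placeholder for the admin VM's static IP)
./cephadm bootstrap --mon-ip 192.168.0.10
```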
Once that was all sorted I cloned the VM 3 times for my actual Ceph pool and changed the hostname and static IP etc. I've got 3 hosts with Supermicro boards that have two SATA controllers on board, so on each one I passed through one of the controllers to the Ceph VM and then just deployed Ceph, following the documentation. The only issues I ran into, and any other tips, are in the other post I linked to. Now that Ceph is all containerised it all seems a bit too easy! Hope they're not my famous last words!! 😖 It does like a lot of RAM, so I've reduced the OSD limits down a bit and it's fine for me.
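In case it helps, the passthrough and the OSD memory tweak went something like this. Take it as a hedged sketch: the PCI address `0000:00:17.0` and `<vm-uuid>` are placeholders for your own SATA controller and Ceph VM, and the memory figure is just what worked for me:

```shell
# Hide one SATA controller from dom0 so it can be passed through
# (XCP-NG's pciback method; requires a host reboot afterwards)
/opt/xensource/libexec/xen-cmdline --set-dom0 "xen-pciback.hide=(0000:00:17.0)"

# Attach the hidden controller to the Ceph VM
xe vm-param-set other-config:pci=0/0000:00:17.0 uuid=<vm-uuid>

# Later, rein in per-OSD RAM usage (the default target is around 4 GiB per OSD)
ceph config set osd osd_memory_target 2147483648   # 2 GiB
```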
CephFS is working nicely, but the update deleted my previous secret in /etc, and I had to reinstall the extra packages, recreate the SR, and then obviously move the virtual disks back across and refresh.
Were you not able to attach the pre-existing SR on CephFS? I'll take a look at the documentation or the driver accordingly.
No luck. I ended up with a load of orphaned disks with no name or description, just a UUID, so it was easier to restore the backups.
I guess this is because the test driver had the CephFS storage as an NFS type, so I had to forget it and then re-attach it as a CephFS type, which I guess it didn't like! But it's all correct now, so I guess this was just a one-off moving from the test driver.
Anyway all sorted now and back up and running with no CephFS issues! 🙂
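For the record, the forget/re-attach dance looked roughly like the below. This is a sketch following the XCP-NG CephFS SR docs rather than my exact commands; the UUID, server address, CephX client name and secret path are all placeholders:

```shell
# Drop the old NFS-typed test SR (this only forgets the record,
# it doesn't destroy the data on the CephFS side)
xe sr-forget uuid=<old-sr-uuid>

# Re-create the SR with the proper CephFS type
# (192.168.0.10, /xcpsr, client name 'xcp' and the secret file are placeholders)
xe sr-create type=cephfs name-label="CephFS" \
  device-config:server=192.168.0.10 \
  device-config:serverpath=/xcpsr \
  device-config:options=name=xcp,secretfile=/etc/ceph/xcp.secret
```

After that the orphaned VDIs still had to be moved back onto the new SR, which is why restoring from backup ended up being simpler for me.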
@ieugen Read the blog post that XCP-NG posted today on this very topic, but even if they decided to stick with CentOS 8 Stream for the future base platform, they have selective control over which packages/updates would get released for XCP-NG.
I've already switched my CentOS 8.x installs to CentOS 8 Stream. Fedora is too buggy and too far upstream of Red Hat for my personal taste. CentOS 8 Stream is supposed to be positioned between Fedora and Red Hat, so they might just hit the sweet spot.
Of course, if XCP-NG switched to Ubuntu LTS releases as the base going forward, I wouldn't cry about that either, so I anticipate this announcement from RedHat won't really affect XCP-NG and we'll look back on this and realize it was not a big deal.