@stormi Nice work, and the community's joint testing effort gets better every time 🤙. Keep up the good work 👍. Updated my (semi-production) three-host homelab and it works, as usual. Now it's time to tear down my playlab for some 10G testing 😎.
A new ISO is available here to test IPv6 in dom0.
aligned patches with Citrix upstream merges
fixes regarding tunnels, SR-IOV networks, VLANs and bonds when created from an IPv6 PIF
netinstall support over IPv6 (netinstall repo info coming soon)
NFS support in an IPv6 pool
Do not use this in production: this is an experimental feature!
What to test?
upgrade from an already configured IPv6 host
netinstall, once a repo is available
what you'd normally do with an xcp-ng host
For fresh installs, don't forget to add the IPv6 RPM repo for future updates:
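For reference, a repo definition along these lines would go in `/etc/yum.repos.d/` — the section name below is a placeholder, and the baseurl is left blank since the actual repo info hasn't been announced yet:

```
# Hypothetical layout only — replace the section name and baseurl with
# the values announced by the XCP-ng team; the real URL isn't published yet.
[xcp-ng-ipv6]
name=XCP-ng IPv6 (experimental)
baseurl=…
enabled=1
gpgcheck=1
```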
@ricardowec I completely second @olivierlambert here. Although there is potential for security issues with an OS that loses support, a hypervisor should be treated with extreme caution anyway. I personally never expose a hypervisor in any capacity directly to the internet (for inbound traffic anyway 🙂 ). This practice, alongside using very specific and often highly customised software packages, should keep everything running smoothly.
On top of all this, the underlying kernel, storage systems and such are all custom, so there is nothing really to worry about 🙂
I will be completely open and admit that I am very new to XCP-ng, but the team seems to be really responsive if you have an issue. I have never seen such fast replies, even from an unnamed big-brand competitor who charges us $$$ for support.
@hoerup Hi, I can't remember too much to be honest.
I created a Debian 10 VM with a 20GB disk and set up everything that needed to be common for the Ceph pool, pretty much following the Ceph documentation using the cephadm method (so Docker etc.). This would be my 'Ceph admin VM'.
Once that was all sorted I cloned the VM three times for my actual Ceph pool and changed the hostname, static IP etc. I've got three hosts with Supermicro boards that have two SATA controllers on board, so on each one I passed through one of the controllers to the Ceph VM and then just deployed Ceph following the documentation. The only issues I ran into, and any other tips, are in the other post I linked to. Now that Ceph is all containerised it all seems a bit too easy! Hope those aren't my famous last words!! 😖 It does like a lot of RAM, so I've reduced the OSD limits down a bit and it's fine for me.
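For anyone wanting to follow along, the flow above boils down to roughly this with cephadm (the IPs and hostnames are made-up examples, not from my setup — adapt to your own clones):

```
# On the 'Ceph admin VM': bootstrap the first monitor (cephadm method).
./cephadm bootstrap --mon-ip 192.168.0.10

# After cloning the VM three times and fixing hostname/static IP on each
# clone, add them to the cluster and let the orchestrator create OSDs on
# the disks behind the passed-through SATA controllers.
ceph orch host add ceph1 192.168.0.11
ceph orch apply osd --all-available-devices
```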
CephFS is working nicely, but the update deleted my previous secret in /etc and I had to reinstall the extra packages, recreate the SR, and then obviously move the virtual disks back across and refresh.
Were you not able to attach the pre-existing CephFS SR? Depending on the answer, I'll take a look at the documentation or the driver.
No luck. I ended up with a load of orphaned disks with no name or description, just a UUID, so it was easier to restore the backups.
I guess this is because the test driver had the CephFS storage as an NFS type, so I had to forget it and then re-attach it as a CephFS type, which I guess it didn't like! But it's all correct now, so I guess this was just a one-off moving from the test driver.
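In case anyone hits the same thing, the forget-and-re-attach step looks roughly like this with `xe` (UUIDs, the monitor IP and the serverpath are placeholders; double-check the exact device-config keys against the XCP-ng CephFS docs, and note the secret needs to be back in /etc and the extra packages reinstalled first, as above):

```
# Unplug and forget the SR that was created by the old NFS-type test driver.
xe pbd-unplug uuid=<pbd-uuid>
xe sr-forget uuid=<sr-uuid>

# Re-create it as a proper CephFS-type SR.
xe sr-create type=cephfs name-label=CephFS-SR \
    device-config:server=<mon-ip> device-config:serverpath=/xcp-sr
```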
Anyway all sorted now and back up and running with no CephFS issues! 🙂