Experimental feature: select a network to evacuate a host
A new feature is available in our testing repo: selecting a network for host evacuation. This lets you evacuate a host over any given (faster) network instead of the management one.
To access the feature, run the following on all your hosts (always starting with the master when in a pool):
yum update --enablerepo=xcp-ng-testing xapi-core-1.249.5-1.1.0.evacnet.1.xcpng8.2.x86_64 xapi-xe-1.249.5-1.1.0.evacnet.1.xcpng8.2.x86_64 xapi-tests-1.249.5-1.1.0.evacnet.1.xcpng8.2.x86_64
And restart your hosts.
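To confirm the update took effect, you can check the installed packages; they should report the .evacnet. builds listed in the command above:
rpm -q xapi-core xapi-xe xapi-tests   # should show the .evacnet. versions from the update above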
WHAT TO TEST
Host evacuation on a network other than the management one (probably a 10G storage network, to go faster!)
You can run xe host-evacuate uuid=<host_uuid> network-uuid=<network_uuid> (see the example below)
Or an XAPI client can call host.evacuate with a network ref parameter.
Host evacuation without the optional new parameter should behave as before the update.
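For example, a minimal test session could look like this (the UUIDs are placeholders to fill in from your own pool):
xe network-list params=uuid,name-label          # pick the UUID of your fast network
xe host-evacuate uuid=<host_uuid> network-uuid=<network_uuid>
xe host-evacuate uuid=<host_uuid>               # no network-uuid: should behave as before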
Please report here if anything goes wrong (or right, hopefully 🙂), and also if you spot any regression.
Edit: there are no plans for now to add this feature to 8.2 LTS; the package will probably stay in the testing repo for 8.2 and will be available in 8.3. This means the package would be overwritten at the next xapi update.
A new ISO is available here to test IPv6 in dom0. It includes:
patches aligned with Citrix upstream merges
fixes regarding tunnels, SR-IOV networks, VLANs and bonds when creating from an IPv6 PIF
netinstall support in IPv6 (netinstall repo info coming soon)
NFS support in an IPv6 pool (see the sketch below)
Do not use this in production: this is an experimental feature!
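To illustrate the NFS point above, attaching an NFS SR from an IPv6 server should work just like the IPv4 case. A minimal sketch; the address and export path are placeholders, and the exact IPv6 literal format accepted by the driver is an assumption to verify:
# fd00:1234::10 and /srv/xcpng below are placeholders for your server and export
xe sr-create type=nfs name-label="NFS over IPv6" shared=true content-type=user \
  device-config:server=fd00:1234::10 device-config:serverpath=/srv/xcpng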
What to test?
upgrade from an already configured IPv6 host
netinstall, once a repo is available
whatever you'd normally do with an XCP-ng host
For fresh installs, do not forget to add the IPv6 RPM repo for future updates:
@ricardowec I completely second @olivierlambert here. Although there is potential for security issues with an OS that loses support, a hypervisor should be treated with extreme caution anyway. I personally never expose a hypervisor in any capacity directly to the internet (for inbound traffic anyway 🙂). This practice, alongside using very specific and often highly customised software packages, should keep everything running smoothly.
On top of all this, the underlying kernel, storage systems and such are all custom, so there is nothing really to worry about 🙂
I will be completely open and admit that I am very new to XCP-ng, but the team seems really responsive if you have an issue. I have never seen such fast replies, even from an unnamed big-brand competitor who charges us $$$ for support.
@hoerup Hi, I can't remember too much to be honest.
I created a Debian 10 VM with a 20 GB disk and set up all the stuff that needed to be common for the Ceph pool, pretty much following the Ceph documentation using the cephadm method - so Docker etc. This would be my 'Ceph admin VM'.
Once that was all sorted I cloned the VM 3 times for my actual Ceph pool and changed the hostname, static IP etc. I've got 3 hosts with Supermicro boards that have two SATA controllers on board, so on each one I passed through one of the controllers to the Ceph VM (roughly as sketched below) and then just deployed Ceph following the documentation. The only issues I ran into, and any other tips, are in the other post I linked to. Now that Ceph is all containerised it all seems a bit too easy! Hope those aren't my famous last words!! 😖 It does like a lot of RAM, so I've reduced the OSD limits down a bit and it's fine for me.
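For anyone wondering how the controller passthrough part is typically done on XCP-ng, a minimal sketch (the PCI address is a placeholder; find yours with lspci):
lspci | grep -i sata                          # note the controller's address, e.g. 04:00.0
/opt/xensource/libexec/xen-cmdline --set-dom0 "xen-pciback.hide=(0000:04:00.0)"
reboot                                        # dom0 must release the device first
xe vm-param-set uuid=<ceph_vm_uuid> other-config:pci=0/0000:04:00.0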
CephFS is working nicely, but the update deleted my previous secret in /etc, so I had to reinstall the extra packages, recreate the SR and then obviously move the virtual disks back across and refresh.
Were you not able to attach the pre-existing CephFS SR? If so, I'll take a look at the documentation or the driver.
No luck. I ended up with a load of orphaned disks with no name or description, just a UUID, so it was easier to restore the backups.
I guess this is because the test driver had the CephFS storage as an NFS type, so I had to forget it and then re-attach it as a CephFS type (along the lines of the sketch below), which I guess it didn't like! But it's all correct now, so I guess this was just a one-off moving from the test driver.
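For anyone following along, the forget-and-recreate flow would look roughly like this. This is only a sketch: the device-config keys and the secretfile path are assumptions to double-check against the XCP-ng CephFS docs:
# <old_sr_uuid> and <ceph_mon_ip> are placeholders; verify device-config keys in the docs
xe sr-forget uuid=<old_sr_uuid>
xe sr-create type=cephfs name-label="CephFS" shared=true \
  device-config:server=<ceph_mon_ip> device-config:serverpath=/ \
  device-config:options=name=admin,secretfile=/etc/ceph/admin.secret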
Anyway all sorted now and back up and running with no CephFS issues! 🙂