IPv6 support in XCP-ng for the management interface - feedback wanted
-
Hi. Check XCP-ng 8.3 beta 1: https://xcp-ng.org/blog/2023/06/22/xcp-ng-8-3-beta-1/
-
Sorry, I replied based on the e-mail notification I got, and missed your EDIT
-
@stormi Am I required to install version 8.3-beta1 from scratch, rather than upgrading, in order to get the new IPv6 functionality? I just ran the upgrade from 8.2.1, but am not seeing any change, nor was I prompted to choose which IP versions to enable during the upgrade process.
If I'm required to upgrade from scratch, is there a recommended way to do this without losing my VM data, given that my pool consists of a single host running all VMs using local storage?
-
@jivanpal The only way to enable IPv6 for the management interface is at boot time. It's something that can't be modified afterwards in XAPI.
Reinstalling without overwriting your VM disks is possible. You'd just 1. make sure you have backups, just in case, and 2. carefully avoid selecting any disks for local SR creation during installation; otherwise they would be wiped. I have doubts about what happens when they are on a partition which is on the same disk as the system, though. I'd have to test to be sure.
However, this is not a complete solution, because you'd still have your VM disks, but all VM metadata will be gone so you'd have to re-create all VMs one by one and associate the disks to them. There is a way to export and reimport metadata, but here's the catch: the metadata also contain information about the management interface, and restoring them might also overwrite your brand new IPv6 configuration.
So, to me, the best would be to export the VMs (VM exports do include the VM metadata for the exported VM), reinstall the server, and then reimport the VMs.
That's considering that you only have one server. With more servers, you can use live migration.
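For reference, a minimal `xe` sketch of that export/reimport approach (the UUIDs and file paths are placeholders; XO's export/import UI achieves the same thing):
```
# Export each VM to a file (shut the VM down first for a consistent export).
xe vm-shutdown uuid=<vm-uuid>
xe vm-export uuid=<vm-uuid> filename=/path/to/backups/myvm.xva

# After reinstalling, import the file again on the fresh host.
xe vm-import filename=/path/to/backups/myvm.xva
```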
-
@stormi Thanks for the thorough explanation. I will test whether the SR partition on the system disk is overwritten by installing and then reinstalling XCP-ng in a VM on my laptop.
Loss of VM metadata doesn't concern me, as I have relatively few VMs and am happy to just recreate them and attach the retained VDIs. The only question that remains is whether those VDIs (and some raw/non-sparse VHDs I have that were created by cloning old disks for data recovery tasks) will show up under the Local Storage repo with a simple click of the refresh button in XO, or whether metadata for those also needs to be recreated manually.
Rest assured that I have backups; I'd obviously just prefer to avoid needing to restore from them, as it's time-consuming.
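(For reference, the CLI equivalent of XO's rescan button appears to be `xe sr-scan`; for a file-based SR such as the default local EXT SR this should pick up existing VHDs, though I'm not certain how it treats raw/non-standard files. The UUID is a placeholder.)
```
# Ask the SR to rescan its storage and register any VDIs it finds.
xe sr-scan uuid=<sr-uuid>

# List what the SR now knows about.
xe vdi-list sr-uuid=<sr-uuid> params=uuid,name-label,virtual-size
```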
-
@jivanpal There's another gotcha, IIRC: after reinstalling, make sure you attach the SR, not create a new one, because SR creation is destructive.
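For reference, on the CLI the "attach an existing SR" flow is roughly the following (all UUIDs and the device path are placeholders, and the SR type is assumed to be `ext` as used by a default local SR):
```
# Re-introduce the existing SR by its UUID instead of creating a new one.
xe sr-introduce uuid=<sr-uuid> type=ext name-label="Local storage" content-type=user shared=false

# Link it to the host via a PBD pointing at the existing partition, then plug it.
xe pbd-create host-uuid=<host-uuid> sr-uuid=<sr-uuid> device-config:device=/dev/disk/by-id/<disk-id>-part3
xe pbd-plug uuid=<pbd-uuid>
```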
-
@stormi Thanks, my testing in a VM should reveal how to make sure I do this properly.
-
-
The only way to enable IPv6 for the management interface is at boot time. It's something that can't be modified afterwards in XAPI.
@stormi Wouldn't it be possible to change it afterwards by following a similar procedure to replacing an interface card that carries the management interface? Disable host management, modify the PIF to use IPv6 as the primary address type (forget/reintroduce/reconfigure), then reset host management.
It's definitely a process, but it seems possible.
-
@lethedata Hi!
Indeed, for now it's impossible to change the primary address type of the management interface; it's forbidden by design in XAPI. Maybe it'll be allowed in the future, but there's still some work to do first, and it would then need to be discussed with the XAPI dev team.
I'll try to find a workaround and post it here if I find one, but I'm not sure there is one for now.
Regards!
-
@stormi @BenjiReis I thought I'd document my upgrade process here, as I did a bunch of testing this week on a spare laptop before finally doing it for real last night, and it all went very smoothly in the end. Perhaps all of this can be done by the installer as a user-friendly means of upgrading to add IPv6 support without needing any changes in XAPI:
- Make note of the current partition table, because it will be wiped and the SR partition will not be recreated during the installation process. Mine was as follows:
    # lsblk /dev/sda
    NAME     MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
    sda        8:0    0 21.8T  0 disk
    ├─sda4     8:4    0  512M  0 part /boot/efi
    ├─sda2     8:2    0   18G  0 part
    ├─sda5     8:5    0    4G  0 part /var/log
    ├─sda3     8:3    0 21.8T  0 part
    │ └─XSLocalEXT--d62dbe0a--b8b8--143f--6f29--3829124d35d4-d62dbe0a--b8b8--143f--6f29--3829124d35d4 253:0 0 21.8T 0 lvm /run/sr-mount/d62dbe0a-b8b8-143f-6f29-3829124d35d4
    ├─sda1     8:1    0   18G  0 part /
    └─sda6     8:6    0    1G  0 part

    # gdisk -l /dev/sda
    [...]
    First usable sector is 34, last usable sector is 46875541470
    Partitions will be aligned on 2048-sector boundaries
    Total free space is 2014 sectors (1007.0 KiB)

    Number  Start (sector)    End (sector)  Size        Code  Name
       1          46139392        83888127  18.0 GiB    0700
       2           8390656        46139391  18.0 GiB    0700
       3          87033856     46875541470  21.8 TiB    8E00
       4          83888128        84936703  512.0 MiB   EF00
       5              2048         8390655  4.0 GiB     0700
       6          84936704        87033855  1024.0 MiB  8200
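If you want a restorable copy rather than just notes, something like the following should work, assuming `sgdisk` (from the same gptfdisk package as `gdisk`) is available in dom0; the backup path is a placeholder:
```
# Save a binary backup of the GPT to a file kept somewhere off this host.
sgdisk --backup=/root/sda-gpt-backup.bin /dev/sda

# It could later be restored with:
#   sgdisk --load-backup=/root/sda-gpt-backup.bin /dev/sda
```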
-
Ensure that you have an instance of XO (XenOrchestra) running on a different machine. Use that instance to create a backup of the pool metadata of the machine you'll be adding IPv6 support to.
-
Install XCP-ng 8.3 from scratch on the machine, overwriting the existing installation. Ensure that no disks are selected for use as an SR. This will wipe the partition table and create new partitions for the OS, but leave unpartitioned space where the SR partition would otherwise be. Since versions 8.2 and 8.3 use the same partition layout, you should get the same partition sizes, thereby leaving the SR filesystem intact on the disk, but inaccessible. Since you opted not to create an SR partition, the partition numbers will differ slightly. Immediately after installation, mine was as follows:
    # lsblk /dev/sda
    NAME     MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
    sda        8:0    0 21.8T  0 disk
    ├─sda2     8:2    0   18G  0 part
    ├─sda5     8:5    0    4G  0 part /var/log
    ├─sda3     8:3    0  512M  0 part /boot/efi
    ├─sda1     8:1    0   18G  0 part /
    └─sda6     8:6    0    1G  0 part [SWAP]

    # gdisk -l /dev/sda
    [...]
    First usable sector is 34, last usable sector is 46875541470
    Partitions will be aligned on 2048-sector boundaries
    Total free space is 2014 sectors (1007.0 KiB)

    Number  Start (sector)    End (sector)  Size        Code  Name
       1          46139392        83888127  18.0 GiB    0700
       2           8390656        46139391  18.0 GiB    0700
       3          83888128        84936703  512.0 MiB   EF00
       5              2048         8390655  4.0 GiB     0700
       6          84936704        87033855  1024.0 MiB  8200
-
Reboot into the new installation, and then recreate the SR partition using `gdisk` (a non-interactive `sgdisk` equivalent is sketched after this list):
- Run `gdisk /dev/sda` (or other device node name as appropriate).
- Create a new partition by entering `n`, then use the default values for the start and end sector (these should automatically match those of the SR partition as it appeared in the original partition table prior to reinstallation), and use `8e00` for the partition type.
- Remove the partition label by entering `c`, then the partition number (should be `4`), then enter nothing for the name.
- Check the new partition table by entering `p`; the start and end sector values should match those of the original partition table, but the partition numbers may differ.
- Write the changes with `w`, or quit without writing changes with `q`.
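Alternatively, assuming `sgdisk` is available and the free space left by the installer starts and ends exactly where the old SR partition did, the same thing can be done non-interactively (the sector numbers below are the ones from my original table; adjust them to yours, and double-check before writing anything):
```
# Recreate the SR partition as partition 4, spanning the old SR partition's sectors,
# with type 8E00 (Linux LVM) and an empty name.
sgdisk --new=4:87033856:46875541470 --typecode=4:8E00 --change-name=4:"" /dev/sda

# Review the resulting table before rebooting or plugging the SR.
sgdisk --print /dev/sda
```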
-
Connect to the new installation using the remote XO instance, then create a new backup of this fresh installation's pool metadata.
-
Alter the first backup's `data` file (which is an XML file) as follows (a small shell sketch for locating and editing these values follows this list):
- In the section `<table name="PBD">`, replace the occurrence of the device node path for the SR with the correct path as it would be for the new installation. In particular, the disk's SCSI or other ID may have changed, and the SR partition's number in the partition table has probably changed from 3 to 4. In my case, I had to change it from `/dev/disk/by-id/scsi-36...fa-part3` to `/dev/disk/by-id/scsi-36...a9-part4`.
- In the second backup's `data` file, find the section `<table name="PIF">`. Within it, find the `<row>` pertaining to the management interface. Copy the values of the following `<row>` attributes, overwriting the corresponding attributes in the first backup's `data` file, so that the new installation's values for the IPv4- and IPv6-related configuration parameters are used:
  - `DNS`
  - `IP`
  - `IPv6`
  - `gateway`
  - `ip_configuration_mode`
  - `ipv6_configuration_mode`
  - `ipv6_gateway`
  - `netmask`
  - `primary_address_type`
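For reference, a rough shell sketch of locating and editing these values (the file name `data`, the placeholder disk IDs, and the use of `sed` here are illustrative; the PIF values you copy must come from your own second backup):
```
# Find the PBD and PIF tables in the metadata backup.
grep -n '<table name="PBD">' data
grep -n '<table name="PIF">' data

# Point the SR's PBD at the new device path (both paths are placeholders; use your actual disk IDs).
sed -i 's|/dev/disk/by-id/OLD-DISK-ID-part3|/dev/disk/by-id/NEW-DISK-ID-part4|' data
```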
-
-
Use XO to restore the now-altered first backup to the new installation. It will automatically reboot, and all storage backends, virtual disk metadata, VMs, and VM metadata should be restored and working, along with IPv6 on the management interface.
-
@jivanpal It's good that that works, but I can see a high chance of data loss if the installer changes or adjusts the partition sizes during the install. It could wipe the data and prevent recovery of the SR. That's not to mention any local modifications made to the hosts that may have to be redone.
As a side note, I don't need to migrate the management interface myself; I was just theorizing that it looked possible to do via XAPI without reinstalling, based on documentation I was reading. I ran into a situation on one of my lab servers where running an emergency reset broke IPv6 management interfaces. I just reinstalled, but was curious how one could recover from that state.
-
In order to upgrade an XCP-ng host and move it to IPv6, what could also be done is:
- Do the upgrade
- Then run:
    xe host-management-disable && xe pif-set-primary-address-type uuid=<uuid> primary_address_type=IPv6 ; xe host-management-reconfigure pif-uuid=<uuid>
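And to check the result afterwards, something like this should do (it only lists the PIF parameters; it doesn't prove end-to-end IPv6 connectivity):
```
# Confirm the management PIF now reports IPv6 as its primary address type and has an address.
xe pif-list management=true params=uuid,device,IP,IPv6,primary_address_type
```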
-
@jivanpal you wrote in the 8.3 beta thread:
There is no way to configure IPv6 on the management interface via xsconsole, such as if one wants to switch between static configuration, autoconf via RAs, or DHCPv6.
True, but we'll soon release a new version of xsconsole adapted for IPv6, allowing you to configure IPv6 for the management interface.
There is apparently no support for IPv6 DNS servers, only IPv4. For example, if I try to add an IPv6 address like fd00::1 or [fd00::1] as a DNS server via xsconsole, there is apparently no change to the configuration. Editing /etc/resolv.conf works to achieve this (e.g. adding the line nameserver fd00::1), but this is known not to persist across reboots.
Should be solved by the future xsconsole release as well
There is apparently no support for RDNSS (advertisement of DNS servers in RAs rather than via DHCPv6).
DHCPv6 is indeed one of the major blind spots for now. I'm working on it, but I don't have much knowledge of this area, so any hints are welcome if you spot something missing somewhere.
The "autoconf" option (available during installation, after choosing IPv6-only or dual-stack, and then being asked which mode to use to configure IPv6 addresses) appears to only be used at installation time to determine values such as the gateway's link-local address, the available address prefixes, and perform SLAAC and DAD, but then the resulting values are hard-coded and don't change according to changes in the environment, such as an upstream change in network prefix. (I will need to do some more testing to really confirm this, but this seems to be the case in my experience.) Compare this to when IPv4 is configured to use DHCP(v4), in which the management interface may have a different IPv4 address at different times, namely if it's assigned a different address by the DHCP server when it attempts to get or renew a lease.
I'm not aware of this issue; I'll try to reproduce it in our environment.
Some repos are unreachable in IPv6-only environments, which I'm aware is already known, and I can get around this by using NAT64 (either with CLAT to perform 464XLAT; or with DNS64), but this fact is currently a blocker for me to move to being IPv6-only.
We've contacted the mirrors many times and are still trying to have them all advertise both IPv4 and IPv6, and also trying to find a solution that could "smartly" redirect towards a compatible mirror.
Speaking of NAT64, this is just a question, I haven't tested or looked into this myself: Does XCP-ng include a CLAT daemon and support for auto-configuring 464XLAT using either the "PREF64" RA option (RFC8781) or resolution of ipv4only.arpa via a DNS64 server (RFC7050)?
We haven't tested either for now; feel free to do so and report back if you get to it before me.
Again, thank you for the report, this is greatly appreciated, and any info about what's missing for IPv6 (and perhaps how to achieve it when possible) is welcome.
Regards!
-
FYI, I have finally reviewed all mirrors that provide updates for XCP-ng and disabled the remaining 6 which didn't support IPv6 (and notified their owners. I'll enable them again if they enable IPv6).
So, if you experience any issues installing updates via IPv6, tell us so that we can investigate the faulty mirrors.
-
Still having this issue:
    Failed to set locale, defaulting to C
    Loaded plugins: fastestmirror
    Loading mirror speeds from cached hostfile
    Excluding mirror: updates.xcp-ng.org
     * xcp-ng-base: mirrors.xcp-ng.org
    Excluding mirror: updates.xcp-ng.org
     * xcp-ng-updates: mirrors.xcp-ng.org
    http://mirrors.xcp-ng.org/8/8.3/base/x86_64/repodata/repomd.xml: [Errno 14] curl#7 - "Failed to connect to 2a01:240:ab08:2::2: Cannot assign requested address"
    Trying other mirror.
    [the same two lines repeat for each retry]

     One of the configured repositories failed (XCP-ng Base Repository),
     and yum doesn't have enough cached data to continue. At this point the only
     safe thing yum can do is fail. There are a few ways to work "fix" this:

         1. Contact the upstream for the repository and get them to fix the problem.

         2. Reconfigure the baseurl/etc. for the repository, to point to a working
            upstream. This is most often useful if you are using a newer
            distribution release than is supported by the repository (and the
            packages for the previous distribution release still work).

         3. Run the command with the repository temporarily disabled
                yum --disablerepo=xcp-ng-base ...

         4. Disable the repository permanently, so yum won't use it by default. Yum
            will then just ignore the repository until you permanently enable it
            again or use --enablerepo for temporary usage:

                yum-config-manager --disable xcp-ng-base
            or
                subscription-manager repos --disable=xcp-ng-base

         5. Configure the failing repository to be skipped, if it is unavailable.
            Note that yum will try to contact the repo. when it runs most commands,
            so will have to try and fail each time (and thus. yum will be be much
            slower). If it is a very temporary problem though, this is often a nice
            compromise:

                yum-config-manager --save --setopt=xcp-ng-base.skip_if_unavailable=true
-
@TheFrisianClause This is not the right thread for your issue. I know it's tempting to think so because of the IPv6 address you see, but your host is not set up to use IPv6 for the management interface, right?
-
@stormi I have made a different thread, but the reply you posted made me feel like I could put my reply here.
-
@TheFrisianClause What's the other thread?
-
@stormi This is the link to it: https://xcp-ng.org/forum/topic/8459/yum-update-ipv6-issue
-
@BenjiReis I've finally taken the time to review this again now that I've updated to 8.3-rc1 via `yum update`, so here's some follow-up on the points I brought up previously:
There is no way to configure IPv6 on the management interface via xsconsole, such as if one wants to switch between static configuration, autoconf via RAs, or DHCPv6.
True, but we'll soon release a new version of xsconsole adapted for IPv6, allowing you to configure IPv6 for the management interface.
There is apparently no support for IPv6 DNS servers, only IPv4. For example, if I try to add an IPv6 address like fd00::1 or [fd00::1] as a DNS server via xsconsole, there is apparently no change to the configuration. Editing /etc/resolv.conf works to achieve this (e.g. adding the line nameserver fd00::1), but this is known not to persist across reboots.
Should be solved by the future xsconsole release as well
Still not seeing any enhancements/changes in behaviour as of xsconsole 11.0.6-1.1.xcpng8.3.
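In the meantime, a possible workaround from the CLI rather than xsconsole, assuming the management PIF uses static IPv6 configuration (the UUID and addresses are placeholders, and I haven't verified how this interacts with xsconsole's view of the DNS settings):
```
# Reconfigure the management PIF's static IPv6 settings, including an IPv6 DNS server,
# so the nameserver survives reboots (unlike a manual edit of /etc/resolv.conf).
# Note: reconfiguring the management PIF may briefly drop your connection to the host.
xe pif-reconfigure-ipv6 uuid=<pif-uuid> mode=static \
  IPv6=fd00::10/64 gateway=fd00::1 DNS=fd00::1
```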
There is apparently no support for RDNSS (advertisement of DNS servers in RAs rather than via DHCPv6).
DHCPv6 is indeed one of the major blind spots for now. I'm working on it, but I don't have much knowledge of this area, so any hints are welcome if you spot something missing somewhere.
Just to clarify, this isn't related to DHCPv6, but to RAs (Router Advertisement packets). I personally don't have a DHCPv6 server on my network at all. RDNSS is described in RFC8106.
Others may want to advertise DNS servers using DHCPv6, though, so that should still be tested as well.
The "autoconf" option (available during installation, after choosing IPv6-only or dual-stack, and then being asked which mode to use to configure IPv6 addresses) appears to only be used at installation time to determine values such as the gateway's link-local address, the available address prefixes, and perform SLAAC and DAD, but then the resulting values are hard-coded and don't change according to changes in the environment, such as an upstream change in network prefix. (I will need to do some more testing to really confirm this, but this seems to be the case in my experience.) Compare this to when IPv4 is configured to use DHCP(v4), in which the management interface may have a different IPv4 address at different times, namely if it's assigned a different address by the DHCP server when it attempts to get or renew a lease.
I'm not aware of this issue; I'll try to reproduce it in our environment.
I haven't been able to reproduce this either, and my prefix has changed a couple of times since I said this was an issue. Perhaps I just imagined it, hit a weird edge case, or didn't wait for the valid lifetime of the old prefix to expire; my router doesn't reliably advertise the fact that an old prefix is no longer valid.
Some repos are unreachable in IPv6-only environments, which I'm aware is already known, and I can get around this by using NAT64 (either with CLAT to perform 464XLAT; or with DNS64), but this fact is currently a blocker for me to move to being IPv6-only.
We've contacted the mirrors many times and are still trying to have them all advertise both IPv4 and IPv6, and also trying to find a solution that could "smartly" redirect towards a compatible mirror.
@stormi said in IPv6 support in XCP-ng for the management interface - feedback wanted:
FYI, I have finally reviewed all mirrors that provide updates for XCP-ng and disabled the remaining 6 which didn't support IPv6 (and notified their owners. I'll enable them again if they enable IPv6).
So, if you experience any issues installing updates via IPv6, tell us so that we can investigate the faulty mirrors.
I personally haven't had any issues reaching repos since then, but I haven't explicitly tested this or looked through the mirrorlist. I also don't think this is much of an issue in practice, since 464XLAT can be used; it's no longer a blocker for me, as I've reviewed the way I'm deploying IPv6-only. It's very nice to see you motivate / put pressure on mirror maintainers to make their sites accessible over IPv6, though, especially indirectly by simply removing such sites from the mirrorlist.
Speaking of NAT64, this is just a question, I haven't tested or looked into this myself: Does XCP-ng include a CLAT daemon and support for auto-configuring 464XLAT using either the "PREF64" RA option (RFC8781) or resolution of ipv4only.arpa via a DNS64 server (RFC7050)?
We haven't tested either for now; feel free to do so and report back if you get to it before me.
I've got this working pretty easily by manually installing clatd from GitHub and its dependencies from EPEL and the other RHEL repos. It works, but isn't native. That being said, I don't know of any other Linux distros that natively support this yet; to my knowledge, there is ongoing work to implement it directly in systemd. Clatd supports RFC7050, but doesn't support PREF64/RFC8781 as it's not particularly feasible for it to do so; hopefully systemd will be able to if/when it implements a CLAT.
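For anyone wanting to reproduce this, roughly what I did is sketched below. The repository URL is the upstream clatd project; the dependency package names are taken from its documentation and may differ or be unavailable depending on which extra repos you enable, so treat this as a sketch rather than a supported procedure:
```
# Install clatd's runtime dependencies (Perl modules and the TAYGA NAT64 daemon).
# Package names are assumptions based on clatd's docs; adjust to what your repos provide.
yum install -y git make tayga perl-Net-IP perl-Net-DNS perl-IO-Socket-INET6

# Fetch and install clatd itself (this should also install its systemd unit).
git clone https://github.com/toreanderson/clatd
cd clatd && make install

systemctl enable --now clatd
```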
This also isn't reliable across reboots / DHCP lease renewals because I have no simple way to disable IPv4 on the management interface. I haven't tried this with an installation where I've selected "IPv6-only" in the installer.
One practical issue I've experienced when using 464XLAT in this way is that XO Lite tries to contact the pool server in the frontend / client / web browser using JS fetch calls for URLs falling under `https://localhost/`, which would instead usually be under `https://<pool server IPv4 address>/`. These are the addresses that XO Lite will prompt the user to ensure that the browser trusts TLS certificates for if they are self-signed and no known CA has issued/signed them. As such, these don't work, since "localhost" from the XO Lite user's perspective isn't the same machine as the "localhost" that XO Lite is running on. If XO Lite supported making these calls using any of the pool servers' routable IPv6 addresses (e.g. ULAs or GUAs, but not LLAs), this would work just fine.

I may find some time to test these things on an "IPv6-only" installation, but I expect that will be after 8.3 has reached general release.