IPv6 support in XCP-ng for the management interface - feedback wanted
-
@BenjiReis Hello!
Thank you for the ISO! I just tested the install in a VM (for now; I'll soon have a physical machine available).
First review:
So far, autoconf seems to be working!
However, during the installation I provided an IPv6 DNS server (Cloudflare's, 2606:4700:4700::1111), yet DNS is not working: I have 1.1.1.1 in my /etc/resolv.conf.
I don't know if it's autoconf that is pushing 1.1.1.1 (I need to check my router first), but I think that when we explicitly provide a DNS server, it should bypass autoconf. More tests coming in the next few days.
(sorry if my English is a bit bad)
Thank you and the team for all your work!
-
@AtaxyaNetwork thanks for the report.
I reproduced the issue: for some reason, at first boot XCP-ng launches an IPv4 dhclient request (even though IPv4 is not configured on the management interface...), which overrides the configured DNS once the request is answered.
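A quick way to see what actually ended up in the resolver configuration while testing this. `list_nameservers` is just a helper made up for this sketch (not part of XCP-ng): an unexpected 1.1.1.1 in its output, when only an IPv6 DNS server was entered at install time, points at the stray IPv4 dhclient run.

```shell
# Print each configured DNS server from a resolv.conf-style file
# (defaults to /etc/resolv.conf when no argument is given).
list_nameservers() {
    awk '/^nameserver/ { print $2 }' "${1:-/etc/resolv.conf}"
}
# Usage on the host: list_nameservers
```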
-
@BenjiReis I've just started giving the IPv6-enabled 8.2.1 a try. Right within the first hour I've stumbled across the following two issues on an IPv6-only server:

**Repository mirrors**

The preconfigured repositories use `mirrors.xcp-ng.org`. That one returns the address of an actual mirror, and if that mirror happens not to have an IPv6 address, doing anything (e.g. `yum makecache`) fails. Re-running it might return one with AAAA records, in which case it works; or maybe it'll be another AAAA-less mirror.

**NFS mounting via host name**

NFS mounting doesn't work if I use a host name that has both A and AAAA records (the problem isn't the A record, though). I've tried to do this via XOA. After entering everything, the list of exports available on the server is actually populated, but selecting one results in the following error in `/var/log/SMlog`:

```
Jun 9 19:38:55 ul SM: [7165] ['mount.nfs', 'sweet-chili.int.bunkus.org:/srv/nfs4/home/', '/var/run/sr-mount/probe', '-o', 'soft,proto=tcp,vers=3,acdirmin=0,acdirmax=0']
Jun 9 19:38:55 ul SM: [7165] FAILED in util.pread: (rc 32) stdout: '', stderr: 'mount.nfs: Network is unreachable
Jun 9 19:38:55 ul SM: [7165] '
Jun 9 19:38:55 ul SM: [7165] Raising exception [73, NFS mount error [opterr=mount failed with return code 32]]
Jun 9 19:38:55 ul SM: [7165] lock: released /var/lock/sm/sr
Jun 9 19:38:55 ul SM: [7165] ***** generic exception: sr_probe: EXCEPTION <class 'SR.SROSError'>, NFS mount error [opterr=mount failed with return code 32]
Jun 9 19:38:55 ul SM: [7165]   File "/opt/xensource/sm/SRCommand.py", line 110, in run
Jun 9 19:38:55 ul SM: [7165]     return self._run_locked(sr)
Jun 9 19:38:55 ul SM: [7165]   File "/opt/xensource/sm/SRCommand.py", line 159, in _run_locked
Jun 9 19:38:55 ul SM: [7165]     rv = self._run(sr, target)
Jun 9 19:38:55 ul SM: [7165]   File "/opt/xensource/sm/SRCommand.py", line 332, in _run
Jun 9 19:38:55 ul SM: [7165]     txt = sr.probe()
Jun 9 19:38:55 ul SM: [7165]   File "/opt/xensource/sm/NFSSR", line 170, in probe
Jun 9 19:38:55 ul SM: [7165]     self.mount(temppath, self.remotepath)
Jun 9 19:38:55 ul SM: [7165]   File "/opt/xensource/sm/NFSSR", line 133, in mount
Jun 9 19:38:55 ul SM: [7165]     raise xs_errors.XenError('NFSMount', opterr=exc.errstr)
```
The problem here is `mount.nfs -o proto=tcp`. As can be seen in `man 5 nfs`, the `udp` and `tcp` protocols only use IPv4, whereas `udp6` and `tcp6` only use IPv6. I'm not aware of a way of saying "use TCP as the protocol, resolve the name, prefer IPv6 over IPv4", unfortunately.

This isn't limited to XOA, obviously; the corresponding call

```
xe sr-probe type=nfs device-config:server=sweet-chili.int.bunkus.org device-config:serverpath=/srv/nfs4/space
```

fails the same way.

One possible way of addressing this could be to resolve the host name right before constructing the mount commands and to use the correct `proto` depending on whether the management interface is IPv6-enabled.

Note that using an IPv6 address instead of a host name does not work either: even though `sr-probe` works, as `proto=tcp6` is used in the mount calls according to `/var/log/SMlog`, the later `sr-create` does not work, with similar error messages.

I can file issues for both on GitHub, if that helps. The second one in `xcp-ng/xcp`, I guess, but where would I file the first one? `vatesfr/xen-orchestra`?
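To illustrate the resolution-based approach suggested above, here's a minimal sketch. `pick_proto` is a hypothetical helper, not part of the sm scripts: it looks at resolver output and chooses the mount protocol accordingly.

```shell
# Decide between proto=tcp6 and proto=tcp from resolver output.
# stdin: `getent ahosts <hostname>`-style output; IPv6 addresses contain ':'.
pick_proto() {
    if awk '{ print $1 }' | grep -q ':'; then
        echo tcp6
    else
        echo tcp
    fi
}
# Usage: getent ahosts sweet-chili.int.bunkus.org | pick_proto
```

The sm code would still have to prefer AAAA results explicitly when a name has both record types; this sketch only shows the decision point.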
-
@mbunkus thanks for the report.
About entering an IPv6 address for NFS in XOA: did you put `[]` around the IPv6 address?
If so and it still failed, you can indeed create an issue on the `vatesfr/xen-orchestra` repo (make sure to reference this thread if you do). For the rest, no need to create issues; I'm aware of them and I'll note them on our internal board for the next devs.
Regards
-
Thanks for your work!
I'm a little new to XCP-ng; I was on Xen a few years ago. I'm trying XCP-ng on an IPv6-only server.
The ISO installs fine.
Just a little thing: not all package mirrors have an IPv6 record, so on an IPv6-only installation I got some errors. I just changed the file `/etc/yum.repos.d/xcp-ng.repo`:

Before:

```
...
baseurl=http://mirrors.xcp-ng.org/8/8.2/base/x86_64/
        http://updates.xcp-ng.org/8/8.2/base/x86_64/
...
```

After:

```
...
baseurl=https://updates.xcp-ng.org/8/8.2/base/x86_64/
...
```
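If you'd rather script the edit than do it by hand, something like this should work. `fix_baseurl` is just a helper made up for this sketch; it assumes the stock file points at mirrors.xcp-ng.org, as shown above.

```shell
# Rewrite mirror baseurl lines to the IPv6-reachable updates server.
fix_baseurl() {
    sed 's|https\?://mirrors\.xcp-ng\.org|https://updates.xcp-ng.org|g'
}
# Apply on the host (hypothetical invocation):
#   fix_baseurl < /etc/yum.repos.d/xcp-ng.repo > /tmp/xcp-ng.repo \
#     && mv /tmp/xcp-ng.repo /etc/yum.repos.d/xcp-ng.repo
```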
Regards
-
Hey good catch! Let me check and fix that ASAP!
-
Are you sure about this?
```
olivier@test:~$ ping mirrors.xcp-ng.org
PING mirrors.xcp-ng.org(alpha.xcp-ng.org (2a01:240:ab08:2::2)) 56 data bytes
64 bytes from alpha.xcp-ng.org (2a01:240:ab08:2::2): icmp_seq=1 ttl=63 time=0.386 ms
64 bytes from alpha.xcp-ng.org (2a01:240:ab08:2::2): icmp_seq=2 ttl=63 time=0.264 ms
^C
--- mirrors.xcp-ng.org ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 0.264/0.325/0.386/0.061 ms
olivier@test:~$ ping updates.xcp-ng.org
PING updates.xcp-ng.org(alpha.xcp-ng.org (2a01:240:ab08:2::2)) 56 data bytes
64 bytes from alpha.xcp-ng.org (2a01:240:ab08:2::2): icmp_seq=1 ttl=63 time=0.672 ms
64 bytes from alpha.xcp-ng.org (2a01:240:ab08:2::2): icmp_seq=2 ttl=63 time=0.295 ms
^C
--- updates.xcp-ng.org ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1010ms
rtt min/avg/max/mdev = 0.295/0.483/0.672/0.188 ms
```
edit: aaaaah I see! It's just that SOME mirrors aren't IPv6-ready (ours are). This is indeed less trivial to solve. We'll discuss it with @stormi
-
@olivierlambert Hello
Yes, the problem is with `http://mirrors.xcp-ng.org/8/8.2/base/x86_64/repodata/repomd.xml`: it answers with a `302 Found` pointing at URLs that aren't all reachable over IPv6. You can try it with:

```
curl -IvL http://mirrors.xcp-ng.org/8/8.2/base/x86_64/repodata/repomd.xml
```

You will see the different mirrors associated (the `Link` headers), and some of them redirect to IPv4-only hosts:

```
< Link: <https://xcpng-mirror.as208069.net/8/8.2/base/x86_64/repodata/repomd.xml>; rel=duplicate; pri=1; geo=fr
< Link: <https://mirror.as50046.net/xcp-ng/8/8.2/base/x86_64/repodata/repomd.xml>; rel=duplicate; pri=2; geo=fr
< Link: <https://mirror-xcpng.torontot.fr/8/8.2/base/x86_64/repodata/repomd.xml>; rel=duplicate; pri=3; geo=fr
< Link: <https://updates.xcp-ng.org/8/8.2/base/x86_64/repodata/repomd.xml>; rel=duplicate; pri=4; geo=fr
< Location: https://rg2-xcpng-mirror.reptigo.fr/8/8.2/base/x86_64/repodata/repomd.xml
...
couldn't connect to host at rg2-xcpng-mirror.reptigo.fr:443: Failed to connect to 45.152.69.252
```
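To check each of those mirrors for IPv6 support, the hostnames can be pulled out of the `Link` headers first and then each one tested, e.g. with `dig AAAA <host>`. `mirror_hosts` is a helper made up for this sketch.

```shell
# Extract the unique mirror hostnames from `curl -IvL` output on stdin.
mirror_hosts() {
    grep -o 'Link: <https\?://[^/>]*' | sed 's|.*<https\?://||' | sort -u
}
# Usage:
#   curl -sIvL http://mirrors.xcp-ng.org/8/8.2/base/x86_64/repodata/repomd.xml 2>&1 | mirror_hosts
```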
-
Howdy, all! Just wondering what the status of this feature is, as I'm looking to go IPv6-only on my LAN. If it's complete, is there a way for me to add it to an existing installation of 8.2.1 (stable, i.e. an installation that was not made using one of the test ISOs mentioned in this thread)?
Cheers
EDIT: Just my luck that I saw this feature mentioned in the 8.3 Beta 1 blog post minutes after I posted this! If there's any recommended path to enter the beta so that I can upgrade my existing 8.2.1 installation to it and get this feature, I'd love to know how.
-
Hi. Check XCP-ng 8.3 beta 1: https://xcp-ng.org/blog/2023/06/22/xcp-ng-8-3-beta-1/
-
Sorry, I replied based on the e-mail notification I got and missed your EDIT.
-
@stormi Am I required to install version 8.3 beta 1 from scratch, rather than upgrading, in order to get the new IPv6 functionality? I just ran the upgrade from 8.2.1, but I'm not seeing any change, nor was I prompted to choose which IP versions to enable during the upgrade process.
If I'm required to install from scratch, is there a recommended way to do this without losing my VM data, given that my pool consists of a single host running all VMs on local storage?
-
@jivanpal The only way to enable IPv6 for the management interface is at boot time. It's something that can't be modified afterwards in XAPI.
Reinstalling without overwriting your VM disks is possible. You'd just need to 1. make sure you have backups, just in case, and 2. carefully avoid selecting any disks for local SR creation during installation, because otherwise they would be wiped. I have doubts about what happens when the SR is on a partition of the same disk as the system, though; I'd have to test to be sure.
However, this is not a complete solution: you'd still have your VM disks, but all the VM metadata would be gone, so you'd have to re-create all the VMs one by one and associate their disks with them. There is a way to export and reimport metadata, but here's the catch: the metadata also contains information about the management interface, and restoring it might overwrite your brand new IPv6 configuration.
So, to me, the best approach would be to export the VMs (VM exports do include the VM metadata for the exported VM), reinstall the server, and then reimport the VMs.
That's assuming you only have one server; with more servers, you can use live migration.
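In command form, the export/reinstall/reimport path looks roughly like this. A sketch only: the UUID and backup path are placeholders, and the guard makes it a no-op anywhere but an XCP-ng host.

```shell
VM_UUID="${VM_UUID:-00000000-0000-0000-0000-000000000000}"  # placeholder
BACKUP="/mnt/backup/$VM_UUID.xva"                           # placeholder path
if command -v xe >/dev/null 2>&1; then
    # Before reinstalling: a full VM export carries the VM metadata with it.
    xe vm-export vm="$VM_UUID" filename="$BACKUP"
    # After reinstalling (with IPv6 chosen at install time): import it back.
    xe vm-import filename="$BACKUP"
else
    echo "xe not found: run these commands on the XCP-ng host"
fi
```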
-
@stormi Thanks for the thorough explanation. I will test whether SR partition on system disk is overwritten by installing and reinstalling XCP-ng on a VM on my laptop.
Loss of VM metadata doesn't concern me as I have relatively few VMs and am happy to just recreate these and attach the retained VDIs to them. The only question that remains is whether those VDIs (and some raw/non-sparse VHDs I have that were created by cloning old disks for data recovery tasks) will show up under the Local Storage repo with a simple click of the refresh button in XO, or whether metadata for those also needs to recreated manually.
Rest assured that I have backups I'd obviously just prefer to avoid needing to restore from them as it's time-consuming.
-
@jivanpal There's another gotcha, IIRC: after reinstalling, make sure you attach the SR rather than create a new one, because SR creation is destructive.
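Attaching instead of creating can be done with `xe` roughly like this. A hedged sketch: the SR UUID and device path are placeholders, it assumes a local EXT SR, and the guard makes it a no-op off the host.

```shell
SR_UUID="${SR_UUID:-d62dbe0a-b8b8-143f-6f29-3829124d35d4}"  # placeholder UUID
if command -v xe >/dev/null 2>&1; then
    # sr-introduce re-registers the existing SR; sr-create would format it!
    xe sr-introduce uuid="$SR_UUID" type=ext name-label="Local storage" content-type=user
    # Re-connect the SR to the host, then plug it.
    HOST_UUID=$(xe host-list --minimal)
    PBD_UUID=$(xe pbd-create host-uuid="$HOST_UUID" sr-uuid="$SR_UUID" \
        device-config:device=/dev/sda3)  # placeholder device
    xe pbd-plug uuid="$PBD_UUID"
else
    echo "xe not found: run these commands on the XCP-ng host"
fi
```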
-
@stormi Thanks, my testing in a VM should reveal how to make sure I do this properly.
-
-
The only way to enable IPv6 for the management interface is at boot time. It's something that can't be modified afterwards in XAPI.
@stormi Wouldn't it be possible to change it afterwards by following a procedure similar to replacing a network card that carries the management interface? Disable host management, modify the PIF with IPv6 as the primary address type (forget/reintroduce/reconfigure), then reset host management.
It's definitely a process, but it seems possible.
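For reference, the theorized sequence would look something like this in `xe`. A sketch only: XAPI may refuse some of these calls on the management PIF, the PIF UUID is a placeholder, and the guard makes it a no-op off the host.

```shell
PIF_UUID="${PIF_UUID:-<pif-uuid>}"  # placeholder
if command -v xe >/dev/null 2>&1; then
    xe host-management-disable
    xe pif-reconfigure-ipv6 uuid="$PIF_UUID" mode=autoconf
    xe pif-set-primary-address-type uuid="$PIF_UUID" primary_address_type=IPv6
    xe host-management-reconfigure pif-uuid="$PIF_UUID"
else
    echo "xe not found: run these commands on the XCP-ng host"
fi
```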
-
@lethedata Hi!
Indeed, it's currently impossible to change the primary address type of the management interface: it's forbidden by design in XAPI for now. Maybe in the future it'll be allowed, but there's still some work to do before that, and then it would need to be discussed with the XAPI dev team.
I'll try to find a workaround and will post it here if I find one, but I'm not sure there is one for now.
Regards!
-
@stormi @BenjiReis I thought I'd document my upgrade process here, as I did a bunch of testing this week on a spare laptop before finally doing it for real last night, and it all went very smoothly in the end. Perhaps all of this could be done by the installer as a user-friendly means of upgrading to add IPv6 support without needing any changes in XAPI:

1. Make note of the current partition table, because it will be wiped and the SR partition will not be recreated during the installation process. Mine was as follows:

   ```
   # lsblk /dev/sda
   NAME     MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
   sda        8:0    0 21.8T  0 disk
   ├─sda4     8:4    0  512M  0 part /boot/efi
   ├─sda2     8:2    0   18G  0 part
   ├─sda5     8:5    0    4G  0 part /var/log
   ├─sda3     8:3    0 21.8T  0 part
   │ └─XSLocalEXT--d62dbe0a--b8b8--143f--6f29--3829124d35d4-d62dbe0a--b8b8--143f--6f29--3829124d35d4 253:0 0 21.8T 0 lvm /run/sr-mount/d62dbe0a-b8b8-143f-6f29-3829124d35d4
   ├─sda1     8:1    0   18G  0 part /
   └─sda6     8:6    0    1G  0
   ```

   ```
   # gdisk -l /dev/sda
   [...]
   First usable sector is 34, last usable sector is 46875541470
   Partitions will be aligned on 2048-sector boundaries
   Total free space is 2014 sectors (1007.0 KiB)

   Number  Start (sector)    End (sector)  Size        Code  Name
      1        46139392        83888127    18.0 GiB    0700
      2         8390656        46139391    18.0 GiB    0700
      3        87033856     46875541470    21.8 TiB    8E00
      4        83888128        84936703    512.0 MiB   EF00
      5            2048         8390655    4.0 GiB     0700
      6        84936704        87033855    1024.0 MiB  8200
   ```
2. Ensure that you have an instance of XO (Xen Orchestra) running on a different machine. Use that instance to create a backup of the pool metadata of the machine you'll be adding IPv6 support to.
3. Install XCP-ng 8.3 from scratch on the machine, overwriting the existing installation. Ensure that no disks are selected for use as an SR. This will wipe the partition table and create new partitions for the OS, but leave unpartitioned space where the SR partition would otherwise be. Since versions 8.2 and 8.3 use the same partition layout, you should get the same partition sizes, thereby leaving the SR filesystem intact on the disk, but inaccessible. Since you opted not to create an SR partition, the partition numbers will differ slightly. Immediately after installation, mine was as follows:

   ```
   # lsblk /dev/sda
   NAME     MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
   sda        8:0    0 21.8T  0 disk
   ├─sda2     8:2    0   18G  0 part
   ├─sda5     8:5    0    4G  0 part /var/log
   ├─sda3     8:3    0  512M  0 part /boot/efi
   ├─sda1     8:1    0   18G  0 part /
   └─sda6     8:6    0    1G  0 part [SWAP]
   ```

   ```
   # gdisk -l /dev/sda
   [...]
   First usable sector is 34, last usable sector is 46875541470
   Partitions will be aligned on 2048-sector boundaries
   Total free space is 2014 sectors (1007.0 KiB)

   Number  Start (sector)    End (sector)  Size        Code  Name
      1        46139392        83888127    18.0 GiB    0700
      2         8390656        46139391    18.0 GiB    0700
      3        83888128        84936703    512.0 MiB   EF00
      5            2048         8390655    4.0 GiB     0700
      6        84936704        87033855    1024.0 MiB  8200
   ```
4. Reboot into the new installation, then recreate the SR partition using `gdisk`:
   - Run `gdisk /dev/sda` (or another device node name, as appropriate).
   - Create a new partition by entering `n`, then use the default values for the start and end sectors (these should automatically match those of the SR partition as it appeared in the original partition table prior to reinstallation), and use `8e00` for the partition type.
   - Remove the partition label by entering `c`, then the partition number (should be `4`), then entering nothing for the name.
   - Check the new partition table by entering `p`; the start and end sector values should match those of the original partition table, but the partition numbers may differ.
   - Write the changes with `w`, or quit without writing changes with `q`.
5. Connect to the new installation using the remote XO instance, then create a new backup of this fresh installation's pool metadata.
6. Alter the first backup's `data` file (which is an XML file) as follows:
   - In the section `<table name="PBD">`, replace the occurrence of the device node path for the SR with the correct path as it would be for the new installation. In particular, the disk's SCSI or other ID may have changed, and the SR partition's number in the partition table has probably changed from 3 to 4. In my case, I had to change it from `/dev/disk/by-id/scsi-36...fa-part3` to `/dev/disk/by-id/scsi-36...a9-part4`.
   - In the second backup's `data` file, find the section `<table name="PIF">`. Within it, find the `<row>` pertaining to the management interface. Copy the values of the following `<row>` attributes, overwriting the corresponding attributes in the first backup's `data` file with their values, so that the new installation's values for the IPv4- and IPv6-related configuration parameters are used: `DNS`, `IP`, `IPv6`, `gateway`, `ip_configuration_mode`, `ipv6_configuration_mode`, `ipv6_gateway`, `netmask`, `primary_address_type`.
7. Use XO to restore the now-altered first backup to the new installation. It will automatically reboot, and all storage backends, virtual disk metadata, VMs, and VM metadata should be restored and working, along with IPv6 on the management interface.
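As an aside, the interactive `gdisk` dialogue in the partition-recreation step can also be scripted with `sgdisk`. This is a hedged sketch, not something the installer does: the sector numbers are the ones from my original partition table above and must be replaced with your own, and `DISK` is deliberately left empty so nothing runs by accident.

```shell
# Set DISK explicitly (e.g. DISK=/dev/sda) on the target host only after
# double-checking the start/end sectors against your own saved table.
DISK="${DISK:-}"
if [ -n "$DISK" ] && command -v sgdisk >/dev/null 2>&1; then
    # Recreate partition 4 over the old SR extent, type 8e00 (Linux LVM).
    sgdisk --new=4:87033856:46875541470 --typecode=4:8e00 "$DISK"
    sgdisk --print "$DISK"
else
    echo "DISK not set (or sgdisk missing); skipping"
fi
```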
-
@jivanpal It's good that this works, but I can see a high chance of data loss if the installer ever changes or adjusts the partition sizes during the install: it could wipe the data and prevent recovery of the SR. That's not to mention any local modifications made to the hosts that might have to be redone.
As a side note, I don't actually need to migrate a management interface; I was just theorizing that it looked possible to do via XAPI without reinstalling, based on documentation I was reading. I did run into a situation on one of my lab servers where running an emergency reset broke the IPv6 management interface. I just reinstalled, but was curious how one could recover from that state.