IPv6 support in XCP-ng for the management interface - feedback wanted
-
@AtaxyaNetwork also, I tried to reproduce your XOA deploy issue but failed. Did you add
[]
around the IPv6 of your XCP-ng when filling in the deploy form?
Thanks.
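(For example, with a placeholder address: [2001:db8::42] rather than the bare 2001:db8::42; the brackets are the usual way to keep the colons from being read as a host:port separator.)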
-
@BenjiReis Hello !
I don't remember exactly, but I will try to retest everything in the coming days.
-
@AtaxyaNetwork there are some issues, but on XOA's side.
For now the deploy script specifically waits for an IPv4 address to report success, so it won't complete even though the XOA VM is up and running.
For some reason it seems the XO app is not reachable with the IPv6 address in a web browser. I'm still investigating that.
-
@BenjiReis Ok !
Don't hesitate to ping me if you need help debugging the XOA side.
-
@AtaxyaNetwork so in fact the issue was with our IPv6 lab config (lol), and XOA is indeed reachable with an IPv6 address ahah.
So you can play with it. Now I'm back on my DHCPv6/Autoconf/SLAAC investigations.
-
Hi all!
8.2.1 IPv6 ISO available!
Here's a new ISO for IPv6 based on XCP-ng 8.2.1!
The ISO can be used to upgrade an existing server installed with the previous IPv6 test ISO, or to install a brand new XCP-ng 8.2.1 with IPv6 support on the management interface. A non-IPv6 8.2.0 would remain non-IPv6 after an upgrade, as it's not possible to edit the management interface's primary address type.
An 8.2.0 IPv6 host can also be upgraded via yum:
yum upgrade --enablerepo=xcp-ng-updates,xcp-ng-ipv6
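If you want to double-check what a given host currently uses, the primary address type is visible on the management PIF; a quick sketch with the standard xe CLI fields:
xe pif-list management=true params=uuid,device,IP,IPv6,primary-address-type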
What's new
- All 8.2.1 fixes
- Better DNS management when both IPv4 and IPv6 are configured on a PIF
- Partial support of IPv6 DHCP and autoconf
What to test
- Your daily uses of XCP-ng but with IPv6
- DHCP and autoconf (I have reached the limits of my knowledge, so help from the community with more IPv6 expertise would be very VERY VERY helpful! :D); see the quick checks sketched below
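As a starting point for reporting what DHCPv6 or SLAAC actually handed out, these standard iproute2 commands run fine in dom0; the bridge name is an assumption (the management bridge is usually xenbr0, adjust if yours differs):
ip -6 addr show dev xenbr0
ip -6 route show
cat /etc/resolv.conf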
The goal of this ISO release is mainly to get help and leads about what's missing in DHCP and Autoconf.
Any issue encountered (and what works fine also) can be reported in this thread.
Usual warning
This is a test ISO with an experimental feature still in development.
IPv6 on the management interface is not officially supported by XCP-ng yet, so we do not recommend using it in a production environment.
Thanks a lot for the help and I hope the ISO will work well for everyone.
-
@BenjiReis Hello !
Thank you for the ISO! I just tested the install in a VM (for the moment; I will have a physical machine available soon).
First review:
So far, the autoconf seems to be working!
But during the installation I provided an IPv6 DNS server (the Cloudflare one, 2606:4700:4700::1111), and DNS is not working: I have 1.1.1.1 in my /etc/resolv.conf.
I don't know if it's the autoconf that is pushing the 1.1.1.1 (I need to check my router first), but I think it would be better if, when we give a DNS server, it bypassed the autoconf. More tests are coming in the next few days.
(sorry if my English is a bit bad)
Thank you and the team for all your work!
-
@AtaxyaNetwork thanks for the report.
I reproduced the issue: for some reason, at first boot XCP-ng launches an IPv4 dhclient request (even though IPv4 is not configured on the management interface...), which overrides the configured DNS once the request is answered.
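If anyone wants to check whether this is what's happening on their own host, a minimal sketch with standard tools (nothing XCP-ng specific):
# is an IPv4 dhclient still running after first boot?
ps aux | grep [d]hclient
# and what ended up in the resolver configuration
cat /etc/resolv.conf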
-
@BenjiReis I've just started giving the IPv6-enabled 8.2.1 a try. Right within the first hour I've stumbled across the following two issues on an IPv6-only server:
Repository mirrors
The preconfigured repositories use mirrors.xcp-ng.org. That one returns the address of an actual mirror, and if that mirror happens not to have an IPv6 address, doing anything (e.g. yum makecache) fails. Re-running it might return one with AAAA records, in which case it does work, or maybe it'll be another AAAA-less mirror.
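To see which mirror the redirector hands out and whether it has any AAAA record at all, something along these lines works from dom0 (the mirror host name in the second command is just whatever came back from the first, not a real suggestion):
curl -sIL http://mirrors.xcp-ng.org/8/8.2/base/x86_64/repodata/repomd.xml | grep -iE '^(location|link):'
getent ahostsv6 some-mirror.example.org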
NFS mounting via host name
NFS mounting doesn't work if I use a host name that has both A and AAAA records (the problem isn't the A record, though). I've tried to do this via XOA. After entering everything the list of exports available on the server is actually populated, but selecting one will result in the following error in
/var/log/SMlog:
Jun 9 19:38:55 ul SM: [7165] ['mount.nfs', 'sweet-chili.int.bunkus.org:/srv/nfs4/home/', '/var/run/sr-mount/probe', '-o', 'soft,proto=tcp,vers=3,acdirmin=0,acdirmax=0']
Jun 9 19:38:55 ul SM: [7165] FAILED in util.pread: (rc 32) stdout: '', stderr: 'mount.nfs: Network is unreachable
Jun 9 19:38:55 ul SM: [7165] '
Jun 9 19:38:55 ul SM: [7165] Raising exception [73, NFS mount error [opterr=mount failed with return code 32]]
Jun 9 19:38:55 ul SM: [7165] lock: released /var/lock/sm/sr
Jun 9 19:38:55 ul SM: [7165] ***** generic exception: sr_probe: EXCEPTION <class 'SR.SROSError'>, NFS mount error [opterr=mount failed with return code 32]
Jun 9 19:38:55 ul SM: [7165]   File "/opt/xensource/sm/SRCommand.py", line 110, in run
Jun 9 19:38:55 ul SM: [7165]     return self._run_locked(sr)
Jun 9 19:38:55 ul SM: [7165]   File "/opt/xensource/sm/SRCommand.py", line 159, in _run_locked
Jun 9 19:38:55 ul SM: [7165]     rv = self._run(sr, target)
Jun 9 19:38:55 ul SM: [7165]   File "/opt/xensource/sm/SRCommand.py", line 332, in _run
Jun 9 19:38:55 ul SM: [7165]     txt = sr.probe()
Jun 9 19:38:55 ul SM: [7165]   File "/opt/xensource/sm/NFSSR", line 170, in probe
Jun 9 19:38:55 ul SM: [7165]     self.mount(temppath, self.remotepath)
Jun 9 19:38:55 ul SM: [7165]   File "/opt/xensource/sm/NFSSR", line 133, in mount
Jun 9 19:38:55 ul SM: [7165]     raise xs_errors.XenError('NFSMount', opterr=exc.errstr)
The problem here is mount.nfs -o proto=tcp. As can be seen in man 5 nfs, the udp and tcp protocols only use IPv4, whereas udp6 and tcp6 only use IPv6. I'm not aware of a way of saying "use TCP as the protocol, resolve the name, prefer IPv6 over IPv4", unfortunately.
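For reference, a quick way to confirm it really is the proto option and not general reachability is to repeat the mount by hand with proto=tcp6; this is just a sketch reusing the export and options from the log above, with a throwaway mount point:
mkdir -p /mnt/nfs-test
# same options as SM uses: fails on an IPv6-only host because proto=tcp means IPv4
mount.nfs sweet-chili.int.bunkus.org:/srv/nfs4/home/ /mnt/nfs-test -o soft,proto=tcp,vers=3
# proto=tcp6 makes mount.nfs use the AAAA record, so this one should go through
mount.nfs sweet-chili.int.bunkus.org:/srv/nfs4/home/ /mnt/nfs-test -o soft,proto=tcp6,vers=3
umount /mnt/nfs-test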
This isn't limited to XOA, obviously; the corresponding call
xe sr-probe type=nfs device-config:server=sweet-chili.int.bunkus.org device-config:serverpath=/srv/nfs4/space
fails the same way.
One possible way of addressing this could be to resolve the host name right before constructing the mount commands & using the correct proto depending on whether the management interface is IPv6 enabled.
Note that using an IPv6 address instead of a host name does not work either: even though sr-create works, as proto=tcp6 is used in the mount calls according to /var/log/SMlog, the later sr-create does not work, with similar error messages.
I can file issues for both on GitHub, if that helps. The second one in
xcp-ng/xcp, I guess, but where would I file the first one? vatesfr/xen-orchestra?
-
@mbunkus thanks for the report.
About entering an IPv6 address for NFS in XOA: did you put the
[]
around the IPv6?
If so and it still failed, you can indeed create an issue on the vatesfr/xen-orchestra repo (make sure to reference this thread if you do).
For the rest, no need to create issues; I'm aware of them and I'll note them on our internal board for the next devs.
Regards
-
Thanks for your work!
I'm a little new to XCP-ng; I was on Xen a few years ago. I'm trying XCP-ng on an IPv6-only server.
The ISO file installs fine.
Just one little thing: the package mirrors don't all have an IPv6 record, so on an IPv6-only installation I get some errors. I just changed the file /etc/yum.repos.d/xcp-ng.repo:
Before:
...
baseurl=http://mirrors.xcp-ng.org/8/8.2/base/x86_64/ http://updates.xcp-ng.org/8/8.2/base/x86_64/
...
After:
...
baseurl=https://updates.xcp-ng.org/8/8.2/base/x86_64/
...
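After a change like that, rebuilding the cache makes sure the new baseurl is actually used; these are just the standard yum commands, nothing specific to XCP-ng:
yum clean metadata
yum makecache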
Regards
-
Hey good catch! Let me check and fix that ASAP!
-
Are you sure about this?
olivier@test:~$ ping mirrors.xcp-ng.org
PING mirrors.xcp-ng.org(alpha.xcp-ng.org (2a01:240:ab08:2::2)) 56 data bytes
64 bytes from alpha.xcp-ng.org (2a01:240:ab08:2::2): icmp_seq=1 ttl=63 time=0.386 ms
64 bytes from alpha.xcp-ng.org (2a01:240:ab08:2::2): icmp_seq=2 ttl=63 time=0.264 ms
^C
--- mirrors.xcp-ng.org ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 0.264/0.325/0.386/0.061 ms
olivier@test:~$ ping updates.xcp-ng.org
PING updates.xcp-ng.org(alpha.xcp-ng.org (2a01:240:ab08:2::2)) 56 data bytes
64 bytes from alpha.xcp-ng.org (2a01:240:ab08:2::2): icmp_seq=1 ttl=63 time=0.672 ms
64 bytes from alpha.xcp-ng.org (2a01:240:ab08:2::2): icmp_seq=2 ttl=63 time=0.295 ms
^C
--- updates.xcp-ng.org ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1010ms
rtt min/avg/max/mdev = 0.295/0.483/0.672/0.188 ms
edit: aaaaah I see! It's just that SOME mirrors aren't IPv6 ready (ours are). This is indeed less trivial to solve. We'll discuss that with @stormi
-
@olivierlambert Hello
Yes, the problem is on http://mirrors.xcp-ng.org/8/8.2/base/x86_64/repodata/repomd.xml: there is a 302 Found to non-IPv6 URLs.
You can try with:
curl -IvL http://mirrors.xcp-ng.org/8/8.2/base/x86_64/repodata/repomd.xml
You will see the different mirrors associated (Link headers), and some of them redirect to IPv4-only hosts.
< Link: <https://xcpng-mirror.as208069.net/8/8.2/base/x86_64/repodata/repomd.xml>; rel=duplicate; pri=1; geo=fr
Link: <https://xcpng-mirror.as208069.net/8/8.2/base/x86_64/repodata/repomd.xml>; rel=duplicate; pri=1; geo=fr
< Link: <https://mirror.as50046.net/xcp-ng/8/8.2/base/x86_64/repodata/repomd.xml>; rel=duplicate; pri=2; geo=fr
Link: <https://mirror.as50046.net/xcp-ng/8/8.2/base/x86_64/repodata/repomd.xml>; rel=duplicate; pri=2; geo=fr
< Link: <https://mirror-xcpng.torontot.fr/8/8.2/base/x86_64/repodata/repomd.xml>; rel=duplicate; pri=3; geo=fr
Link: <https://mirror-xcpng.torontot.fr/8/8.2/base/x86_64/repodata/repomd.xml>; rel=duplicate; pri=3; geo=fr
< Link: <https://updates.xcp-ng.org/8/8.2/base/x86_64/repodata/repomd.xml>; rel=duplicate; pri=4; geo=fr
Link: <https://updates.xcp-ng.org/8/8.2/base/x86_64/repodata/repomd.xml>; rel=duplicate; pri=4; geo=fr
< Location: https://rg2-xcpng-mirror.reptigo.fr/8/8.2/base/x86_64/repodata/repomd.xml
Location: https://rg2-xcpng-mirror.reptigo.fr/8/8.2/base/x86_64/repodata/repomd.xml
...
couldn't connect to host at rg2-xcpng-mirror.reptigo.fr:443: Failed to connect to 45.152.69.252
-
Howdy, all, just wondering what the status of this feature is, as I'm looking to go IPv6-only on my LAN. If it's complete, is there a way for me to add it to an existing installation of 8.2.1 (stable, i.e. an installation that was not made using one of the test ISOs mentioned in this thread)?
Cheers
EDIT: Just my luck that I see this feature mentioned in the 8.3 Beta 1 blog post minutes after I post this! If there's any recommended path to enter the beta so that I can upgrade my existing 8.2.1 installation to it and get this feature, I'd love to know how
-
Hi. Check XCP-ng 8.3 beta 1: https://xcp-ng.org/blog/2023/06/22/xcp-ng-8-3-beta-1/
-
Sorry, I replied based on the e-mail notification I got, and missed your EDIT
-
@stormi Am I required to install version 8.3-beta1 from scratch, rather than upgrading, in order to get the new IPv6 functionality? I just ran the upgrade from 8.2.1, but am not seeing any change, nor was I prompted to choose which IP versions to enable during the upgrade process.
If I'm required to upgrade from scratch, is there a recommended way to do this without losing my VM data, given that my pool consists of a single host running all VMs using local storage?
-
@jivanpal The only way to enable IPv6 for the management interface is at boot time. It's something that can't be modified afterwards in XAPI.
Reinstalling without overwriting your VM disks is possible. You'd just 1. make sure you have backups, just in case, and 2. carefully avoid selecting any disks for local SR creation during installation. Otherwise they would be wiped. I have doubts about what happens when they are on a partition which is on the same disk as the system, though. I'd have to test to be sure.
However, this is not a complete solution, because you'd still have your VM disks, but all VM metadata will be gone so you'd have to re-create all VMs one by one and associate the disks to them. There is a way to export and reimport metadata, but here's the catch: the metadata also contain information about the management interface, and restoring them might also overwrite your brand new IPv6 configuration.
So, to me, the best would be to export the VMs (VM exports do include the VM metadata for the exported VM), reinstall the server, and then reimport the VMs.
That's assuming you only have one server. With more servers, you can use live migration.
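For reference, a minimal sketch of that export/reimport route with the xe CLI; the VM name and file paths below are placeholders:
# on the current installation, for each halted VM (the .xva includes the VM's metadata)
xe vm-export vm=<vm-uuid-or-name-label> filename=/path/to/backups/myvm.xva
# on the freshly reinstalled, IPv6-enabled host
xe vm-import filename=/path/to/backups/myvm.xva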
-
@stormi Thanks for the thorough explanation. I will test whether the SR partition on the system disk is overwritten by installing and then reinstalling XCP-ng in a VM on my laptop.
Loss of VM metadata doesn't concern me, as I have relatively few VMs and am happy to just recreate these and attach the retained VDIs to them. The only question that remains is whether those VDIs (and some raw/non-sparse VHDs I have that were created by cloning old disks for data recovery tasks) will show up under the Local Storage repo with a simple click of the refresh button in XO, or whether metadata for those also needs to be recreated manually.
Rest assured that I have backups; I'd obviously just prefer to avoid needing to restore from them, as it's time-consuming.
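For what it's worth, the CLI equivalent of that refresh button should be an SR scan; a small sketch with the SR UUID as a placeholder, assuming the old SR has already been reattached to the new installation:
xe sr-list params=uuid,name-label
xe sr-scan uuid=<local-sr-uuid>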