@r0ssar00 hi, that issue would arise if you ran this script with python3, but its interpreter is set to /usr/bin/python
- How did you call this script? Did you manually run it with python3? It should be run by just invoking the command on the CLI, e.g. interface-rename
-
RE: Ubuntu 22.04 Cloud-init disk size issue
@jubin3 As this is totally unrelated to XOA and XCP-ng, you'll (hopefully) get a better response in the cloud-init community, as it's their project which has (once again) been broken by an OS update. I gave up chasing them some time ago, especially with brand new OS releases.
-
RE: Assign second ipadres to network card
@rtjdamen Copying my reply to your official support ticket (any reason for duplicating support tickets on the forum as well?):
Given XOA is built on standard Debian, you can assign multiple IPs to the same interface quite easily by just duplicating the "iface eth1 inet static" stanza. Also keep in mind XOA does not add extra interfaces in the main /etc/network/interfaces file, but in files under the /etc/network/interfaces.d/ directory. So in your case, since it was eth1 you wanted a second IP on, you can add your required second IP in this file like so:
[09:43 12] xoa:~$ cat /etc/network/interfaces.d/eth1
allow-hotplug eth1
iface eth1 inet static
    address 192.168.1.80
    netmask 255.255.255.0
#second IP
iface eth1 inet static
    address 172.16.100.5
    netmask 255.255.255.0
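As an aside, if your ifupdown version handles duplicate iface stanzas inconsistently, an alternative sketch (using the same example addresses as above) is to add the second IP with a post-up line instead:

allow-hotplug eth1
iface eth1 inet static
    address 192.168.1.80
    netmask 255.255.255.0
    post-up ip addr add 172.16.100.5/24 dev eth1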
-
RE: 10 gig secondary network
@abelaguilar Indeed, you do not have to fill out the DNS and gateway fields - in fact, as you surmised, you shouldn't. Were you getting an error when leaving them blank? The only mandatory fields are IP and netmask.
-
RE: Second ip for hosts interface
@SNSNSN Indeed, these would typically at least be isolated via VLANs (one VLAN for iSCSI traffic, one for management). There's no point in having them in two different subnets if they're on the same network and VLAN - the traffic isn't isolated at all. You might as well have them in the same subnet if you're doing that, in which case you only need one IP on the XCP-ng management NIC.
-
RE: Second ip for hosts interface
@SNSNSN Hi, this isn't possible, at least not without a lot of manual workarounds. It's not recommended anyhow - why do you need to assign another subnet to an adapter that's already in a different subnet? These should typically be isolated either physically via separate connections, or via VLANs.
-
RE: Windows Server 2022 Essentials
@olivierlambert Never done it myself, but this is indeed exactly what the "Copy host BIOS strings to VM" feature was intended for, as @Andrew mentioned. Hopefully the BIOS strings this feature copies are enough for the ROK installer to recognize the "authorized" Dell hardware.
-
RE: iptables rule to allow apcupsd traffic to APC management card
Indeed, to properly edit iptables rules on XCP-ng, you need to add rules to /etc/sysconfig/iptables. I would copy something like the ssh allow line to another line directly below it, and change the port to 161 for example (and change the protocol to udp, which I'm pretty sure your card uses if it's just doing plain SNMP). After verifying that fixes it, you can lock the rule down further by allowing this traffic only from the IP of the management card. Example of added lines below the ssh line:
-A RH-Firewall-1-INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A RH-Firewall-1-INPUT -p udp -m conntrack --ctstate NEW -m udp --dport 694 -j ACCEPT
##UPS rule
-A RH-Firewall-1-INPUT -p udp -m conntrack --ctstate NEW -m udp --dport 161 -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp -m conntrack --ctstate NEW -m tcp --dport 80 -j ACCEPT
etc etc
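For reference, once verified, a locked-down version of that UPS rule might look like the below (192.168.1.50 here is a hypothetical management card IP - substitute your own):

-A RH-Firewall-1-INPUT -s 192.168.1.50/32 -p udp -m conntrack --ctstate NEW -m udp --dport 161 -j ACCEPT ##UPS rule, source-restricted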
Note that anytime you edit this file, you must restart iptables for it to take effect with
service iptables restart
Thinking about this further though, I don't think this should be necessary: the UPS daemon in dom0 is reaching out to the UPS card, not the other way around, so an explicitly opened port shouldn't be needed with the default iptables rules in dom0 (which allow outbound connections).
-
RE: Network pool + Cloud Init
@brm Hmm, I actually am not sure if we ever added support for this specifically (specifying an IP from IP pools in a cloud-init configuration). I've never seen IP variables used or referenced so I don't think it's currently possible. @olivierlambert who was it on the team that implemented the IP Pools feature?
-
RE: When attempting to create a OPNsense VM via XO stack becomes unresponsive.
@MrXeon So the actual root issue here, I believe, is that OPNsense installs come with an IP and DHCP server already assigned and enabled on the LAN interface (I believe it's 192.168.1.1, but don't quote me). If your existing home network already uses 192.168.1.x/24 and already has a DHCP server, booting an OPNsense install with its virtual LAN NIC attached to your existing home LAN will cause a lot of conflicts. Virtual NIC order can be whatever you'd like (you can change and move around assignments in OPNsense), but if its preconfigured LAN interface gets set to your preexisting LAN network, there will be conflicts.
-
RE: Any updated tutorial on how to create new cloud images?
Also note the text at the top of your screenshot: to continue you need to select a boot device. There might be a way in that menu (or partition creation submenu) to mark that created partition as bootable, or maybe you just need to highlight/select the partition under "used devices" before hitting "done"
-
RE: Any updated tutorial on how to create new cloud images?
@encryptblockr yup, welcome to cloud-init hell. Your issue is definitely Ubuntu-related though - if I had to guess, the installer wants/requires a swap partition. Just create a 1 or 2 GB swap partition as well, but put it first in the partition table, so the root partition after it has room to grow. You'll also probably run into some network issues when trying to use your new template, as Ubuntu has moved to the new netplan crap to manage networking in the OS, and cloud-init has a ton of bugs with it.
-
RE: Proper way to handle XO CloudConfigDrive and CloudInit post provisioning
@furyflash777 I'm assuming you're on Ubuntu? Indeed, as Olivier said, this is tested on Debian and doesn't cause issues, but it seems that on the newer Ubuntu versions with cloud-init, the new netplan-based network manager's config breaks/gets wiped if no cloud-init drive is found. Yet another cloud-init bug to track down.
-
RE: Networking disparity between guest OS and XO
@jcdick1 I run opnsense on xcp-ng personally as well and use their packaged tools without issue, the only time I've gotten this behavior is when I hot-added interfaces and it changed the order of interfaces. If that's not it, I'm really not sure what would be causing this
One last thing you can try, in case it's a weird cache issue: (inside XOA) go to Settings > Servers, click the green "connected" toggle next to your XCP-ng server to disconnect it from XOA, then wait a couple of seconds and click it again to reconnect.
-
RE: Networking disparity between guest OS and XO
@jcdick1 Hi, have you hot-added any new network interfaces to this VM by chance? I've noticed when doing this with *bsd-based guests like the *sense projects, the order can get quite messed up. If you've added any new interfaces, changed any MAC addresses, etc., can you please shut the VM down entirely (not just issue a reboot)? Once the power state of the VM is completely off, start it again.
Note that if you did hot-add interfaces and hadn't rebooted yet, the interface order will probably change into its "final" order (the ordering seems to be affected when hot-adding interfaces - e.g. when I hot-add interfaces into *bsd VMs, sometimes the new interface will show up as xn0 in the VM, so the existing xn0 gets moved to xn3, etc.). I've avoided this by just no longer hot-adding interfaces and doing it while the VM is off instead.
-
RE: Epyc Boost... not boosting?
@tekwendell Xen carefully manages CPU power to match VM load and vCPU count; I would not try to manually adjust things with xenpm in the meantime, as it's likely you'll make things worse (don't try to outsmart Xen power management unless you have a VERY specific use case). Xen is designed for parallel workloads (more than a single VM), so many of the tunables for VMs are set with this in mind (like CPU affinity). By default I'm sure the CPU affinity for your single Windows VM is still set somewhere in the "middle", so it's not going to be allowed to schedule the full CPU time versus what dom0 is also using.
I'm not an expert in AMD/Epyc power management, but I believe it's pretty typical that CPU power/clock management boosts based on overall CPU load, and running a benchmark on only a single VM using something like 8 cores of a 64-core processor is not going to demand a lot of CPU time, so I'm not surprised to see it's not boosting very far. Spin up 6 more of those VMs and benchmark them all at the same time - I wouldn't be surprised if you see it start boosting higher.
475 in CPU-Z versus 501 bare metal is very good and indicates pretty clearly there's no issue here: you're getting about 95% of bare-metal performance on Windows under a large virtualization stack (historically the OS with the most overhead to virtualize). I would be very happy with this.
If you really want to dig further, ensure your BIOS power management is set to "OS-controlled" - this hands more control over turbo and C-states to the Xen power manager and is what's recommended on AMD processors. You can then use some of the commands listed in the article below to check actual turbo status. But again, note that I won't be surprised if you can't get a 64-core processor to enter its highest turbo states when only stressing an eighth of its cores: https://support.citrix.com/article/CTX200390/power-settings-in-citrix-hypervisor-cstates-turbo-and-cpu-frequency-scaling
-
RE: Netbox Plugin: IP-address created always uses the "largest prefix" in Netbox
@olivierlambert I vaguely remember @pdonias and I discussing which of these behaviors would be best and we decided on adding it to the smallest matching prefix, I'm not sure why the behavior is the opposite
-
RE: Changing Hosts and XOA IP
@jmishal As @tjkreidl says, you can do this quite easily through the management console in your screenshot. But be aware that if this is a pool, you should change the master first, then each slave. This won't affect VM traffic on the network or running VMs.
-
RE: Any updated tutorial on how to create new cloud images?
@encryptblockr The issue is that the cloud-init project is so disorganized, and is constantly introducing more and more OS-dependent workarounds, bugs, and oddities, that any documentation becomes useless in a few months. I maintain our XOA Hub cloud-init builds, and I have to personally rewrite my own documentation every single time we push out a new image, because cloud-init has broken something new, changed or removed config options without documenting it, or the underlying OS has changed how it deals with networking files etc. and cloud-init has not been updated to deal with that. For example, I don't think the below will work any longer, networking-wise, with netplan-based distros like newer Ubuntu releases.
For Debian/Ubuntu, this is my rough process currently, but it's not worth putting in an official blog post or documented guide because it will be rendered useless, and just cause people frustration, the next time cloud-init puts out a release, the next Debian or Ubuntu version change, etc. You'll note none of this is related to XCP-ng at all (except for the Xen tools, obviously), so the below is documentation cloud-init should really be publishing themselves - I have a feeling they also know it would quickly be rendered useless, so they don't bother. Anyway:
Create a new Debian or Ubuntu VM in XCP-ng with a ~10 GB disk. During the install ISO process, you'll have to choose manual partitioning, and either remove the swap partition completely from the partition layout, or move it to the beginning of the disk. If you choose auto partitioning, it will put a swap partition after the data partition, so the data partition won't be able to be expanded on your template.
Once it's installed and running, install the XCP-ng guest utilities. To get the latest version, use our guest-utilities ISO.
Once all that Xen-specific stuff is done, you can start the cloud-init setup. Start with installing cloud-init in the first place:
apt-get install -y cloud-init cloud-utils cloud-initramfs-growroot
Set the root and default user to random passwords, then disable password logins (this is done on our templates so only pubkey-based logins are accepted; this step is optional):
echo 'root:dfgdfgdfg' | chpasswd
echo 'ubuntu:dfgdfg' | chpasswd
passwd -l root
passwd -l ubuntu
nano /etc/ssh/sshd_config   #set permitrootlogin to "no"
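For reference, after that edit the relevant sshd_config lines should read something like the below (PasswordAuthentication no is my addition for forcing pubkey-only logins - it's optional and not strictly part of the steps above, since the accounts are already locked with passwd -l):

PermitRootLogin no
PasswordAuthentication no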
Clean networking stuff that the VM picked up via dhcp from your current network, they will be re-populated with current info whenever the template is deployed:
nano /etc/resolv.conf   #remove all
nano /etc/hosts         #remove everything but localhost
Now the important part, set the main cloud init config. Edit
/etc/cloud/cloud.cfg
- What exactly you need to edit here is impossible to document, as the default values in this file change across every OS and every cloud-init version, with no documentation indicating they have done so. The actual behavior of a given option has also been changed out of nowhere. So I'll try to summarize: you want to remove any datasource_list or datasource blocks, and replace them with these values:
datasource_list: [ NoCloud, ConfigDrive ]
datasource:
  ConfigDrive:
    dsmode: local
  NoCloud:
    fs_label: cidata
You'll need to find and change (or add) these vars as well so networking is properly handled:
manage_resolv_conf: true
manage_etc_hosts: true
preserve_hostname: false
Also find the default_user block, and change it to whatever username you set up during the install ISO (the user aside from root). On our templates we name this user after the OS, so the non-root user on the Ubuntu template is "ubuntu", and that part of the cloud-init config looks like this:
default_user:
  name: ubuntu
If your file doesn't have the following somewhere in it, add it (I've had to add it on some builds, on other builds it was magically there already):
users:
  - default
disable_root: true
On some older cloud-init versions, I had to manually re-order the module init staging order, because cloud-init could not be bothered to get this right themselves. I think (???) this has been resolved in later builds, but again, it changes constantly so there's no way to know. Basically, ensure these three modules are under the first "cloud_init_modules" list that runs during the init stage, not under the later config or final stages:
- set_hostname - update_hostname - update_etc_hosts
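For reference, the relevant section of /etc/cloud/cloud.cfg would then start something like the below (the surrounding modules vary by cloud-init version - only the placement of these three in the init-stage list matters):

cloud_init_modules:
  - set_hostname
  - update_hostname
  - update_etc_hosts
  # ...whatever other init-stage modules your cloud-init version ships...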
Save the config file, and hopefully you're done with that. I can't overstate just how unreliable documenting this file is - the defaults, the behavior of certain options, or even the presence of certain options changes entirely across OSes and cloud-init versions, so I can't just keep a "master copy" of the config file and expect it to work. You have to examine the default file you get line by line and work towards the values above until the template works properly. When it stops working properly 3 months later, browse the cloud-init mailing lists and GitHub issues to figure out which option's behavior they changed without any warning, and repeat.
Now remove all the random config files that cloud-init occasionally decides to install that will override your own config and render it useless. This list of files changes whenever cloud-init builds feel like it, so just look in the parent directory to see what's there on your specific cloud-init version:
rm -rf /etc/cloud/cloud.cfg.d/99-installer.cfg \
       /etc/cloud/cloud.cfg.d/90_dpkg.cfg \
       /etc/cloud/cloud.cfg.d/subiquity-disable-cloudinit-networking.cfg
Remove any stuff the VM picked up that you don't want in all your templates. This also cleans any cloud-init runs that might have occurred if you rebooted the VM, so the template you end up with will be "fresh". We also remove the command history for the root user and the default user:
rm -rf /etc/ssh/ssh_host_*
cloud-init clean --logs
su - ubuntu
cat /dev/null > ~/.bash_history && history -c && exit
cat /dev/null > ~/.bash_history && history -c && exit
Now you can shut the VM down and convert it into a template. Whenever deploying, it will already have xen guest tools, and a disk that will automatically be expanded to whatever you set the disk size to when deploying
NOTE: If you plan on using the "network config" cloud-init box during VM creation in XOA, note that whatever you put in that box is ADDED to the VM's default network config. It does not REPLACE it. That means that when you follow the directions above, almost all OSes will have a default config of DHCP on eth0 in /etc/network/interfaces - so if you fill out the cloud-init network box during VM creation to set a static IP, the VM will still read the DHCP config, get a DHCP address, then read the cloud-init-created network config files under /etc/network/interfaces.d/* and add that static IP as well. If you want to configure VMs with only the static IPs you set during VM deployment with cloud-init, you have to remove the DHCP config from /etc/network/interfaces on your template VM when creating it.
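For example, on a Debian-style template the stock file usually contains a stanza like the below, and it's the eth0 DHCP part you'd remove (a sketch - your interface name may differ):

# /etc/network/interfaces on the template, before cleanup
auto lo
iface lo inet loopback

# remove these two lines so cloud-init fully controls the IP
allow-hotplug eth0
iface eth0 inet dhcp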