Updated configs for cloud-init
-
Had a chance to "refresh" some cloud-init configs for some later distros (Debian 12, Ubuntu 24.04, Rocky Linux 9.3, etc.), so I thought I would share some example configs I use mainly for bootstrapping new VMs and/or containers.
This first one is targeted at Debian-based distros and sets up an Incus container host. Like most configs I use, this one is pretty specific, pulling key files and configs from an NFS share, pointing at an rsyslog target, and so on, but it gives an idea of what can be done for detailed provisioning of instances, beyond just slapping your SSH keys on and letting Ansible take over. With this I can have a fresh VM provisioned in about 5 minutes.
I always like to run a status check after init:
```
root@IncusTEST20:~# cloud-init status --long
status: done
boot_status_code: enabled-by-generator
last_update: Sat, 26 Jul 2024 17:29:50 +0000
detail:
DataSourceNoCloud [seed=/dev/xvdc][dsmode=net]
```
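In scripted provisioning, the same check can be automated: `cloud-init status --wait` blocks until the run finishes, and the status line can then be parsed for branching. A minimal sketch, assuming the text output format shown above (the `get_ci_status` helper is my own, not part of cloud-init):

```shell
#!/bin/sh
# Hypothetical helper: pull the "status:" field out of
# `cloud-init status --long` output so a provisioning
# script can branch on done/error/running.
get_ci_status() {
    printf '%s\n' "$1" | awk -F': ' '/^status:/ {print $2; exit}'
}

# On a real host you would capture the live output:
#   out="$(cloud-init status --wait --long)"
out="status: done
boot_status_code: enabled-by-generator"

[ "$(get_ci_status "$out")" = "done" ] && echo "provisioning finished"
```

Parsing the text output keeps the check compatible with older images; recent releases also reflect a failed run in the exit code of `cloud-init status`, which may let you skip the parsing entirely.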
Then we can query the user-data to see how it rendered from the config injected into the NoCloud datasource:
```
root@IncusTEST20:~# cloud-init query userdata
```

```yaml
#cloud-config
hostname: IncusTEST20
users:
  - name: foo
    groups: sudo
    lock_passwd: false
    passwd: $6$10023$EKz8eWTDXQO3x7.4ff0ZNJLsl9q6RB.l8pZN9nq8BzT42zzOn7O4r./ybHeVa/l0W0/FARK/2Ttg177ywAP0Z1
    ssh_authorized_keys:
      - ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEqTfKkUKEGxOi62A9tCWMslqF5i9xm0aMN+ZxWgHuR6 foobar-ed25519-20240725
    shell: /bin/bash
  - name: bar
    groups: sudo
    lock_passwd: false
    passwd: $6$10023$EKz8eWTDXQO3x7.4ff0ZNJLsl9q6RB.l8pZN9nq8BzT42zzOn7O4r./ybHeVa/l0W0/FARK/2Ttg177ywAP0Z1
    ssh_authorized_keys:
      - ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEqTfKkUKEGxOi62A9tCWMslqF5i9xm0aMN+ZxWgHuR6 foobar-ed25519-20240725
    shell: /bin/bash
locale: en_US.UTF-8
timezone: America/New_York
resize_rootfs: true
mounts:
  - ["192.168.99.2:/mnt/Vault/lxd", "/mnt/lxd", "nfs", "auto,nofail,noatime,nolock,intr,tcp,actimeo=1800,user,suid", "0", "0"]
  - ["192.168.0.54:/mnt/raid/backup", "/mnt/nas", "nfs", "auto,nofail,noatime,nolock,intr,tcp,actimeo=1800,user,suid", "0", "0"]
rsyslog:
  remotes:
    log_serv: "192.168.50.35:5140"
write_files:
  - path: /etc/init.d/incus.sh
    owner: root:root
    permissions: 0o755
    defer: true
    content: |
      #!/bin/bash
      curl -fsSL https://pkgs.zabbly.com/key.asc -o /etc/apt/keyrings/zabbly.asc
      sh -c 'cat <<EOF > /etc/apt/sources.list.d/zabbly-incus-lts-6.0.sources
      Enabled: yes
      Types: deb
      URIs: https://pkgs.zabbly.com/incus/lts-6.0
      Suites: $(. /etc/os-release && echo ${VERSION_CODENAME})
      Components: main
      Architectures: $(dpkg --print-architecture)
      Signed-By: /etc/apt/keyrings/zabbly.asc
      EOF'
      apt update -y
      dpkg --configure -a
      apt install incus incus-ui-canonical -y
      mkdir -p /root/.config
      mkdir -p /root/.config/rclone
      mount -a
      cp /mnt/lxd/.config/.encode /root/.encode
      cp /mnt/lxd/.config/rclone.conf /root/.config/rclone/rclone.conf
      chmod 600 /root/.config/rclone/rclone.conf
      # Delete self
      rm "${0}"
runcmd:
  - mkdir /mnt/lxd
  - mkdir /mnt/nas
  - date > /etc/birth_certificate
  - [ mount, /dev/cdrom, /mnt ]
  - [ bash, /mnt/Linux/install.sh ]
  - [ umount, /dev/cdrom ]
  - [ sh, /etc/init.d/incus.sh ]
package_update: true
package_upgrade: true
packages:
  - htop
  - nano
  - curl
  - wget
  - nfs-common
  - btrfs-progs
  - bridge-utils
  - build-essential
  - rclone
```
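One gotcha worth calling out in `runcmd`: each entry can be either a plain string, which cloud-init runs through `sh` (so redirection and globbing work), or a YAML flow list, which is exec'd directly with no shell involved. The list form needs commas between the arguments; without them, YAML collapses the brackets into a single string, and cloud-init then tries to execute that whole string as one command name. A minimal illustration:

```yaml
#cloud-config
runcmd:
  # String form: handed to "sh", so the redirection works.
  - date > /etc/birth_certificate
  # List form: exec'd argv-style, no shell features available.
  - [ mount, /dev/cdrom, /mnt ]
```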
Here is a more universal config using Jinja templating syntax that can be targeted at many distros; an 'if distro' block configures the instance based on the distro value in the instance metadata.
This one targets Debian, Ubuntu, CentOS, Red Hat, and Rocky Linux 8 and up. Notice the `## template: jinja` at the top; this is what allows Jinja to render and process the file:
```yaml
## template: jinja
#cloud-config
{% set u1 = 'foobar' %}
{% set u1pass = '$6$10023$EKz8eWTDXQO3x7.4ff0ZNJLsl9q6RB.l8pZN9nq8BzT42zzOn7O4r./ybHeVa/l0W0/FARK/2Ttg177ywAP0Z1' %}
{% set u1key = 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEqTfKkUKEGxOi62A9tCWMslqF5i9xm0aMN+ZxWgHuR6 foobar-ed25519-20240725' %}
{% set u2 = 'ansible' %}
{% set u2pass = '$6$10023$EKz8eWTDXQO3x7.4ff0ZNJLsl9q6RB.l8pZN9nq8BzT42zzOn7O4r./ybHeVa/l0W0/FARK/2Ttg177ywAP0Z1' %}
{% set u2key = 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEqTfKkUKEGxOi62A9tCWMslqF5i9xm0aMN+ZxWgHuR6 foobar-ed25519-20240725' %}
locale: en_US.UTF-8
timezone: America/New_York
runcmd:
  - mkdir /mnt/v-nas
  - mkdir /mnt/home
  - date > /etc/birth_certificate
rsyslog:
  remotes:
    log_serv: "192.168.50.35:5140"
{% if distro == 'rocky' or distro == 'centos' or distro == 'redhat' %}
{% set group = 'wheel' %}
repo_update: true
repo_upgrade: all
yum_repos:
  epel-release:
    name: Extra Packages for Enterprise Linux 9 - Release
    baseurl: http://download.fedoraproject.org/pub/epel/9/Everything/$basearch
    enabled: true
    failovermethod: priority
    gpgcheck: true
    gpgkey: http://download.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-9
package_upgrade: true
packages:
  - htop
  - nano
{% elif distro == 'ubuntu' or distro == 'debian' %}
{% set group = 'sudo' %}
package_update: true
package_upgrade: true
packages:
  - htop
  - nano
  - build-essential
{%- endif %}
users:
  - name: {{ u1 }}
    groups: {{ group }}
    lock_passwd: false
    passwd: {{ u1pass }}
    ssh_authorized_keys:
      - {{ u1key }}
    shell: /bin/bash
  - name: {{ u2 }}
    groups: {{ group }}
    lock_passwd: false
    passwd: {{ u2pass }}
    ssh_authorized_keys:
      - {{ u2key }}
    shell: /bin/bash
```

(Note the users block sits after the endif so both accounts are created on every distro; the {% set group %} value chosen inside the branch carries through to it.)
Just a note: to consume Jinja templating you need cloud-init 22.x or higher, with the Jinja package installed in your template/image.
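A quick pre-flight sketch for that requirement, checking the major version out of the `cloud-init --version` string (the `ci_major` helper and the sample version line are illustrative, not part of cloud-init):

```shell
#!/bin/sh
# Hypothetical pre-flight check: make sure the image's cloud-init is
# new enough (22.x+) before shipping a "## template: jinja" config.
ci_major() {
    # e.g. "/usr/bin/cloud-init 24.1.3-0ubuntu1" -> "24"
    printf '%s\n' "$1" | awk '{split($2, v, "."); print v[1]}'
}

# On a real host:  line="$(cloud-init --version 2>&1)"
line="/usr/bin/cloud-init 24.1.3-0ubuntu1"
if [ "$(ci_major "$line")" -ge 22 ]; then
    echo "jinja templating supported"
fi
```

Checking that `python3 -c 'import jinja2'` succeeds inside the image is a reasonable second test for the renderer itself.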
A note for anyone wanting to tinker with these: the keys and password hashes are not mine; I just threw them in for demonstration purposes and for the wiki page. The password for the accounts is "foobar" and the matching private key is:
```
-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAAMwAAAAtzc2gtZW
QyNTUxOQAAACBKk3ypFChBsToutgPbQljLJaheYvcZtGjDfmcVoB7kegAAAKADtgJ2A7YC
dgAAAAtzc2gtZWQyNTUxOQAAACBKk3ypFChBsToutgPbQljLJaheYvcZtGjDfmcVoB7keg
AAAEC1DHPxJPEU3Ywf14x7k7IMXt1nKPvwBmG6vAXsZceiVUqTfKkUKEGxOi62A9tCWMsl
qF5i9xm0aMN+ZxWgHuR6AAAAF2Zvb2Jhci1lZDI1NTE5LTIwMjQwNzI1AQIDBAUG
-----END OPENSSH PRIVATE KEY-----
```
Enjoy!
-
Are you able to create an Ubuntu 24.04 cloud image template?
-
@encryptblockr
Yes, and I documented the process I used a while back in this thread: https://xcp-ng.org/forum/topic/6821/cloud-init-success
I just use the stock cloud image .ova file as my base image for Ubuntu templates. I use the noble (24.04) image in quite a few instances from my template with success.
Your post mentions cloud-init is not working; some tips to try for troubleshooting. When the new VM is provisioned (assuming you are able to log in), I usually run "cloud-init status --wait", and when that completes I run "cloud-init status --long" for a summary of the state. Then you can grep for WARNING in /var/log/cloud-init.log for clues.
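To make that log check repeatable across VMs, something like this sketch works (the `triage` function and the grep pattern are my own convention; /var/log/cloud-init.log is cloud-init's default log path):

```shell
#!/bin/sh
# Hypothetical triage helper: flag anything alarming in a cloud-init log,
# or say so explicitly when the log looks clean.
triage() {
    grep -E 'WARNING|ERROR|Traceback' "$1" || echo "nothing flagged in $1"
}

# On a real host:
#   triage /var/log/cloud-init.log
```

On slow boots, "cloud-init analyze blame" is also worth a look; it ranks config modules by the time they spent.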
If you weren't able to log in to the VM, make sure your cloud-init config has the user IDs, passwords, and SSH keys you will need for login. See the examples I posted above, since those (or some variation) are what I use in my lab.
Feel free to share any troubleshooting steps you have taken, and I will help any way I can. I have set up hundreds of VMs from templates using cloud-init, so I have seen many of the challenges it brings.