
    DevOps Megathread: what you need and how we can help!

    Infrastructure as Code · 33 Posts · 14 Posters · 2.8k Views
    • Jonathon @nathanael-h

      @nathanael-h Nice 😄

      If you have any questions, let me know; I have been using this for all our on-prem clusters for a while now.

      • BMeach

        It would be nice to be able to create a schedule, or have some other way of automatically cleaning up VM templates within XO/XCP-ng, with the ability to set a retention policy like the one for backups.

        For example, we have a GitHub workflow that runs daily, weekly, and monthly to create base Ubuntu/Windows Server images with the latest updates; those templates are tagged with their build date. We then use those templates in other pipelines to test k3s cluster updates with the Terraform provider.

        So far I have not found a way to automate this within XO or XCP-ng without external systems, so I currently go in daily and clean up templates that exceed our desired retention policy (similar to our backup retention policy). I should also note that we have the Vates VMS Enterprise plan through my work account.

        I would also love to see some more work on the Packer plugin, mainly the XVA builder. We have our base Ubuntu templates, but we would like to take such a template and build another template on top of it, for example with the k3s binary pre-installed, to avoid downloading and installing the binary and other tooling on each VM in a cluster via Terraform or Ansible.

        Lastly, it would be nice to have more frequent releases of the Terraform provider. I am aware that updates are still being pushed to the main branch, but the last release was published on Mar 20, 2024.
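
        Until something native exists, a scheduled job driving xo-cli is roughly what I have in mind; a sketch is below. The build-date tag format, the 30-day window, and the exact xo-cli flags are assumptions to check against your xo-cli version, and deleting a template via vm.delete assumes templates behave as plain VM objects in XO:

        # Sketch only: prune XO templates whose build-date tag is past retention.
        name: prune-xo-templates
        on:
          schedule:
            - cron: "0 5 * * *" # daily
        jobs:
          prune:
            runs-on: ubuntu-latest
            steps:
              - run: npm install --global xo-cli
              - env:
                  XO_URL: ${{ secrets.XO_URL }}
                  XO_TOKEN: ${{ secrets.XO_TOKEN }}
                run: |
                  xo-cli register --token "$XO_TOKEN" "$XO_URL"
                  CUTOFF=$(date -d '30 days ago' +%Y-%m-%d)
                  # Templates are tagged e.g. "build-2025-01-31": delete those older than CUTOFF.
                  xo-cli --list-objects type=VM-template |
                    jq -r --arg cutoff "$CUTOFF" '
                      .[] | . as $t
                      | ($t.tags[]? | select(startswith("build-")) | ltrimstr("build-")) as $d
                      | select($d < $cutoff) | $t.id' |
                    while read -r id; do
                      xo-cli vm.delete id="$id"
                    done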

        • BMeach @bufanda

          @bufanda said in DevOps Megathread: what you need and how we can help!:

          Backup management with the Terraform provider would be a great feature, maybe also for an upcoming Ansible module. I always struggle to find the right backup for a VM since I group them logically, so one backup job may handle multiple VMs. Sometimes it would just be easier to edit some IaC than the GUI; especially when I destroy a VM, I always forget to check whether backups exist.

          @nathanael-h said in DevOps Megathread: what you need and how we can help!:

          @bufanda I think we'll be able to add backup support to Terraform when 1. the provider uses the new REST API, and 2. that API offers endpoints for backup management. I took note. (This won't be done in minutes 😉 )
          As for Ansible, it also depends on if/when we start working on it.

          +1 to backup management through Terraform. It would be great to be able to manage backup jobs and sequences there.

          • nathanael-h Vates 🪐 DevOps Team

            Hello there, we released a new Pulumi Xen Orchestra provider last month! It's worth noting that the work on this was started by contributors from DESY, and that we (Vates) now commit to supporting and maintaining it. This demonstrates the strength of joint work between the community and Vates on free and open source software 🤝

            It lets you declare your infrastructure as code, in JavaScript or TypeScript, Go, or Python (pick the one you prefer 🎲 ), and then deploy, maintain, and update it.

            https://github.com/vatesfr/pulumi-xenorchestra/
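
            To give an idea of the shape of a program, here is a minimal sketch using Pulumi's YAML runtime. The property names mirror the bridged Terraform schema (xenorchestra_vm), so treat them, and the placeholder UUIDs, as assumptions to check against the provider docs:

            name: xo-demo
            runtime: yaml
            resources:
              demoVm:
                type: xenorchestra:Vm
                properties:
                  nameLabel: pulumi-demo
                  template: "<template-uuid>"      # placeholder: an existing template's UUID
                  cpus: 2
                  memoryMax: 2147483648            # 2 GiB, in bytes
                  networks:
                    - networkId: "<network-uuid>"  # placeholder
                  disks:
                    - srId: "<sr-uuid>"            # placeholder
                      nameLabel: root
                      size: 21474836480            # 20 GiB, in bytes

            Running pulumi up then creates the VM and keeps it in sync with this description.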

            • Cyrille Vates 🪐 DevOps Team @nathanael-h

              Hello,
              If you want to discuss the Pulumi provider in detail, let's do it here: https://xcp-ng.org/forum/topic/10592/pulumi-xen-orchestra-news

              • andrewperry @Jonathon

                @Jonathon thanks for sharing this; we are looking to migrate from the RKE cluster we deployed on bare-metal Xen to an RKE2 cluster on XCP-ng VMs.

                Will review this and probably have a bunch of questions!

                • Jonathon @andrewperry

                  @andrewperry I migrated our Rancher management cluster from the original RKE to a new RKE2 cluster using this plan not too long ago, so you should not have much trouble. Feel free to ask questions 🙂

                  • Tristis Oris Top contributor

                    I found time to play with cloud-init; most of the examples look outdated or simply don't work, and I don't know why.

                    hostname: {name} doesn't work, only hostname: {name}%. I also can't find that macro in the official docs.

                    With manage_etc_hosts: true, it changed the /etc/hosts entry 127.0.1.1 basename to 127.0.1.1 basename test%. Maybe a bug in the package itself, maybe an XO problem.

                    preserve_hostname: false does not look required; I don't see any difference.

                    Even if I don't use any network config, it changes the netplan config (which I don't need with DHCP) from

                    network:
                      version: 2
                      ethernets:
                        enX0:
                          dhcp4: true
                    

                    to

                    network:
                      version: 2
                      ethernets:
                        enX0:
                          match:
                            macaddress: "my_mac"
                          dhcp4: true
                          dhcp6: true
                          set-name: "enX0"
                    

                    To keep the default netplan, you need to use something like

                    network:
                      version: 1
                      config:
                        - type: physical
                          name: enX0   # v1 config requires the interface name
                          subnets:
                            - type: dhcp4
                    

                    I can't make disk resize work; it looks like rocket science. And this is the most important part for me.

                    resize_rootfs: true               # grow the root filesystem to fill its partition
                    growpart:
                      mode: auto                      # let cloud-init pick the grow tool automatically
                      devices: ['/']                  # grow the partition backing /
                      ignore_growroot_disabled: false
                    

                    I'm fine enough with manually tuned templates; 99% of the time I don't need to change anything except the name/disk. Other tasks require manual attention anyway or are already covered with Ansible. It would be nice to see a tutorial for IQ<3, though.

                    • nathanael-h Vates 🪐 DevOps Team @Tristis Oris

                      @Tristis-Oris
                      Hello, thanks for the report. I will try to fix and improve things, but first I have a few questions.

                      • What is the template you are using? Is it one from XOA Hub?
                      • Where did you find the cloud-init config snippets?

                      For your information, the default cloud-init config snippets come from here: https://github.com/vatesfr/xen-orchestra/blob/master/packages/xo-web/src/common/cloud-config.js#L78-L88

                      For growpart, it depends on the template used. Last time I tested, it was working with a Debian 12 template from the XOA Hub.
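
                      One gotcha worth checking: the growpart and resize_rootfs modules only run at boot, and they can only grow into free space that already exists, so the disk has to be enlarged in Xen Orchestra before the VM boots. If it still does nothing, /var/log/cloud-init.log usually shows whether the modules ran and why they were skipped.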

                      • Tristis Oris Top contributor @nathanael-h

                        @nathanael-h

                        • My custom template.
                        • The forum and the cloud-init docs.
                        • Cyrille Vates 🪐 DevOps Team

                          Pulumi Xen Orchestra Provider - Release v2.0.0

                          We released a new version of the Pulumi Xen Orchestra provider.

                          You can find more information about the release here: https://xcp-ng.org/forum/post/92858

                          • nathanael-h Vates 🪐 DevOps Team

                            🎉 xo-powershell moves from alpha to beta

                            The xo-powershell module is published in the PowerShell Gallery as v1.0.0-beta:

                            https://www.powershellgallery.com/packages/xo-powershell/1.0.0-beta

                            Grab it with one PowerShell command:

                            Install-Module -Name xo-powershell -AllowPrerelease
                            

                            Doc here
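
                            Once installed, Get-Command -Module xo-powershell lists the available cmdlets.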

                            Thanks to @dinhngtu @iButcat

                            • Cyrille Vates 🪐 DevOps Team

                              Hello there,

                              We released a new version of the Terraform provider with improvements to the VM disk lifecycle!

                              Now you can expand a VM disk with Terraform without data loss.

                              Read the release note: https://github.com/vatesfr/terraform-provider-xenorchestra/releases/tag/v0.32.0

                              • nathanael-h Vates 🪐 DevOps Team

                                Hello 👋
                                We published a new blog post about our Kubernetes recipe. In it you'll find:

                                • A step-by-step guide to creating a production-ready Kubernetes cluster on top of your servers, in minutes!
                                • Some architecture insights 😉

                                https://xen-orchestra.com/blog/virtops-6-create-a-kubernetes-cluster-in-minutes/

                                Thanks to @Cyrille

                                • Cyrille Vates 🪐 DevOps Team

                                  Xen Orchestra Cloud Controller Manager in development 🚀

                                  Hello everyone 👋

                                  We've published a development version of a Xen Orchestra Cloud Controller Manager!

                                  It supports the cloud-node and cloud-node-lifecycle controllers and adds labels to your Kubernetes nodes hosted on Xen Orchestra VMs:

                                  apiVersion: v1
                                  kind: Node
                                  metadata:
                                    labels:
                                      # Type generated based on CPU and RAM
                                      node.kubernetes.io/instance-type: 2VCPU-1GB
                                      # Xen Orchestra Pool ID of the node VM Host
                                      topology.kubernetes.io/region: 3679fe1a-d058-4055-b800-d30e1bd2af48
                                      # Xen Orchestra ID of the node VM Host
                                      topology.kubernetes.io/zone: 3d6764fe-dc88-42bf-9147-c87d54a73f21
                                      # Additional labels based on Xen Orchestra data (beta)
                                      topology.k8s.xenorchestra/host_id: 3d6764fe-dc88-42bf-9147-c87d54a73f21
                                      topology.k8s.xenorchestra/pool_id: 3679fe1a-d058-4055-b800-d30e1bd2af48
                                      vm.k8s.xenorchestra/name_label: cgn-microk8s-recipe---Control-Plane
                                      ...
                                    name: worker-1
                                  spec:
                                    ...
                                    # providerID - magic string:
                                    #   xeorchestra://{Pool ID}/{VM ID}
                                    providerID: xeorchestra://3679fe1a-d058-4055-b800-d30e1bd2af48/8f0d32f8-3ce5-487f-9793-431bab66c115
                                  

                                  For now, we have only tested the provider with MicroK8s.

                                  What's next?

                                  We will test the CCM with other types of Kubernetes clusters and work on fixing known issues. The XOA Hub recipe will also be modified to include the CCM, and more labels will be added (pool name, VM name, etc.).

                                  Feedback is welcome!

                                  You can install and test the XO CCM, and provide feedback to help improve and speed up the release of the first stable version. This is greatly appreciated 🙂

                                  ➡ The XO CCM repository
                                  ➡ Installation doc
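
                                  A note for testers: as with external cloud controller managers in general, nodes are only initialized (and the labels above applied) when their kubelet runs with --cloud-provider=external, so make sure your cluster is set up that way before pointing the CCM at it.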

                                  • Cyrille Vates 🪐 DevOps Team @Cyrille

                                    Pulumi Xen Orchestra Provider - Release v2.1.0

                                    This new version brings the VM disk lifecycle improvements made in the Terraform provider.

                                    https://github.com/vatesfr/pulumi-xenorchestra/releases/tag/v2.1.0

                                    • afk

                                      Hi, I'm currently testing deployments with Pulumi using Packer-built templates.

                                      So far the basics work as expected, but I'm stuck on a settings issue that seems to affect both the Pulumi and Terraform providers. As far as I know, there is no way to set the memory as static, or to change memory_min, when creating a VM from a template.

                                      The template was created with 1 CPU and 2 GB of RAM:

                                      [screenshot: Screenshot 2025-07-15 at 11.56.01.png]

                                      The VM created from this template using Pulumi was assigned 2 CPUs and 4 GB of RAM; however, this only sets memory_max:

                                      [screenshot: Screenshot 2025-07-15 at 11.56.21.png]

                                      I found the following post that talks about this: https://xcp-ng.org/forum/topic/5628/xenorchestra-with-terraform

                                      and also the following GitHub issue: https://github.com/vatesfr/terraform-provider-xenorchestra/issues/211

                                      Manually setting the memory limits after VM creation defeats the purpose of automation, and since in most cases VMs need static memory limits, I'd consider these settings a core feature for the providers.

                                      In the meantime, is there any workaround I should investigate, or anything I missed?
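
                                      One stopgap I'm considering (untested, and assuming xo-server's vm.set still accepts the memory parameters): calling xo-cli vm.set id=<vm-uuid> memoryMin=4294967296 memoryMax=4294967296 right after provisioning, for example from a local command step in Pulumi, until the providers expose these settings natively.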
