XCP-ng

    DevOps Megathread: what you need and how we can help!

    • CyrilleC Offline
      Cyrille Vates πŸͺ DevOps Team @nathanael-h
      last edited by

      Hello,
If you want to discuss the Pulumi provider in detail, let's do it here: https://xcp-ng.org/forum/topic/10592/pulumi-xen-orchestra-news

      1 Reply Last reply Reply Quote 1
      • andrewperryA Offline
        andrewperry @Jonathon
        last edited by andrewperry

@Jonathon this is really nice to have shared, as we are looking to migrate from the RKE cluster we've deployed on bare-metal Xen to an RKE2 cluster set up on XCP-ng VMs.

        Will review this and probably have a bunch of questions!

        J 1 Reply Last reply Reply Quote 2
        • J Offline
          Jonathon @andrewperry
          last edited by

@andrewperry I migrated our Rancher management cluster from the original RKE to a new RKE2 cluster using this plan not too long ago, so you should not have much trouble. Feel free to ask questions πŸ™‚

          1 Reply Last reply Reply Quote 1
          • Tristis OrisT Offline
            Tristis Oris Top contributor
            last edited by Tristis Oris

I found time to play with cloud-init; most of the examples look outdated or don't work, and I don't know why.

hostname: {name} doesn't work, only hostname: {name}%. I also can't find that macro in the official docs.

With manage_etc_hosts: true, it changed the /etc/hosts entry 127.0.1.1 basename to 127.0.1.1 basename test%. Maybe a bug in the package itself, maybe an XO problem.

preserve_hostname: false doesn't seem to be required; I don't see any difference.

Even if I don't use any network config, it changes the netplan config (which isn't needed with DHCP), from

            network:
              version: 2
              ethernets:
                enX0:
                  dhcp4: true
            

            to

            network:
              version: 2
              ethernets:
                enX0:
                  match:
                    macaddress: "my_mac"
                  dhcp4: true
                  dhcp6: true
                  set-name: "enX0"
            

To keep the default netplan config, you need to use something like

network:
  version: 1
  config:
    - type: physical
      name: enX0
      subnets:
        - type: dhcp4
            

I can't make disk resize work; it looks like rocket science. And this is the most important part for me.

            resize_rootfs: true
            growpart:
              mode: auto
              devices: ['/']
              ignore_growroot_disabled: false
            

I'm fine enough with manually tuned templates; 99% of the time I don't need to change anything except the name/disk. Other tasks require manual attention anyway or are already covered by Ansible. Would be nice to see a tutorial for IQ<3.

            nathanael-hN 1 Reply Last reply Reply Quote 0
            • nathanael-hN Offline
              nathanael-h Vates πŸͺ DevOps Team @Tristis Oris
              last edited by

              @Tristis-Oris
Hello, thanks for the report. I will try to fix and improve things, but first I have a few questions.

• What template are you using? Is it one from the XOA Hub?
• Where did you find the cloud-init config snippets?

For your information, the default cloud-init config snippets come from here: https://github.com/vatesfr/xen-orchestra/blob/master/packages/xo-web/src/common/cloud-config.js#L78-L88

For growpart, it depends on the template used (the guest needs the growpart tool, typically from the cloud-guest-utils package). Last time I tested, it was working with a Debian 12 template from the XOA Hub.

              Tristis OrisT 1 Reply Last reply Reply Quote 0
              • Tristis OrisT Offline
                Tristis Oris Top contributor @nathanael-h
                last edited by

                @nathanael-h

• My own custom template.
• The forum and the cloud-init docs.
                1 Reply Last reply Reply Quote 0
                • CyrilleC Offline
                  Cyrille Vates πŸͺ DevOps Team
                  last edited by

                  Pulumi Xen Orchestra Provider - Release v2.0.0

                  We released a new version of the Pulumi Xen Orchestra provider.

You can find more information about the release here: https://xcp-ng.org/forum/post/92858

                  1 Reply Last reply Reply Quote 1
                  • nathanael-hN Offline
                    nathanael-h Vates πŸͺ DevOps Team
                    last edited by nathanael-h

                    πŸŽ‰ xo-powershell moves from alpha to beta

The XO-PowerShell module is published in the PowerShell Gallery as v1.0.0-beta

                    https://www.powershellgallery.com/packages/xo-powershell/1.0.0-beta

Grab it with one PowerShell command:

                    Install-Module -Name xo-powershell -AllowPrerelease
                    

                    Doc here

                    Thanks to @dinhngtu @iButcat

                    1 Reply Last reply Reply Quote 4
                    • CyrilleC Offline
                      Cyrille Vates πŸͺ DevOps Team
                      last edited by

                      Hello there,

We released a new version of the Terraform provider with improvements to the VM disk lifecycle!

                      Now you can expand a VM disk with Terraform without data loss.

                      Read the release note: https://github.com/vatesfr/terraform-provider-xenorchestra/releases/tag/v0.32.0

                      1 Reply Last reply Reply Quote 2
                      • nathanael-hN Offline
                        nathanael-h Vates πŸͺ DevOps Team
                        last edited by

                        Hello πŸ‘‹
We published a new blog post about our Kubernetes recipe. There you'll find:

• A step-by-step guide to create a production-ready Kubernetes cluster, on top of your servers, in minutes!
                        • Some architecture insights πŸ˜‰

                        https://xen-orchestra.com/blog/virtops-6-create-a-kubernetes-cluster-in-minutes/

                        Thanks to @Cyrille

                        1 Reply Last reply Reply Quote 2
                        • CyrilleC Offline
                          Cyrille Vates πŸͺ DevOps Team
                          last edited by

                          Xen Orchestra Cloud Controller Manager in development πŸš€

                          Hello everyone πŸ‘‹

We published a development version of a Xen Orchestra Cloud Controller Manager!

It supports the cloud-node and cloud-node-lifecycle controllers and adds labels to your Kubernetes nodes hosted on Xen Orchestra VMs.

apiVersion: v1
kind: Node
metadata:
  labels:
    # Type generated based on CPU and RAM
    node.kubernetes.io/instance-type: 2VCPU-1GB
    # Xen Orchestra Pool ID of the host running the node VM
    topology.kubernetes.io/region: 3679fe1a-d058-4055-b800-d30e1bd2af48
    # Xen Orchestra ID of the host running the node VM
    topology.kubernetes.io/zone: 3d6764fe-dc88-42bf-9147-c87d54a73f21
    # Additional labels based on Xen Orchestra data (beta)
    topology.k8s.xenorchestra/host_id: 3d6764fe-dc88-42bf-9147-c87d54a73f21
    topology.k8s.xenorchestra/pool_id: 3679fe1a-d058-4055-b800-d30e1bd2af48
    vm.k8s.xenorchestra/name_label: cgn-microk8s-recipe---Control-Plane
    ...
  name: worker-1
spec:
  ...
  # providerID - magic string:
  #   xeorchestra://{Pool ID}/{VM ID}
  providerID: xeorchestra://3679fe1a-d058-4055-b800-d30e1bd2af48/8f0d32f8-3ce5-487f-9793-431bab66c115
                          

                          For now, we have only tested the provider with Microk8s.

                          What's next?

We will test the CCM with other types of Kubernetes clusters and work on fixing known issues.
A modification of the XOA Hub recipe will also come to include the CCM.
More labels will be added (Pool Name, VM Name, etc.).

                          Feedback is welcome!

                          You can install and test the XO CCM, and provide feedback to help improve and speed up the release of the first stable version. This is greatly appreciated πŸ™‚

                          ➑ The XO CCM repository
                          ➑ Installation doc

                          CyrilleC 1 Reply Last reply Reply Quote 3
                          • CyrilleC Offline
                            Cyrille Vates πŸͺ DevOps Team @Cyrille
                            last edited by

                            Pulumi Xen Orchestra Provider - Release v2.1.0

This new version brings the VM disk lifecycle improvements made in the Terraform provider to Pulumi.

                            https://github.com/vatesfr/pulumi-xenorchestra/releases/tag/v2.1.0
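
As an illustration, here's a minimal sketch of what this looks like from the provider's Python SDK; it assumes the bridged resource and property names mirror the Terraform provider's xenorchestra_vm schema (Vm, VmNetworkArgs, VmDiskArgs), and all IDs are placeholders:

import pulumi_xenorchestra as xoa  # the provider's Python SDK

# Minimal VM definition; template, network and SR IDs are placeholders.
vm = xoa.Vm(
    "app-vm",
    name_label="app-vm",
    template="<template-uuid>",
    cpus=2,
    memory_max=4 * 1024**3,  # bytes
    networks=[xoa.VmNetworkArgs(network_id="<network-uuid>")],
    disks=[
        xoa.VmDiskArgs(
            sr_id="<sr-uuid>",
            name_label="root",
            # With v2.1.0, raising this value on a later `pulumi up`
            # should expand the disk in place, without data loss.
            size=20 * 1024**3,
        )
    ],
)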

                            1 Reply Last reply Reply Quote 1
                            • A Offline
                              afk
                              last edited by afk

Hi, I'm currently testing deployments with Pulumi using Packer templates.

So far the basics work as expected, but I'm stuck on a settings issue that seems to affect both the Pulumi and Terraform providers. As far as I know, there is no way to set the memory as static or to change memory_min when creating a VM from a template.

The template was created with 1 vCPU and 2 GB of RAM:

[screenshot: template settings, 1 vCPU / 2 GB RAM]

The VM created from this template using Pulumi was assigned 2 vCPUs and 4 GB of RAM; however, this only sets memory_max:

[screenshot: VM settings, 2 vCPUs / 4 GB RAM, only memory_max set]

                              I found the following post that talks about this: https://xcp-ng.org/forum/topic/5628/xenorchestra-with-terraform

and also the following GitHub issue: https://github.com/vatesfr/terraform-provider-xenorchestra/issues/211

Manually setting the memory limits after VM creation defeats the purpose of automation, and I'd consider implementing those settings in the relevant providers a core feature: in most cases, VMs need static memory limits.

In the meantime, is there any workaround that I should investigate, or anything that I missed?

EDIT: Using the JSON-RPC API of Xen Orchestra, I'm able to set the memory limits after the creation of the VM. This is great, but unfortunately it is a bit too "imperative" in a declarative world.
I'll publish the code when I can clean up the hellish Python I wrote, but here are a few pointers for those interested:

• See MickaΓ«l Baron's blog (in French, sorry!) for an example of working with the XO JSON-RPC API: https://mickael-baron.fr/blog/2021/05/28/xo-server-websocket-jsonrcp

• The system.getMethodsInfo() RPC function will give you all the calls you can make to the server. For instance, you can sign in with session.signInWithToken(token="XO_TOKEN") and call vm.setAndRestart to change VM settings and restart the VM immediately after (see the sketch after this list).

                              • You can use Pulumi's hooks: https://www.pulumi.com/docs/iac/concepts/options/hooks/

• In Python, Pulumi already runs inside an asyncio loop, so bear that in mind: https://www.pulumi.com/docs/iac/languages-sdks/python/python-blocking-async/
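
In the meantime, here's a rough, self-contained sketch of the idea, assuming the jsonrpc_websocket package from the blog post above; the endpoint URL, token, VM ID and size are placeholders:

import asyncio
from jsonrpc_websocket import Server

async def pin_vm_memory(vm_id: str, memory_bytes: int) -> None:
    # Hypothetical XO endpoint; XO exposes its JSON-RPC API over a websocket.
    server = Server("wss://xo.example.org/api/")
    try:
        await server.ws_connect()
        await server.session.signInWithToken(token="XO_TOKEN")  # placeholder token
        # Setting the dynamic min/max and the static max to the same value
        # effectively gives the VM static memory.
        await server.vm.set(
            id=vm_id,
            memory=memory_bytes,
            memoryMin=memory_bytes,
            memoryMax=memory_bytes,
            memoryStaticMax=memory_bytes,
        )
    finally:
        await server.close()

asyncio.run(pin_vm_memory("<vm-uuid>", 4 * 1024**3))  # 4 GiB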


                              CyrilleC 1 Reply Last reply Reply Quote 0
                              • CyrilleC Offline
                                Cyrille Vates πŸͺ DevOps Team @afk
                                last edited by

                                Hi @afk!
                                We are working on a new version of the Xen Orchestra Terraform provider to improve VM memory control.

                                In this new version, the memory_max setting will now set the maximum limits for both the dynamic and static memory.

                                There is also an optional new setting called 'memory_min', which can be used to set the minimum limit for dynamic VM memory.
                                This version will also resolve the issue with template memory limits used during VM creation.

                                Can you test this pre-release version and provide us with some feedback? Or maybe just tell us if this new behaviour is more likely to meet your needs?

                                https://github.com/vatesfr/terraform-provider-xenorchestra/releases/tag/v0.33.0-alpha.1

I will try to do a pre-release version for the Pulumi provider ASAP.

                                A 1 Reply Last reply Reply Quote 1
                                • A Offline
                                  afk @Cyrille
                                  last edited by

                                  Hi @Cyrille, thank you for working on this, it will help a lot.

This indeed implements the needed behavior when creating VMs from templates, avoiding additional ad-hoc code. For reference, I needed to call the vm.set XO JSON-RPC function after creation with the following arguments:

rpc_args = {
    "id": vm_id,
    "memory": vm_memory,
    "memoryMin": vm_memory,
    "memoryMax": vm_memory,
    "memoryStaticMax": vm_memory,
}
await rpcserver.vm.set(**rpc_args)
                                  

                                  I'll try to test the provider patch in the coming days.

                                  1 Reply Last reply Reply Quote 1
                                  • sidS Offline
                                    sid
                                    last edited by

                                    I'd like the terraform provider to have a xenorchestra_backup resource.

                                    For me, part of the process of spinning up a new set of VMs is to create backup jobs for those new VMs.

Today I can manually create a backup job that applies to VMs with a certain tag, and then later, via TF, create VMs with that tag. However, I'd prefer being able to create a xenorchestra_backup resource with settings specific to that VM (or set of VMs).

                                    Furthermore, if the idea with backup schedules is that they can be used across backup jobs, then that would mean a new xenorchestra_backup_schedule resource type too, which would be referenced in the xenorchestra_backup. Also, this might require creating a xenorchestra_remote data-source.

                                    Having said that, I am not a paying customer, so I understand this is a low priority request, and I do have a workaround.

                                    B 1 Reply Last reply Reply Quote 1
                                    • B Offline
                                      bufanda @sid
                                      last edited by

@sid I made that request a while ago and tried to look into it myself too, but the current API of Xen Orchestra just doesn't support it; there are many pieces missing from what I could see. I hope the API will support it with XenOrchestra 6, though.

                                      CyrilleC 1 Reply Last reply Reply Quote 0
                                      • CyrilleC Offline
                                        Cyrille Vates πŸͺ DevOps Team @bufanda
                                        last edited by Cyrille

                                        @bufanda said in DevOps Megathread: what you need and how we can help!:

@sid I made that request a while ago and tried to look into it myself too, but the current API of Xen Orchestra just doesn't support it; there are many pieces missing from what I could see. I hope the API will support it with XenOrchestra 6, though.

That's the point. We started working on it, but it wasn't possible to implement the required functionality in the TF provider using the current JSON-RPC API. We are working with the XO team, providing feedback, to make it happen with the REST API. I hope the backup resource will be available with XO 6 πŸ™‚

                                        NB: If you want to take a look, there are branches on the GitHub repository. These are for both the provider and the Golang client.

                                        CyrilleC sidS 2 Replies Last reply Reply Quote 1
                                        • CyrilleC Offline
                                          Cyrille Vates πŸͺ DevOps Team @Cyrille
                                          last edited by

                                          Pulumi Xen Orchestra Provider - Pre-Release v2.2.0-alpha.1

This pre-release version includes the changes to the memory_max field and adds the new memory_min field.

                                          https://github.com/vatesfr/pulumi-xenorchestra/releases/tag/v2.2.0-alpha.1
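
If you want to try it from Python, here's a minimal sketch of the new behaviour; it assumes the SDK exposes the fields under these names, mirroring the Terraform provider, and all IDs are placeholders:

import pulumi_xenorchestra as xoa

# Pin dynamic and static memory to the same 4 GiB (values are in bytes).
vm = xoa.Vm(
    "static-memory-vm",
    name_label="static-memory-vm",
    template="<template-uuid>",  # placeholder
    cpus=2,
    memory_max=4 * 1024**3,  # now also sets the static maximum
    memory_min=4 * 1024**3,  # optional: minimum for dynamic memory (new)
    networks=[xoa.VmNetworkArgs(network_id="<network-uuid>")],
    disks=[xoa.VmDiskArgs(sr_id="<sr-uuid>", name_label="root", size=20 * 1024**3)],
)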

                                          1 Reply Last reply Reply Quote 0
                                          • sidS Offline
                                            sid @Cyrille
                                            last edited by

@Cyrille Aah, I didn't know about the branches. I had started my own attempt to implement the feature, so it's good to know I can abandon that work. Oh boy, discovering that the settings map uses an empty key was a moment.

                                            OK, I will wait. Thanks to your team for the work on the terraform provider πŸ™‚

                                            1 Reply Last reply Reply Quote 0