XCP-ng

    Cyrille

    @Cyrille

    Vates 🪐 DevOps Team
    • Reputation: 25
    • Profile views: 17
    • Posts: 40
    • Followers: 0
    • Following: 0

    Best posts made by Cyrille

    • Xen Orchestra Container Storage Interface (CSI) for Kubernetes

      Xen Orchestra Container Storage Interface (CSI) for Kubernetes

      We are pleased to announce the development of a CSI driver for Xen Orchestra 🎉

      It is currently under active development, but it's already available for testing with static volume provisioning only (i.e., using an existing VDI referenced by its UUID).

      https://github.com/vatesfr/xenorchestra-csi-driver
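      Since only static provisioning is supported for now, the volume has to be declared ahead of time. Here is a hypothetical sketch of what such a PersistentVolume could look like — the driver name and volumeHandle format are assumptions for illustration, not taken from the project's documentation:

      ```yaml
      # Sketch only: static provisioning against a pre-existing VDI.
      # Driver name and volumeHandle format are assumed, check the repo docs.
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: existing-vdi-pv
      spec:
        capacity:
          storage: 10Gi
        accessModes:
          - ReadWriteOnce
        csi:
          driver: csi.xenorchestra.vates.tech  # assumed driver name
          volumeHandle: <existing-VDI-UUID>    # UUID of the existing VDI
      ```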

      posted in Infrastructure as Code
      CyrilleC
      Cyrille
    • RE: DevOps Megathread: what you need and how we can help!

      Xen Orchestra Cloud Controller Manager in development 🚀

      Hello everyone 👋

      We've published a development version of the Xen Orchestra Cloud Controller Manager!

      It supports the cloud-node and cloud-node-lifecycle controllers and adds labels to Kubernetes nodes hosted on Xen Orchestra VMs.

      apiVersion: v1
      kind: Node
      metadata:
        labels:
          # Instance type generated based on CPU and RAM
          node.kubernetes.io/instance-type: 2VCPU-1GB
          # Xen Orchestra Pool ID of the node VM Host
          topology.kubernetes.io/region: 3679fe1a-d058-4055-b800-d30e1bd2af48
          # Xen Orchestra ID of the node VM Host
          topology.kubernetes.io/zone: 3d6764fe-dc88-42bf-9147-c87d54a73f21
          # Additional labels based on Xen Orchestra data (beta)
          topology.k8s.xenorchestra/host_id: 3d6764fe-dc88-42bf-9147-c87d54a73f21
          topology.k8s.xenorchestra/pool_id: 3679fe1a-d058-4055-b800-d30e1bd2af48
          vm.k8s.xenorchestra/name_label: cgn-microk8s-recipe---Control-Plane
          ...
        name: worker-1
      spec:
        ...
        # providerID - magic string:
        #   xeorchestra://{Pool ID}/{VM ID}
        providerID: xeorchestra://3679fe1a-d058-4055-b800-d30e1bd2af48/8f0d32f8-3ce5-487f-9793-431bab66c115
      

      For now, we have only tested the provider with MicroK8s.
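      The instance-type label shown above is derived from the VM's resources. A toy sketch of that derivation (our own illustration, not the CCM's actual code):

      ```python
      def instance_type(vcpus: int, ram_bytes: int) -> str:
          """Build a label like '2VCPU-1GB' from vCPU count and RAM in bytes."""
          gib = ram_bytes // (1024 ** 3)
          return f"{vcpus}VCPU-{gib}GB"

      print(instance_type(2, 1 * 1024 ** 3))  # -> 2VCPU-1GB
      ```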

      What's next?

      We will test the CCM with other types of Kubernetes clusters and work on fixing known issues.
      We will also update the XOA Hub recipe to include the CCM.
      More labels will be added (Pool Name, VM Name, etc.).

      Feedback is welcome!

      You can install and test the XO CCM and provide feedback to help us improve it and speed up the release of the first stable version. This is greatly appreciated 🙂

      ➡ The XO CCM repository
      ➡ Installation doc

      posted in Infrastructure as Code
      CyrilleC
      Cyrille
    • RE: DevOps Megathread: what you need and how we can help!

      Hello there,

      We've released a new version of the Terraform provider with improvements to the VM disk lifecycle!

      Now you can expand a VM disk with Terraform without data loss.
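      Concretely, expanding a disk is just a matter of raising its size in the plan and applying. A sketch (fragment, not a complete plan; the disk block attributes follow the provider's xenorchestra_vm schema, and the SR reference is illustrative):

      ```hcl
      resource "xenorchestra_vm" "web" {
        # ... name_label, template, network, memory, etc. ...

        disk {
          sr_id      = data.xenorchestra_sr.local.id # illustrative SR reference
          name_label = "web-root"
          size       = 21474836480 # bytes: grown from 10 GiB to 20 GiB
        }
      }
      ```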

      Read the release notes: https://github.com/vatesfr/terraform-provider-xenorchestra/releases/tag/v0.32.0

      posted in Infrastructure as Code
      CyrilleC
      Cyrille
    • RE: How to deploy the new k8s on latest XOA 5.106?

      These two bugs have been fixed in the latest release 🙂

      posted in Advanced features
      CyrilleC
      Cyrille
    • RE: Powershell script for backup summary reports

      Whoo this looks very nice! Thank you for sharing this tool with us!

      posted in Backup
      CyrilleC
      Cyrille
    • RE: Xen Orchestra Container Storage Interface (CSI) for Kubernetes

      Actually, it's not a closed door; it's more a door that is opening for people who are already using both Xen Orchestra and Kubernetes. 🤔

      From a technical point of view, it makes more sense for us to use XO, because its API is easier to use, especially with the new REST API. On the application side, it handles many things that we don't have to deal with ourselves. For VDIs, perhaps less so. But for other things such as backups, live migrations, templates and VM creation, it's easier. Moreover, using a single SDK to develop tools makes sense for our small DevOps team in terms of development speed, stability and security.

      posted in Infrastructure as Code
      CyrilleC
      Cyrille
    • RE: Terraform and disk migrations

      @carloum70 Disk migration isn't supported by the provider yet. What you can do is ignore changes to the sr_id of a given disk.

      For example, for the first disk:

        lifecycle {
          ignore_changes = [
            disk[0].sr_id
          ]
        }
      

      You can also do the migration manually in XO and then edit your HCL to update the sr_id with the new ID. That should do the trick.

      posted in Infrastructure as Code
      CyrilleC
      Cyrille
    • RE: DevOps Megathread: what you need and how we can help!

      The release v0.35.0 improves the logging of both the Xen Orchestra Go SDK and the Terraform provider.

      The logs should now be easier to read using TF_LOG_PROVIDER=DEBUG (see the provider documentation).

      posted in Infrastructure as Code
      CyrilleC
      Cyrille
    • RE: DevOps Megathread: what you need and how we can help!

      Terraform Provider - Release 0.35.1

      The new version fixes bugs when creating a VM from a template #361:

      • All existing disks in the template are used if they are declared in the TF plan.
      • All unused disks in the template are deleted to avoid inconsistency between the TF plan and the actual state.
      • It is no longer possible to resize existing template disks to a smaller size (fixes potential source of data loss).

      The release: https://github.com/vatesfr/terraform-provider-xenorchestra/releases/tag/v0.35.1

      posted in Infrastructure as Code
      CyrilleC
      Cyrille
    • RE: Pulumi Xen Orchestra - News

      Release v2.2.0

      This new version introduces a new field, 'memory_min', for the VM resource and makes a slight change to the 'memory_max' field, which now sets both the dynamic and static maximum memory limits, providing better control of VM memory.

      What's Changed

      • feat: Update TF provider to get VM memory improvements by @gCyrille in https://github.com/vatesfr/pulumi-xenorchestra/pull/420

      Full Changelog: https://github.com/vatesfr/pulumi-xenorchestra/compare/v2.1.0...v2.2.0

      • JavaScript/TypeScript: @vates/pulumi-xenorchestra
      • Python: pulumi-xenorchestra
      • Go: github.com/vatesfr/pulumi-xenorchestra/sdk
      • .NET: Pulumi.Xenorchestra

      posted in Infrastructure as Code
      CyrilleC
      Cyrille

    Latest posts made by Cyrille

    • RE: destroy_cloud_config_vdi_after_boot

      Can you share how you created the template?
      And can you paste here the template object from xo-cli or the REST API: xo-cli list-objects type=VM-template id=<your_template_id>?

      posted in Infrastructure as Code
      CyrilleC
      Cyrille
    • RE: destroy_cloud_config_vdi_after_boot

      Hi @carloum70

      I'm back now — sorry for the delay.
      If I understand correctly, this issue only occurs with a template created from a Debian 13 cloud-init raw file, is that right? I'm trying to understand how to reproduce the issue, as I've never seen it before.

      posted in Infrastructure as Code
      CyrilleC
      Cyrille
    • RE: Xen Orchestra Container Storage Interface (CSI) for Kubernetes

      Actually, it's not a closed door; it's more a door that is opening for people who are already using both Xen Orchestra and Kubernetes. 🤔

      From a technical point of view, it makes more sense for us to use XO, because its API is easier to use, especially with the new REST API. On the application side, it handles many things that we don't have to deal with ourselves. For VDIs, perhaps less so. But for other things such as backups, live migrations, templates and VM creation, it's easier. Moreover, using a single SDK to develop tools makes sense for our small DevOps team in terms of development speed, stability and security.

      posted in Infrastructure as Code
      CyrilleC
      Cyrille
    • RE: Xen Orchestra Container Storage Interface (CSI) for Kubernetes

      @bvitnik As Olivier said, it's more of a design decision than a technical requirement. The idea behind using XO is to have a single point of entry, regardless of the number of pools, etc.

      For example, this allows the mapping of Kubernetes regions to Xen Orchestra pools and Kubernetes zones to Xen Orchestra hosts with a single entry point and credentials.
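      Once the CCM populates these labels, standard Kubernetes scheduling constructs can target a specific pool or host. A sketch using a nodeSelector (the label value is the host ID from the manifest shown in the CCM announcement):

      ```yaml
      # Sketch: pin a pod to nodes running on a given XO host (zone),
      # via the standard topology label populated by the CCM.
      apiVersion: v1
      kind: Pod
      metadata:
        name: pinned-pod
      spec:
        nodeSelector:
          topology.kubernetes.io/zone: 3d6764fe-dc88-42bf-9147-c87d54a73f21
        containers:
          - name: app
            image: nginx
      ```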

      posted in Infrastructure as Code
      CyrilleC
      Cyrille
    • Xen Orchestra Container Storage Interface (CSI) for Kubernetes

      Xen Orchestra Container Storage Interface (CSI) for Kubernetes

      We are pleased to announce the development of a CSI driver for Xen Orchestra 🎉

      It is currently under active development, but it's already available for testing with static volume provisioning only (i.e., using an existing VDI referenced by its UUID).

      https://github.com/vatesfr/xenorchestra-csi-driver

      posted in Infrastructure as Code
      CyrilleC
      Cyrille
    • RE: Terraform and disk migrations

      @carloum70 Disk migration isn't supported by the provider yet. What you can do is ignore changes to the sr_id of a given disk.

      For example, for the first disk:

        lifecycle {
          ignore_changes = [
            disk[0].sr_id
          ]
        }
      

      You can also do the migration manually in XO and then edit your HCL to update the sr_id with the new ID. That should do the trick.

      posted in Infrastructure as Code
      CyrilleC
      Cyrille
    • RE: DevOps Megathread: what you need and how we can help!

      Terraform Provider v0.36.0 and Pulumi Provider v2.3.0

      • Read and expose boot_firmware on template data-source by @sakaru in #381
      • Fixes VM creation from multi-disks template:
        • All existing disks in the template are used if they are declared in the plan.
        • All unused disks in the template are deleted to avoid inconsistency between the plan and the actual state.
        • It is no longer possible to resize existing template disks to a smaller size (fixes potential source of data loss).
        • The order of existing disks matches the declaration order in the plan.

      Terraform provider release: https://github.com/vatesfr/terraform-provider-xenorchestra/releases/tag/v0.36.0

      Pulumi provider release: https://github.com/vatesfr/pulumi-xenorchestra/releases/tag/v2.3.0

      posted in Infrastructure as Code
      CyrilleC
      Cyrille
    • RE: CPU topology (sockets/cores) for new VMs deployed via Terraform

      I created a GitHub issue to track this feature request: https://github.com/vatesfr/terraform-provider-xenorchestra/issues/378


      posted in Infrastructure as Code
      CyrilleC
      Cyrille
    • RE: Powershell script for backup summary reports

      Whoo this looks very nice! Thank you for sharing this tool with us!

      posted in Backup
      CyrilleC
      Cyrille
    • RE: DevOps Megathread: what you need and how we can help!

      The release v0.35.0 improves the logging of both the Xen Orchestra Go SDK and the Terraform provider.

      The logs should now be easier to read using TF_LOG_PROVIDER=DEBUG (see the provider documentation).

      posted in Infrastructure as Code
      CyrilleC
      Cyrille