DevOps Megathread: what you need and how we can help!
-
Hello,
If you want to discuss the Pulumi provider in detail, let's discuss it here: https://xcp-ng.org/forum/topic/10592/pulumi-xen-orchestra-news
-
@Jonathon this is really nice to have shared, as we are looking to migrate from the RKE cluster we've deployed on bare-metal Xen to XCP-ng VMs to setup an RKE2 cluster to migrate to.
Will review this and probably have a bunch of questions!
-
@andrewperry I myself migrated our Rancher management cluster from the original RKE to a new RKE2 cluster using this plan not too long ago, so you shouldn't have much trouble. Feel free to ask questions.
-
I found time to play with cloud-init. Most of the examples look outdated or don't work, not sure why.

```yaml
hostname: {name}
```

doesn't work, only

```yaml
hostname: {name}%
```

does. I also can't find that macro in the official docs.

With

```yaml
manage_etc_hosts: true
```

it changed `/etc/hosts` from `127.0.1.1 basename` to `127.0.1.1 basename test%`. Maybe it's a bug in the package itself, maybe an XO problem.

`preserve_hostname: false` looks unnecessary, I don't see any difference.

Even if I don't use any network config, it changes the netplan (which isn't needed with DHCP) from

```yaml
network:
  version: 2
  ethernets:
    enX0:
      dhcp4: true
```

to

```yaml
network:
  version: 2
  ethernets:
    enX0:
      match:
        macaddress: "my_mac"
      dhcp4: true
      dhcp6: true
      set-name: "enX0"
```

To keep the default netplan, you need to use something like:

```yaml
network:
  version: 1
  config:
    - type: physical
      subnets:
        - type: dhcp4
```

I can't make disk resize work, it looks like rocket science. And this is the most important part for me:

```yaml
resize_rootfs: true
growpart:
  mode: auto
  devices: ['/']
  ignore_growroot_disabled: false
```

I'm happy enough with manually tuned templates; 99% of the time I don't need to change anything except the name and disk. Other tasks require manual attention anyway or are already covered with Ansible. Would be nice to see a tutorial for IQ < 3.
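For what it's worth, here is an untested sketch of a combined user-data file putting the pieces above together (the `{name}` macro is the XO template variable discussed above; `enX0` is the interface name from my VMs and may differ on yours). Note that in Xen Orchestra the network section normally goes in the separate network-config field rather than in user-data:

```yaml
#cloud-config
hostname: {name}
manage_etc_hosts: true
preserve_hostname: false

# Grow the root partition and resize the root filesystem on first boot
growpart:
  mode: auto
  devices: ['/']
  ignore_growroot_disabled: false
resize_rootfs: true

# Version 1 network config: plain DHCP, no MAC matching or renaming
network:
  version: 1
  config:
    - type: physical
      name: enX0
      subnets:
        - type: dhcp
```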
-
@Tristis-Oris
Hello, thanks for the report. I will try to fix and improve things, but first I have a few questions:
- What template are you using? Is it one from the XOA Hub?
- Where did you find the cloud-init config snippets?
For your information, the default cloud-init config snippets come from here: https://github.com/vatesfr/xen-orchestra/blob/master/packages/xo-web/src/common/cloud-config.js#L78-L88
For growpart it depends on the template used. Last time I tested, it was working with a Debian 12 template from the XOA Hub.
-
- My custom template.
- The forum and the cloud-init docs.
-
Pulumi Xen Orchestra Provider - Release v2.0.0
We released a new version of the Pulumi Xen Orchestra provider.
You can find more information about the release here: https://xcp-ng.org/forum/post/92858
-
xo-powershell moves from alpha to beta
The XO-PowerShell module is published in the :microsoft: PowerShell Gallery as v1.0.0-beta
https://www.powershellgallery.com/packages/xo-powershell/1.0.0-beta
Grab it with one PowerShell command:

```powershell
Install-Module -Name xo-powershell -AllowPrerelease
```
Doc here
-
Hello there,
We released a new version of the Terraform provider with improvements to the VM disk lifecycle!
Now you can expand a VM disk with Terraform without data loss.
Read the release note: https://github.com/vatesfr/terraform-provider-xenorchestra/releases/tag/v0.32.0
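As a rough illustration (resource names and IDs are placeholders, and the exact schema may vary between provider versions), expanding a disk should just be a matter of raising `size` in the VM's `disk` block and running `terraform apply`:

```hcl
resource "xenorchestra_vm" "web" {
  # ...other VM settings elided...

  disk {
    sr_id      = data.xenorchestra_sr.local.id
    name_label = "web-root"
    size       = 21474836480 # bytes: 20 GiB, raised from 10 GiB; expanded in place without data loss
  }
}
```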
-
Hello
We published a new blog post about our Kubernetes recipe:
You'll find there:
- A step-by-step guide to create a production-ready Kubernetes cluster, on top of your servers, in minutes!
- Some architecture insights
https://xen-orchestra.com/blog/virtops-6-create-a-kubernetes-cluster-in-minutes/
Thanks to @Cyrille
-
Xen Orchestra Cloud Controller Manager in development
Hello everyone
We published a development version of a Xen Orchestra Cloud Controller Manager!
It supports the cloud-node and cloud-node-lifecycle controllers and adds labels to your Kubernetes nodes hosted on Xen Orchestra VMs.
```yaml
apiVersion: v1
kind: Node
metadata:
  labels:
    # Type generated based on CPU and RAM
    node.kubernetes.io/instance-type: 2VCPU-1GB
    # Xen Orchestra Pool ID of the node VM host
    topology.kubernetes.io/region: 3679fe1a-d058-4055-b800-d30e1bd2af48
    # Xen Orchestra ID of the node VM host
    topology.kubernetes.io/zone: 3d6764fe-dc88-42bf-9147-c87d54a73f21
    # Additional labels based on Xen Orchestra data (beta)
    topology.k8s.xenorchestra/host_id: 3d6764fe-dc88-42bf-9147-c87d54a73f21
    topology.k8s.xenorchestra/pool_id: 3679fe1a-d058-4055-b800-d30e1bd2af48
    vm.k8s.xenorchestra/name_label: cgn-microk8s-recipe---Control-Plane
    ...
  name: worker-1
spec:
  ...
  # providerID - magic string:
  # xeorchestra://{Pool ID}/{VM ID}
  providerID: xeorchestra://3679fe1a-d058-4055-b800-d30e1bd2af48/8f0d32f8-3ce5-487f-9793-431bab66c115
```
For now, we have only tested the provider with Microk8s.
What's next?
We will test the CCM with other types of Kubernetes clusters and work on fixing known issues.
Also a modification of the XOA Hub recipe will come to include the CCM.
More labels will be added (Pool Name, VM Name, etc.).
Feedback is welcome!
You can install and test the XO CCM, and provide feedback to help improve and speed up the release of the first stable version. This is greatly appreciated.