Hey @deefdragon, did the installation finish using static IP addresses?
Team - DevOps
Posts
-
RE: Kubernetes Recipe VM failed to start - Raise Network Interfaces
-
RE: Kubernetes Recipe VM failed to start - Raise Network Interfaces
@deefdragon Can you try with static IPs?
Does your network have Internet access? The recipe needs it to update Debian and install microk8s.
-
RE: Kubernetes Recipe VM failed to start - Raise Network Interfaces
Hi @deefdragon ,
Can you share the settings you used in the form?
According to the logs, you have an issue with the network: can you check that DHCP is working correctly (assigned IP address, netmask, gateway, etc., as seen in the console/screenshot)?
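If it helps, here are a few generic commands to run from the VM console to verify what DHCP assigned (a sketch for a Debian guest, not specific to the recipe):

```shell
# Show the IP address/netmask assigned to each interface
ip -brief addr show
# Show the default gateway
ip route show default
# The recipe needs outbound access to update Debian and install microk8s
ping -c 2 -W 2 deb.debian.org || echo "No outbound connectivity"
# If SSH works after a reboot, check the cloud-init log for errors
tail -n 50 /var/log/cloud-init-output.log 2>/dev/null || echo "log not found"
```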
Maybe you can try to reboot the VM and connect again over SSH. If it works, you can extract the cloud-init logs from /var/log/cloud-init-output.log.
Cyrille
-
RE: Pulumi Xen Orchestra - News
Release v2.1.0
This new version builds on the improvements to the Terraform Provider regarding the disk lifecycle of a VM (TF provider release).
Full Release Note: https://github.com/vatesfr/pulumi-xenorchestra/releases/tag/v2.1.0
-
RE: DevOps Megathread: what you need and how we can help!
Pulumi Xen Orchestra Provider - Release v2.1.0
This new version brings the VM disk lifecycle improvements made in the Terraform Provider.
https://github.com/vatesfr/pulumi-xenorchestra/releases/tag/v2.1.0
-
RE: DevOps Megathread: what you need and how we can help!
Xen Orchestra Cloud Controller Manager in development
Hello everyone
We published a development version of a Xen Orchestra Cloud Controller Manager!
It supports the cloud-node and cloud-node-lifecycle controllers and adds labels to your Kubernetes nodes hosted on Xen Orchestra VMs.
apiVersion: v1
kind: Node
metadata:
  labels:
    # Type generated based on CPU and RAM
    node.kubernetes.io/instance-type: 2VCPU-1GB
    # Xen Orchestra Pool ID of the node VM Host
    topology.kubernetes.io/region: 3679fe1a-d058-4055-b800-d30e1bd2af48
    # Xen Orchestra ID of the node VM Host
    topology.kubernetes.io/zone: 3d6764fe-dc88-42bf-9147-c87d54a73f21
    # Additional labels based on Xen Orchestra data (beta)
    topology.k8s.xenorchestra/host_id: 3d6764fe-dc88-42bf-9147-c87d54a73f21
    topology.k8s.xenorchestra/pool_id: 3679fe1a-d058-4055-b800-d30e1bd2af48
    vm.k8s.xenorchestra/name_label: cgn-microk8s-recipe---Control-Plane
    ...
  name: worker-1
spec:
  ...
  # providerID - magic string:
  # xeorchestra://{Pool ID}/{VM ID}
  providerID: xeorchestra://3679fe1a-d058-4055-b800-d30e1bd2af48/8f0d32f8-3ce5-487f-9793-431bab66c115
For now, we have only tested the provider with Microk8s.
What's next?
We will test the CCM with other types of Kubernetes clusters and work on fixing known issues.
Also a modification of the XOA Hub recipe will come to include the CCM.
More labels will be added (Pool Name, VM Name, etc.).
Feedback is welcome!
You can install and test the XO CCM and provide feedback to help improve and speed up the release of the first stable version. This is greatly appreciated!
-
RE: XCP-NG Kubernetes micro8k
Hello @msupport, we published a step-by-step guide; read more in the announcement here: https://xcp-ng.org/forum/post/94268
-
RE: DevOps Megathread: what you need and how we can help!
Hello
We published a new blog post about our Kubernetes recipe:
You'll find there:
- A step-by-step guide to create a production-ready Kubernetes cluster on top of your servers, in minutes!
- Some architecture insights
https://xen-orchestra.com/blog/virtops-6-create-a-kubernetes-cluster-in-minutes/
Thanks to @Cyrille
-
RE: Pulumi Xen Orchestra - News
@john.c You can already use YAML with the Xen Orchestra Pulumi provider:
name: test-yaml
description: A minimal Pulumi YAML program
runtime: yaml
config: { 'pulumi:tags': { value: { 'pulumi:template': yaml } } }
variables:
  poolId:
    fn::invoke:
      function: xenorchestra:getXoaPool
      arguments:
        nameLabel: "Lab"
      return: id
  netId:
    fn::invoke:
      function: xenorchestra:getXoaNetwork
      arguments:
        nameLabel: "Lab"
        poolId: ${poolId}
      return: id
  localStorageId:
    fn::invoke:
      function: xenorchestra:getXoaStorageRepository
      arguments:
        nameLabel: "Local Storage"
      return: id
  templateId:
    fn::invoke:
      function: xenorchestra:getXoaTemplate
      arguments:
        nameLabel: "Debian 12 Cloud-init (Hub)"
        poolId: ${poolId}
      return: id
resources:
  vm:
    type: xenorchestra:Vm
    properties:
      nameLabel: "Pulumi yaml test"
      nameDescription: "test with pulumi yaml provider"
      cpus: 1
      memoryMax: 1073733632
      template: ${templateId}
      tags:
        - pulumi
      cloudConfig: |
        #cloud-config
        ssh_authorized_keys:
          - ***
      disks:
        - nameLabel: "OS"
          size: 8294967296
          srId: ${localStorageId}
      networks:
        - networkId: ${netId}
      powerState: "Running"
outputs:
  poolId: ${poolId}
  vmIp: ${vm.ipv4Addresses}
Tell us if it isn't working.
-
RE: DevOps Megathread: what you need and how we can help!
Hello there,
We released a new version of the Terraform provider with improvements to the VM disk lifecycle!
Now you can expand a VM disk with Terraform without data loss.
Read the release note: https://github.com/vatesfr/terraform-provider-xenorchestra/releases/tag/v0.32.0
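As an illustration, here is a minimal sketch of what a non-destructive grow looks like with the provider's `xenorchestra_vm` resource; the resource labels, data sources, and sizes are hypothetical, and only increasing `size` preserves data:

```hcl
# Hypothetical VM: to expand the OS disk, raise `size` (in bytes)
# and run `terraform apply` - the disk is grown in place.
resource "xenorchestra_vm" "k8s_node" {
  name_label = "k8s-worker"
  template   = data.xenorchestra_template.debian.id
  cpus       = 2
  memory_max = 2147483648

  disk {
    sr_id      = data.xenorchestra_sr.local.id
    name_label = "OS"
    size       = 21474836480 # was 10737418240; grow only, shrinking is not supported
  }

  network {
    network_id = data.xenorchestra_network.lab.id
  }
}
```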