I created a GitHub issue to track this feature request: https://github.com/vatesfr/terraform-provider-xenorchestra/issues/378

Posts
-
RE: CPU topology (sockets/cores) for new VMs deployed via Terraform
-
RE: Powershell script for backup summary reports
Whoo this looks very nice! Thank you for sharing this tool with us!
-
RE: DevOps Megathread: what you need and how we can help!
The release v0.35.0 improves the logging of both the Xen Orchestra Go SDK and the Terraform Provider.
Now it should be easier to read the logs using
TF_LOG_PROVIDER=DEBUG
(see the provider documentation).
-
RE: DevOps Megathread: what you need and how we can help!
Terraform Provider - Release 0.35.1
The new version fixes bugs when creating a VM from a template (#361):
- All existing disks in the template are used if they are declared in the TF plan.
- All unused disks in the template are deleted to avoid inconsistency between the TF plan and the actual state.
- It is no longer possible to resize existing template disks to a smaller size (this fixes a potential source of data loss).
The release: https://github.com/vatesfr/terraform-provider-xenorchestra/releases/tag/v0.35.1
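For illustration, here is a rough sketch of what reusing a template disk in a plan can look like (the names, IDs and sizes below are placeholders, not values taken from the release):
resource "xenorchestra_vm" "from_template" {
  name_label = "vm-from-template"          # placeholder
  template   = "<template uuid>"           # placeholder
  cpus       = 2
  memory_max = 2147483648

  # Declaring a disk that matches the disk already present in the template
  # means it is reused; template disks left out of the plan are now deleted,
  # and the size here can no longer be smaller than the original template disk.
  disk {
    sr_id      = "<storage repository uuid>"  # placeholder
    name_label = "OS"
    size       = 21474836480
  }

  network {
    network_id = "<network uuid>"             # placeholder
  }
}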
-
RE: Pulumi Xen Orchestra - News
Release v2.2.0
This new version introduces a new field, 'memory_min', for the VM resource and makes a slight change to the 'memory_max' field, which now sets both the dynamic and static maximum memory limits, providing better control of VM memory.
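As a rough sketch only (assuming the bridged property names follow the provider's usual camelCase convention, so memory_min shows up as memoryMin in Pulumi YAML), the memory part of a VM resource could look like:
resources:
  vm:
    type: xenorchestra:Vm
    properties:
      nameLabel: "memory-demo"   # placeholder
      template: ${templateId}
      cpus: 2
      # sets both the dynamic and static maximum memory limits (bytes)
      memoryMax: 2147483648
      # optional minimum limit for dynamic memory (bytes)
      memoryMin: 1073741824
      # other required properties (disks, networks, ...) omitted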
What's Changed
- feat: Update TF provider to get VM memory improvements by @gCyrille in https://github.com/vatesfr/pulumi-xenorchestra/pull/420
Full Changelog: https://github.com/vatesfr/pulumi-xenorchestra/compare/v2.1.0...v2.2.0
- JavaScript/TypeScript: @vates/pulumi-xenorchestra
- Python: pulumi-xenorchestra
- Go: github.com/vatesfr/pulumi-xenorchestra/sdk
- .NET: Pulumi.Xenorchestra
-
RE: DevOps Megathread: what you need and how we can help!
New releases for Terraform and Pulumi providers!
This new version introduces a new field, memory_min, for the VM resource and makes a slight change to the memory_max field, which now sets both the dynamic and static maximum memory limits, providing better control of VM memory.
- Pulumi Provider v2.2.0
- Terraform Provider v0.33.0
- Xen Orchestra Go SDK v1.4.0
-
RE: DevOps Megathread: what you need and how we can help!
Pulumi Xen Orchestra Provider - Pre-Release v2.2.0-alpha.1
This is the pre-release version, which includes the changes to the memory_max field and adds the new memory_min field.
https://github.com/vatesfr/pulumi-xenorchestra/releases/tag/v2.2.0-alpha.1
-
RE: DevOps Megathread: what you need and how we can help!
@bufanda said in DevOps Megathread: what you need and how we can help!:
@sid I made that request a while ago, and tried to look into it myself too, but the current API of XenOrchestra just doesn't support it; there are many pieces missing from what I could see. I hope the API will support it with XenOrchestra 6 though.
That's the point. We started working on it, but it wasn't possible to implement the required functionality in the TF provider using the current JRPC API. We are working with the XO team to provide feedback to make it happen with the REST API. I hope the backup resource will be available with XO 6.
NB: If you want to take a look, there are branches on the GitHub repositories for both the provider and the Go client.
-
RE: DevOps Megathread: what you need and how we can help!
Hi @afk!
We are working on a new version of the Xen Orchestra Terraform provider to improve VM memory control. In this new version, the memory_max setting will set the maximum limits for both the dynamic and static memory. There is also an optional new setting, 'memory_min', which can be used to set the minimum limit for dynamic VM memory.
This version will also resolve the issue with template memory limits used during VM creation. Can you test this pre-release version and provide us with some feedback? Or maybe just tell us whether this new behaviour is more likely to meet your needs?
https://github.com/vatesfr/terraform-provider-xenorchestra/releases/tag/v0.33.0-alpha.1
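If you want a starting point for testing, a minimal sketch (the template ID and values are placeholders) could look like this; only the two memory arguments are specific to the pre-release:
resource "xenorchestra_vm" "memory_test" {
  name_label = "memory-test"       # placeholder
  template   = "<template uuid>"   # placeholder
  cpus       = 2

  # memory_max now sets both the dynamic and static maximum memory limits (bytes)
  memory_max = 4294967296

  # optional: minimum limit for dynamic VM memory (bytes)
  memory_min = 2147483648

  # other required arguments (disk, network, ...) omitted for brevity
}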
I will try to do a pre-release version for the Pulumi provider ASAP.
-
RE: Packer / Pulumi examples for Ubuntu and Windows VMs
Hi, thank you for the example! We will take a look. It could be a good idea to have a dedicated documentation/web page with usage examples of 'DevOps' tools.
-
RE: VM UUID via dmidecode does not match VM ID in xen-orchestra
@deefdragon can you check if
/sys/hypervisor/uuid
matches your VM's UUID?
-
RE: VM UUID via dmidecode does not match VM ID in xen-orchestra
Hi,
Do you know what causes the system UUID to change and not match the VM UUID? In the tests I have run (with Debian and Microk8s), it never changed.
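For anyone who wants to compare the two values, something along these lines should show both from inside the VM (assuming dmidecode is installed; run as root):
cat /sys/hypervisor/uuid
dmidecode -s system-uuid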
-
RE: Kubernetes Recipe VM failed to start - Raise Network Interfaces
@deefdragon We are still trying to figure out why the DHCP isn't working. Our tests showed no issues with the recipe and DHCP.
We don't yet plan to enable up/down scaling for the recipe, but we are working on adding the CCM. We welcome suggestions: what do you need and how would you like the up/down scaling feature to work in the XOA recipe?
-
RE: Kubernetes Recipe VM failed to start - Raise Network Interfaces
@deefdragon Can you try with static IPs?
Does your network have Internet access? The recipe needs it to update Debian and install microk8s.
-
RE: Kubernetes Recipe VM failed to start - Raise Network Interfaces
Hi @deefdragon ,
Can you share the settings you used in the form?
According to the logs, you have an issue with the network: can you check that DHCP is working properly (the assigned IP address, netmask, gateway, etc. that you can see in the console/screenshot)?
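For example, from the VM console something like the following should show the assigned address, netmask and default gateway:
ip addr
ip route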
Maybe you can try to reboot the VM and try to connect again over SSH. If it works, you can extract the cloud-init logs from /var/log/cloud-init-output.log.
Cyrille
-
RE: Pulumi Xen Orchestra - News
Release v2.1.0
This new version builds on the improvements to the Terraform Provider regarding the disk lifecycle of a VM (TF provider release).
Full Release Note: https://github.com/vatesfr/pulumi-xenorchestra/releases/tag/v2.1.0
-
RE: DevOps Megathread: what you need and how we can help!
Pulumi Xen Orchestra Provider - Release v2.1.0
This new version brings the improvements to the VM disk lifecycle made in the Terraform Provider.
https://github.com/vatesfr/pulumi-xenorchestra/releases/tag/v2.1.0
-
RE: DevOps Megathread: what you need and how we can help!
Xen Orchestra Cloud Controller Manager in development
Hello everyone
We have published a development version of a Xen Orchestra Cloud Controller Manager!
It supports the cloud-node and cloud-node-lifecycle controllers and adds labels to your Kubernetes nodes hosted on Xen Orchestra VMs.
apiVersion: v1
kind: Node
metadata:
  labels:
    # Type generated based on CPU and RAM
    node.kubernetes.io/instance-type: 2VCPU-1GB
    # Xen Orchestra Pool ID of the node VM Host
    topology.kubernetes.io/region: 3679fe1a-d058-4055-b800-d30e1bd2af48
    # Xen Orchestra ID of the node VM Host
    topology.kubernetes.io/zone: 3d6764fe-dc88-42bf-9147-c87d54a73f21
    # Additional labels based on Xen Orchestra data (beta)
    topology.k8s.xenorchestra/host_id: 3d6764fe-dc88-42bf-9147-c87d54a73f21
    topology.k8s.xenorchestra/pool_id: 3679fe1a-d058-4055-b800-d30e1bd2af48
    vm.k8s.xenorchestra/name_label: cgn-microk8s-recipe---Control-Plane
    ...
  name: worker-1
spec:
  ...
  # providerID - magic string:
  # xeorchestra://{Pool ID}/{VM ID}
  providerID: xeorchestra://3679fe1a-d058-4055-b800-d30e1bd2af48/8f0d32f8-3ce5-487f-9793-431bab66c115
For now, we have only tested the provider with Microk8s.
What's next?
We will test the CCM with other types of Kubernetes clusters and work on fixing known issues.
The XOA Hub recipe will also be modified to include the CCM.
More labels will be added (Pool Name, VM Name, etc.). Feedback is welcome!
You can install and test the XO CCM, and provide feedback to help improve and speed up the release of the first stable version. This is greatly appreciated!
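For example, once the CCM is running, something like this should show the labels and the providerID it set on a node (the node name is just an example):
kubectl get node worker-1 --show-labels
kubectl get node worker-1 -o jsonpath='{.spec.providerID}'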
-
RE: Pulumi Xen Orchestra - News
@john.c You can already use YAML with the Xen Orchestra Pulumi provider:
name: test-yaml
description: A minimal Pulumi YAML program
runtime: yaml
config: { 'pulumi:tags': { value: { 'pulumi:template': yaml } } }
variables:
  poolId:
    fn::invoke:
      function: xenorchestra:getXoaPool
      arguments:
        nameLabel: "Lab"
      return: id
  netId:
    fn::invoke:
      function: xenorchestra:getXoaNetwork
      arguments:
        nameLabel: "Lab"
        poolId: ${poolId}
      return: id
  localStorageId:
    fn::invoke:
      function: xenorchestra:getXoaStorageRepository
      arguments:
        nameLabel: "Local Storage"
      return: id
  templateId:
    fn::invoke:
      function: xenorchestra:getXoaTemplate
      arguments:
        nameLabel: "Debian 12 Cloud-init (Hub)"
        poolId: ${poolId}
      return: id
resources:
  vm:
    type: xenorchestra:Vm
    properties:
      nameLabel: "Pulumi yaml test"
      nameDescription: "test with pulumi yaml provider"
      cpus: 1
      memoryMax: 1073733632
      template: ${templateId}
      tags:
        - pulumi
      cloudConfig: |
        #cloud-config
        ssh_authorized_keys:
          - ***
      disks:
        - nameLabel: "OS"
          size: 8294967296
          srId: ${localStorageId}
      networks:
        - networkId: ${netId}
      powerState: "Running"
outputs:
  poolId: ${poolId}
  vmIp: ${vm.ipv4Addresses}
Tell us if it isn't working.
-
RE: DevOps Megathread: what you need and how we can help!
Hello there,
We have released a new version of the Terraform provider with improvements to the VM disk lifecycle!
Now you can expand a VM disk with Terraform without data loss.
Read the release note: https://github.com/vatesfr/terraform-provider-xenorchestra/releases/tag/v0.32.0
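As a quick sketch of what that looks like in practice (placeholder IDs and sizes), expanding a disk is just a matter of increasing its size in the plan and applying it:
resource "xenorchestra_vm" "web" {
  name_label = "web-server"        # placeholder
  template   = "<template uuid>"   # placeholder
  cpus       = 2
  memory_max = 2147483648

  disk {
    sr_id      = "<storage repository uuid>"  # placeholder
    name_label = "OS"
    # previously 10737418240 (10 GiB); raising the value expands the disk in place
    size       = 21474836480
  }

  network {
    network_id = "<network uuid>"             # placeholder
  }
}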