Ansible with Xen Orchestra
-
VirtOps #3: Ansible with Xen Orchestra
With the release of Ansible Community 4.1.0 came a new inventory plugin for Xen Orchestra. This plugin allows the listing and grouping of XOA virtual machines, hosts and pools.
For more details, read the blog post: https://xen-orchestra.com/blog/virtops3-ansible-with-xen-orchestra
Your feedback
Test it, comment on it, ask for features: this is the place!
-
Looks interesting!
We started using the XO Terraform provider around 12 months ago, and then built a small HTTP service (Node/TypeScript) that talks to the XO API to generate our Ansible inventory. We've been using both in production since then, and I'll share some of the details here.
We took the approach of implementing this as a service on our network and then leveraging Ansible's ability to execute a shell script to retrieve the inventory.
In our environment, we decided it was OK for the inventory to only include VMs (or hosts) that have an IP address; if they don't, Ansible can't really work with them anyway, so that's fine for us. The inventory service has a couple of environment variables that provide a filter for which entities and IPs to pick:
```typescript
// no tag required by default
required_tag: env.get('REQUIRED_TAG').default('').asString(),
// any ip is valid for the inventory
management_subnet: env.get('MANAGEMENT_SUBNETS').default('0.0.0.0/0').asArray(),
```
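The subnet filter those variables drive can be sketched roughly like this (Python for illustration; our service is TypeScript, and the function and variable names here are hypothetical):

```python
import ipaddress

# Hypothetical helper mirroring the behaviour described above: given the
# addresses XO reports for a VM, return the first one that falls inside a
# management subnet. Returns None when nothing matches, in which case the
# VM is simply left out of the inventory.
def pick_management_ip(addresses, subnets=("0.0.0.0/0",)):
    nets = [ipaddress.ip_network(s) for s in subnets]
    for addr in addresses:
        ip = ipaddress.ip_address(addr)
        if any(ip in net for net in nets):
            return addr
    return None
```

With the default `0.0.0.0/0`, any IPv4 address is accepted, matching the env var default above.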
First off, we can require any VM or host to have a tag, e.g.
`ansible_managed:true`
to appear in the inventory.
Then it must have an IP in our management subnet; if more than one IP is available (e.g. management and public), the service will filter them. The HTTP API for the inventory service uses the same filtering as Xen Orchestra, so we can construct URLs to retrieve partial inventories. This is useful, for example, because we have dev, production, etc. pools, and it gives us an easy way to target:
https://inventory.internal/inventory?filter=env:monitoring%20mytag:foo
The response for the above request would look like this:
```json
{
  "all": {
    "hosts": ["monitoring-1.internal"]
  },
  "_meta": {
    "hostvars": {
      "monitoring-1.internal": {
        "mytag": "foo",
        "ansible_group": "prometheus",
        "env": "monitoring",
        "inventory_name": "monitoring-1.internal",
        "ansible_host": "10.0.12.51",
        "xo_pool": "monitoring-pool",
        "xo_type": "VM",
        "xo_id": "033f8b6d-88e2-92e4-3c3e-bcaa01213772"
      }
    }
  },
  "prometheus": {
    "hosts": ["monitoring-1.internal"]
  }
}
```
This VM has these tags in Xen Orchestra:

- `ansible_group` can be repeated, and places the VM/host into this group in the inventory. Other tags get split into key=value pairs and placed into the host vars.
- The `xo_*` vars are added from the info in the API.
- `ansible_host` will be our management IP.
- `inventory_name` is a slugified version of the VM name, but by convention our names are sane.

We also include hosts in the inventory, as we have various playbooks to run against them. All the same tagging and grouping applies to hosts as it does to VMs:
```json
{
  ...
  "hostvars": {
    "xcp-001": {
      "ansible_group": "xen-host",
      "inventory_name": "xcp-001",
      "ansible_host": "10.0.55.123",
      "xo_pool": "monitoring-pool",
      "xo_type": "host",
      "xo_id": "92c1c2ab-fd1e-46e9-85f7-70868f1e9106",
      "xo_version": "8.2.0",
      "xo_product": "XCP-ng"
    }
  }
  ...
}
```
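The tag convention described above can be sketched like this (hypothetical Python, not the actual service code, which is TypeScript):

```python
# Hypothetical sketch of the tag handling described above: XO tags of the
# form "key:value" become host vars, and every "ansible_group:<name>" tag
# additionally places the host into that inventory group.
def tags_to_inventory(tags):
    hostvars = {}
    groups = []
    for tag in tags:
        key, sep, value = tag.partition(":")
        if not sep:
            continue  # ignore tags that aren't key:value shaped
        if key == "ansible_group":
            groups.append(value)
        hostvars[key] = value
    return hostvars, groups
```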
When we set up some infra for management by Terraform/Ansible, we'll typically use a combination of a shell-script inventory, static grouping, and extra group_vars if needed. For example, our /inventory directory:
01_inventory.sh
```bash
#!/bin/bash
curl -k https://inventory.internal/inventory?filter=k8s-cluster:admin 2>/dev/null
```
02_kubespray - Kubespray has its own group-name convention, so we map between our tags and its group names:
```ini
[kube-master:children]
k8s-master

[etcd:children]
k8s-master

[kube-node:children]
k8s-node
k8s-monitoring

[k8s-cluster:children]
kube-master
kube-node
```
Executing
ansible-playbook -i /inventory
where /inventory is a directory will then combine all the shell scripts and INI files to make the final inventory. Nice!
I did think about trying to package this API directly as a plugin for XO, but haven't had time to look into that yet. Let me know if any of this looks interesting.
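Coming back to the executable inventory sources above: the contract Ansible expects from each one is small, just JSON on stdout (with per-host vars under `_meta`, it only ever has to answer `--host` with an empty object). A minimal, hypothetical Python version with hard-coded data for illustration:

```python
#!/usr/bin/env python3
# Minimal executable inventory source, analogous to 01_inventory.sh above.
# Ansible runs it and reads the inventory as JSON from stdout. Host vars
# live under _meta, so --host can return an empty object. The data here
# is hard-coded purely for illustration.
import json
import sys

INVENTORY = {
    "all": {"hosts": ["monitoring-1.internal"]},
    "prometheus": {"hosts": ["monitoring-1.internal"]},
    "_meta": {
        "hostvars": {
            "monitoring-1.internal": {"ansible_host": "10.0.12.51"}
        }
    },
}

if __name__ == "__main__":
    if "--host" in sys.argv:
        print(json.dumps({}))
    else:
        print(json.dumps(INVENTORY))
```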
-
Hi there. Author of said inventory plugin.
If you ever wish to migrate, you should be able to retain most of what you did on the XOA side (I'm thinking of tags), but you'll have something that's more standard and requires less setup, as long as the XOA API is accessible to the machine running the playbook.
You can keep the groups with the composable groups in your inventory plugin configuration:
```yaml
# simple_config_file
plugin: community.general.xen_orchestra
api_host: 192.168.1.255
user: xo
password: xo_pwd
validate_certs: true
use_ssl: true
groups:
  kube-master: "name_label == 'kube-master'"
compose:
  ansible_port: 2222
```
-
Hey,
I'm using this plugin, and it took me a little while (around 30 minutes) to find the solution:
pip3 install websocket-client
It would be nice to add it to the doc/article.
Regards
-
Nice catch, let me ping @shinuza so he can fix the doc
-
@wowi42 Hi there. It's already in the documentation:
Also, you should see an error message if it's not installed.
-
Hi
Just started playing around with the XO API. Do we need a specific user, and a port to be opened on the firewall? I opened 8443 but still get connection refused.
```
h4yadm@ansible01:/opt/system/inventories/production$ ansible-inventory -i xen_orchestra.yml --list
[WARNING]:  * Failed to parse /opt/system/inventories/production/xen_orchestra.yml with auto plugin: [Errno 111] Connection refused
[WARNING]:  * Failed to parse /opt/system/inventories/production/xen_orchestra.yml with yaml plugin: Plugin configuration YAML file, not YAML inventory
[WARNING]:  * Failed to parse /opt/system/inventories/production/xen_orchestra.yml with ini plugin: Invalid host pattern 'plugin:' supplied, ending in ':' is not allowed, this character is reserved to provide a port.
[WARNING]: Unable to parse /opt/system/inventories/production/xen_orchestra.yml as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
{
    "_meta": {
        "hostvars": {}
    },
    "all": {
        "children": [
            "ungrouped"
        ]
    }
}
```
config:
```yaml
plugin: community.general.xen_orchestra
api_host: 10.10.1.120
user: xo
password: "pwd"
validate_certs: true
use_ssl: true
```
-
Hi,
It's hard to answer since you aren't providing any details on your XO installation. XOA or XO from the sources? If sources, how is it configured? Which port is it listening on?
-
@olivierlambert XO build from source listening on port 8443
-
If you followed the doc correctly (Node version, being entirely up to date), then it should work. Maybe it's the plugin. Any feedback from others in the community?
-
@hostingforyou said in Ansible with Xen Orchestra:
@olivierlambert XO build from source listening on port 8443
The plugin doesn't make any assumptions about the port.
Can you try with
api_host: "10.10.1.120:8443"
?
-
@shinuza Thanks, that works.
After changing to api_host: "10.10.1.120:8443":

```
[WARNING]:  * Failed to parse /opt/system/inventories/production/xen_orchestra.yml with auto plugin: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: IP address mismatch, certificate is not valid for '10.10.1.120'. (_ssl.c:997)
```

Setting validate_certs: false then gave me the working output. Looks very nice!
-
Is there a way to use the Ansible plugin for creating VMs in XCP-ng?
-
@hostingforyou Hi !
The best way to create VMs is with the Terraform provider for Xen Orchestra.
See https://xen-orchestra.com/blog/virtops1-xen-orchestra-terraform-provider/
-
@AtaxyaNetwork Looks good! Not sure whether to post the issue here as it's not Ansible related, but I get the following error:
```
$ terraform plan

Planning failed. Terraform encountered an error while generating this plan.

╷
│ Error: unexpected EOF
│
│   with provider["registry.terraform.io/terra-farm/xenorchestra"],
│   on provider.tf line 10, in provider "xenorchestra":
│   10: provider "xenorchestra" {
```
$ cat provider.tf

```hcl
# provider.tf
terraform {
  required_providers {
    xenorchestra = {
      source  = "terra-farm/xenorchestra"
      version = "~> 0.9"
    }
  }
}

provider "xenorchestra" {
  username = "xo"
  password = "password"
  url      = "ws://10.10.1.120:8443"
  insecure = true
}
```
$ cat vm.tf

```hcl
data "xenorchestra_pool" "pool" {
  name_label = "OTA"
}

data "xenorchestra_template" "vm_template" {
  name_label = "Ubuntu-22-template"
}

data "xenorchestra_sr" "sr" {
  name_label = "Tintri-Intern-Intern01"
  pool_id    = data.xenorchestra_pool.pool.id
}

data "xenorchestra_network" "network" {
  name_label = "LAN Private"
  pool_id    = data.xenorchestra_pool.pool.id
}
```
Any idea how to debug this?