Looks interesting!
We started using the xo terraform provider around 12 months ago, and then built a small HTTP service (Node/TypeScript) that talks to the xo-api to generate our Ansible inventory. We've been using both in production since then, so I'll share some of the details here.
We took the approach of implementing this as a service on our network, and then leveraging Ansible's ability to execute a shell script to retrieve the inventory.
In our environment, we decided it was OK for the inventory to only include VMs (or hosts) that have an IP address; if they don't, Ansible can't really work with them anyway, so that's fine for us. The inventory service therefore has a couple of env vars that control which entities and IPs get picked:
import * as env from 'env-var';

const config = {
  // no tag required by default
  required_tag: env.get('REQUIRED_TAG').default('').asString(),
  // any IP is valid for the inventory
  management_subnet: env.get('MANAGEMENT_SUBNETS').default('0.0.0.0/0').asArray(),
};
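For example, those might be set like this (illustrative values, not our real config):

REQUIRED_TAG=ansible_managed:true
MANAGEMENT_SUBNETS=10.0.0.0/16,10.1.0.0/16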
First off, we can require every VM or host to carry a tag (e.g. ansible_managed:true) before it appears in the inventory. It must then have an IP in our management subnet; if more than one IP is available (e.g. management and public), the service filters down to the management one.
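The filtering itself is straightforward. Here's a minimal sketch of the idea in TypeScript; the vm shape and helper names are simplified stand-ins, not the actual xo-api objects or our real code:

// plain IPv4 CIDR membership check (the 0.0.0.0/0 default matches everything)
function inSubnet(ip: string, cidr: string): boolean {
  const [net, bitsStr] = cidr.split('/');
  const bits = parseInt(bitsStr, 10);
  const toInt = (addr: string) =>
    addr.split('.').reduce((acc, octet) => (acc << 8) + parseInt(octet, 10), 0) >>> 0;
  const mask = bits === 0 ? 0 : (~0 << (32 - bits)) >>> 0;
  return ((toInt(ip) & mask) >>> 0) === ((toInt(net) & mask) >>> 0);
}

// pick the first address that falls inside any management subnet
function managementIp(addresses: string[], subnets: string[]): string | undefined {
  return addresses.find((ip) => subnets.some((cidr) => inSubnet(ip, cidr)));
}

// a VM/host is included only if it has the required tag (when one is set)
// and at least one IP inside the management subnets
function includeInInventory(
  vm: { tags: string[]; addresses: string[] },
  requiredTag: string,
  subnets: string[],
): boolean {
  if (requiredTag !== '' && !vm.tags.includes(requiredTag)) return false;
  return managementIp(vm.addresses, subnets) !== undefined;
}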
The HTTP API for the inventory service uses the same filtering syntax as xen-orchestra, so we can construct URLs to retrieve partial inventories. This is useful because we have dev, production, etc. pools, and it gives us an easy way to target a subset:
https://inventory.internal/inventory?filter=env:monitoring%20mytag:foo
The response to the above request would look like this:
{
  "all": {
    "hosts": [
      "monitoring-1.internal"
    ]
  },
  "_meta": {
    "hostvars": {
      "monitoring-1.internal": {
        "mytag": "foo",
        "ansible_group": "prometheus",
        "env": "monitoring",
        "inventory_name": "monitoring-1.internal",
        "ansible_host": "10.0.12.51",
        "xo_pool": "monitoring-pool",
        "xo_type": "VM",
        "xo_id": "033f8b6d-88e2-92e4-3c3e-bcaa01213772"
      }
    }
  },
  "prometheus": {
    "hosts": [
      "monitoring-1.internal"
    ]
  }
}
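For reference, this JSON follows Ansible's dynamic-inventory format: groups at the top level, per-host vars under _meta. A rough TypeScript shape, inferred from the example above rather than taken from the service's source:

type HostVars = Record<string, string>;

interface Group {
  hosts: string[];
}

interface Inventory {
  all: Group;
  _meta: { hostvars: Record<string, HostVars> };
  // every other top-level key is a group, e.g. "prometheus"
  [group: string]: Group | { hostvars: Record<string, HostVars> };
}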
This VM has the following tags in Xen Orchestra:
ansible_group
can be repeated, and places the VM/host into that group in the inventory. Other tags get split into key=value pairs and placed into the host vars.
xo_*
these vars are added from the information in the API.
ansible_host
will be our management IP.
inventory_name
is a slugified version of the VM name, though by convention our names are already sane.
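A rough sketch of how tags could be folded into host vars and groups; it assumes tags of the form key=value as described above, and is illustrative rather than the exact service code:

function tagsToInventoryVars(tags: string[]): {
  hostvars: Record<string, string>;
  groups: string[];
} {
  const hostvars: Record<string, string> = {};
  const groups: string[] = [];
  for (const tag of tags) {
    const idx = tag.indexOf('=');
    if (idx === -1) continue; // bare tags are ignored here (an assumption)
    const key = tag.slice(0, idx);
    const value = tag.slice(idx + 1);
    if (key === 'ansible_group') {
      groups.push(value); // ansible_group can be repeated
    } else {
      hostvars[key] = value;
    }
  }
  return { hostvars, groups };
}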
We also include hosts in the inventory, as we have various playbooks to run against them. All the same tagging and grouping applies to hosts as it does to VMs:
{
  ...
  "hostvars": {
    "xcp-001": {
      "ansible_group": "xen-host",
      "inventory_name": "xcp-001",
      "ansible_host": "10.0.55.123",
      "xo_pool": "monitoring-pool",
      "xo_type": "host",
      "xo_id": "92c1c2ab-fd1e-46e9-85f7-70868f1e9106",
      "xo_version": "8.2.0",
      "xo_product": "XCP-ng"
    }
  }
  ...
}
When we set up some infra for management by terraform/ansible, we'll typically use a combination of a shell-script inventory, static grouping, and extra group_vars if needed. For example, our /inventory directory contains:
01_inventory.sh
#!/bin/bash
curl -k 'https://inventory.internal/inventory?filter=k8s-cluster:admin' 2>/dev/null
02_kubespray - Kubespray has its own group-name convention, so we map between our tags and its group names:
[kube-master:children]
k8s-master
[etcd:children]
k8s-master
[kube-node:children]
k8s-node
k8s-monitoring
[k8s-cluster:children]
kube-master
kube-node
Executing ansible-playbook -i /inventory, where /inventory is a directory, will combine all the shell scripts and INI files into the final inventory. Nice!
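If you want to sanity-check what the merged inventory looks like before running a playbook, ansible-inventory can print the combined group tree:

ansible-inventory -i /inventory --graph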
I did think about trying to package this API directly as a plugin for XO, but haven't had time to look into that yet. Let me know if any of this looks interesting.