@guiand888 Hey, the plugin does support composite vars. However, I believe the idiomatic way to do what you want is this:

group_vars/
  gp1.yml
  gp2.yml

And then:

# gp1.yml
ansible_user: admin

# gp2.yml
ansible_user: anotheradmin
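Note that if a host ends up in both groups, Ansible merges group_vars of sibling groups in alphabetical order, so gp2's value wins here (you can change the order with ansible_group_priority). A quick way to verify, with a hypothetical host name:

# inventory.ini (hypothetical)
[gp1]
host-a

[gp2]
host-a

ansible-inventory -i inventory.ini --host host-a
# => "ansible_user": "anotheradmin"  (gp2 merges after gp1)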
@tuckertt said in XOA receipe not creating VIP address (balancer):
Hi,
Long-time user (xcp-ng), first-time commenter. I've attempted to use the recipe to deploy k8s; I'd thought about having a cluster for a while but never had the motivation to look into creating one, so the functionality of the recipe sounded awesome. Unfortunately, I've hit the same problem by the sounds of it. I can create a single control plane node with workers, but when attempting to deploy a more resilient configuration it stops at one node; the screen output reports that cloud-init failed, and the logs suggest it's an issue connecting to the VIP. Hopefully it's OK to upload my log in place of igorf's, but looking at it, it talks about checking the various containers, so I did for the VIP container and got:
root@cp-1:/home/debian# crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs 8f33bda832123
time="2024-07-16T09:16:08Z" level=info msg="Starting kube-vip.io [v0.8.1]"
time="2024-07-16T09:16:08Z" level=info msg="namespace [kube-system], Mode: [ARP], Features(s): Control Plane:[true], Services:[true]"
time="2024-07-16T09:16:08Z" level=info msg="prometheus HTTP server started"
time="2024-07-16T09:16:08Z" level=info msg="Using node name [cp-1]"
time="2024-07-16T09:16:08Z" level=info msg="Starting Kube-vip Manager with the ARP engine"
time="2024-07-16T09:16:08Z" level=info msg="beginning services leadership, namespace [kube-system], lock name [plndr-svcs-lock], id [cp-1]"
I0716 09:16:08.494929 1 leaderelection.go:250] attempting to acquire leader lease kube-system/plndr-svcs-lock...
time="2024-07-16T09:16:08Z" level=info msg="Beginning cluster membership, namespace [kube-system], lock name [plndr-cp-lock], id [cp-1]"
I0716 09:16:08.496428 1 leaderelection.go:250] attempting to acquire leader lease kube-system/plndr-cp-lock...
E0716 09:16:10.511560 1 leaderelection.go:332] error retrieving resource lock kube-system/plndr-svcs-lock: leases.coordination.k8s.io "plndr-svcs-lock" is forbidden: User "kubernetes-admin" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
E0716 09:16:10.511638 1 leaderelection.go:332] error retrieving resource lock kube-system/plndr-cp-lock: leases.coordination.k8s.io "plndr-cp-lock" is forbidden: User "kubernetes-admin" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
........( message loops ).... which, although I haven't really touched this stuff so I can't be sure, looks like it could possibly be related to https://github.com/kube-vip/kube-vip/issues/684
Hi,
Thank you for the report.
Can you tell us the version of your xoa-server plug-in? This is fixed in 0.29.1; you probably have version 0.29.0 or lower.
With regards,
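For anyone who wants to confirm the RBAC symptom from the log above before upgrading, a quick check (a sketch; run it with the same kubeconfig kube-vip is using):

kubectl auth can-i get leases.coordination.k8s.io -n kube-system
# "no" matches the "forbidden" errors in the kube-vip log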
Hi there. Author of said inventory plugin.
If you ever wish to migrate, you should be able to retain most of what you did on the XOA side (I'm thinking of tags), but you'll have something that's more standard and requires less setup, as long as the XOA API is accessible to the machine running the playbook.
You can keep the groups with the composable groups in your inventory plugin configuration:
# simple config file, e.g. my_inventory.xen_orchestra.yml
plugin: community.general.xen_orchestra
api_host: 192.168.1.255
user: xo
password: xo_pwd
validate_certs: true
use_ssl: true
groups:
  kube-master: "name_label == 'kube-master'"
compose:
  ansible_port: 2222
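To check what the plugin produces (assuming the config is saved with the *.xen_orchestra.yml suffix the plugin expects for auto-detection):

ansible-inventory -i my_inventory.xen_orchestra.yml --list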
@typerlc @dc31xx @olivierlambert
Currently the Kubernetes version list is pulled from:
https://api.github.com/repos/kubernetes/kubernetes/releases
which lists every available Kubernetes release, but the latest version available in the xenial repo is v1.28.2, as mentioned by @typerlc. You can use that while we figure out the best way to fix the issue on our side.
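For reference, this is the kind of listing involved (a sketch using curl and jq; GitHub paginates, so this only shows the most recent releases):

curl -s https://api.github.com/repos/kubernetes/kubernetes/releases | jq -r '.[].tag_name'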
@igorf said in XOA receipe not creating VIP address (balancer):
Hi,
I have tried to create a k8s (v1.30) cluster several times using the Hub Recipes automation in XOA. If I choose only one control node, the configuration completes successfully and I can use that cluster. But if I choose the scenario with fault-tolerant control planes, the installation fails and never completes: only the first control plane is created (only one VM for k8s is created), and in the VM logs I can see that the control node is trying to connect to the VIP address (balancer), which does not exist. The VM for the VIP/balancer is never created automatically.
Did I miss something? Should the VIP (balancer) be created/configured automatically, or do I need to create it manually first?
I was trying to find more documentation on this subject but was unlucky in finding it. XOA is on version 5.93.1 - XOA build: 20240401.
XCP-ng is on 8.3 beta 2. Can you please advise how to proceed if I want to have a fault-tolerant k8s cluster?
Thank you in advance and best regards, Igor
Hello,
Can you please send the output of /var/log/cloud-init-output.log?
With regards
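(On the affected control-plane VM, the log lives at the standard cloud-init location; something like this shows the end of the run:)

tail -n 100 /var/log/cloud-init-output.log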
@xyhhx said in How to kubernetes on xcp-ng (csi?):
FWIW, I'm about to set up a Kubernetes cluster on xcp-ng. I'm still in the process, but I'm planning on just passing disks as HBA storage to worker/storage nodes, then using OpenEBS Jiva (or maybe Rook/Ceph).
If anybody is interested in how that goes, I can post about it later on.
Jiva is probably not the way to go; I believe the supported way is to use Mayastor (via nvme-tcp).
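If you go the Mayastor route, note that it serves volumes over NVMe-oF TCP, so the initiator module has to be available on every node that mounts volumes (a quick check; nvme_tcp is the module name in mainline kernels):

sudo modprobe nvme_tcp
lsmod | grep nvme_tcp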
@mohammadm said in How to kubernetes on xcp-ng (csi?):
@olivierlambert said in How to kubernetes on xcp-ng (csi?):
Is it working correctly now?
Currently it is stuck on this.
[FAILED] Failed to start Execute cloud user/final scripts.
cp-1 login:
I did not specify login credentials.
The error is probably earlier than that. You can see the full output of the cloud-init script in /var/log/cloud-init-output.log.
The error you are seeing is most likely due to the fact that the Kubernetes cluster could not be initialized (kubeadm init failed).
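A quick way to spot where it went wrong in that log (a sketch; the pattern is a guess at what the recipe's script prints around kubeadm):

grep -n -i -B2 -A5 'kubeadm' /var/log/cloud-init-output.log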
@hostingforyou said in Ansible with Xen Orchestra:
@olivierlambert XO build from source listening on port 8443
The plug-in doesn't make any assumptions about the port.
Can you try with api_host: "10.10.1.120:8443"?
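That is, something like this in the plugin config (a sketch; the TLS settings are assumptions for a from-source build):

plugin: community.general.xen_orchestra
api_host: "10.10.1.120:8443"
use_ssl: true
validate_certs: false  # from-source builds often run with a self-signed cert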
@olivierlambert said in Exports OVA Timeout...:
I think @shinuza is now working on this. Don't forget to be sure you are using the latest commit.
Also, please try with XOA too, in the latest channel, to compare the result.
I'm trying to find a way to schedule these long-running tasks so we are not dependent on the browser staying open and connected to XOA. We are also looking at ways to improve the export speed.
@wowi42 Hi there. It's already in the documentation.
Also, you should see an error message if it's not installed.
@tristis-oris Nope, unfortunately:
Using hostnames or IPs as the inventory ID means there's no way to, for example, start a VM using the inventory alone; but you can do it by combining the inventory with a JSON-RPC call.
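For illustration, a minimal sketch of that combination, assuming the inventory plugin exposes each VM's uuid as a hostvar and that xo-cli is already registered against your XO instance (the group name follows the examples above):

# start_vms.yml (hypothetical playbook)
- hosts: kube-master
  gather_facts: false  # the target VMs may be powered off; everything runs on the controller
  tasks:
    - name: Start each VM through the XO JSON-RPC API
      delegate_to: localhost
      ansible.builtin.command: "xo-cli vm.start id={{ uuid }}"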