Running two hosts on the beta in my lab and so far have had no issues at all (touch wood). Looking forward to this hitting production.
@badrAZ Yep.. that did the trick..
All working now.. nice.
I'm trying to follow the directions here (Docker-in-XCP-ng) to get a CoreOS host working so I can play with Docker on XCP-ng.
However, I'm not getting far.
I have finally managed to get the CoreOS VM installed, but I'm not seeing any sign of Docker on the interface anywhere.
I can't SSH into the VM because the SSH keys from the cloud config seem to have been ignored (and I see from the CoreOS docs that cloud-config is deprecated anyway; not sure if that's the issue), so I'm stuck using the XO console for the moment, which is a pain because you can't paste into it.
I manually added my SSH keys to the core user's .ssh/authorized_keys file, but to no avail.
Has anyone managed to get this working? I'm curious how.
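In case it helps anyone hitting the same wall: since cloud-config is deprecated on newer CoreOS releases, SSH keys generally need to go in an Ignition config instead. A minimal sketch (the key string is a placeholder, and the exact Ignition spec version depends on which CoreOS release you're installing):

```json
{
  "ignition": { "version": "3.0.0" },
  "passwd": {
    "users": [
      {
        "name": "core",
        "sshAuthorizedKeys": [
          "ssh-ed25519 AAAA... user@example"
        ]
      }
    ]
  }
}
```

Older Container Linux releases used the Ignition 2.x schema instead, so check which one your image expects.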
I should also mention that although the CoreOS VM isn't complaining that the tools are missing, I'm not seeing an IP address in the networking tab either.
OK, seems I wasn't trying hard enough.
I was trying to mount the tools under the ISO list, but when I looked further down the list to XCP-ng Tools, it turns out I can mount the tools from there as long as I do it manually (mount /dev/sr0 /media/blahblah).
Installed now, all good.
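For anyone else who gets stuck at the same spot, the manual mount looks roughly like this. The mount point name is arbitrary, and the install script name may differ between tools versions, so check the ISO contents first:

```shell
# Attach the XCP-ng Tools ISO to the VM in XO first, then inside the guest:
sudo mkdir -p /media/tools          # arbitrary mount point
sudo mount /dev/sr0 /media/tools    # the ISO usually appears as /dev/sr0
ls /media/tools                     # find the install script on the ISO
sudo /media/tools/install.sh        # script name may vary by tools version
sudo umount /media/tools
```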
Groan... maybe testing with a machine that actually boots in KVM to start with would be a good idea... trust me to pick a broken machine to test with... now, let's try that again, shall we? Sorry for wasting your time all... will report back how I go with a working machine.
Just a note for dumb bums like me... I tried out XO Lite (coming along nicely) but it took me a bit to work out that it needed to be installed on the pool master.
I put it onto one of the slaves, and tried to log in, but got no feedback at all, just nothing changed.
I eventually looked in the XO logs and found it was complaining about being on the slave, not the master... duh... obvious when you think about it, but some feedback in the GUI would help.
Not wanting to be a "me too" but I'm seeing exactly the same thing.
Some VMs (unfortunately large ones in my case) are doing daily fulls rather than deltas.
Backup taking 14 hours instead of 2 or 3.
Similar setup to @S-Pam, with NFS as the remote target.
just having a look at the new kubernetes cluster creation (was about to start trying microk8s when I found this good stuff).
I have run the creation, got the master and three workers running, got the green finished popup (after a long time), and can log into the master and nodes happily. However, running kubectl get nodes on the master gives me:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Not sure where to go from here to be honest.
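For reference, that error usually means kubectl has no kubeconfig and is falling back to localhost:8080. Assuming the cluster recipe uses kubeadm under the hood (I haven't verified that), pointing kubectl at the admin kubeconfig on the master is the usual fix:

```shell
# Run on the master node; the path assumes a kubeadm-style install.
mkdir -p "$HOME/.kube"
sudo cp /etc/kubernetes/admin.conf "$HOME/.kube/config"
sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"
kubectl get nodes
```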
Thanks for the input @tony and @stormi. Turns out I've managed to get most of the machines back on the second node after getting some help to force it back to being the master, powering off dead VMs, etc., so I think I've avoided having to do this.
It's really useful info, however, and I will probably give this a try at some point just so I can if I ever need to.
I'm a bit stuck here. My XCP system has crashed horribly (in another thread) and I really need to get one of the VM's running somewhere reasonably quickly.
As the VM was running on shared storage I have the VHD file on the storage, but the question is: which one of the dozens of VHD files is the one I need?
Is there any way to determine which VHD was attached to which VM without the ability to connect to a running XO or xsconsole?
I'm guessing there probably isn't, but thought it worth the ask.
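For what it's worth, one low-tech approach: on the SR the VHD filenames are VDI UUIDs, so without pool metadata you're left inspecting each disk's contents. A sketch using qemu tools (the SR path and device numbers are placeholders you'd substitute; "vpc" is qemu's name for the VHD format):

```shell
# First pass: list each VHD's virtual size and parent chain -
# often enough to narrow down the candidates.
for f in /path/to/sr/*.vhd; do
    echo "== $f"
    qemu-img info "$f"    # shows virtual size and any backing (parent) VHD
done

# To look inside a candidate disk, attach it read-only via qemu-nbd:
sudo modprobe nbd max_part=8
sudo qemu-nbd --read-only -f vpc -c /dev/nbd0 /path/to/sr/candidate.vhd
sudo mount -o ro /dev/nbd0p1 /mnt    # partition number may differ
ls /mnt                              # identify the VM by its filesystem
sudo umount /mnt
sudo qemu-nbd -d /dev/nbd0
```

Note that delta backups leave VHD chains, so a child VHD may be mostly empty and only make sense mounted together with its parent.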
Yep.. that did it... started it on the older host, and can now migrate it backwards and forwards with no issues...
Thanks Olive... as always.