Latest posts made by crembz
-
Intel Arc & GVT-g
Has anyone tried getting an Intel Arc working with GVT-g for vGPU? I've seen that some people have tried it with the iGPUs.
-
RE: VM Autostart order
@olivierlambert Yes I was playing around with that, but I couldn't seem to get it behaving properly, so I went looking through the doco, and then came here. I must be missing something.
Are VMs started in a random sequence, or will everything boot storm?
If I add delays, how can that control the order with a larger sequence?
-
VM Autostart order
I'm just getting my head around XCP-ng/XO after spending a fair few years in ESXi and Proxmox.
I'm trying to understand how I can control the order in which VMs start and shutdown.
e.g. I want XO up first, followed by any primary services, followed by 'the rest'.
Proxmox has the concept of both order and delay. Is there a similar mechanism, or a different way of achieving this, in XO?
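From poking around with xe, it looks like each VM carries order and start-delay params, so I was expecting something along these lines to work (the UUIDs are placeholders, and I haven't verified that the auto power-on path actually honours these fields):
# Lower order starts first; start-delay is seconds to wait before starting the next VM
xe vm-param-set uuid=<xo-vm-uuid> order=0 start-delay=60
xe vm-param-set uuid=<infra-vm-uuid> order=1 start-delay=30
xe vm-param-set uuid=<other-vm-uuid> order=2
# Auto power-on itself appears to be an other-config flag on both the pool and the VM
xe pool-param-set uuid=<pool-uuid> other-config:auto_poweron=true
xe vm-param-set uuid=<xo-vm-uuid> other-config:auto_poweron=true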
-
RE: programmatically connect SR (NFS)
@Danp That worked a treat thanks for the link!
-
programmatically connect SR (NFS)
I'm somewhat new to XCP-ng and am trying to figure out a way to connect an NFS SR (or several) once an NFS server comes online.
Use Case:
Following a power outage there is a race condition between XCP-ng and my NAS. Typically the XCP-ng hosts start first:
- The SR does not connect to the NAS, as it is unavailable at that time
- Any VMs hosted on the NFS server fail to start
- The SR remains disconnected even once the NAS is up
Currently I need to manually reconnect the SR and start the VMs.
I'm trying to understand how I can have the NAS make an API call to XO to reconnect the SRs and trigger the VMs to boot.
Using xo-cli I'm able to issue:
xo-cli sr.connectAllPbds id={SRUID}
but I'm not sure how to automate this on XO or call it via an API call from the NAS.
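This is roughly the boot hook I'm picturing on the NAS side (a sketch only; the token, URL and UUIDs are placeholders, and it assumes xo-cli can be installed and registered on the NAS):
#!/bin/sh
# Hypothetical post-boot hook on the NAS; all IDs below are placeholders
xo-cli register --token "$XO_TOKEN" https://xo.example.lan   # authenticate against XO
xo-cli sr.connectAllPbds id=<sr-uuid>                        # reconnect the NFS SR
xo-cli vm.start id=<vm-uuid>                                 # start a VM that lives on that SR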
Can anyone help guide me here?
Cheers
-
RE: NAS/Home/Lab heterogeneous pools and failure
@olivierlambert Interesting, I do like how XO manages all hosts in the one place. My understanding though is that HA is not possible across pools, right?
I tend to pick up hosts over time, so having a homogeneous pool is practically impossible.
-
NAS/Home/Lab heterogeneous pools and failure
I'm new to XCP-ng, so please bear with me. I'm in the process of rebuilding my home network and lab and have been having a play with XCP-ng. I'm coming from years of running Proxmox, and KVM before that.
I'm trying to understand whether I can achieve the following using XCP-ng:
I have a cluster of 7 hosts.
- Virtual NAS with PCIe passthrough - Threadripper 2950X
- One host for critical network infra (FW, DNS, VPN, Omada) - i5-8500T
- Virtual ESXi host - 5900X
- Virtual Hyper-V host - i5-10500T
- Virtual Nutanix host - i7-6700
- Virtual KVM host - i5-6500T
- Docker host - i5-8500
I run everything virtual to easily snapshot and revert states; I'm testing a bunch of automation across hypervisors and public clouds.
Hosts 1, 2, 3 and 7 are connected with 10Gb NICs; the others all have 1Gb NICs. The NAS hosts the VM drives for all but host #2.
I'd love to have everything in one giant pool and turn all bar the critical network host off at night (which is why host #2 runs its VMs on local storage) to save on power. I've not found an elegant way to do this on PVE. Are there any clever ideas on how I could achieve this on XCP-ng?
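To make it concrete, this is the kind of nightly job I'm imagining on the always-on host (a sketch only; it assumes the remaining VMs can be evacuated to another pool member over shared storage, and the UUID/MAC values are placeholders):
# Evening: drain a host and power it off (placeholder UUIDs)
xe host-disable uuid=<host-uuid>
xe host-evacuate uuid=<host-uuid>    # live-migrates its VMs elsewhere; needs shared storage
xe host-shutdown uuid=<host-uuid>
# Morning: wake it back up, e.g. via wake-on-LAN
etherwake -i eth0 <host-mac>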
Also, I noticed VIFs are set up pool-wide and mapped to a PIF. How do you handle hosts with different NIC counts? In the 10Gb boxes my PIF0 is mapped to 1Gb ports which are not being used (I'm using the 10Gb ones).
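For reference, the mismatch shows up when listing PIFs (a standard xe inspection command):
# Show which physical device backs each network on each host
xe pif-list params=host-name-label,device,network-name-label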
NOTE - I've been able to have both AMD and Intel hosts in the same cluster in PVE and live-migrate guests between them. I don't think having mixed CPU vendors in the same pool works in XCP-ng. My understanding, though, is that I can cold migrate between pools, which I might be able to live with.