To automatically update the hosts? I expect a host reboot would be required for the updates to take effect, but how can this be automated if the host has running VMs?
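In case it helps anyone later: my understanding is the usual pattern is to empty the host first, which I believe is roughly what XO's Rolling Pool Update automates. A rough manual sketch with the xe CLI (the UUID is a placeholder, and the yum step runs on the host being patched, not from the master):

xe host-disable uuid=<host-uuid>     # stop new VMs being placed on this host
xe host-evacuate uuid=<host-uuid>    # live-migrate its running VMs to other pool members
yum update                           # on the host itself: install pending patches
xe host-reboot uuid=<host-uuid>      # safe to reboot once the host is empty

Note that evacuating live VMs needs them to be movable (shared storage, or enough headroom for storage migration).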
-
RE: VM association with shared storage
-
RE: Alarms in XO
This host does not run any VMs; it is just used for CR.
I've increased the dom0 RAM to 4GB and have had no more alarms.
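For anyone wanting to do the same: as far as I know this is the documented way to raise dom0 memory on XCP-ng (adjust the value to suit; a host reboot is required for it to take effect):

/opt/xensource/libexec/xen-cmdline --set-xen dom0_mem=4096M,max:4096M   # then reboot the host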
-
Alarms in XO
When I check "Health" in XO everything appears fine, but I do see a number of Alarms. The problem is I have no idea what they mean. I do not think I have any system performance issues, but I am sure these should not be ignored.
HST150 is a host for disaster recovery using CR
-
RE: VM association with shared storage
Why did I not do this sooner
-
VM association with shared storage
I have recently changed our setup to use FreeNAS shared storage for VMs. Now that I have shared storage and two hosts, I can move running VMs between hosts. This makes it easy to patch and restart a host by moving the VMs off it first.
As opposed to moving the VMs, I could schedule a maintenance window, shut down the VMs, then patch and reboot the host. In this scenario, if the host were to fail I expect nothing would be lost, as the shared storage is independent. I could then simply start the VMs on the remaining host, meaning there is no hard link between host and VM.
Does this sound correct?
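On the recovery step, my understanding is that a VM can be started on a chosen host from the CLI if it comes to that. A hedged sketch (UUID and host name are placeholders; I believe vm-reset-powerstate is only needed, and only safe, when the original host is definitely dead):

# if the failed host still claims the VM is running on it, clear that first (use with care)
xe vm-reset-powerstate uuid=<vm-uuid> force=true
# then start the VM on the surviving host
xe vm-start uuid=<vm-uuid> on=<surviving-host-name>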
-
RE: Backup Issue: "timeout reached while waiting for OpaqueRef"
I believe this issue was resolved when the health check system was changed to detect network connectivity at startup, so it did not need to wait for the entire VM to boot. It needs the Xen tools to be installed. I have not had an issue since this change.
-
VM resource usage
I have a host with 48 CPUs and 96GB RAM
I understand that the RAM cannot be over-allocated to VMs or I get the "no hosts available" message at startup. However, I understand CPUs can be, so I could allocate all VMs 48 CPUs and they will only utilise what they require. Is this correct, and if so, is there any reason not to allocate the max CPUs to a VM?
Further to this, if I only allocate the VMs a smaller number of CPUs, do they all overlap on the first X CPUs and leave the others unused? I.e. if VM1 has 8 then it will only use the first 8, and if VM2 has 4 then it will only use the first 4.
I ask as I currently have multiple VMs using fewer CPUs than the host has, however the status only shows the first 15 CPUs in use on the host:
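For what it's worth, my understanding is that vCPUs are not tied to the first N physical CPUs; the Xen scheduler floats them across all of the host's CPUs unless you pin them explicitly. The per-VM counts can also be adjusted from the CLI; a rough sketch (the UUID is a placeholder, and I believe the VM must be halted to raise VCPUs-max):

xe vm-param-set uuid=<vm-uuid> VCPUs-max=8          # upper limit; change while the VM is halted
xe vm-param-set uuid=<vm-uuid> VCPUs-at-startup=8   # must not exceed VCPUs-max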
-
RE: Best strategy for Continuous Replication
Here's the new model. We've tried a few combinations and I think with TrueNAS shared storage this will now work well.
-
RE: Best strategy for Continuous Replication
For anyone else looking to connect TrueNAS with XCP-NG
-
RE: Best strategy for Continuous Replication
In the TrueNAS NFS share settings I set this, and it now works.
-
RE: Best strategy for Continuous Replication
I have set up TrueNAS with an NFS share, however I am unable to connect to it as a remote.
Is there a guide on how to configure connecting XO to NFS?
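In case it helps others: an NFS remote is added in XO under Settings → Remotes (server address plus exported path), and as far as I know the mount is made by the machine running XO itself, not by the hosts. When it refuses to connect, I found it useful to first prove the export is reachable from that machine; a hedged sketch with placeholder addresses and paths:

showmount -e <truenas-ip>                                    # list the exports TrueNAS offers (needs nfs-utils)
mkdir -p /tmp/nfstest
mount -t nfs <truenas-ip>:/mnt/<pool>/<share> /tmp/nfstest   # manual mount test
umount /tmp/nfstest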
-
RE: Best strategy for Continuous Replication
Makes perfect sense.
I expect having separate storage for the production VMs and CR VMs makes sense too.
I am now thinking a good robust model would be:
- One or more production hosts in a single pool (allows host migration for updates)
- One TrueNAS Scale for production shared storage (see the sketch after this list)
- One CR host with local storage
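For the shared-storage piece, a hedged sketch of attaching the TrueNAS NFS export to the pool as a shared SR (run on the pool master; server address and path are placeholders):

xe sr-create type=nfs shared=true content-type=user name-label="TrueNAS-prod" \
  device-config:server=<truenas-ip> device-config:serverpath=/mnt/<pool>/<vm-share>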
-
Best strategy for Continuous Replication
I had a server dedicated to CR that was part of my pool.
I recently lost the pool master and in turn lost access to the CR host too.
The official docs state that the CR can be used if the main pool fails, which suggests having the CR host as part of the pool is not a good idea.
Is it best practice to not have the CR host as part of the main pool?
Alternatively, would a better setup not be having multiple XCP-ng hosts with central shared storage for both production VMs and CR VMs? This way, if a single XCP-ng host fails, the CR VMs can easily be started on the other host. A variation of this would be to have two shared storage repos, one for production VMs and one for CR VMs.
I am keen to hear others' thoughts on this.
-
RE: Host failure after patches
All this complexity makes me question the advantages of having all hosts in the same pool.
-
RE: Host failure after patches
My setup is pretty basic.
I have two hosts in the pool, one for running VMs on local storage and one for DR backups on local storage.
I'd like to set up shared storage so I could run the VMs on multiple hosts and seamlessly move them between hosts without migrating storage too. To set up shared storage, would this be on an XCP-ng host or totally independent of XCP-ng?
-
RE: Host failure after patches
Can I restart the slave and install patches if the master has not been patched yet?
-
RE: Host failure after patches
I have restarted the toolstack but it made no difference.
When I view hst110 I see:
I was tempted to restart it, however it has patches pending installation after a restart, and as the master is not fully patched I thought it best not to restart it. I understand the master needs to be patched first.
I just checked xsconsole on hst110 and it still shows the pool master as unavailable.
Do I need to change the pool master used by the slave?
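If it is the same situation I hit, my understanding is that a slave stuck pointing at a dead master can be re-pointed from the slave itself; a hedged sketch (the address is a placeholder for the new master's IP):

# run on the stuck slave (hst110), not on the master
xe pool-emergency-reset-master master-address=<new-master-ip>
xe-toolstack-restart   # then restart the toolstack on the slave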
-
RE: Host failure after patches
Apologies, I missed that. I have run the command and now see:
hst103 is the new pool master. hst100 is the old pool master that failed.
In XO I can only see hst103 under hosts however all three hosts are listed under the pool:
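One more thing that may be worth running on the new master; as far as I know this tells any slaves still pointing at the old master to re-attach to this one:

xe pool-recover-slaves   # run on the new pool master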
-
RE: Host failure after patches
No I didn't. Does this need to be run on the slave itself or the new master?
When you say "after selecting a new master" do you mean after I did this on the new master?
xe pool-emergency-transition-to-master
Edit:
Found this, which shows the command is run on the new pool master:
https://www.ervik.as/how-to-change-the-pool-master-in-a-xenserver-farm/