updated and tested:
test pool (2 hosts), iSCSI storage
no problems so far.
(live-migration, snapshots, changed master in pool)
guest tools updated on
Debian 10 / 11 / Rocky Linux 8.5
Server:
Intel S5520UR Dual Xeon E5645
test-pv64-cpuid-faulting SKIP
test-pv64-pv-fsgsbase SKIP
with or without xl set-parameters ept=no-exec-sp
@Jsawyer77
You could boot into Directory Services Restore Mode (DSRM) and perform a non-authoritative restore. The missing data will then be replicated from another DC.
A simple backup and restore with XO is not possible if any other DC remains online (i.e. if you have more than one DC).
Please note that the script below is a combination of various scripts found on the web. I have modified it to my needs as far as I am able to do so.
We use a two-node shared-storage pool with the HA-Lizard extension in combination with the 'vapp' function (in order to start and stop VMs in a defined order and with defined delays). If you do not use it, you can strip those portions out of the script below. The 'sleep' calls are not strictly needed either, but I feel better with them.
If you find something to improve, I am happy to learn from you.
#!/bin/bash
# XenCenter Custom Field for HA-Lizard HA
XC_FIELD_NAME=ha-lizard-enabled
# Put your pool uuid here
POOL_UUID="your_pool_UUID"
# get uuid of pool master
MASTER_UUID=`xe pool-list params=master --minimal`
# get uuid of current host
CURRENT_HOST_UUID=`grep -i installation_uuid /etc/xensource-inventory | egrep -o "[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}"`
# Check if current host is pool master, as only pool master should run this script
if [ "$CURRENT_HOST_UUID" != "$MASTER_UUID" ]
then
###this host is not the pool master - exit here
exit
fi
# This is supposed to switch off HA-Lizard VM restart
xe pool-param-set uuid=$POOL_UUID other-config:XenCenter.CustomFields.$XC_FIELD_NAME=false
sleep 5s
###enumerate uuids of all vApps in the pool and shut them down
for VAPP in `xe appliance-list params=uuid | egrep -o "[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}"`
do
xe appliance-shutdown uuid=$VAPP
done
sleep 10s
###enumerate uuid's of all _running_ VMs in the pool
for VM in `xe vm-list is-control-domain=false power-state=running params=uuid | egrep -o "[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}"`
do
###perform a clean shutdown of the VM
xe vm-shutdown vm=$VM
done
sleep 5s
###put all XCP-NG hosts in the pool (including the master) into maintenance mode
for HOST in `xe host-list params=uuid --minimal | egrep -o "[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}"`
do
if [ "$HOST" != "$MASTER_UUID" ]
then
###disable every host except the master (maintenance mode)
xe host-disable uuid=$HOST
sleep 10s
else
###disable the master as well
xe host-disable uuid=$HOST
fi
done
sleep 10s
###Shutdown all XCP-NG hosts in the pool except the master
for HOST in `xe host-list params=uuid --minimal | egrep -o "[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}"`
do
if [ "$HOST" != "$MASTER_UUID" ]
then
###shut down every member host (the master is handled last)
xe host-shutdown host=$HOST
fi
done
sleep 10s
# Before we shut down the pool master we turn HA-Lizard HA back on,
# because after the next start we want the VMs in the pool running again!!!
xe pool-param-set uuid=$POOL_UUID other-config:XenCenter.CustomFields.$XC_FIELD_NAME=true
###finally shutdown pool master
for HOST in `xe host-list params=uuid --minimal | egrep -o "[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}"`
do
if [ "$HOST" = "$MASTER_UUID" ]
then
###shut down the pool master itself
xe host-shutdown host=$HOST
fi
done
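For reference, here is only a sketch (untested by me) of how such a script could be hooked into apcupsd: as far as I understand apccontrol, it runs an executable named after the event (e.g. doshutdown) from /etc/apcupsd if one exists, and an exit code of 99 tells it to skip its own default action. The path /usr/local/bin/pool-shutdown.sh is just a placeholder for wherever the script above is stored.
#!/bin/bash
# /etc/apcupsd/doshutdown - called by apccontrol when apcupsd decides to shut down
# run the pool shutdown script from above (placeholder path, adjust to your setup)
/usr/local/bin/pool-shutdown.sh
# exit code 99 should tell apccontrol to skip its own default shutdown action,
# since the script above already shuts down the pool master itself
exit 99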
Hi @randomlyhere ,
So far I have installed apcupsd and copied the configuration from my existing XenServer 6.5 pool, which was tested and has proven its intended functionality in production more than once (shutting down first the vApps, then the remaining VMs, and finally the pool member hosts and the pool master).
Testing on the new XCP-NG pool is still to be done and will happen in the next three weeks.
So I will report here the results after testing.
I am using the vapp approach with XCP-NG Center. All VMs are put in one group and I have set a fixed start order and delay for each VM.
I think it should be possible to set this up via the CLI as well, but I have not done this myself.
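As an untested sketch, using the standard xe appliance and VM parameters (appliance, order, start-delay), it might look roughly like this; the vApp and VM names below are placeholders:
# create an empty vApp (appliance) in the pool
APPLIANCE_UUID=`xe appliance-create name-label="ups-shutdown-group"`
# attach a VM to the vApp and give it a start order and a start delay in seconds
VM_UUID=`xe vm-list name-label="my-vm" --minimal`
xe vm-param-set uuid=$VM_UUID appliance=$APPLIANCE_UUID
xe vm-param-set uuid=$VM_UUID order=1
xe vm-param-set uuid=$VM_UUID start-delay=30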
sorry, I was not detailed enough.
In order to use your APC UPS via the management NIC or a USB cable you have to install the "apcupsd" package.
In the config file apcupsd.conf for apcupsd located in
/etc/apcupsd
you can set/define how your UPS communicates with your host(s). The possible parameters are well documented in this file.
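Just as an illustration (not my production file), a minimal apcupsd.conf for a USB-connected APC UPS could look roughly like this; the name and thresholds are placeholders:
## /etc/apcupsd/apcupsd.conf (minimal example, adjust to your UPS)
## free-form name for this UPS (placeholder)
UPSNAME myups
## cable and driver type: "usb" for a USB-connected unit,
## network cards would use other UPSCABLE/UPSTYPE settings (e.g. snmp)
UPSCABLE usb
UPSTYPE usb
## DEVICE is left empty for USB autodetection
DEVICE
## shut down when the battery falls below 20 % or less than 5 minutes remain
BATTERYLEVEL 20
MINUTES 5
## 0 = rely on BATTERYLEVEL/MINUTES instead of a fixed on-battery timer
TIMEOUT 0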
I have not modified any iptables entry to work with my systems.
I am answering myself:
As I do not want to install unwanted and unknown packages for a simple and small purpose, I have chosen to go the apcupsd route.
My old pool was running without a hassle and has proven its intended function with that setup, so there is no reason to change to NUT.
However, I have found a script from a NUT install which seems to be much more elegant than the currently used one.
I will check, test and report here the results.
I am setting up a new pool with XCP-NG 8.2.1 to replace an existing pool with XenServer 6.5.
In this forum I cannot find anything about apcupsd and only limited information about NUT.
On my old pool I used apcupsd and I was planning to use it again.
The footprint on Dom0 is very limited: only one package plus one dependency.
If I followed the forum post about the NUT service, I would have to install one package plus 51 dependencies!
In order to follow the top rule - keep Dom0 clean and mostly untouched - I wonder whether the NUT way should really be the "recommended" one.
Any thoughts and opinions about this topic are welcome!
the install process went fine in BIOS mode!
And yes, the checksum is correct.
Yes, a SanDisk USB stick (Ventoy) in UEFI boot mode. I had also created a standard DVD-ROM in order to exclude any differences caused by that kind of boot setup.
@john-c
PXE boot ROM for network boot. (Never used)
@john-c
The system freezes and the last screenshots are visible above.
The Intel BIOS (64) already includes EFI support, so I can select EFI to boot the install media, and everything works fine until the step where the boot entries are created; the installer gets that far correctly, but during the creation of the EFI boot entries my install dies.
@john-c
Intel Server (SR2625/S5520UR / Dual Xeon E5645)
@yann
Yes, it was a fresh install. I had removed any partition layout from the hardware RAID1 via GParted before starting from scratch.
The previous 8.2.1 install was also created in BIOS mode. For me it is clear that this old EFI Intel BIOS from 2012 lacks support for this kind of install process.
I have started the installer again in BIOS mode and finished the process successfully. It seems that the "EFI" mode of the old Intel server is the problem.
As this old equipment is only used as a testing playground I could stay with it as it is. The hardware will never be used as a production system.
The system freezes after the last screen and it is difficult to catch the log just before the freeze...