updated and tested:
test pool (2 hosts), iSCSI storage
no problems so far.
(live-migration, snapshots, changed master in pool)
guest tools updated on
Debian 10 / 11 / Rocky Linux 8.5
Server:
Intel S5520UR Dual Xeon E5645
test-pv64-cpuid-faulting SKIP
test-pv64-pv-fsgsbase SKIP
with or without xl set-parameters ept=no-exec-sp
@Jsawyer77
You could boot into the Active Directory Directory Services Restore Mode (DSRM) and perform a non-authoritative restore. The missing data will then be replicated from another DC.
A simple backup and restore with XO is not possible if any other DC remains online (i.e. if you have more than a single DC).
Please note that the script below is a combination of various scripts found on the web. I have modified it to my needs as far as I am able to.
We use a two node shared storage pool with the HA-Lizard extension in combination with the 'vApp' function (in order to start and stop VMs in a defined order and with defined delays). If you do not use it, you can strip those portions out of the script below. The 'sleep' calls are also not strictly needed, however I feel better with them.
If you find something to improve, I am happy to learn from you.
#!/bin/bash
# XenCenter Custom Field for HA-Lizard HA
XC_FIELD_NAME=ha-lizard-enabled
# Put your pool UUID here
POOL_UUID="your_pool_UUID"
# get uuid of pool master
MASTER_UUID=`xe pool-list params=master --minimal`
# get uuid of current host
CURRENT_HOST_UUID=`cat /etc/xensource-inventory | grep -i installation_uuid |egrep -o "[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}"`
# Check if current host is the pool master, as only the pool master should run this script
if [ "$CURRENT_HOST_UUID" != "$MASTER_UUID" ]
then
### not the pool master: exit the script
exit
fi
# This is supposed to switch off HA-Lizard VM restart
xe pool-param-set uuid=$POOL_UUID other-config:XenCenter.CustomFields.$XC_FIELD_NAME=false
sleep 5s
###enumerate UUIDs of all vApps in the pool and shut them down
for VAPP in `xe appliance-list params=uuid | egrep -o "[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}"`
do
xe appliance-shutdown uuid=$VAPP
done
sleep 10s
###enumerate UUIDs of all running VMs in the pool
for VM in `xe vm-list is-control-domain=false power-state=running params=uuid | egrep -o "[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}"`
do
### shut down each running VM
xe vm-shutdown vm=$VM
done
sleep 5s
###put all XCP-ng hosts in the pool into maintenance mode (host-disable)
for HOST in `xe host-list params=uuid --minimal | egrep -o "[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}"`
do
if [ "$HOST" != "$MASTER_UUID" ]
then
### put each host except the master into maintenance mode
xe host-disable uuid=$HOST
sleep 10s
elif [ "$HOST" = "$MASTER_UUID" ]
then
### put the master into maintenance mode
xe host-disable uuid=$HOST
fi
done
sleep 10s
###Shutdown all XCP-ng hosts in the pool except the master
for HOST in `xe host-list params=uuid --minimal | egrep -o "[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}"`
do
if [ "$HOST" != "$MASTER_UUID" ]
then
### shut down this slave host
xe host-shutdown host=$HOST
fi
done
sleep 10s
# Before shutting down the master we turn HA-Lizard HA back on,
# as after restarting we want the VMs in the pool running again!!!
xe pool-param-set uuid=$POOL_UUID other-config:XenCenter.CustomFields.$XC_FIELD_NAME=true
###finally shut down the pool master
for HOST in `xe host-list params=uuid --minimal | egrep -o "[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}"`
do
if [ "$HOST" = "$MASTER_UUID" ]
then
### shut down the pool master
xe host-shutdown host=$HOST
fi
done
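For completeness, here is a minimal sketch of how such a script could be hooked into apcupsd, assuming the default SCRIPTDIR of /etc/apcupsd and an assumed script path of /root/pool-shutdown.sh (adjust both to your setup). apcupsd runs, via apccontrol, a script named after the event, and "doshutdown" fires when apcupsd has decided the system must be shut down:
#!/bin/bash
# /etc/apcupsd/doshutdown - hypothetical example, remember to make it executable (chmod +x)
# apccontrol calls a script with this name when the "doshutdown" event fires,
# i.e. when apcupsd has decided the host must be shut down.
# /root/pool-shutdown.sh is an assumed path to the pool shutdown script above.
/root/pool-shutdown.sh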
Hi @randomlyhere ,
So far I have installed apcupsd and copied the configuration from my existing XenServer 6.5 pool; it was tested and has proven the intended functionality in production more than once (shut down first the vApps, then the remaining VMs, and finally the pool member hosts and the pool master).
Testing on the new XCP-NG pool is still to be done and will happen in the next three weeks.
So I will report here the results after testing.
I am using the vApp approach with XCP-ng Center. All VMs are put in one group and I have set a fixed start order and delay for each VM.
I think it should also be possible to set this up via the CLI, but I have not done this myself (a rough sketch follows below).
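Purely as an untested sketch of the CLI route (the name label, VM UUIDs, order and delay values below are placeholders), the xe CLI has appliance (vApp) commands, and the VM parameters order and start-delay control the boot sequence within a vApp:
# create a vApp (appliance) and remember its UUID (name label is a placeholder)
APPLIANCE_UUID=$(xe appliance-create name-label="my-vapp")
# assign a VM to the vApp and give it a start order and a start delay in seconds
# (replace <vm-uuid> with the UUID of your VM)
xe vm-param-set uuid=<vm-uuid> appliance=$APPLIANCE_UUID
xe vm-param-set uuid=<vm-uuid> order=1 start-delay=60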
Sorry, I was not detailed enough.
In order to use your APC UPS via the management NIC or a USB cable you have to install the "apcupsd" package.
In the apcupsd config file apcupsd.conf, located in
/etc/apcupsd
you can set/define how your UPS communicates with your host(s). The possible parameters are well documented in this file.
I have not modified any iptables entry to work with my systems.
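As an illustration only (values are examples based on the documented options in that file, not my production config), the relevant directives look like this:
# /etc/apcupsd/apcupsd.conf (excerpt with example values)
UPSNAME myups
UPSCABLE usb          # or e.g. "ether" when using a network management card
UPSTYPE usb           # or e.g. "snmp" / "pcnet" for communication over the NIC
DEVICE                # empty for USB; host/port/community string for SNMP
BATTERYLEVEL 20       # shut down when the battery charge drops below 20 %
MINUTES 10            # or when the estimated runtime drops below 10 minutes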
I am answering myself:
As I do not want to install unwanted and unknown packages for a simple and small purpose, I have chosen to go the apcupsd route.
My old pool was running without a hassle and has proven its intended function with that setup, so there is no reason to change it towards NUT.
However, I have found a script from a NUT install which seems to be much more elegant than the currently used one.
I will check, test and report here the results.
I am setting up a new pool with XCP-NG 8.2.1 to replace an existing pool with XenServer 6.5.
In this forum I cannot find anything about apcupsd and only limited information about NUT.
On my old pool I have used apcupsd and I was going to use it again.
The footprint on Dom0 is very limited, only one package plus one dependency.
If I followed the forum post about the NUT service, I would have to install one package plus 51 dependencies!
In order to follow the top rule - keep DOM0 clean and mostly untouched - I wonder whether the NUT way should really be the "recommended" one.
Any thoughts and opinions on this topic are welcome.
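If someone wants to verify the footprint themselves, a dry run on dom0 shows what would be pulled in without installing anything (assuming the packages are available in the configured repositories):
# --assumeno answers "no" to the install prompt, so this only lists the dependencies
yum install --assumeno apcupsd
yum install --assumeno nut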
@Ajmind-0 said in XCP-ng 8.3 beta :
...
Update 22.02.2023:
Having received no feedback, I returned this host to XCP-NG 8.1 and from that version updated via the recent ISO to 8.2.1 without any problem.
Access from XO from sources or via https is no problem.
It seems that this old host does not work with the 8.3 beta for whatever reason.
Update 23.02.2023:
Just a quick note for a quick test:
Upgraded from the smoothly running 8.2.1 to the 8.3 beta; no XO / http / https connection is possible. Access is still only possible locally or via SSH.
@olivierlambert said in XCP-ng 8.3 beta :
No, you had a choice during 8.3 install (IPv4 only, or v4+v6 or v6 only)
There was no choice option or dialogue!
(I had selected to upgrade the existing 8.1 installation.)
@Ajmind-0 said in XCP-ng 8.3 beta :
...
Update 22.02.2023:
Having received no feedback, I returned this host to XCP-NG 8.1 and from that version updated via the recent ISO to 8.2.1 without any problem.
Access from XO from sources or via https is no problem.
It seems that this old host does not work with the 8.3 beta for whatever reason.
Today I have upgraded a very old Intel server (S5520UR) running XCP-NG 8.1 to XCP-NG 8.3 beta with the latest beta ISO:
Server:
- Intel S5520UR
- Dual Xeon E5645
- 48GB RAM
- Intel RAID Controller
- RAID1 73GB SSD for DOM0
- RAID5 1100GB SAS HDD for Localstorage 2
As the server was initially a XenServer 6.5 installation, I had to change the partition layout with
touch /var/preserve/save2upgrade
The installation ran smoothly, no error messages.
I could manage it locally via CLI and via SSH, but not via XO (commit a2c36) from sources or via XO Lite.
server.enable
{
"id": "a1886440-588f-4630-86b9-b1e888553a12"
}
{
"errno": -111,
"code": "ECONNREFUSED",
"syscall": "connect",
"address": "192.168.1.116",
"port": 443,
"originalUrl": "https://192.168.1.116/jsonrpc",
"url": "https://192.168.1.116/jsonrpc",
"call": {
"method": "session.login_with_password",
"params": "* obfuscated *"
},
"message": "connect ECONNREFUSED 192.168.1.116:443",
"name": "Error",
"stack": "Error: connect ECONNREFUSED 192.168.1.116:443
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1495:16)
at TCPConnectWrap.callbackTrampoline (node:internal/async_hooks:130:17)"
}
I could start VMs and they seem to run as expected.
I have not yet started to use yum update.
Today I have updated the host via yum update. Everything runs smoothly, however still no luck getting an https connection to that host.
Any advice on where I should look? Webserver configuration or firewall settings?
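In case it helps someone hitting the same ECONNREFUSED, these are the standard checks I would start with on the host itself (nothing XCP-ng specific is assumed beyond the xapi service name):
# is anything listening on port 443?
ss -tlnp | grep ':443'
# is the xapi service running?
systemctl status xapi
# does the host firewall accept port 443?
iptables -L -n | grep 443
# does the API answer locally on the host?
curl -vk https://localhost/jsonrpc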
Oops, I was the first one here to report the lack of vApp management.
And I second the requirement to include it.
I do not know how else I could manage the start order and start delay of included VMs.
@olivierlambert said in Epyc VM to VM networking slow:
I wonder about the guest kernel too (Debian 11 vs 12)
Here are my results with Debian 11 vs. Debian 12 on our EPYC 7313P 16-core processor, on the same host. Fresh and fully updated VMs with 4 vCPU / 4 GB RAM; XCP-ng guest tools 7.30.0-11 are installed:
All tests were run three times; the best result is shown.
Tests with multiple connections were run with -P2 / -P4 / -P8 / -P12 / -P16; the best result is shown here:
DEBIAN11>DEBIAN11
-------------------------
**root@deb11-master:~# iperf3 -c 192.168.1.95**
Connecting to host 192.168.1.95, port 5201
------------------------------------------------
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 8.84 GBytes 7.60 Gbits/sec 1687 sender
[ 5] 0.00-10.04 sec 8.84 GBytes 7.56 Gbits/sec receiver
**root@deb11-master:~# iperf3 -c 192.168.1.95 -P2**
Connecting to host 192.168.1.95, port 5201
------------------------------------------------------------
[SUM] 0.00-10.00 sec 12.0 GBytes 10.3 Gbits/sec 2484 sender
[SUM] 0.00-10.04 sec 12.0 GBytes 10.3 Gbits/sec receiver
DEBIAN12>DEBIAN12
-------------------------
**root@deb12master:~# iperf3 -c 192.168.1.98**
Connecting to host 192.168.1.98, port 5201
-----------------------------------------------
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 5.12 GBytes 4.40 Gbits/sec 953 sender
[ 5] 0.00-10.00 sec 5.12 GBytes 4.39 Gbits/sec receiver
**root@deb12master:~# iperf3 -c 192.168.1.98 -P4**
Connecting to host 192.168.1.98, port 5201
-----------------------------------------------
[SUM] 0.00-10.00 sec 3.58 GBytes 3.08 Gbits/sec 3365 sender
[SUM] 0.00-10.00 sec 3.57 GBytes 3.07 Gbits/sec receiver
Conclusion: Debian 12 with kernel 6.1.55-1 performs worse on this EPYC host than Debian 11 with kernel 5.10.197-1.
I will now check whether I can perform the same test with a Windows VM.
Update
A quick test with two Windows 7 VMs, both with 2 vCPU / 2 GB RAM and the latest available Citrix guest tools installed, has shown the best result with:
C:\Tools\Iperf3\iperf3.exe -c 192.168.1.108 -P8
On average 11.3 Gbits/sec was reached.
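If someone wants to repeat these runs, a minimal loop over the stream counts could look like this (the server IP is a placeholder; each combination is run three times as described above):
# run each parallel-stream count three times against the iperf3 server VM
for RUN in 1 2 3
do
for P in 1 2 4 8 12 16
do
iperf3 -c 192.168.1.95 -P $P
done
done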