Posts made by roland.schmidt
-
RE: XOA - fail to change pool master
@stormi yes, that would be helpful indeed so we can stick to a build version at least for a while.
-
RE: XOA - fail to change pool master
@stormi we were upgrading hardware/blades in the DCs; they were shipped two each month, so it took a while to replace them all. During that time each blade joined the pool with the latest build available at that moment.
At first we had issues adding newly patched blades to the "older" pool, until we found that a blade could be added unpatched and upgraded later. Lately, if you don't patch the host you can't add it to the pool at all because its build is too old, so you have to update at least the pool master first and then add the new patched host to the pool.
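For context, the join step itself was just the standard one, run on the new blade (a rough sketch; the master address and password are placeholders):

```
# on the freshly installed blade, join it to the existing pool;
# its build must be compatible with the pool master's build
xe pool-join master-address=<master-ip> master-username=root master-password=<password>
```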
Then, as mentioned above, we had to run maintenance on the pool master hardware, and just to be safe I wanted to switch the master to make sure remote hands would not leave us without one.
As we mostly use local disks in production for VM storage, we can't migrate VMs off each host for every build release to patch the whole pool; at least it would take a lot of work and time.
But as @olivierlambert mentioned, we will keep patching them from now on by restarting the toolstack (I was not sure before that a toolstack restart is enough for an xcp-ng system update).
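For anyone finding this later, the per-host sequence will be roughly this (a sketch, assuming the stock XCP-ng tooling):

```
# run on each pool member, master first
yum update            # install the latest xcp-ng packages from the repos
xe-toolstack-restart  # restart xapi so the updates are picked up
# note: updates that ship a new kernel (or Xen) still need a host reboot
```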
Hope I was not too confusing. Thanks all for your help.
-
RE: XOA - fail to change pool master
The thing I wanted to highlight (which is not the same as what you mentioned above, and which I was not aware of) is that even when the master and the new host I was trying to switch the master to were BOTH patched to the latest build, and even rebooted, I still couldn't change the master to the new host until ALL the other hosts in the pool were patched to the latest version and had their toolstack restarted.
Even though only the master and the new host were involved in the switch, I had to update 8 more hosts to be able to do it.
This did not happen on older builds.
From what I saw in the xensource logs, I suppose the existing and/or new master tries to sync its database to ALL the hosts in the pool, not just between the two of them.
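In case someone wants to dig into the same thing, this is roughly how I looked at it (a sketch; the grep pattern is just illustrative, and as far as I know xe pool-sync-database is the command that forces an immediate database sync to all members):

```
# watch the master's xapi log while attempting the switch
grep -i "pool" /var/log/xensource.log | tail -n 50

# force the pool database to be written to all hosts immediately
xe pool-sync-database
```
-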
RE: XOA - fail to change pool master
Solved, but to be more explicit (at least as I understand it):
- when you try to add a host to your xen pool it needs to be the exact same "build" as the pool master
- you can only update to the latest build
- you can update your pool starting with the pool master, restarting the toolstack after each host is patched to "apply" the updates (everything except a new kernel, which needs a reboot)
- you can't change your pool master if ANY of your pool members are not patched, i.e. ALL your pool members need to be fully patched; I think I encountered this issue starting with late 2021 xcp-ng builds (see the check sketched below)
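A quick way to verify that last point before attempting a master switch (a sketch; I'm relying on the "date" entry of each host's software-version record, which matches the build date XOA shows):

```
# list every host's name and software-version map; the "date" entry is the
# build date - they should all match before switching the master
xe host-list params=name-label,software-version
```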
-
RE: XOA - fail to change pool master
As said, both hosts are at the same (latest) patch level.
The issue is that I can't switch the master to the other host; I get a "RESTORE_INCOMPATIBLE_VERSION()" error, at least from the XOA GUI.
What else should I check besides the patch level?
Is there another safe way to do the master switch?
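For completeness, this is the CLI route I would try next in case the XOA GUI itself is the problem (a sketch; the UUID is a placeholder):

```
# find the UUID of the host that should become the new master
xe host-list params=uuid,name-label

# designate it as the new pool master
xe pool-designate-new-master host-uuid=<new-master-uuid>
```
-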
RE: XOA - fail to change pool master
@olivierlambert said in XOA - fail to change pool master:
Why do you want to change the master in the first place?
The master is on the latest version, as is the host in the pool I want to switch the master to.
I need to change the master because some maintenance needs to be done on that hardware, and I want to make sure the pool stays manageable while that hardware is down or possibly broken during the maintenance.
-
XOA - fail to change pool master
Hi,
I am getting an error when trying to change the pool master on a pool:
RESTORE_INCOMPATIBLE_VERSION()
This is a XenServer/XCP-ng error.
All hosts are xcp-ng 8.2 but some have different build numbers.
The current pool master is on the latest build ("Build date: 2022-02-11"), the same as the node I wanted to switch the master to.
The others are on builds "2021-05-20", "2020-11-05", and "2021-11-16", depending on when they were (re)installed.
XOA is the latest version, 5.68.0.
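To be precise about the versions, this is how I compared the two hosts involved in the switch (a sketch; the UUIDs are placeholders):

```
# compare the software-version maps of the current master and the target host;
# product_version, build_number and date should be identical on both
xe host-param-get uuid=<master-uuid> param-name=software-version
xe host-param-get uuid=<target-host-uuid> param-name=software-version
```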
Is there a way to get rid of the above error?
What would be the best practice for keeping the hosts' xcp-ng versions in sync?
Currently we install each blade from the xcp-ng 8.2 ISO image and then apply updates using yum update.
Usually we try to update the pool master to the latest xcp-ng version used on the pool hosts.
Should we keep a specific build on all? If so, how do I update a new host to a specific build before adding it to the pool?
Using local storage with a high amount of data on the hosts does not allow us to apply regular xcp-ng updates on all hosts.