XOA - fail to change pool master
-
Hi,
I am getting errors when trying to change the pool master on a pool:
RESTORE_INCOMPATIBLE_VERSION()
This is a XenServer/XCP-ng error. All hosts are XCP-ng 8.2, but some are on different build numbers.
The current pool master is on the latest build ("Build date: 2022-02-11"), the same as the node I want to switch the master to.
The others are on builds "2021-05-20", "2020-11-05" and "2021-11-16", depending on when they were (re)installed.
XOA is at the latest version, 5.68.0.
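(For reference, the build date each host reports can also be checked from dom0 on any pool member; a minimal example, using the standard xe CLI:)
# List every host in the pool with the version info XAPI reports (includes the build date)
xe host-list params=name-label,software-version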
Is there a way to get rid of the above error?
What would be the best practice for keeping the hosts' XCP-ng versions consistent?
Currently we install each blade from the XCP-ng 8.2 ISO image and then apply updates using yum update.
Usually we try to keep the pool master on the latest XCP-ng version used among the pool hosts.
Should we keep a specific build on all hosts? If so, how do I update a new host to a specific build before adding it to the pool?
Because of the local storage and the high amount of data on the hosts, we cannot apply regular XCP-ng updates on all of them. -
- The master should always be on the latest version.
- Always try to keep all your hosts within a pool up to date.
- Why do you want to change the master in the first place?
-
@olivierlambert said in XOA - fail to change pool master:
Why do you want to change the master in the first place?
The master is on the latest version, as is the host in the pool I want to switch the master to.
I need to change the master because some maintenance needs to be done on that hardware, and I want to make sure the pool stays manageable while that hardware is down or possibly broken during the maintenance. -
You need to be sure that the future master (currently a slave) is fully up to date before making it a master, because a master MUST always be the most up to date host. Otherwise, if you have a slave with more updates than the master, it won't be able to contact it.
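(A quick way to double-check that two hosts really are at the same patch level is to compare their installed RPM sets; a rough sketch, assuming SSH access from a management machine, with the host names below being placeholders:)
# Compare installed packages on the current master and the candidate master
ssh root@current-master 'rpm -qa | sort' > master.txt
ssh root@candidate-master 'rpm -qa | sort' > candidate.txt
diff master.txt candidate.txt   # no output means identical package sets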
-
As said, both hosts are at the same (latest) patch level.
The issue is that I can't switch the master to the other host; I get the "RESTORE_INCOMPATIBLE_VERSION()" error, at least from the XOA GUI.
What else should I check besides the patch level?
Is there another safe way to do the master switch? -
You need to at least restart the toolstack on all hosts. Updates change the files on disk, but not the running programs.
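(For reference, a minimal sketch of that step, run in dom0 on each host; xe-toolstack-restart is the standard XCP-ng/XenServer helper for this:)
# Restart XAPI and the related services without rebooting the host
xe-toolstack-restart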
-
Solved, but to be more explicit (at least as I understand it):
- when you try to add a host to your pool, it needs to be on the exact same "build" as the pool master
- you can only update to the latest build
- you can update your pool starting with the pool master, restarting the toolstack on each host after it is patched to "apply" the updates (except a new kernel); see the sketch after this list
- you can't change your pool master if ANY of your pool members is not patched, i.e. ALL your pool members need to be fully patched; I think I first encountered this issue with late 2021 XCP-ng builds
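(A rough sketch of the per-host sequence from the third point above, run in dom0, starting with the pool master and then on each slave:)
# On the pool master first, then on every other pool member:
yum update            # install the latest XCP-ng updates
xe-toolstack-restart  # reload the toolstack so the updated version is actually running
# Note: a kernel or Xen update still requires a full host reboot to take effect.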
-
Yes and that's the way it's meant to be. A pool master can NEVER be older than any slave.
-
The thing I wanted to highlight (which is not what you mentioned above, and which I was not aware of) is that even though the master and the new host to which I was trying to switch the master were BOTH patched to the latest build, and even rebooted, I still couldn't change the master to the new host until ALL the other hosts in the pool were patched to the latest version and had their toolstack restarted.
Even though only the master and the new host were involved in the switch, I had to update 8 more hosts in order to be able to do it.
This did not happen on older builds.
From what I saw in the xensource logs, I suppose the existing and/or new master tries to sync its database to ALL the hosts in the pool, not just between the two of them. -
There are a lot of operations that are disabled in the middle of a pool upgrade, and in any case I would always discourage changing the pool master during this process, even if it worked. It's possible that the XAPI developers added a check in a previous update exactly to prevent that.
Apparently, you did this several times in the past. What's the use case?
-
@stormi we were upgrading hardware/blades in the DCs; they were shipped 2 per month, and it took a while to replace them all. During that time each blade joined the pool with the latest build at that moment.
But at first we had issues adding newly patched blades to the "older" pool, until we found that if we did not patch a blade it could be added to the pool and upgraded later. Lately, if you do not patch the host you can't add it to the pool because its build is too old, so you have to update at least the pool master and then add the new, patched host to the pool.
Then, as mentioned above, we had to run maintenance on the pool master hardware, and just to be safe I was planning to switch the master, to make sure remote hands would not leave us without one.
As we mostly use local disks in production for VM storage, we can't migrate VMs for every build release to patch all the hosts in the pool; at least, it would take a lot of work and time.
But as @olivierlambert mentioned, we will keep patching them from now on and restarting the toolstack (I was not sure before that a toolstack restart is enough for an XCP-ng system update).
Hope I was not too confusing. Thanks all for your help. -
I wouldn't go as far as saying that restarting the toolstack is enough. If there's a kernel or Xen system update, I would wait for a maintenance window that allows you to reboot the hosts rather than installing the updates now and only restarting the toolstack.
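(One rough way to spot that case from dom0 is to compare the running kernel with the newest installed one; if the two versions differ, the update included a kernel and a reboot is still pending:)
uname -r                          # kernel currently running
rpm -q --last kernel | head -n 1  # most recently installed kernel package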
-
Yes, exactly. You might need to restart Xen or the dom0 to get them running on the new version. In that case, any XAPI restart won't help you.
If you want to limit the production impact of reboots, a shared storage + rolling pool update is helpful. It's also doable with no shared storage but more painful. Alternatively, a maintenance window is also a possibility.
Using the flexibility of virtualization is a great way to make reboots transparent, but it doesn't mean reboots aren't needed at all, especially if you take security seriously.
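(With shared storage, the per-host step of such a rolling update can also be done by hand; a minimal sketch using standard xe commands, where <host-uuid> is a placeholder for the host being updated:)
# Move the VMs off the host (requires shared storage), then update and reboot it
xe host-disable uuid=<host-uuid>
xe host-evacuate uuid=<host-uuid>
yum update
reboot
# Once the host is back up, put it back into service
xe host-enable uuid=<host-uuid>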
-
Regarding how to add new pool members, it's still on my TODO list to document how to bring a host to the exact update level of the pool master.
Basically this would be something like:
- Get the list of RPMs from the master with rpm -qa | sort
- Update the newly installed host to that level by feeding yum update with that list (see the sketch below).
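(Not documented yet, but a rough sketch of those two steps could look like the following, assuming the new host can reach the master over SSH and that the exact package versions are still available in the repositories:)
# Run on the newly installed host, before joining the pool
ssh root@pool-master 'rpm -qa | sort' > master-rpms.txt
yum update $(cat master-rpms.txt)   # full name-version-release strings pin each package to the master's level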
-
@stormi yes, that would be helpful indeed, so we can stick to a build version at least for a while.