Yeah, I was going to, but I wanted to put it up as a discussion with other people. Maybe someone else has had a similar situation.
I'll open a support ticket now.
Thanks.
The 3 hypervisors that we need to remove from the pool are old and have been replaced by 3 new servers. Everything has been migrated to the new hypervisors, and those 3 are only running 1 VM each, with passed-through disks.
We have XCP-ng licenses for only 3 of the hosts, so we were looking at moving the old hosts out into a separate pool which will remain unlicensed.
Eventually, those 3 VMs will be turned off, but we need to keep them running for a few more months (if we're lucky).
@Danp said in Moving hypervisors to a new pool with VMs on local storage:
Those are important details. Are you running Windows VMs?
Yes, I'm sorry, I thought I had written that, but I only said local storage when I meant local storage and pass-through disks.
Nope, they run on Debian.
Each VM has 2x4TB SSDs passed through, which are stitched together with LVM inside the VM. Historically, we haven't done backups or replication of them due to the size of the disks.
We can't really migrate them elsewhere.
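For context, the "stitching" described above probably looks something like the following inside each VM. This is a hypothetical reconstruction, not their actual setup: the device names (`/dev/xvdb`, `/dev/xvdc`) and the volume group/logical volume names are assumptions. Running it with `DRY_RUN=1` only prints the commands.

```shell
# Hypothetical sketch of stitching two passed-through disks with LVM.
# Device names and VG/LV names are assumptions, not taken from the thread.
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$*"; else "$@"; fi; }

stitch_disks() {
  run pvcreate /dev/xvdb /dev/xvdc             # initialize both passed-through disks as PVs
  run vgcreate data_vg /dev/xvdb /dev/xvdc     # one volume group spanning both disks
  run lvcreate -l 100%FREE -n data_lv data_vg  # a single ~8TB logical volume
  run mkfs.ext4 /dev/data_vg/data_lv           # filesystem on top of the stitched volume
}
```

Because the volume spans both disks, losing either disk loses the whole volume, which is part of why such VMs are hard to back up or migrate.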
Hi,
We are looking at moving hypervisors to a new pool due to license constraints.
Currently we have 6 in the pool and we need to go back to 3. However, we have 3 VMs running on local storage, one on each of the hypervisors we need to move out. Since all data about the storage and VMs is stored in XO's database, we are wondering what the best way to do that would be.
Current plan is to:
1. Shut down the VMs.
2. Forget the local SRs.
3. Remove the hosts from the production pool and create a new pool.
4. Re-import the SRs.
5. Re-import the VMs.
Would this work? I'm not sure about the re-importing bit, in case the SRs need to be clean/empty.
Is there any other way?
We can't do a backup/restore of those machines, or replication.
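The plan above could be sketched as `xe` CLI calls, roughly like this. All UUIDs are placeholders, the `sr-introduce` parameters (`type=lvm`, the name label) are assumptions, and this is a sketch rather than a verified procedure; in particular, it's worth checking the docs before `pool-eject`, since ejecting a host may reinitialize its local storage. With `DRY_RUN=1` the commands are printed instead of executed.

```shell
# Sketch of the migration plan as xe calls; UUIDs are placeholders and the
# sr-introduce parameters are assumptions. Not a verified procedure.
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "xe $*"; else xe "$@"; fi; }

move_host_out() {
  vm_uuid="$1"; sr_uuid="$2"; host_uuid="$3"

  run vm-shutdown uuid="$vm_uuid"          # 1. shut down the VM
  run sr-forget uuid="$sr_uuid"            # 2. forget the local SR (detaches metadata, leaves data on disk)
  run pool-eject host-uuid="$host_uuid"    # 3. remove the host from the production pool (check docs: may wipe local storage)
  # 4. On the new pool: re-introduce the SR, then re-attach it (a pbd-create/pbd-plug
  #    step with the right device-config would follow here)
  run sr-introduce uuid="$sr_uuid" type=lvm name-label="Local storage" content-type=user
  run sr-scan uuid="$sr_uuid"              # 5. rediscover the VDIs; VM metadata still needs re-importing
}
```

Usage would be something like `DRY_RUN=1 move_host_out <vm-uuid> <sr-uuid> <host-uuid>` to preview the calls before running anything for real.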
Thanks,
H
Hi @olivierlambert,
Thank you for the quick reply.
It is indeed a complicated process, and we're trying to work around it. We only have an XO Premium subscription.
We haven't considered licensing the hypervisors yet.
I guess we'll have to look at manual replica jobs for failback for now.
Hristo
Hi,
We are currently exploring DR scenarios. We have multiple XO pools in different geo locations. So far we've been doing Continuous Replication from Pool A to Pool B.
We have had a few isolated incidents where we had to turn on the replica, but we've not had the need to fail back to the original machine (sync changes back).
In the event of a failure of Pool A (data center A), we can quickly turn on all the replicas, as they'll have been replicated to Pool B - great! However, we don't see any automated way to do the failback (going from Pool B back to Pool A after Pool A is recovered).
So far we are thinking of having additional Continuous Replication jobs going from Pool B to Pool A, in case we fail over to Pool B. Though that would mean we'd have to delete the VMs in Pool A and do a full replica, which would take a long time as well as a lot of bandwidth.
Is there any right/approved way of setting up continuous replication for disaster recovery with a failback option that doesn't need a full replica?
Hristo
I should have done that. However, I thought it might be useful to open it here in case anyone else is seeing this.
I just saw there's a new XO update. It's deployed now, so to confirm again:
XO Version: 5.79.0
XO Netbox Plugin Version: 0.3.6
Worth mentioning that we're using XO Premium, as I saw there was a Netbox premium update too.
I just ran the sync again from the pool's advanced settings, but there's still no platform or tag information in Netbox.
XO Version: 5.78.0
XO Netbox Plugin Version: v0.3.5
Netbox Version: 3.4.3
XO is successfully pushing the inventory to Netbox; however, it doesn't pass the tags or the platform type of the VMs. All VMs in the pool have the guest tools installed too.
Is this expected to work out of the box, or would we have to do something else (like different custom fields) to make it work?
Thanks,
Hristo