Shared SR (two pools)
-
Re: Shared SR between two pools?
I have a need to move a sizable (~30 TB) VM between two pools of compute nodes. I can do this move cold, but I'd rather not have the VM offline for several days, which is what it's going to look like if I do a standard cold VM migration.
As I understand it, SRs are essentially locked to a specific pool (particularly the pool master). Is it possible to basically unmount (i.e. forget) the SR from one pool, remount it on the target pool then just import the VM while still basically continuing to reside on the same LUN?
VMware made this pretty easy with VMFS/VMX/VMDKs, but it seems like Xen may not be as flexible.
-
Hi,
For moving such a large amount of data, live migration would indeed be very long and uncertain (if at any moment you write faster than it replicates, it will fail after a few tries).
Warm migration is an option though.
Finally, if it's in the same physical place (like a NAS), you can also "cheat". But this requires being 100% sure about what you are doing. For example, is this VM's disk in a dedicated SR without anything else? Are your VHDs fully flat, without a chain? (If you want to copy files, it's a bit more complex, since a snapshot will result in 3 different files.)
But since you have a very very large amount, it might be worth taking the time to plan this correctly. You can indeed attach a previously forgotten SR to a new pool, but you will lose the VHD metadata (i.e. names and descriptions). Which is fine if you only have a few disks and you know where to plug them.
-
@jimmymiller Without more details on the VM (use case, OS, etc.) it is hard to answer, but I would rather try to move the big amount of data off the virtualization hosts themselves. Tom Lawrence once suggested that larger amounts of data should not be stored within a Xen VHD but rather on ZFS storage, as a dataset (for files) or a zvol via iSCSI (for block storage). Then attach the external data store (so to speak) via SMB, NFS, or iSCSI from within the VM. Classical storage systems like FreeNAS, TrueNAS and the like are much better at handling fast access to big amounts of data. This way the base VM moves quickly between hosts, even with larger amounts of data. In my experience, VMs with big virtual drives always hurt you for backup / migration / etc.
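A minimal sketch of the pattern described above: keep the bulk data on a ZFS box and expose it to the VM over the network instead of inside a VHD. The pool and dataset names here are placeholders, and the iSCSI target setup itself depends on your platform (e.g. TrueNAS's UI).

```shell
# File storage: a dataset shared over NFS, mounted from inside the VM.
zfs create tank/vm-data
zfs set sharenfs=on tank/vm-data        # export the dataset via NFS

# Block storage: a zvol to export as an iSCSI LUN.
# -V sets the volume size; -s makes it sparse (thin-provisioned).
zfs create -V 10T -s tank/vm-blocks
```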
-
@olivierlambert Well even a cold migration seemed to fail. Bah!
The LUN/SR is dedicated to just the one VM: 1 x 16G disk for the OS, 20 x 1.5T disks for data. Inside the VM, I'm using ZFS as a stripe to bring them all together into a single zpool. I know that because this is ZFS I could theoretically do a ZFS replication job to another VM, but I'm also using this as a test to figure out how I'm going to move the larger VMs we have that don't have the convenience of an in-OS replication option. For our larger VMs we almost always dedicate LUNs specifically to them, and we have block-based replication options on our arrays, so in theory we should be able to fool any device into thinking the replica is a legit pre-existing SR.
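For reference, the in-OS replication option mentioned above could look roughly like this. A hedged sketch only: the pool name `tank`, snapshot names, and the target host are placeholders.

```shell
# Take a base snapshot and send the whole pool hierarchy to the target VM.
zfs snapshot -r tank@migrate-base
zfs send -R tank@migrate-base | ssh root@target-vm zfs receive -F tank

# Later, a much smaller incremental stream catches up the writes made
# since the base snapshot, shortening the final cutover window.
zfs snapshot -r tank@migrate-final
zfs send -R -i @migrate-base tank@migrate-final | \
  ssh root@target-vm zfs receive -F tank
```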
No snaps -- the data on this VM is purely an offsite backup target so we didn't feel the need to backup the backup of the backup.
Let me try testing the forget SR + connect to different pool. I swear I tried this before: when I went to create the SR, it flagged the LUN as having a preexisting SR, but then forced me to provision a new SR rather than letting me map the existing one.
-
On the CLI that should be easier, thanks to the
xe sr-introduce
command.
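A sketch of the full reattach procedure around sr-introduce, assuming an LVM-over-iSCSI SR. All UUIDs and iSCSI target details are placeholders to substitute with your own values.

```shell
# On the OLD pool: detach and forget the SR. This keeps the data on the
# LUN intact; it only removes the pool's record of the SR.
xe pbd-unplug uuid=<pbd-uuid>
xe sr-forget uuid=<sr-uuid>

# On the NEW pool: re-introduce the SR by its original UUID.
xe sr-introduce uuid=<sr-uuid> type=lvmoiscsi name-label="migrated-sr" \
  shared=true content-type=user

# Create and plug a PBD on each host in the new pool so it can reach the LUN.
xe pbd-create sr-uuid=<sr-uuid> host-uuid=<host-uuid> \
  device-config:target=<iscsi-target-ip> device-config:targetIQN=<iqn> \
  device-config:SCSIid=<scsi-id>
xe pbd-plug uuid=<pbd-uuid>

# Rescan so the VDIs on the SR become visible (as noted earlier in the
# thread, their name-labels and descriptions are lost).
xe sr-scan uuid=<sr-uuid>
```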
@HolgiB For this use case, it's actually a virtual TrueNAS instance sitting on a LUN mapped to the source XCP-ng pool. I know there are in-OS options using zfs send|receive, but the point is to get an understanding of what we would do without that convenience.
I know Xen and VMware do things differently, but having VMFS in the mix allowed us to unmount a datastore, move the mapping to a new host, mount that datastore, then just point that host at the existing LUN and quickly import the VMX (for a full VM config) or configure a new VM to use the existing VMDKs. This completely eliminated the need to truly copy the data--we were just changing the host that had access to it. We didn't use it very often because VMware handled moving VMs with big disks pretty well, but it was our ace-in-the-hole when storage vMotion wasn't available.
-
@olivierlambert Okay. I'll give it a shot.