VMware migration tool: we need your feedback!
-
@florent I'm ready to test when you have something, thanks.
-
@brezlord great, that is a usable message.
Could you post (or send by email) the .vmx file?
-
```
.encoding = "UTF-8"
config.version = "8"
virtualHW.version = "19"
vmci0.present = "TRUE"
floppy0.present = "FALSE"
memSize = "2048"
tools.upgrade.policy = "manual"
sched.cpu.units = "mhz"
vm.createDate = "1613734854100000"
scsi0.virtualDev = "lsilogic"
scsi0.present = "TRUE"
sata0.present = "TRUE"
sata0:0.startConnected = "FALSE"
sata0:0.deviceType = "atapi-cdrom"
sata0:0.clientDevice = "TRUE"
sata0:0.fileName = "emptyBackingString"
sata0:0.present = "TRUE"
scsi0:0.deviceType = "scsi-hardDisk"
scsi0:0.fileName = "Graylog-000001.vmdk"
sched.scsi0:0.shares = "normal"
sched.scsi0:0.throughputCap = "off"
scsi0:0.present = "TRUE"
ethernet0.virtualDev = "vmxnet3"
ethernet0.shares = "normal"
ethernet0.addressType = "vpx"
ethernet0.generatedAddress = "00:50:56:8f:51:24"
ethernet0.uptCompatibility = "TRUE"
ethernet0.present = "TRUE"
displayName = "Graylog"
guestOS = "ubuntu-64"
toolScripts.afterPowerOn = "TRUE"
toolScripts.afterResume = "TRUE"
toolScripts.beforeSuspend = "TRUE"
toolScripts.beforePowerOff = "TRUE"
tools.syncTime = "FALSE"
uuid.bios = "42 0f 33 f8 9b 7f 7d 26-a8 bc 61 26 ee 46 16 22"
vc.uuid = "50 0f dd 17 4c f8 3c 79-1f 26 ac 99 23 6a 06 d4"
sched.cpu.min = "0"
sched.cpu.shares = "normal"
sched.mem.min = "0"
sched.mem.minSize = "0"
sched.mem.shares = "normal"
migrate.encryptionMode = "opportunistic"
ftcpt.ftEncryptionMode = "ftEncryptionOpportunistic"
vmci0.id = "-297396701"
cleanShutdown = "FALSE"
ethernet0.networkName = "vLAN_40"
uuid.location = "56 4d 44 f3 60 00 fc b5-f6 4b f4 e7 50 0a 74 f2"
sched.cpu.affinity = "all"
tools.guest.desktop.autolock = "FALSE"
nvram = "Graylog.nvram"
pciBridge0.present = "TRUE"
svga.present = "TRUE"
pciBridge4.present = "TRUE"
pciBridge4.virtualDev = "pcieRootPort"
pciBridge4.functions = "8"
pciBridge5.present = "TRUE"
pciBridge5.virtualDev = "pcieRootPort"
pciBridge5.functions = "8"
pciBridge6.present = "TRUE"
pciBridge6.virtualDev = "pcieRootPort"
pciBridge6.functions = "8"
pciBridge7.present = "TRUE"
pciBridge7.virtualDev = "pcieRootPort"
pciBridge7.functions = "8"
hpet0.present = "TRUE"
sched.cpu.latencySensitivity = "normal"
numa.autosize.cookie = "10012"
numa.autosize.vcpu.maxPerVirtualNode = "1"
pciBridge0.pciSlotNumber = "17"
pciBridge4.pciSlotNumber = "21"
pciBridge5.pciSlotNumber = "22"
pciBridge6.pciSlotNumber = "23"
pciBridge7.pciSlotNumber = "24"
scsi0.pciSlotNumber = "16"
ethernet0.pciSlotNumber = "160"
vmci0.pciSlotNumber = "32"
sata0.pciSlotNumber = "33"
monitor.phys_bits_used = "45"
vmotion.checkpointFBSize = "4194304"
vmotion.checkpointSVGAPrimarySize = "4194304"
softPowerOff = "FALSE"
svga.guestBackedPrimaryAware = "TRUE"
guestOS.detailed.data = "architecture='X86' bitness='64' distroName='Ubuntu' distroVersion='20.04' familyName='Linux' kernelVersion='5.4.0-88-generic' prettyName='Ubuntu 20.04.3 LTS'"
toolsInstallManager.updateCounter = "4"
viv.moid = "a3752e64-f7d6-4473-8fd5-3b91a194cccc:vm-7093:9iC7plvv3uF5K6K4G++QYV5VZf6Mjp3GrS0FdLa2/1w="
guestInfo.detailed.data = "architecture='X86' bitness='64' distroName='Ubuntu' distroVersion='20.04' familyName='Linux' kernelVersion='5.4.0-137-generic' prettyName='Ubuntu 20.04.3 LTS'"
checkpoint.vmState.readOnly = "FALSE"
SCSI0:0.ctkEnabled = "TRUE"
ctkEnabled = "TRUE"
sched.swap.derivedName = "/vmfs/volumes/dc0ddd0a-e7e89bd7/Graylog/Graylog-eda98aed.vswp"
migrate.hostLog = "Graylog-50f77235.hlog"
guestinfo.vmtools.buildNumber = "18090558"
guestinfo.vmtools.description = "open-vm-tools 11.3.0 build 18090558"
guestinfo.vmtools.versionNumber = "11360"
guestinfo.vmtools.versionString = "11.3.0"
scsi0:0.redo = ""
vmotion.svga.mobMaxSize = "4194304"
vmotion.svga.graphicsMemoryKB = "4096"
```
-
I edited your post to get the right "code" display (you need to have the three ``` on a new line).
-
@olivierlambert thanks, and duly noted.
-
@brezlord I found it: `guestOS` is first used as a string, and then with additional properties. Can you pull and try again?
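For anyone hitting the same error, the clash is visible in the config above: `guestOS = "ubuntu-64"` appears first, then `guestOS.detailed.data = "..."`. Below is a minimal TypeScript sketch of the failure mode and one tolerant fix; this is not the actual XO code, and the `#text` key used to keep the original string value is an arbitrary choice here:

```ts
// Sketch only: a dotted-key VMX parser that tolerates a key ("guestOS")
// being used first as a plain string and later as a prefix of deeper keys
// ("guestOS.detailed.data"), as in the .vmx posted above.

type VmxNode = string | { [key: string]: VmxNode }

function setVmxKey(root: { [key: string]: VmxNode }, key: string, value: string): void {
  const parts = key.split('.')
  let node = root
  for (const part of parts.slice(0, -1)) {
    const child = node[part]
    if (child === undefined || typeof child === 'string') {
      // Promote the existing string to an object instead of failing when a
      // property is written onto it ('#text' is just a placeholder name).
      node[part] = typeof child === 'string' ? { '#text': child } : {}
    }
    node = node[part] as { [key: string]: VmxNode }
  }
  node[parts[parts.length - 1]] = value
}

// Reproduces the pattern from the .vmx above:
const config: { [key: string]: VmxNode } = {}
setVmxKey(config, 'guestOS', 'ubuntu-64')
setVmxKey(config, 'guestOS.detailed.data', "prettyName='Ubuntu 20.04.3 LTS'")
console.log(JSON.stringify(config, null, 2))
```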
-
@florent OK, give me 10 minutes to rebuild and I'll report back.
-
@florent you've fixed it. The task has started. I'll report back when the import has finished. XO is saying 1 hour.
-
\o/
-
@brezlord you may also try the `thin=true` option. It will take longer, but it will build a thin disk, without the unallocated sectors (the progress is, for now, only visible on the XO logs side).
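For reference, the option can also be passed through xo-cli. The following is a hypothetical invocation: the `vm.importFromEsxi` method and these parameter names are assumptions based on this thread's timeframe, so verify the exact signature with `xo-cli --list-commands` on your install:

```sh
# Assumed method and parameter names; check `xo-cli --list-commands` first.
xo-cli vm.importFromEsxi \
  host=esxi.example.org user=root password='***' sslVerify=false \
  vm=<esxi-vm-id> sr=<sr-uuid> network=<network-uuid> \
  thin=true
```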
-
@florent I'll give it a go once this import has finished and report back.
-
@florent The import was successful and the VM is up and running. I will try another import with the `thin=true` option.
-
@brezlord said in VMware migration tool: we need your feedback!:
@florent The import was successful and the VM is up and running. I will try another import with the `thin=true` option.
yeah, thank you for your patience
-
\o/
-
@florent thank you for your work.
-
@florent Great news. I'm looking forward to testing when it has been pushed to the vmware channel.
-
This will land on XOA `latest` by the end of the month.
-
@olivierlambert Got this working myself as well, very seamless once everything is prepped for it!
I do have a question though, since I noticed you mentioned `thin=true` here: does doing this without that option create a thick-provisioned disk? Maybe I misunderstood that in the blog post.
I noticed mine took around 2.5 hours to complete for a VM with 6 disks, but the total data usage on the disks was only around 20GB, so it seems very slow; if it's transferring the thick provisioning though, that is a different story. (In case it matters, everything is 10GbE here.)
-
Yes, it's because we have no way to know in advance which blocks are really used, so by default we transfer even the empty blocks.
When using `thin=true`, it will read the file on VMware twice:
- once to see which blocks are really used
- then again to actually send the used blocks

And finally, it only creates the used VHD blocks on XCP-ng. So it's slower because you read twice, but you actually send less data to your XCP-ng storage.
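To make the two passes concrete, here is a minimal TypeScript sketch of the idea; `readBlock` and `writeVhdBlock` are hypothetical stand-ins for the real VMDK reader and VHD writer, so this is an illustration rather than the actual xo-server code:

```ts
// Two-pass "thin" transfer sketch: scan once to find non-empty blocks,
// then read those again and write only them, so the destination VHD
// never allocates space for empty ranges (dynamic VHDs use 2 MiB blocks).
async function thinCopy(
  readBlock: (index: number) => Promise<Uint8Array>, // hypothetical source reader
  writeVhdBlock: (index: number, data: Uint8Array) => Promise<void>, // hypothetical VHD writer
  blockCount: number
): Promise<void> {
  // Pass 1: full read, only to learn which blocks actually hold data.
  const used: number[] = []
  for (let i = 0; i < blockCount; i++) {
    const data = await readBlock(i)
    if (data.some(byte => byte !== 0)) used.push(i)
  }
  // Pass 2: re-read and send just the used blocks.
  for (const i of used) {
    await writeVhdBlock(i, await readBlock(i))
  }
}
```

Reading twice trades time for space: pass 1 costs a full extra read, but pass 2 then skips both transfer and allocation for every empty block.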
After adding the "warm" algorithm, the transfer time will be meaningless (or at least matter a lot less), since what counts is the actual downtime, i.e. the time to transfer the delta after the initial full copy made while the VM was still on.
-
@olivierlambert Got it, so this does bring up a couple additional questions I have.
Does this mean the VHD that everything is transferred to is also thick? Or does it still only use the used space AFTER transferring all the blocks?
Forgive me if I'm misunderstanding, just trying to figure this all out since I have more VMs to transfer.
So, per my example: if I have a VM with 500GB of space on VMware (but only 20GB used) and transfer it with this method, it'll transfer all 500GB, but does it also USE 500GB on the SR?
And does that mean this disk is "forever" thick provisioned so snapshots use way more space?
Asking all this because I have a VM with 13TB of available space to move, which only has 5TB of used space. After migration I don't want to end up with 13TB of space used on my SR since it only has 10TB remaining and we probably won't go over about 6TB used on this VM ever.
Hope I am making some sense.
Edit: also, is there a way to tell if a disk is thin or thick in XOA? The SR shows thin but I don't see anything indicating if the disk is thin or not. Would also be kinda nice to see how much ACTUAL space each disk uses up in here.
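One way to check today is from the XCP-ng host itself, assuming CLI access: compare each VDI's `virtual-size` with its `physical-utilisation`. On a thin-provisioned SR, a thin disk shows a utilisation well below the virtual size, while a fully-written (thick) one shows the two roughly equal:

```sh
# List the VDIs attached to a VM, then compare the two size fields.
xe vbd-list vm-name-label=<vm-name> params=vdi-uuid
xe vdi-param-get uuid=<vdi-uuid> param-name=virtual-size
xe vdi-param-get uuid=<vdi-uuid> param-name=physical-utilisation
```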