XCP-ng

    VMware migration tool: we need your feedback!

    • brezlordB Offline
      brezlord @florent
      last edited by

      @florent I will rebuild XO and re-import and give feedback. Thanks.

      1 Reply Last reply Reply Quote 2
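
      For anyone else updating an XO-from-sources install to pick this up, the usual update sequence looks roughly like this (a sketch assuming a standard from-sources setup in ~/xen-orchestra and a systemd unit named xo-server; adjust paths and the restart step to your install):

          # update the source tree to the latest master
          cd ~/xen-orchestra
          git checkout master
          git pull --ff-only
          # reinstall dependencies and rebuild
          yarn
          yarn build
          # restart xo-server so the new build is used
          sudo systemctl restart xo-server
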
      • olivierlambertO Offline
        olivierlambert Vates 🪐 Co-Founder CEO
        last edited by

        It's now available directly on master (from the sources) or on the latest XOA release channel 🙂 (I updated the first post accordingly)

        planedropP 1 Reply Last reply Reply Quote 1
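
        For reference, the import can then be driven from xo-cli; here is a minimal sketch with placeholder host, credentials and UUIDs (the parameters mirror the full command quoted further down in this thread):

            # register the CLI against your XO instance (it will prompt for the password)
            xo-cli --register https://xo.example.org admin@example.org

            # import a VM from ESXi: vm is the ESXi-side VM id, network/sr are XCP-ng UUIDs
            xo-cli vm.importFromEsxi \
                host=esxi.example.org user=root password='secret' sslVerify=false \
                vm=16 network=<network-uuid> sr=<sr-uuid> \
                stopSource=true thin=true
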
        • planedropP Offline
          planedrop Top contributor @olivierlambert
          last edited by

          @olivierlambert Awesome, super exciting stuff!

          1 Reply Last reply Reply Quote 0
          • brezlordB Offline
            brezlord @florent
            last edited by

            @florent Everything is working now, thanks. I have successfully migrated Windows and Linux VMs.

            1 Reply Last reply Reply Quote 1
            • olivierlambertO Offline
              olivierlambert Vates 🪐 Co-Founder CEO
              last edited by

              Perfect! Now expect all of this, but in a simple wizard in the UI; that will be a great tool for everyone who wants to make the migration (and we hope to get a decent share of those users with our new support bundle)

              1 Reply Last reply Reply Quote 1
              • S Offline
                severhart
                last edited by

                This post is deleted!
                1 Reply Last reply Reply Quote 0
                • S Offline
                  severhart
                  last edited by olivierlambert

                  After running for hours, I get the following:

                  ✖ sesparse Vmdk reading is not functionnal yet FP-FileServer/FP-FileServer-000001-sesparse.vmdk
                  JsonRpcError: sesparse Vmdk reading is not functionnal yet FP-FileServer/FP-FileServer-000001-sesparse.vmdk
                      at Peer._callee$ (/usr/local/lib/node_modules/xo-cli/node_modules/json-rpc-peer/dist/index.js:139:44)
                      at tryCatch (/usr/local/lib/node_modules/xo-cli/node_modules/@babel/runtime/helpers/regeneratorRuntime.js:44:17)
                      at Generator.<anonymous> (/usr/local/lib/node_modules/xo-cli/node_modules/@babel/runtime/helpers/regeneratorRuntime.js:125:22)
                      at Generator.next (/usr/local/lib/node_modules/xo-cli/node_modules/@babel/runtime/helpers/regeneratorRuntime.js:69:21)
                      at asyncGeneratorStep (/usr/local/lib/node_modules/xo-cli/node_modules/@babel/runtime/helpers/asyncToGenerator.js:3:24)
                      at _next (/usr/local/lib/node_modules/xo-cli/node_modules/@babel/runtime/helpers/asyncToGenerator.js:22:9)
                      at /usr/local/lib/node_modules/xo-cli/node_modules/@babel/runtime/helpers/asyncToGenerator.js:27:7
                      at new Promise (<anonymous>)
                      at Peer.<anonymous> (/usr/local/lib/node_modules/xo-cli/node_modules/@babel/runtime/helpers/asyncToGenerator.js:19:12)
                      at Peer.exec (/usr/local/lib/node_modules/xo-cli/node_modules/json-rpc-peer/dist/index.js:182:20)
                  
                  florentF 1 Reply Last reply Reply Quote 0
                  • florentF Offline
                    florent Vates 🪐 XO Team @severhart
                    last edited by florent

                    @severhart the sesparse format is used for disks greater than 2TB, and for all disks since ESXi 6.5, and it's not very well documented:
                    https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.storage.doc/GUID-88E5A594-DEBC-4662-812F-EA421591C70F.html
                    We are working on implementing this reader to allow those migrations during Q1 2023.

                    For now, ESXi 6.5+ VMs are limited to the migration of stopped VMs without snapshots.

                    S 2 Replies Last reply Reply Quote 0
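
                    If you're not sure whether a given VM still has sesparse snapshot deltas, one way to check (a sketch; it assumes SSH access to the ESXi host, and the datastore/VM folder names are placeholders) is to look for *-sesparse.vmdk files next to the VM, the same file-name pattern shown in the error above:

                        # on the ESXi host: sesparse snapshot deltas are named <vm>-00000N-sesparse.vmdk
                        ls /vmfs/volumes/<datastore>/<vm-folder>/*-sesparse.vmdk

                        # if any are listed, delete/consolidate the VM's snapshots in vSphere and shut it
                        # down before retrying the import, as described above
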
                    • S Offline
                      severhart @florent
                      last edited by

                      @florent thanks for the fast reply! I will take the outage and rerun, and let you know the outcome.

                      1 Reply Last reply Reply Quote 0
                      • olivierlambertO Offline
                        olivierlambert Vates 🪐 Co-Founder CEO
                        last edited by

                        Also, 2TiB+ disks can't be imported, since the XCP-ng default storage stack is limited to 2TiB per disk.

                        planedropP 1 Reply Last reply Reply Quote 0
                        • planedropP Offline
                          planedrop Top contributor @olivierlambert
                          last edited by

                          @olivierlambert Would love to see some way to have it import larger than 2TiB disks as multiple disks in XCP-ng, since most OSes just let you span drives anyway.

                          Just realized I may not be able to leave VMware with this method, since one of the disks on a VM I'm trying to move is over 3TiB.

                          1 Reply Last reply Reply Quote 0
                          • olivierlambertO Offline
                            olivierlambert Vates 🪐 Co-Founder CEO
                            last edited by

                            We could detect 2TiB+ drives and create a raw disk on our side, but it won't support snapshots or live storage migration.

                            Only another format (used in SMAPIv3) will allow us to solve this.

                            planedropP 1 Reply Last reply Reply Quote 0
                            • planedropP Offline
                              planedrop Top contributor @olivierlambert
                              last edited by

                              @olivierlambert Gotcha, this makes sense.

                              Is there any way to skip a specific drive with this migration script? I'm thinking I could skip the larger-than-2TiB disk, create 2 x 2TiB disks after migration, span them in Windows, and then copy the data over manually from the VMware VM.

                              1 Reply Last reply Reply Quote 0
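
                              For the Windows side of that workaround, the two new 2TiB disks can be joined into one spanned volume with diskpart once they show up in the guest (a sketch; the disk and volume numbers are examples, so check them with list disk / list volume first, and note a spanned dynamic volume has no redundancy):

                                  diskpart
                                  list disk
                                  select disk 1
                                  convert dynamic
                                  select disk 2
                                  convert dynamic
                                  rem create a volume using all free space on disk 1, then extend it onto disk 2
                                  create volume simple disk=1
                                  list volume
                                  select volume 3
                                  extend disk=2
                                  format fs=ntfs quick label=data
                                  assign letter=E
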
                              • olivierlambertO Offline
                                olivierlambert Vates 🪐 Co-Founder CEO
                                last edited by

                                I think we could probably skip it (since it's likely not a system disk) so you can then manually copy the rest the way you prefer. We should probably add an option like "just skip 2TiB+ disks without failing".

                                planedropP 2 Replies Last reply Reply Quote 0
                                • planedropP Offline
                                  planedrop Top contributor @olivierlambert
                                  last edited by

                                  @olivierlambert Yes, I think that would be great; it would be a good workaround for people who have larger-than-2TiB disks.

                                  1 Reply Last reply Reply Quote 0
                                  • planedropP Offline
                                    planedrop Top contributor @olivierlambert
                                    last edited by

                                    @olivierlambert Also, do you know: if a disk is thick provisioned at over 2TiB but the actual data usage on it is only around 1TiB, will the script still fail, or will it just create a 1TiB disk?

                                    1 Reply Last reply Reply Quote 0
                                    • olivierlambertO Offline
                                      olivierlambert Vates 🪐 Co-Founder CEO
                                      last edited by

                                      The problem with doing that automatically is that you can have bad surprises. We'll probably just skip it after adding the option.

                                      1 Reply Last reply Reply Quote 1
                                      • S Offline
                                        severhart @florent
                                        last edited by severhart

                                        @florent none of the drives are over 2TB; the largest is 900GB, so I assume it is due to snapshots?

                                        Drives are
                                        127GB
                                        900GB
                                        325GB
                                        250GB
                                        500GB

                                        1 Reply Last reply Reply Quote 0
                                        • olivierlambertO Offline
                                          olivierlambert Vates 🪐 Co-Founder CEO
                                          last edited by

                                          @severhart as Flo said, it's because of your VMware version (6.5+), which is using another "diff" format that is not yet supported. In your case, you should do a cold migration for now, until we support this diff format 🙂

                                          M 1 Reply Last reply Reply Quote 0
                                          • M Offline
                                            magicker @olivierlambert
                                            last edited by olivierlambert

                                            Total noob here jumping in at the deep end.

                                            I get

                                             xo-cli vm.importFromEsxi host=xxx.xxx.xxx.xxx user=w...w password='u .... l' sslVerify=false vm=16 network=a1044bf9-4c06-8ae0-060c-e3462dd4524f sr=9b465ed4-e6d2-7a67-b5e0-5edc4915adac stopSource=true thin=true  
                                            
                                            ✖ Cannot read properties of undefined (reading 'stream')
                                            JsonRpcError: Cannot read properties of undefined (reading 'stream')
                                                at Peer._callee$ (/opt/xo/xo-builds/xen-orchestra-202302081722/node_modules/json-rpc-peer/dist/index.js:139:44)
                                                at tryCatch (/opt/xo/xo-builds/xen-orchestra-202302081722/node_modules/@babel/runtime/helpers/regeneratorRuntime.js:44:17)
                                                at Generator.<anonymous> (/opt/xo/xo-builds/xen-orchestra-202302081722/node_modules/@babel/runtime/helpers/regeneratorRuntime.js:125:22)
                                                at Generator.next (/opt/xo/xo-builds/xen-orchestra-202302081722/node_modules/@babel/runtime/helpers/regeneratorRuntime.js:69:21)
                                                at asyncGeneratorStep (/opt/xo/xo-builds/xen-orchestra-202302081722/node_modules/@babel/runtime/helpers/asyncToGenerator.js:3:24)
                                                at _next (/opt/xo/xo-builds/xen-orchestra-202302081722/node_modules/@babel/runtime/helpers/asyncToGenerator.js:22:9)
                                                at /opt/xo/xo-builds/xen-orchestra-202302081722/node_modules/@babel/runtime/helpers/asyncToGenerator.js:27:7
                                                at new Promise (<anonymous>)
                                                at Peer.<anonymous> (/opt/xo/xo-builds/xen-orchestra-202302081722/node_modules/@babel/runtime/helpers/asyncToGenerator.js:19:12)
                                                at Peer.exec (/opt/xo/xo-builds/xen-orchestra-202302081722/node_modules/json-rpc-peer/dist/index.js:182:20)
                                            
                                            florentF 1 Reply Last reply Reply Quote 0