XCP-ng

    VMware migration tool: we need your feedback!

• florent (Vates 🪐 XO Team 🔭) @brezlord

@brezlord MAC address and UEFI should work now.

• brezlord @florent

@florent I will rebuild XO and re-import and give feedback. Thanks.

• olivierlambert (Vates 🪐 Co-Founder 🦸 CEO 🧑‍💼)

It's now available directly on master (from the sources) or on the latest XOA release channel 🙂 (I updated the first post accordingly).

• planedrop @olivierlambert

@olivierlambert Awesome, super exciting stuff!

• brezlord @florent

@florent Everything is working now, thanks. I have successfully migrated Windows and Linux VMs.

• olivierlambert (Vates 🪐 Co-Founder 🦸 CEO 🧑‍💼)

Perfect! Now expect all of this but in a simple wizard in the UI; that will be a great tool for everyone who wants to make the migration (and we hope to get a decent share of those users with our new support bundle).

• severhart

After running for hours, I get the following:

    ✖ sesparse Vmdk reading is not functionnal yet FP-FileServer/FP-FileServer-000001-sesparse.vmdk
    JsonRpcError: sesparse Vmdk reading is not functionnal yet FP-FileServer/FP-FileServer-000001-sesparse.vmdk
        at Peer._callee$ (/usr/local/lib/node_modules/xo-cli/node_modules/json-rpc-peer/dist/index.js:139:44)
        at tryCatch (/usr/local/lib/node_modules/xo-cli/node_modules/@babel/runtime/helpers/regeneratorRuntime.js:44:17)
        at Generator.<anonymous> (/usr/local/lib/node_modules/xo-cli/node_modules/@babel/runtime/helpers/regeneratorRuntime.js:125:22)
        at Generator.next (/usr/local/lib/node_modules/xo-cli/node_modules/@babel/runtime/helpers/regeneratorRuntime.js:69:21)
        at asyncGeneratorStep (/usr/local/lib/node_modules/xo-cli/node_modules/@babel/runtime/helpers/asyncToGenerator.js:3:24)
        at _next (/usr/local/lib/node_modules/xo-cli/node_modules/@babel/runtime/helpers/asyncToGenerator.js:22:9)
        at /usr/local/lib/node_modules/xo-cli/node_modules/@babel/runtime/helpers/asyncToGenerator.js:27:7
        at new Promise (<anonymous>)
        at Peer.<anonymous> (/usr/local/lib/node_modules/xo-cli/node_modules/@babel/runtime/helpers/asyncToGenerator.js:19:12)
        at Peer.exec (/usr/local/lib/node_modules/xo-cli/node_modules/json-rpc-peer/dist/index.js:182:20)
                    
• florent (Vates 🪐 XO Team 🔭) @severhart

@severhart The sesparse format is used for disks greater than 2TB, and for all disks after ESXi 6.5, and it's not well documented:
https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.storage.doc/GUID-88E5A594-DEBC-4662-812F-EA421591C70F.html
We are working on implementing this reader to allow these migrations during Q1 2023.

For now, ESXi 6.5+ VMs are limited to the migration of stopped VMs without snapshots.
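
To check up front whether a VM will trip this error, here is a minimal sketch that flags SE sparse delta disks from a listing of the VM's datastore files. The file names below come from the log above; how you obtain the listing is up to you.

    # Minimal sketch: spot the SE sparse delta disks that the importer
    # cannot read yet. On ESXi 6.5+, snapshot deltas are stored as
    # '<vm>-NNNNNN-sesparse.vmdk', the pattern in the error above.
    def find_sesparse_disks(datastore_files):
        return [f for f in datastore_files if f.endswith("-sesparse.vmdk")]

    files = [
        "FP-FileServer/FP-FileServer.vmdk",                  # base disk
        "FP-FileServer/FP-FileServer-000001-sesparse.vmdk",  # snapshot delta
    ]
    for disk in find_sesparse_disks(files):
        print("consolidate or delete snapshots before migrating:", disk)

Consolidating or deleting the VM's snapshots merges those deltas back into the base disks, which the importer can read.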

• severhart @florent

@florent Thanks for the fast reply! I will take the outage, rerun, and let you know the outcome.

• olivierlambert (Vates 🪐 Co-Founder 🦸 CEO 🧑‍💼)

Also, 2TiB+ disks can't be imported, since XCP-ng's default storage stack tops out at 2TiB per disk.
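
If you want to check your disks against that cap before starting, here is a minimal sketch that reads a VMDK text descriptor, whose extent lines record sizes in 512-byte sectors, and warns when the virtual size exceeds 2TiB. The descriptor path is hypothetical.

    # Minimal sketch: parse a VMDK *text descriptor* (not the binary
    # -flat/-sesparse data file) and warn when a disk exceeds the 2TiB
    # cap of the default XCP-ng storage stack.
    import re

    VHD_LIMIT = 2 * 1024**4  # 2 TiB in bytes

    def vmdk_capacity_bytes(descriptor_text):
        # Extent lines look like: RW 16777216 VMFS "disk-flat.vmdk"
        # The number is the extent size in 512-byte sectors.
        sectors = sum(
            int(m.group(1))
            for m in re.finditer(r'^(?:RW|RDONLY|NOACCESS)\s+(\d+)\s',
                                 descriptor_text, re.M)
        )
        return sectors * 512

    with open("FP-FileServer/FP-FileServer.vmdk") as f:  # hypothetical path
        size = vmdk_capacity_bytes(f.read())
    if size > VHD_LIMIT:
        print(f"{size / 1024**4:.2f} TiB virtual size: too big for one VHD")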

• planedrop @olivierlambert

@olivierlambert Would love to see some way to have it import larger-than-2TiB disks as multiple disks in XCP-ng, since most OSes just let you span drives anyway.

Just realized I may not be able to leave VMware with this method, since one of the disks on a VM I'm trying to move is over 3TiB.

• olivierlambert (Vates 🪐 Co-Founder 🦸 CEO 🧑‍💼)

We could detect 2TiB+ drives and create a raw disk on our side, but it won't support snapshots nor live storage migration.

Only another format (used in SMAPIv3) will allow us to solve this.

• planedrop @olivierlambert

@olivierlambert Gotcha, this makes sense.

Is there any way to skip a specific drive with this migration script? I'm thinking I could skip the larger-than-2TiB disk and then just create 2 x 2TiB disks after migration, span them in Windows, and then copy the data manually from the VMware VM.

• olivierlambert (Vates 🪐 Co-Founder 🦸 CEO 🧑‍💼)

I think we could probably skip it (since it's likely not a system disk) so you can then manually copy the rest the way you prefer. We should probably add an option like "just skip 2TiB+ disks without failing".
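
Conceptually, that option could behave like the minimal sketch below; the disk names and sizes are hypothetical, and the real importer option, if added, may look different.

    # Minimal sketch of a "skip 2TiB+ disks without failing" option:
    # split a VM's disks into an importable set and a skipped set,
    # warning instead of aborting.
    TIB = 1024**4
    LIMIT = 2 * TIB

    def plan_import(disks):
        """disks: mapping of disk name -> virtual size in bytes."""
        importable = {n: s for n, s in disks.items() if s <= LIMIT}
        for name, size in disks.items():
            if size > LIMIT:
                print(f"warning: skipping {name} ({size / TIB:.1f} TiB > 2 TiB)")
        return importable

    # Example: the 3TiB data disk is skipped, the system disk goes through.
    plan_import({"system.vmdk": 120 * 1024**3, "data.vmdk": 3 * TIB})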

• planedrop @olivierlambert

@olivierlambert Yes, I think that would be great; it would be a good workaround for people who have larger-than-2TiB disks.

• planedrop @olivierlambert

@olivierlambert Also, do you know: if a disk is over 2TiB thick provisioned but its actual data usage is more like 1TiB, will the script still fail, or will it just create the 1TiB disk?

• olivierlambert (Vates 🪐 Co-Founder 🦸 CEO 🧑‍💼)

The problem with doing that automatically is that you can have bad surprises. We'll probably just skip it after adding the option.

• severhart @florent

@florent None of the drives are over 2TB; the largest is 900GB, so I will assume it is due to snapshots?

Drives are:
127GB
900GB
325GB
250GB
500GB

• olivierlambert (Vates 🪐 Co-Founder 🦸 CEO 🧑‍💼)

@severhart As Flo said, it's because your VMware version (6.5+) uses another "diff" format that isn't supported yet. In your case, you should do a cold migration for now, until we support this diff format 🙂
