XCP-ng

Migrating complicated Windows file servers from VMware quickly?

• vaewyn

We have several large file servers (each has 20+ disks that are 2-4 TB in size). VMware currently connects via iSCSI to the SAN they live on and exposes them as virtual disks to the VMs... and therein lies the problem: VMFS 😕 I can't just attach those iSCSI targets to XCP-ng and have them show up as happy disks on the new system.

I have room on other SANs or on XOSTOR for the new disks... but I can think of only one (sketchy) way to do a conversion that won't take DAYS of downtime to complete. I'm also wondering whether, even if I moved the datastores from the external SANs to vSAN, the migration process would have a prayer of completing at all.

The only idea I have come up with, and yes... this seems VERY stupid, is to (rough command sketch after the list):
1. Expose some iSCSI targets from a SAN directly to the file server VM
2. Set up an OS RAID mirror for each disk
3. Let them finish mirroring in the OS
4. Remove the old drives from the mirror
5. Shut down the VM
6. Migrate the VM to our XCP-ng cluster
7. Bring the VM back up
8. Add new disks coming from XOSTOR to the VM
9. Add the new disks as mirror members in the OS RAID mirror
10. Wait for the mirror to complete
11. Remove the iSCSI disks on the SAN from the mirror
12. Destroy the mirror and return it to a standard disk setup
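For anyone who hasn't done the Windows side of this before, here is a minimal sketch of what the mirror-and-break cycle (steps 2-4, and again 9-12 with the roles swapped) looks like in diskpart. The disk and volume numbers are placeholders for the example, not from a real server, and dynamic-disk mirrors are an old (now deprecated) feature, so test on a scratch VM first:

    diskpart
    list disk
    select disk 1          (existing data disk - placeholder number)
    convert dynamic        (both halves of a mirror must be on dynamic disks; existing data is preserved)
    select disk 3          (new iSCSI disk presented to the guest - placeholder number)
    convert dynamic
    select volume 2        (the existing data volume - placeholder number)
    add disk=3             (mirrors the selected volume onto disk 3; the data stays intact)
    list volume            (repeat until the mirror shows Healthy instead of Rebuilding)
    break disk=1 nokeep    (splits the mirror and discards one half as free space; confirm which disk number to pass on a test VM first, or use Disk Management's "Remove Mirror" and pick the disk there)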

Looking for feedback on my level of stupidity 😅 and wondering if someone has been down this road before.

• rtjdamen @vaewyn

@vaewyn So your disks are not VMDKs but directly attached to the VM? Wouldn't it work to just connect those iSCSI targets directly from the SAN to the VM itself? If I understand your situation correctly, there wouldn't be a reason to migrate them at all. Otherwise, your idea of creating a mirror could work as well.

We did have the same issue, and we decided to create a new machine and use robocopy for the migration... not ideal, but this way it only took a few days of preseeding the file server and a quick last sync when doing the migration itself.
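Not from the thread, just to make that preseed-plus-final-sync pattern concrete: a first robocopy pass like the one below can run while the old server is still live, and rerunning the exact same command at cutover only copies whatever changed since. The server name, paths, and thread count are made-up placeholders.

    rem preseed pass - old file server still in production, rerun as often as needed
    robocopy \\oldfs01\D$ D:\ /MIR /COPYALL /B /DCOPY:DAT /R:1 /W:1 /MT:32 /XD "System Volume Information" "$RECYCLE.BIN" /LOG:C:\robologs\preseed-D.log

    rem cutover pass - take the old shares offline first, then one last quick sync
    robocopy \\oldfs01\D$ D:\ /MIR /COPYALL /B /DCOPY:DAT /R:1 /W:1 /MT:32 /XD "System Volume Information" "$RECYCLE.BIN" /LOG:C:\robologs\final-D.log

/COPYALL keeps the NTFS ACLs and ownership, and /MIR also deletes anything on the destination that was removed on the source between passes.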

It's a hard one, but I believe it would be possible.

• vaewyn @rtjdamen

@rtjdamen They are currently VMware datastores (VMFS) that have drives on them (VMDKs) attached to the VMs, so 😞 I can easily expose the iSCSI targets, but neither Windows nor XCP-ng understands what to do with a datastore target from VMware.

• Shufflebox @vaewyn

I like the idea of exposing a new LUN to the VMware guest via iSCSI. That would get you around the 2 TB VDI limitation on your XCP-ng cluster, and let you just detach from the old host and reattach on the new one, all via iSCSI.
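Just to make the in-guest part concrete, logging a Windows VM into a SAN LUN with the built-in iscsicli tool looks roughly like this; the portal address and the IQN are invented for the example, and the iSCSI Initiator GUI does the same job:

    rem make sure the Microsoft iSCSI Initiator service starts and is running
    sc config MSiSCSI start= auto
    sc start MSiSCSI
    rem point the initiator at the SAN portal, see what it offers, then log in
    iscsicli QAddTargetPortal 10.0.0.50
    iscsicli ListTargets
    iscsicli QLoginTarget iqn.2005-10.com.examplesan:fileserver-data01

A quick login like that doesn't survive a reboot on its own; iscsicli PersistentLoginTarget (or the "favorite targets" option in the GUI) is what makes it stick.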

The part that feels a bit risky is setting up the Windows RAID mirror on an existing drive that already has data on it. I don't work with Windows Server all that often, so maybe it's easy and it knows not to risk the existing data. I would probably replace that step with some robocopy tasks.

• rtjdamen @vaewyn

@vaewyn So they are not raw device mappings? If they were, the disks would already be NTFS-formatted and mapping them directly should not be an issue. I am not familiar with another option for this in VMware; maybe it is worth creating a test case for it.

• rtjdamen @Shufflebox

@Shufflebox Agreed.

• vaewyn @rtjdamen

@rtjdamen Correct... in VMware, RDMs created quite a few issues, so they are all VMFS datastores with VMDKs inside them that are then exposed to the VM as drives.

Unfortunately, this is like the ONLY thing that RDMs would actually shine for 😟

• rtjdamen @vaewyn

@vaewyn Another reason for leaving VMware 😉 I would go for robocopy then.
