XCP-ng

    VM qcow2 --> vhd Drive Migration Failures

    Migrate to XCP-ng
    9 Posts 4 Posters 131 Views
    • cichy
      last edited by cichy

      Hello again ..

      I have batch-exported a series of *.qcow2 disk images from Proxmox to remote NFS storage:

      # Export each Proxmox disk (here a ZFS zvol) to qcow2 on the NFS share
      qemu-img convert -O qcow2 /dev/zvol/zpool1/name-of-vm-plus-id /mnt/pve/nfs-share/dump/name-of-vm-plus-id.qcow2
      

      I then converted those exported images to *.vhd using the qemu-img command:

      # Convert qcow2 to VHD ("vpc" is qemu-img's name for the VHD format)
      qemu-img convert -O vpc /mnt/pve/nfs-share/dump/name-of-vm-plus-id.qcow2 /mnt/pve/nfs-share/dump/'uuid-target-vdi-xcp-ng'.vhd
      

      Finally, I attempted to import the 'uuid-target-vdi-xcp-ng'.vhd images into XCP-ng, and they fail every single time. When I comb the logs for errors from within XO, there is nothing (see screenshot), which is very strange.

      Screenshot 2025-09-02 at 1.29.50 PM.png

      I originally was referencing this post when searching for a means to convert Proxmox disks to XCP-ng. Prior to finding the forum post, I did peruse and review the official docs here.

      Hopefully there's a simple way for me to view the logs and understand why XCP-ng is failing to import the converted *.vhds.

      Thanks in advance for your help! πŸ™
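
      Edit: for anyone landing here later, the host itself keeps logs even when XO shows nothing. A sketch of where to look, assuming SSH access to the XCP-ng host (I believe these are the stock XAPI and storage-manager log locations):

      # On the XCP-ng host, over SSH
      tail -n 200 /var/log/xensource.log   # XAPI log
      tail -n 200 /var/log/SMlog           # storage manager (SM) log, covers VDI/VHD operations
      grep -i error /var/log/SMlog         # quick scan for failures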

    • Danp Pro Support Team @cichy
        last edited by

        @cichy The docs specifically mention that you need to repair the VHD files following the rsync operation.
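
        Something like this on the host once the files have been copied over (a sketch; the SR path and file name are placeholders for your SR and image):

        # On the XCP-ng host, after the rsync
        vhd-util repair -n /run/sr-mount/<SR-UUID>/<image>.vhd
        vhd-util check -n /run/sr-mount/<SR-UUID>/<image>.vhd   # verify the file now passes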

        • cichy @Danp
          last edited by cichy

          @Danp right, this:

          • Use rsync to copy the converted files (VHD) to your XCP-ng host.
          • After the rsync operation, the VHD are not valid for the XAPI, so repair them

          But rsync to where? @AtaxyaNetwork mentions /run/sr-mount/uuid-of-vm but I don't see a folder there for the UUID of the VM I created. Likely because it resides on an NFS mount? Or does the *.vhd just need to exist somewhere on the XCP-ng host?

          Confused.

          I want the *.vhds to be on the NFS mount/share. Though I could rsync to a local folder, then migrate once added?

            • cichy @cichy
              last edited by

            Also, where does XCP-ng mount NFS volumes? Not in /mnt or /media and I cannot seem to find anything in /dev either. If I am going to rsync the image to the host for repair I need to be able to access the files on the NFS mount.

            • olivierlambert Vates πŸͺ Co-Founder CEO
              last edited by olivierlambert

              It's in /run/sr-mount/<SR-UUID> (not VM UUID!!)
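
              If you need to find the SR UUID, xe can list it (a quick sketch, run on the host):

              # List NFS SRs and their UUIDs
              xe sr-list type=nfs params=uuid,name-label
              ls /run/sr-mount/<SR-UUID>    # the VHDs for that SR live here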

              • cichy @olivierlambert
                last edited by

                @olivierlambert literally in the midst of replying! Yes, got it. 🀦

                After reading over the reference post I realized sr-mount (i.e. Storage Repo πŸ™„) was where all my *.vhds are. Duh. So, now only one question remains for me ..

                Do I copy the qcow2 to my "buffer" machine (as noted in docs), perform the conversion to vhd there, and then rsync the resulting output to the NFS share dir identified under sr-mount, or to the XCP-ng host directly? There shouldn't be any difference here as they are the same NFS mount.

                Sorry, one additional follow-up: do I name the vhd image exactly the same as the existing 'dummy' vhd that was created with the VDI? Following this, do I then disconnect the drive from the VM and re-import it?

                Perhaps my confusion is around the fact that I have already created a VDI for the drive I am migrating. As the docs say not to do this until after the image has been converted.
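
                My best guess at how this is meant to work, sketched below with placeholder UUIDs (on file-based SRs the backing file of each VDI is named <VDI-UUID>.vhd, as far as I can tell):

                # On the host: find the UUID of the 'dummy' VDI
                xe vdi-list sr-uuid=<SR-UUID> params=uuid,name-label
                # Drop the converted image in under that name, then rescan the SR
                cp converted.vhd /run/sr-mount/<SR-UUID>/<VDI-UUID>.vhd
                xe sr-scan uuid=<SR-UUID>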

                • cichy
                  last edited by cichy

                  Okay. I have cracked this nut!

                  Answered all my questions by muddling my way through things.

                  Will document my steps after I finish the next batch of VMs/VDIs. I will say though, after having figured this out and looking for ways to automate and streamline the process, this is hella labour intensive! Honestly, for many of these VMs it is less work to re-create them from scratch. I wish there were a way as easy as VMware. 🀦

                  In short, I re-read the docs over and over. Then I followed the advice of a "staging/buffer" environment to carry out all of the rsync and qemu-img convert tasks. The tricky part (for me) was locating the UUID info for the disk images. I copied the converted images via rsync to the local datastore. I booted my first VM and all checked out. However, I am not able to migrate the VDI off of the local sr-mount repo and onto my NAS NFS volume; it fails every time.
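
                  (A CLI fallback I may try for the failing migration, assuming the VM is halted and with placeholder UUIDs: copy the VDI to the NFS SR instead of migrating it, then attach the copy.)

                  xe vdi-copy uuid=<VDI-UUID> sr-uuid=<NFS-SR-UUID>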

                  • AtaxyaNetwork Ambassador @cichy
                    last edited by

                    @cichy the documentation for the KVM/Proxmox part is really weird, with unnecessary steps

                    Normally you just have to do:

                    # Generate one UUID so the convert and the scp refer to the same file
                    UUID=$(uuidgen)
                    qemu-img convert -O vpc proxmox-disk.qcow2 "$UUID.vhd"
                    scp "$UUID.vhd" ip.xcp-ng.server:/run/sr-mount/uuid-of-your-SR/
                    

                    Then with XO, create a VM and attach the disk
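
                    One caveat, if I remember right: the copied file may not show up in XO until the SR is rescanned, e.g.:

                    xe sr-scan uuid=uuid-of-your-SR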

                    I really need to find some time to finish the rework of this part of the docs; I'm sorry I didn't do it earlier

                    • cichy @AtaxyaNetwork
                      last edited by

                      @AtaxyaNetwork I appreciate the sentiment. I think this one is all on me, as pointed out by @Danp ..
                      My VDIs would not register without the repair step. I'm unsure as to why, because the error logs were completely blank within XO. Your post, in conjunction with the docs, was extremely helpful though! πŸ™
