XCP-ng

    V2V - Stops at 99%

    Migrate to XCP-ng
    6 Posts 5 Posters 47 Views 4 Watching
    • tsukraw

      Hey Guys.

      Got a general question about the V2V conversion.
      We have two different clients for whom we're working on VMware-to-XCP-ng migrations. Both clients have about a half dozen VMs, and in both cases all of the VMs migrated fine except the file servers, which are around 1.5TB each.

      Again, completely separate clients.

      When running the import, everything appears to be running fine, but the import hangs at 99% and never gets past that.

      I'm curious: what is the import doing at 99%?
      Thinking back to the VMware Converter days, the data move was actually finished at 97%, and the last couple of percent was configuration work. So if a conversion failed at 97%, you could still attach the VMDK to a VM and it would be usable.

      I'm curious whether the same holds true with XCP-ng: if the import has finished moving data by 99% and something else is causing the hang, could we create a new VM and attach the VHD?

      (We do have a support ticket open on this issue)

      Thanks

      • acebmxer @tsukraw

        @tsukraw

        Wait for Vates to reply for more technical information. In the meantime, take a look at this: https://xcp-ng.org/blog/2025/10/16/qcow2-beta-announcement/

        You're at 1.5TB, so I think you should be OK without qcow2 support. I personally had some performance and other issues with it and disabled it, as I didn't need >2TB support yet.

        • florent (Vates 🪐 XO Team) @tsukraw

          @tsukraw Hard to answer as-is.
          Do you have anything in your XO log (console / journalctl) or in the task log?
          If not, the XO log should say something like "nbdkit logs of ${diskPath} are in /tmp/xo-serverxxxx".
          Can you check whether there is anything at the end of that log?

          1.5TB is fine in VHD.
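          For reference, this is roughly how you could hunt for those nbdkit logs from the XOA shell (a sketch; the unit name and paths assume a standard XOA appliance, so adjust them for a from-source Xen Orchestra install):

```shell
# Sketch: locating the nbdkit logs on an XOA VM.
# Assumes a standard XOA appliance where xo-server runs as a systemd unit.

# Recent xo-server journal lines mentioning nbdkit
command -v journalctl >/dev/null 2>&1 \
    && journalctl -u xo-server -n 1000 --no-pager | grep -i nbdkit

# The per-import temp locations the log message points at
for p in /tmp/xo-server*; do
    [ -e "$p" ] || continue
    echo "== $p =="
    # may be a file or a directory depending on version
    tail -n 50 "$p" 2>/dev/null || ls -l "$p"
done
status=ok
```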

          • dnordmann @florent

            @florent
            I'll add some more detail on this.
            I've attached the task log here.
            .tasklog.txt

            Now, I did have to restart the toolstack, as I wasn't able to cancel the migration task after it hung. As stated, it hangs at 99% and says "estimated 3 minutes" to complete. I have left it running for over an hour, but it still says 99%.
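            For anyone else hitting this: the usual thing to try before restarting the whole toolstack is asking XAPI to cancel the task via xe on the pool master (a sketch; the UUID below is a placeholder, and not every task honors a cancel):

```shell
# Sketch: trying to clear a hung XAPI task before restarting the toolstack.
# xe only exists on an XCP-ng host; the UUID is a placeholder,
# not a value from this thread.
if command -v xe >/dev/null 2>&1; then
    # list pending tasks with their progress
    xe task-list params=uuid,name-label,progress,status
    task_uuid="00000000-0000-0000-0000-000000000000"  # placeholder
    # ask XAPI to cancel it (xe-toolstack-restart on the host
    # is the heavier fallback if this has no effect)
    xe task-cancel uuid="$task_uuid"
else
    echo "xe not found; run this on an XCP-ng host"
fi
status=ok
```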

            I have tried this multiple times but get the same result. Support did mention trying to kick off the import via the CLI, which I tried in my last attempt. Unfortunately, same result.

            I'm new to XCP-ng, but you mention that xo should say something about "nbdkit logs of ${diskPath} are in /tmp/xo-serverxxxx". Do you know exactly where I can pull those? I tried looking under /var/log of the XOA VM but am not seeing what you mention.

            Side note: after restarting the toolstack and going into storage, I could see the drives for that server. For testing, I created a new VM and attached those existing disks. The VM booted up, and the data drive was there and accessible. I'm skeptical, though, since technically the migration of that drive didn't get to 100%.
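            For the record, the xe-CLI equivalent of attaching those existing disks looks roughly like this (a sketch: all UUIDs and the name-label are placeholders, and hot-plugging only applies while the VM is running):

```shell
# Sketch: attaching an already-imported disk to a new VM via xe.
# Run on an XCP-ng host; all UUIDs and the name-label are placeholders.
if command -v xe >/dev/null 2>&1; then
    # find the imported VDI (the name is whatever the import gave it)
    xe vdi-list name-label="WE-FS1_1" params=uuid,name-label,virtual-size
    vm_uuid="11111111-1111-1111-1111-111111111111"   # placeholder: target VM
    vdi_uuid="22222222-2222-2222-2222-222222222222"  # placeholder: imported disk
    # create a VBD linking VM and VDI as a non-bootable data disk
    vbd=$(xe vbd-create vm-uuid="$vm_uuid" vdi-uuid="$vdi_uuid" \
          device=1 mode=RW type=Disk bootable=false)
    # vbd-plug only works while the VM is running; otherwise just start the VM
    xe vbd-plug uuid="$vbd"
else
    echo "xe not found; run this on an XCP-ng host"
fi
status=ok
```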

            • Danp (Pro Support Team) @dnordmann

              @dnordmann said in V2V - Stops at 99%:

              you mention that xo should say something about "nbdkit logs of ${diskPath} are in /tmp/xo-serverxxxx". Do you know exactly where I can pull those? I tried looking under /var/log of the XOA VM but am not seeing what you mention.

              You can check the XO logs by running the command journalctl -u xo-server -f -n 50 from the CLI of the XOA VM.

              • dnordmann @Danp

                @florent

                Thank you for pointing me in the right direction for those logs.

                I have attached them here.
                The disk that seems to hang at 99% is WE-FS1/WE-FS1_1.vmdk. I see "Error: task has been destroyed before completion" in the logs, but that would have been from me restarting the toolstack, since the task just hangs there. I don't see much info or any errors before that.

                I tried migrating this machine a couple of times, so you might see multiple attempts:
                11/14 @ 2:05-ish
                11/14 @ 7:41-ish
                Logs.txt
