XCP-ng

    V2V - Stops at 99%

    Migrate to XCP-ng
    14 Posts 5 Posters 360 Views 4 Watching
    • Danp Pro Support Team @dnordmann

      @dnordmann said in V2V - Stops at 99%:

      you mention "xo should say something about " nbdkit logs of ${diskPath} are in /tmp/xo-serverxxxx" do you know where exactly I can pull those? Tried looking under /var/log of XOA but not seeing what you mention.

      You can check the XO logs by running the command journalctl -u xo-server -f -n 50 from the CLI of the XOA VM.
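      Once an "nbdkit logs of … are in /tmp/xo-serverXXXX" line shows up in that journal output, the temp directory can be pulled out with a grep like this (a small sketch; the sample line below is a stand-in that mirrors the format XO prints, not real output — in practice you would pipe `journalctl -u xo-server -n 50 --no-pager` into the same grep):

```shell
# Extract the nbdkit temp log directory from an xo-server journal line.
# $line is a hard-coded sample standing in for real journalctl output.
line='Nov 15 14:26:10 xoa xo-server[552]: xo:vmware-explorer:esxi INFO nbdkit logs of [WE-DS] WE-FS1/WE-FS1.vmdk are in /tmp/xo-serverGKTRzl'
printf '%s\n' "$line" | grep -o '/tmp/xo-server[[:alnum:]]*'
# prints: /tmp/xo-serverGKTRzl
```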

      • dnordmann @Danp

        @florent

        Thank you for pointing me in the right direction for those logs.

        I have attached those here.
        The disk that seems to hang at 99% is WE-FS1/WE-FS1_1.vmdk. I see "Error: task has been destroyed before completion" in the logs, but that would have been caused by me restarting the toolstack, since the task just hangs there. I don't see much info or any errors before that.

        I tried migrating this machine a couple of times, so you might see multiple attempts:
        11/14 @ 2:05 ish
        11/14 @ 7:41 ish
        Logs.txt

        • tsukraw

          Did some more testing over the weekend and got some cleaner logs that are fully matched up.

          We ran the command 'journalctl -u xo-server -f -n 50' and see these two entries:

          Nov 15 14:26:10 xoa xo-server[552]: 2025-11-15T19:26:10.935Z xo:vmware-explorer:esxi INFO nbdkit logs of [WE-DS] WE-FS1/WE-FS1.vmdk are in /tmp/xo-serverGKTRzl
          Nov 15 14:26:10 xoa xo-server[552]: 2025-11-15T19:26:10.943Z xo:vmware-explorer:esxi INFO nbdkit logs of [WE-DS] WE-FS1/WE-FS1_1.vmdk are in /tmp/xo-serverCFf19t

          The log files were fairly large and the forum wouldn't allow me to attach them, so I have provided them in a zip on Dropbox if you want to take a look.

          From what I can pick out in the log files, it appears that the transfer from VMware is complete, or at least it looks that way.

          After that I ran these two commands against the import tasks:
          xo-cli rest get tasks/0mi0ogg5f
          xo-cli rest get tasks/0mi0ogg5g

          The output for those is also attached.
          The one thing that stands out in the "importing vms 17" task is that WE-FS1.vmdk shows 'success' while WE-FS1_1.vmdk shows 'pending'.

          Finally, I attached the logs from the host as well.
          I'm not seeing anything that jumps out at me as being wrong there either.

          Zip file of logs:
          https://www.dropbox.com/scl/fi/glxm5tvebf5vjizpnfvmx/Package-of-Logs.zip?rlkey=uyl0kltxhfnfcq0carbrg0lrv&e=1&dl=0

          • dnordmann @tsukraw

            @florent

            Is there anything in the logs he provided that stands out?

            • florent Vates 🪐 XO Team @dnordmann

              @dnordmann @tsukraw
              Thank you for your patience. We found something while working with the XCP storage team: there is an issue with the last block size.

              Can you confirm that the failing VM has at least one disk with a size not aligned to 2 MB?
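              As a quick way to check, divide the disk's size in bytes by the block size and look at the remainder (a sketch assuming "2 MB" here means 2 MiB = 2097152 bytes; SIZE is a placeholder example value, not a real disk — substitute the virtual disk size XO or `stat -c %s` reports):

```shell
# Check whether a disk size in bytes is a multiple of 2 MiB (2097152 bytes).
SIZE=10737418241   # placeholder: 10 GiB plus one byte, i.e. NOT aligned
if [ $((SIZE % 2097152)) -eq 0 ]; then
  echo "aligned"
else
  echo "not aligned"
fi
# prints: not aligned
```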

              Could you test this PR on the failing import ? https://github.com/vatesfr/xen-orchestra/pull/9233

              regards

              PR #9233 (fbeauchamp, vatesfr/xen-orchestra, closed): fix(nbd-client): nbddisk must emit full size block

              • dnordmann @florent

                @florent
                Thanks for the update on this. I have confirmed that, for the 2 clients we are having issues with, the data drive sizes are not divisible by 2 MB.

                How do we go about applying this patch?

                • florent Vates 🪐 XO Team @dnordmann

                  @dnordmann If I remember correctly, you opened a ticket on your XOA. Can you provide us the ticket number so we can patch your XOA?

                  If you are using XO from source, you need to switch branches and then restart xo-server.
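                  For a from-source setup, the branch switch could look roughly like this (a sketch, not official instructions; it assumes a standard git checkout of xen-orchestra and uses the PR number from above — adjust the checkout path, build steps, and restart method to your install):

```shell
# Fetch the PR's head into a local branch, rebuild, and restart.
cd /opt/xen-orchestra                  # wherever your source checkout lives
git fetch origin pull/9233/head:pr-9233
git checkout pr-9233
yarn && yarn build                     # reinstall deps and rebuild
systemctl restart xo-server            # or however you run xo-server
```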

                  • dnordmann @florent

                    @florent

                    Ticket#7747444
                    I just opened another ticket for the other client having the same issue: Ticket#7748053.
                    Support tunnels should be open for both clients.
                    Thanks!

                    • florent Vates 🪐 XO Team @dnordmann

                      @dnordmann said in V2V - Stops at 99%:

                      @florent

                      Ticket#7747444
                      and I just opened another ticket for the other client that is having the same issue. Ticket#7748053.
                      Support tunnels should be open for both clients.
                      Thanks!

                      I deployed the patch on the new client; if it's OK, I will do the second one after.

                      • dnordmann @florent

                        @florent
                        I didn't see your message until now about applying the fix to only one client.

                        I did a warm migration on the client with ticket #7748053, and it completed without issue!
                        I tried a warm migration on the client with ticket #7747444, and it failed again. That's expected, since the patch was not on this one yet.

                        Can you push the patch to the client with ticket #7747444? Thanks!
