XCP-ng

    Warm migration - abort and monitor

    Xen Orchestra
    8 Posts 2 Posters 585 Views
    • KPS Top contributor

      Hi!

      I just tried your new "warm migration" feature, and I have some questions:

      • Is there any way to abort or monitor the migration task?
      • What is the expected migration speed? I was not able to push more than 100 Mbit/s.

      Thank you and best wishes
      KPS
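As an editorial aside on the numbers above: 100 Mbit/s works out to about 12.5 MB/s, so at that rate even a modest virtual disk takes hours to move. A quick sketch of the arithmetic (the 100 GB disk size is a made-up example, not from the thread):

```python
# Convert the reported migration speed to bytes per second (1 byte = 8 bits).
rate_mbit_s = 100
rate_mb_s = rate_mbit_s / 8          # 12.5 MB/s

# Time to move a hypothetical 100 GB virtual disk at that rate.
disk_gb = 100
seconds = disk_gb * 1000 / rate_mb_s # 8000 s
hours = seconds / 3600               # ~2.2 hours

print(f"{rate_mb_s} MB/s, ~{hours:.1f} hours for {disk_gb} GB")
```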

      • olivierlambert Vates 🪐 Co-Founder CEO

        Hello,

        Until we have "XO tasks" (a work in progress!), there's no way to get "overall" monitoring of the progress.

        However, you can check the progress of each individual XCP-ng task in the XO "Tasks" view. What takes most of the time is the initial replication.

        Regarding the speed, remember it can be limited by the Dom0 VHD export speed. What hardware do you have on both the source and the destination? Also, the traffic has to flow through your XO, so depending on where it sits in your network, it can also be a bottleneck.

        • KPS Top contributor @olivierlambert

          @olivierlambert
          The cluster is a mid-range system with Xeon Gold 6154 CPUs, 192 GB of memory and 10 GbE NICs. XOA is running on the same host as the source VM. There is nearly zero load on the systems. Have you seen speeds far above 100 Mbit/s?

          • olivierlambert Vates 🪐 Co-Founder CEO

            I could get more than this on my setup, yes. What about the destination host?

            • KPS Top contributor @olivierlambert

              @olivierlambert
              The destination host has the same specs. They are connected with 10 GbE on three paths. Local storage is Micron 9300 SSDs.

              Edit: That is the same speed I can achieve with StorageMigration.

              • olivierlambert Vates 🪐 Co-Founder CEO

                That is the same speed I can achieve with StorageMigration.

                Then XO isn't the bottleneck 🙂 It can be anything: Dom0 usage (CPU/memory), SR usage/speed, max CPU frequency/efficiency, and so on.

                • KPS Top contributor @olivierlambert

                  @olivierlambert
                  My problem is that I cannot see the bottleneck:
                  tapdisk on source and destination is at about 8/17%
                  System CPU is <5%
                  top shows I/O wait at <2%
                  The network peaks at only 135 Mbit/s, but if I transfer data manually, I get about 10 Gbit/s:

                  [ ID] Interval           Transfer     Bitrate
                  [  5]   0.00-4.00   sec   450 MBytes   944 Mbits/sec                  receiver
                  [  8]   0.00-4.00   sec  1.12 GBytes  2.40 Gbits/sec                  receiver
                  [ 10]   0.00-4.00   sec  1.48 MBytes  3.10 Mbits/sec                  receiver
                  [ 12]   0.00-4.00   sec  3.23 MBytes  6.78 Mbits/sec                  receiver
                  [ 14]   0.00-4.00   sec  2.85 MBytes  5.98 Mbits/sec                  receiver
                  [ 16]   0.00-4.00   sec  3.37 MBytes  7.08 Mbits/sec                  receiver
                  [ 18]   0.00-4.00   sec  2.30 MBytes  4.82 Mbits/sec                  receiver
                  [ 20]   0.00-4.00   sec   410 MBytes   861 Mbits/sec                  receiver
                  [ 22]   0.00-4.00   sec  2.18 MBytes  4.58 Mbits/sec                  receiver
                  [ 24]   0.00-4.00   sec  12.9 MBytes  27.1 Mbits/sec                  receiver
                  [ 26]   0.00-4.00   sec   425 MBytes   890 Mbits/sec                  receiver
                  [ 28]   0.00-4.00   sec  2.69 MBytes  5.63 Mbits/sec                  receiver
                  [ 30]   0.00-4.00   sec  2.68 MBytes  5.62 Mbits/sec                  receiver
                  [ 32]   0.00-4.00   sec  1.13 GBytes  2.42 Gbits/sec                  receiver
                  [ 34]   0.00-4.00   sec  1.21 GBytes  2.61 Gbits/sec                  receiver
                  [ 36]   0.00-4.00   sec  3.46 MBytes  7.26 Mbits/sec                  receiver
                  [ 38]   0.00-4.00   sec  3.87 MBytes  8.13 Mbits/sec                  receiver
                  [ 40]   0.00-4.00   sec  2.98 MBytes  6.24 Mbits/sec                  receiver
                  [ 42]   0.00-4.00   sec  3.39 MBytes  7.11 Mbits/sec                  receiver
                  [ 44]   0.00-4.00   sec  12.1 MBytes  25.4 Mbits/sec                  receiver
                  [SUM]   0.00-4.00   sec  4.77 GBytes  10.2 Gbits/sec                  receiver
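A note on reading this iperf output (an editorial aside): the 10.2 Gbit/s on the [SUM] line is the aggregate over 20 parallel streams, and the individual streams are wildly uneven, ranging from ~3 Mbit/s up to ~2.6 Gbit/s. Since a migration export is, as far as I know, a single TCP stream, the per-stream figures are the more relevant ceiling. A small sketch that aggregates the per-stream receiver bitrates listed above:

```python
# Per-stream receiver bitrates from the iperf run above, in Mbit/s
# (the Gbit/s values have been converted to Mbit/s).
streams = [
    944, 2400, 3.10, 6.78, 5.98, 7.08, 4.82, 861, 4.58, 27.1,
    890, 5.63, 5.62, 2420, 2610, 7.26, 8.13, 6.24, 7.11, 25.4,
]

total_gbit = sum(streams) / 1000
print(f"aggregate: {total_gbit:.1f} Gbit/s")    # ~10.2 Gbit/s, matching the SUM line
print(f"fastest stream: {max(streams)} Mbit/s") # 2610 Mbit/s
print(f"slowest stream: {min(streams)} Mbit/s") # 3.1 Mbit/s
```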
                  

                  But, of course, the issue is not related to your warm migration feature: it is the same for Storage Migration.

                  • olivierlambert Vates 🪐 Co-Founder CEO

                    It's really hard to say precisely what your bottleneck is without spending more time on a closer investigation of your setup 😞
