What should I expect from VM migration performance on XCP-ng?

• nikade (Top contributor)

Are you doing a storage/VDI migration?
The speed of that is "limited" somehow. I don't know why, but migrating VDIs is always slow.

If we migrate 3-4 VMs at the same time we can see it spike to a total of 166 MB/s, but that's about it.
I've even created a report on the Citrix XenServer bug tracker, but they never bothered to do anything about it.
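For readers newer to the xe CLI, the two cases discussed here look roughly like this. This is a minimal sketch only; every UUID, IP, and credential is a placeholder you would substitute for your own:

```
# Plain live migration (memory only; the VDIs stay on a shared SR):
xe vm-migrate uuid=<vm-uuid> host-uuid=<destination-host-uuid> live=true

# Storage/VDI migration to another pool, moving the disks to a target SR.
# remote-master/-username/-password identify the destination pool master.
xe vm-migrate uuid=<vm-uuid> \
  remote-master=<destination-master-ip> \
  remote-username=root remote-password=<secret> \
  destination-sr-uuid=<target-sr-uuid> live=true
```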

• vahric 0 @nikade

@nikade Hi, no storage migration, just moving a VM from one node to another.
This is exactly what I see: almost always 1.24-1.3 Gbit/s, which equals ~166 MB/s.

• olivierlambert (Vates 🪐 Co-Founder & CEO)

Live memory migration (without storage) can reach far higher speeds. I remember numbers close to 8 Gbit/s.

• vahric 0 @olivierlambert

@olivierlambert Hi,

This is the iperf3 performance between the two nodes, 141 and 143, over the migration network (the 10.111.178.x network):

[image: iperf3 results between the two nodes]

I'm sure the MTU is 9000 on the switch, the NICs, and the related sub-interface used for the migration network...

3 VM migrations were kicked off:

[image: migration throughput graphs]

No luck catching higher throughput.

Between two VMs with MTU 1500, I got this:

[image: iperf3 results between two VMs]

So I can't imagine what the issue is here. Maybe there's a Dom0 limiter for VMs, but nothing should limit the migration process itself, should it?

Thanks
VM
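For anyone wanting to reproduce this kind of measurement, here is a minimal sketch of the iperf3 runs between two hosts. The address is a placeholder for your own migration network; -P adds parallel streams, which helps show whether a single TCP stream is the bottleneck:

```
# On the destination node (e.g. 10.111.178.143):
iperf3 -s

# On the source node: one stream first, then 4 parallel streams:
iperf3 -c 10.111.178.143 -t 30
iperf3 -c 10.111.178.143 -t 30 -P 4
```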

• nikade (Top contributor)

When I live migrate a VM without its disk (VDI on a shared SR), we're seeing ~4 Gbit/s over 2x10G NICs in a bond with MTU 9000, within the same broadcast domain/subnet:

[image: live migration throughput]

We've seen around 9 Gbit/s max, but that's when we migrate a large VM with a lot of memory.

• vahric 0 @nikade

Increasing the memory of Dom0 had no effect...
Increasing the vCPUs of Dom0 had no effect (not all vCPUs were in use anyway, but I just wanted to try)...
I ran stress-ng to load the VMs' memory, but it had no effect...
No pinning or NUMA config is needed, because it's a single CPU with a shared L3 cache for all cores...
MTU size has no effect either; it works the same with MTU 1500 and 9000...
I saw and changed tcp_limit_output_bytes (see the sketch below), but it did not help...

The only thing that has an effect is changing the hardware:
My Intel servers with Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz --> 0.9 Gbit/s per migration
My AMD servers with AMD EPYC 7502P 32-Core Processor --> 1.76 Gbit/s per migration

Do you have any advice?
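For reference, a minimal sketch of inspecting and changing that sysctl in dom0. The 1 MiB value is only an illustrative choice, and the change does not survive a reboot unless persisted:

```
# Show the current per-socket TCP small-queue limit (bytes):
sysctl net.ipv4.tcp_limit_output_bytes

# Raise it for the running system only:
sysctl -w net.ipv4.tcp_limit_output_bytes=1048576

# Persist across reboots (appends to dom0's sysctl config):
echo "net.ipv4.tcp_limit_output_bytes=1048576" >> /etc/sysctl.conf
```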

• henri9813 @vahric 0

Hello,

So, does anyone know why I can't go over 145 MB/s, and how to increase the VDI transfer speed?

[image: VDI migration transfer rate]

(I have 5 Gb/s between the nodes.)

Thanks!

• olivierlambert (Vates 🪐 Co-Founder & CEO)

Hi,

I think it's related to how the data is transferred; I wouldn't expect much more.

• nikade (Top contributor) @henri9813

@henri9813 said in What should I expect from VM migration performance on XCP-ng?:

> So, does anyone know why I can't go over 145 MB/s, and how to increase the VDI transfer speed?

Are you migrating the storage or just the VM?
How can you have 5 Gb/s between the nodes? What kind of networking setup are you using?

• henri9813 @nikade

Hello,

Thanks for your answer @nikade.

I migrate both the storage and the VM.

I have 25 Gb/s NICs, but there is a 5 Gb/s limitation at the switch level.

But I found an explanation on another topic: this could be related to SMAPIv1.

Best regards

• nikade (Top contributor) @henri9813

@henri9813 said in What should I expect from VM migration performance on XCP-ng?:

> I migrate both the storage and the VM. [...] this could be related to SMAPIv1.

Yeah, when migrating storage the speeds are pretty much the same no matter what NIC you have...
This is a known limitation, and I hope it will be resolved soon.

• Greg_E @nikade

@nikade

I've spent a bunch of time trying to find some dark magic to make VDI migration faster; so far, nothing. My VM (memory) migration is fast enough that I'm not concerned about it right now, and I don't have any testing to show for it.

Currently migrating the test VDI from storage1 to storage2 (again) and getting an average of 400/400 Mbps (lowercase m and b: megabits). If I do three VDIs at once, I can get over a gigabit, and sometimes close to 2 gigabit.

It's either SMAPIv1 or a file "block" size issue; bigger blocks can get my benchmarks up to 600-700 MBps (capital M and B: megabytes) on my slow storage over a 10 Gbps network. I'm testing this with the XCP-ng 8.3 release to see if anything changed from the beta; so far everything is the same. All testing was done with thin-provisioned file shares (SMB and NFS). If I could get half my maximum test numbers for VDI migration, I'd be happy. In fact, I'm extremely pleased that my storage can go as fast as it's showing; it's all old stuff on SATA.

I have a whole thread on this testing if you want to read more.

[image: migrate-benchmark.png]

You can see the migration, which was 400/400, and then the benchmark across the Ethernet interface of my TrueNAS. This example was a migration from SMB to NFS, with the benchmark on the NFS side. Settings for that NFS share are in the thread mentioned above, and it's certainly my fastest non-real-world performance to date.
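To check the block-size effect described above on your own storage, here is a minimal sketch with fio against a file-based SR mount point. The path and sizes are placeholders; --direct=1 bypasses the page cache so the result reflects the storage rather than dom0 RAM:

```
# Sequential writes at a small and a large block size on the SR mount:
fio --name=bs4k --rw=write --bs=4k --size=2G --direct=1 \
    --directory=/run/sr-mount/<sr-uuid>
fio --name=bs1m --rw=write --bs=1M --size=2G --direct=1 \
    --directory=/run/sr-mount/<sr-uuid>
```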

• nikade (Top contributor) @Greg_E

@Greg_E said in What should I expect from VM migration performance on XCP-ng?:

> It's either SMAPIv1 or a file "block" size issue; bigger blocks can get my benchmarks up to 600-700 MBps on my slow storage over a 10 Gbps network.

That's impressive!
We're not seeing speeds as high as yours; we have 3 different storages, mostly doing NFS though. We're still running 8.2.0, but I don't really think it matters, as the issue is most likely tied to SMAPIv1.

We also noted that it goes a bit faster when doing 3-4 VDIs in parallel, but the individual speed per migration is about the same.

• Greg_E @nikade

@nikade

I'm doing upgrades on both XCP-ng and TrueNAS; when they're done, I'm going to fire up 4 small VMs at the same time and see what I can see. I might be able to hit 2 Gbps reads while all 4 are booting.

I have one last thing to try in my speed testing: seeing whether increasing the RAM for each host makes a difference. Probably not, since each already has a decent amount (the defaults go up once you pass a certain amount of available RAM). I read somewhere that going up to 16GB for each host might help; with my lab being so lightly used, I might try 32GB for fun. I have plenty of free slots in my production system to add RAM if this helps in the lab; the lab machines are full.

• nikade (Top contributor) @Greg_E

@Greg_E said in What should I expect from VM migration performance on XCP-ng?:

> I have one last thing to try in my speed testing: seeing whether increasing the RAM for each host makes a difference.

We usually set our busy hosts to 16GB for dom0. It does make a stability difference in our case (we can have 30-40 running VMs per host), especially when there is a bit of load inside the VMs.
Normal hosts get 4-8GB of RAM depending on the total amount in the host: 4GB on the ones with 32GB, 6GB for 64GB, and 8GB and upwards for the ones with 128GB+.
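For anyone wanting to apply a sizing like this, a minimal sketch of the usual way to resize dom0 memory on XCP-ng. The 16384M value mirrors the busy-host figure above, and the host must be rebooted for it to take effect:

```
# Pin dom0 to a fixed 16 GiB (initial and max), then reboot the host:
/opt/xensource/libexec/xen-cmdline --set-xen dom0_mem=16384M,max:16384M
reboot
```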

• Greg_E @nikade

@nikade

BTW, the RAM increase made zero difference in the VDI migration times.

I may still change the RAM for each host in production, but I only have about 12 VMs right now and probably not many more to create in my little system. It might be useful if I ever get down to a single host because of a catastrophe. It's currently running what it chose as the default, 7.41 GiB; I roughly doubled that in my lab to see what might happen.

I just got my VMware lab started, so all this other stuff is going on the back burner for a while. I will say I like the basic ESXi interface they give you; I can see why smaller setups may not bother buying vCenter. I hope XO Lite gives us the same amount of functionality when it's done (or more).

• nikade (Top contributor) @Greg_E

@Greg_E Yeah, I can imagine. I don't think the RAM does much except give you some "margins" under heavier load. With 12 VMs I wouldn't bother increasing it from the default values 🙂

Yeah, the management interface in ESXi is nice; I think the standalone interface is almost better than vCenter. I'm pretty sure XO Lite will be able to do more when it's done; for example, you'll be able to manage a small pool with it.
