Backup / Migration Performance

zach:

Hello! I have a two-host pool, each host with a 10 Gb network connection and a six-drive all-SSD RAID 10. I have noticed that when migrating machines between the two hosts, or running backups to an external NFS share, I never seem to get more than 40 MB/s write speed. When I run "Test your remote" on the NFS share, I generally get about 40 MB/s write and around 500 MB/s read.

This is in a test environment, so I am running Xen Orchestra compiled from source. Are there any obvious bottlenecks I should be looking for?

probain:

@zach
XCP-ng is unfortunately quite slow on individual storage streams. The limitation mostly disappears when you run lots of VMs, but the per-stream number for any single VM is surprisingly low.

A good read is the write-up Understanding the storage stack in XCP-ng.

🙂

zach:

Thanks for the link! That makes sense for the backups themselves, but shouldn't the "Test your remote" function still show the actual throughput between me and the backup destination? The fact that it was capped so low made me think there was another issue somewhere.

nikade (Top contributor):

No issue, just a known limitation.
With 4 backups at once we push about 166 MB/s to our NFS remote over 10G; a single backup is about 35-46 MB/s.
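
For what it's worth, those numbers line up with simple per-stream scaling. A back-of-the-envelope sketch (the per-stream figure is just the midpoint of the range above):

```python
# Rough model of the numbers above, assuming each backup stream runs at the
# per-stream ceiling independently and streams don't contend with each other.
per_stream_mb_s = 41   # midpoint of the observed 35-46 MB/s range
concurrency = 4

# ~164 MB/s, close to the observed 166 MB/s aggregate
print(f"expected aggregate: ~{per_stream_mb_s * concurrency} MB/s")
```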

planedrop (Top contributor):

              The "test your remote" feature is mostly just to be sure connectivity to the remote is actually working, not so much for a speed test (anything below 1 gigabyte of data isn't going to really be reliable for a high speed network performance test anyway).

              I concur with @nikade I also see about 35-50MB/s for backups on a 10 gigabit network with a NAS that I know can intake 20 gigabit or more sustained.

              IMO this speed is pretty OK, especially if you're using virtualization for many smaller VMs (which is the best use case) and just set the concurrency higher. My personal "issue" is S3 backup performance could still be better, it's much improved from before, but still about 1/3rd the performance I see on local SMB/NFS backups.
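
If you want a more meaningful number than "Test your remote", one option is to time a multi-GiB write against the mounted remote yourself. A minimal sketch, assuming the NFS remote is mounted at the placeholder path /mnt/backup-nfs:

```python
import os
import time

# Hypothetical mount point for the NFS remote; adjust to your setup.
PATH = "/mnt/backup-nfs/throughput-test.bin"
CHUNK = b"\0" * (4 * 1024 * 1024)  # write in 4 MiB chunks
TOTAL = 2 * 1024**3                # 2 GiB total, to ride out caching effects

start = time.monotonic()
with open(PATH, "wb") as f:
    written = 0
    while written < TOTAL:
        f.write(CHUNK)
        written += len(CHUNK)
    f.flush()
    os.fsync(f.fileno())  # force the data out before stopping the clock
elapsed = time.monotonic() - start

print(f"~{TOTAL / elapsed / 1024**2:.0f} MiB/s sequential write")
os.remove(PATH)
```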

olivierlambert (Vates 🪐 Co-Founder & CEO):

It's really hard to talk about performance during backup because it's a stream, going at the speed of the slowest part. The bottleneck could be anywhere: SR to host (storage speed or network), host to XO (export method, XAPI speed, CPU speed, network…), then XO to the backup repository (network again), the repository's own speed, latency, etc.

With a recent XOA, and no big bottleneck, you should easily get around 60 MiB/s. We have started to see more than 100 MiB/s for many users, depending on the size of their infrastructure, concurrency, etc.
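
To illustrate the "speed of the slowest part" point, a toy model (all stage throughputs below are made up for illustration):

```python
# Each stage of the backup stream, with a hypothetical throughput in MiB/s.
stages = {
    "SR -> host (storage/network)": 400,
    "host -> XO export (XAPI, CPU)": 90,
    "XO -> backup repository (network)": 250,
    "backup repository write": 120,
}

# The stream as a whole can only go as fast as its slowest stage.
bottleneck = min(stages, key=stages.get)
print(f"effective stream speed ~= {stages[bottleneck]} MiB/s, limited by: {bottleneck}")
```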

KPS (Top contributor):

@olivierlambert
...but do you also see these speeds for migrations? I am able to get these speeds for backups with XOA, but not for migrations (with storage migration).

olivierlambert:

Migration is another thing. It's handled by SMAPIv1 and is 100% unrelated to XO.

nikade:

@KPS said in Backup / Migration Performance:

@olivierlambert
...but do you also see these speeds for migrations? I am able to get these speeds for backups with XOA, but not for migrations (with storage migration).

VDI (storage) migration is another thing: we see about 30 MB/s on a 10G connection.
Something that has helped us a lot is increasing the RAM of the dom0: set it to 8 GB, or 16 GB if you can, and you'll see more stable and higher speeds during VDI migration.
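
For reference, the XCP-ng docs describe raising dom0 memory with the xen-cmdline helper shipped on the host, followed by a reboot. A sketch, assuming a 16 GiB target (run as root on the host):

```python
import subprocess

# Set dom0 memory to 16 GiB via the helper documented by XCP-ng/XenServer.
# A host reboot is required for the change to take effect.
subprocess.run(
    ["/opt/xensource/libexec/xen-cmdline",
     "--set-xen", "dom0_mem=16384M,max:16384M"],
    check=True,
)
```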

rfx77:

Hi!

We also have very low migration speeds.

Backup speed seems to depend on the way XO does it.

With Commvault we easily get 800+ MB/s, but Commvault attaches the snapshot VDIs directly to a VM and reads them there.
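
For readers unfamiliar with that pattern: the idea is to snapshot a VM, then attach the snapshot's disks to a proxy VM where the backup agent reads them as local block devices. This is not Commvault's actual code, just the general shape of the pattern sketched with the XenAPI Python bindings (host, credentials and VM names are placeholders, error handling omitted):

```python
import XenAPI  # pip install XenAPI

# Connect to the pool master (placeholder URL/credentials).
session = XenAPI.Session("https://pool-master.example")
session.xenapi.login_with_password("root", "password")
try:
    vm = session.xenapi.VM.get_by_name_label("prod-vm")[0]
    snap = session.xenapi.VM.snapshot(vm, "backup-snap")

    # Attach each snapshot disk read-only to a proxy VM running the agent.
    proxy = session.xenapi.VM.get_by_name_label("backup-proxy")[0]
    for vbd in session.xenapi.VM.get_VBDs(snap):
        vdi = session.xenapi.VBD.get_VDI(vbd)
        if vdi == "OpaqueRef:NULL":
            continue  # skip empty drives (e.g. CD)
        device = session.xenapi.VM.get_allowed_VBD_devices(proxy)[0]
        new_vbd = session.xenapi.VBD.create({
            "VM": proxy, "VDI": vdi, "userdevice": device,
            "bootable": False, "mode": "RO", "type": "Disk", "empty": False,
            "other_config": {}, "qos_algorithm_type": "",
            "qos_algorithm_params": {},
        })
        session.xenapi.VBD.plug(new_vbd)  # disk shows up as /dev/xvdX in the proxy
finally:
    session.xenapi.session.logout()
```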

nikade:

@rfx77 said in Backup / Migration Performance:

Hi!

We also have very low migration speeds.

Backup speed seems to depend on the way XO does it.

With Commvault we easily get 800+ MB/s, but Commvault attaches the snapshot VDIs directly to a VM and reads them there.

Hi,

1. What are the speeds, more specifically?

2. Are you using Commvault to back up XCP-ng? Which versions do they support?
   If they're able to reach 800 MB/s, that is very impressive.

olivierlambert:

I think it's possible to reach this speed in a VM, so I don't see why it wouldn't be possible to achieve a similar speed in a similar context 🙂

rfx77:

@nikade

I tested a backup job just today: we ran 5 concurrent VMs at about 120-150 MB/s each. We use it mostly with XenServer 8, but we have also used it with XCP-ng; both are equally supported.

With a single stream it is hard to get more than 150 MB/s. The downside of backing up multiple VMs at once is that you hold many snapshots, which cost a lot of disk space.

Our backup configuration is a moving target right now, because we need to work around the thick-volume snapshot overhead, which is very tricky.

nikade:

@rfx77 Cool, thanks for the information.
I have only heard of a few other supported backup platforms for XenServer, and I think we tried 2 of them. We were not very impressed with the speed, so we stayed with XOA.

rfx77:

@nikade The problem with XO is that you cannot use it if you have multi-TB file servers or large mail servers, and when you need agents to back up e.g. Oracle, SQL Server, ... You have to have a backup solution that integrates with your storage system, so that you can attach iSCSI volumes directly in the VM.

john.c:

@rfx77 said in Backup / Migration Performance:

@nikade The problem with XO is that you cannot use it if you have multi-TB file servers or large mail servers, and when you need agents to back up e.g. Oracle, SQL Server, ... You have to have a backup solution that integrates with your storage system, so that you can attach iSCSI volumes directly in the VM.

The issue with multi-terabyte virtual disks comes from a limitation of the Xen hypervisor (along with its software stack) and its use of the VHD disk image format, which is limited to 2 TB per image. It can be bypassed by adding more VHD disk images to a VM and then combining them with a storage pooling system such as Storage Spaces on Windows, LVM on Linux, or a ZFS pool on FreeBSD, OpenBSD, NetBSD, etc. (see the sketch below).
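
A sketch of that workaround inside a Linux guest: several sub-2 TB VDIs, pooled into one large LVM volume (the /dev/xvdX device names are assumptions; check what your VM actually sees):

```python
import subprocess

# Three ~2 TB VDIs attached to the VM, pooled into a single ~6 TB volume.
disks = ["/dev/xvdb", "/dev/xvdc", "/dev/xvdd"]

subprocess.run(["pvcreate", *disks], check=True)             # mark disks as LVM physical volumes
subprocess.run(["vgcreate", "bigdata", *disks], check=True)  # one volume group spanning all of them
subprocess.run(["lvcreate", "-l", "100%FREE", "-n", "vol0", "bigdata"], check=True)
subprocess.run(["mkfs.xfs", "/dev/bigdata/vol0"], check=True)  # one large filesystem on top
```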

Sorting out this issue is being discussed and worked on, along with a new storage API: the transition from SMAPIv1 to SMAPIv3.

nikade:

Yeah, totally agree, SMAPIv3 will bring a lot to the table.
I am excited to see what comes in the next few months.

john.c:

@rfx77 Also recently added is migration compression, which compresses the migration data stream between XCP-ng hosts. The data transferred during a migration is smaller, which can bring a speed boost on slower networks, though it comes at the cost of increased load on the hosts performing the migration.

Migration compression is only available on XCP-ng 8.3 or above!
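
If I remember correctly, it is enabled pool-wide via xe; a sketch of that, assuming XCP-ng 8.3+ and run in dom0 (double-check the parameter name against your version's docs):

```python
import subprocess

# Look up the pool UUID, then turn on migration compression for the pool.
pool_uuid = subprocess.run(
    ["xe", "pool-list", "--minimal"],
    check=True, capture_output=True, text=True,
).stdout.strip()

subprocess.run(
    ["xe", "pool-param-set", f"uuid={pool_uuid}", "migration-compression=true"],
    check=True,
)
```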

KPS:

I think we are mixing up some topics:

• 2 TB limitation
  This is not nice, but it can mostly be worked around with LVM/Storage Spaces inside the VM across multiple VDIs. 2-10 TB volumes are possible, but file-level restore is not.

• Backup speed
  Backup speed went up with the latest updates (NBD, etc.). It could be better, but since backups can be parallelized, this is mostly fine.

• Restore speed
  As restores are mostly one-VM-at-a-time jobs, this should be faster. Features like instant recovery are missing, so you have to wait for the full copy.

• Migration speed
  No progress on fast networks; improvements on slow networks thanks to compression. This really should be better, compared to other hypervisors.
