XCP-ng
    Backup / Migration Performance

    31 Posts 9 Posters 6.2k Views 10 Watching
    • olivierlambert Vates 🪐 Co-Founder CEO

      Migration is another thing. It's handled by SMAPIv1, it's 100% unrelated to XO.

    • nikade Top contributor @KPS

        @KPS said in Backup / Migration Performance:

        @olivierlambert
        ...but do you also see these speeds for migrations? I am able to get these speeds for backups with XOA, but not for migrations (with storage migration).

        VDI (storage) migration is another thing; when doing a VDI migration we're seeing about 30 MB/s on a 10G connection.
        Something that has helped us a lot is to increase the RAM of the dom0: set it to 8 GB, or 16 GB if you can, and you'll see more stable and higher speeds during VDI migration.
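        For reference, a sketch of how that dom0 memory bump is usually applied on an XCP-ng host (commands run in dom0; the 16 GiB value is just an example, and a host reboot is required afterwards — check the XCP-ng docs for your version before relying on this):

```shell
# Raise the dom0 memory allocation to 16 GiB (adjust to your host's RAM).
# xen-cmdline edits the Xen boot parameters; the change takes effect after a reboot.
/opt/xensource/libexec/xen-cmdline --set-xen dom0_mem=16384M,max:16384M

# Verify what is currently configured:
/opt/xensource/libexec/xen-cmdline --get-xen dom0_mem
```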

        • rfx77

          Hi!

          We also have very low migration speeds.

          Backup speed seems to depend on the way XO does it.

          With Commvault we easily get 800 MB/s+, but Commvault attaches the snapshot VDIs directly to a VM and reads them there.

          • nikade Top contributor @rfx77

            @rfx77 said in Backup / Migration Performance:

            Hi!

            We also have very low migration speeds.

            Backup speed seems to depend on the way XO does it.

            With Commvault we easily get 800 MB/s+, but Commvault attaches the snapshot VDIs directly to a VM and reads them there.

            Hi,

            1. What are the speeds, more specifically?

            2. Are you using Commvault to back up XCP-ng? Which versions do they support?
               If they're able to reach 800 MB/s, that is very impressive.

            • olivierlambert Vates 🪐 Co-Founder CEO

              I think it's possible to reach this speed in a VM, so I don't see why it wouldn't be possible to achieve a similar speed in a similar context 🙂

              • rfx77 @nikade

                @nikade

                I tested a backup job just today and we ran 5 concurrent VMs at about 120-150 MB/s each. We use it mostly with XenServer 8, but we have also used it with XCP-ng; both are equally supported.

                With a single stream it is hard to get more than 150 MB/s. The downside of backing up multiple VMs at once is that you have many snapshots open, which costs a lot of disk space.
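                To put those numbers in perspective, a back-of-the-envelope sketch (the per-stream rates are the ones quoted above; the 5 TB total is purely illustrative):

```python
def backup_window_hours(total_gb: float, streams: int, mb_per_s_per_stream: float) -> float:
    """Rough wall-clock time (hours) to back up `total_gb` with N parallel streams."""
    aggregate_mb_s = streams * mb_per_s_per_stream
    return (total_gb * 1024) / aggregate_mb_s / 3600

# One stream at ~150 MB/s vs. five streams at ~130 MB/s each, for 5 TB of VM data:
print(f"single: {backup_window_hours(5000, 1, 150):.1f} h")   # ~9.5 h
print(f"5-way:  {backup_window_hours(5000, 5, 130):.1f} h")   # ~2.2 h
```

                Parallelism wins on the backup window, at the price of the extra concurrent snapshots mentioned above.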

                Our backup configuration is a moving target right now, because we need to work around the thick-volume snapshot overhead, which is very tricky.

                • nikade Top contributor @rfx77

                  @rfx77 cool, thanks for the information.
                  I have only heard of a few other supported backup platforms for XenServer, and I think we tried two of them. We were not very impressed with the speed, so we stayed with XOA.

                  • rfx77 @nikade

                    @nikade the problem with XO is that you cannot use it if you have multi-TB file servers or large mail servers, or if you need agents to back up e.g. Oracle, SQL Server, etc. You have to have a backup solution that integrates with your storage system, so that you can attach iSCSI volumes directly in the VM.

                    • john.c @rfx77

                      @rfx77 said in Backup / Migration Performance:

                      @nikade the problem with XO is that you cannot use it if you have multi-TB file servers or large mail servers, or if you need agents to back up e.g. Oracle, SQL Server, etc. You have to have a backup solution that integrates with your storage system, so that you can attach iSCSI volumes directly in the VM.

                      The issue with multi-terabyte virtual disks is due to a limitation of the Xen hypervisor (and its software stack): its use of the VHD disk image format, which is limited to 2 TB per image. This can be worked around by adding more VHD disk images to a VM and combining them with a pooled storage layer such as Storage Spaces on Windows, LVM on Linux, or a ZFS pool on FreeBSD, OpenBSD, NetBSD, etc.
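                      Inside a Linux guest, that multi-VDI workaround can look roughly like this (device names and the mount point are examples; adjust for your VM):

```shell
# Combine three ~2 TB virtual disks into one logical volume inside the guest.
# /dev/xvdb, /dev/xvdc and /dev/xvdd are hypothetical names for the extra VDIs.
pvcreate /dev/xvdb /dev/xvdc /dev/xvdd
vgcreate vg_data /dev/xvdb /dev/xvdc /dev/xvdd
lvcreate -l 100%FREE -n lv_data vg_data

# Format and mount the spanned volume (~6 TB usable).
mkfs.ext4 /dev/vg_data/lv_data
mount /dev/vg_data/lv_data /data
```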

                      Sorting this issue out is being discussed and worked on, along with a new storage API: the transition from SMAPIv1 to SMAPIv3.

                      • nikade Top contributor

                        Yeah totally agree, SMAPIv3 will bring a lot to the table.
                        I am excited to see what comes in the next few months.

                        • john.c

                          @rfx77 Also recently added is migration compression, which compresses the migration stream between XCP-ng hosts. The data transferred during a migration becomes smaller, which can bring a speed boost on slower networks, though it comes at the cost of increased CPU load on the hosts involved in the migration.

                          Migration compression is only available on XCP-ng 8.3 or above!
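                          On 8.3, compressed migration can be toggled pool-wide from dom0; to my knowledge the relevant switch is the pool's `migration-compression` parameter (verify with `xe pool-param-list` on your version before relying on this):

```shell
# Enable compressed VM migration for the whole pool (XCP-ng 8.3+).
POOL_UUID=$(xe pool-list --minimal)
xe pool-param-set uuid="$POOL_UUID" migration-compression=true

# Confirm the setting:
xe pool-param-get uuid="$POOL_UUID" param-name=migration-compression
```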

                          • KPS Top contributor @john.c

                            I think we are mixing up some topics:

                            • 2 TB limitation
                              This is not nice, but it can mostly be worked around with LVM/Storage Spaces inside the VM across multiple VDIs. 2-10 TB is possible, but file-level restore is not.

                            • backup speed
                              Backup speed went up with the latest updates (NBD, etc.). It could be better, but as backups can be parallelized, this is mostly fine.

                            • restore speed
                              As restores are mostly one-VM-at-a-time jobs, this should be faster. Features like instant recovery are missing, so you have to wait for the full copy.

                            • migration speed
                              No progress on fast networks; improvements on slow networks thanks to compression. This really should be better compared to other hypervisors.

                              • olivierlambert Vates 🪐 Co-Founder CEO

                                Restore speed: you can now enjoy diff restore if you still have the original VM. Otherwise, CR can provide the instant restore you need. But even with that, if you want a better solution, we could spawn an NFS share in XO directly and mount it as a temporary SR. My fear is that it would be really slow, and you'd need to live migrate the VM out afterwards, potentially creating more problems than it fixes. CR is the right tool for instant restore 🙂

                                • nikade Top contributor @olivierlambert

                                  @olivierlambert said in Backup / Migration Performance:

                                  Restore speed: you can now enjoy diff restore if you still have the original VM. Otherwise, CR can provide the instant restore you need. But even with that, if you want a better solution, we could spawn an NFS share in XO directly and mount it as a temporary SR. My fear is that it would be really slow, and you'd need to live migrate the VM out afterwards, potentially creating more problems than it fixes. CR is the right tool for instant restore 🙂

                                  With Veeam Instant Recovery, the VM is booted off the Veeam storage and then migrated to your ESXi cluster/host; it works pretty well if your Veeam repository has fast storage.

                                  • olivierlambert Vates 🪐 Co-Founder CEO

                                    Yes, as usual "if you have X or Y", but we have so many different infrastructures; I can already feel the number of tickets saying "migration can't be done because I'm writing more to the temporary restore SR than can be migrated" 😄

                                    • KPS Top contributor @olivierlambert

                                      @olivierlambert
                                      That is my current workaround: instead of an NFS server, I installed an additional (licensed) XCP-ng host that is ONLY used as a CR target.
                                      Not optimal, but of course as fast as instant recovery.

                                      But migrating the VM back to the prod cluster is limited by the migration speed of XCP-ng.

                                      • nikade Top contributor @KPS

                                        @KPS said in Backup / Migration Performance:

                                        @olivierlambert
                                        That is my current workaround: instead of an NFS server, I installed an additional (licensed) XCP-ng host that is ONLY used as a CR target.
                                        Not optimal, but of course as fast as instant recovery.

                                        But migrating the VM back to the prod cluster is limited by the migration speed of XCP-ng.

                                        This is probably the best solution tbh; it also offers you the flexibility to scale up with more hosts if you need a faster recovery of many VMs.
                                        One note though: if I'm correct, you're only allowed 4 concurrent migrations, but as long as you can start the VMs quickly on the CR host you can queue the migrations.
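                                        Queuing around that 4-concurrent-migration limit can be sketched like this (the limit itself is from the post above; `migrate_one` is a hypothetical wrapper you would supply):

```python
from concurrent.futures import ThreadPoolExecutor

def migrate_all(vm_uuids, migrate_one, max_concurrent=4):
    """Run `migrate_one` for every VM, keeping at most `max_concurrent` in flight."""
    with ThreadPoolExecutor(max_workers=max_concurrent) as pool:
        # pool.map preserves input order and blocks until every migration finishes.
        return list(pool.map(migrate_one, vm_uuids))

# In practice migrate_one would shell out to something like
#   xe vm-migrate uuid=<uuid> host=<target>
# e.g.: migrate_all(uuids, lambda u: subprocess.run(["xe", "vm-migrate", f"uuid={u}", "host=prod1"]))
```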

                                        • KPS Top contributor @nikade

                                          @nikade
                                          I think this can be handled. The downside is the inefficient way the VMs are stored, which can perhaps be minimized with ZFS storage for some compression, but it is working.

                                          • planedrop Top contributor @KPS

                                            @KPS Regarding the 2 TiB limitation, it'll definitely be nice when we have SMAPIv3 so we can go beyond it, but it's worth noting that IMO no VM should be larger than this anyway. Generally speaking, if you need that kind of space it'd be better to use a NAS/iSCSI setup. Something like TrueNAS can deliver that at high speed, and then handle its own backups and replication.
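                                            For that NAS/iSCSI pattern, attaching a NAS-hosted LUN directly inside a Linux guest bypasses the VDI layer entirely; a minimal open-iscsi sketch (the portal address and IQN are placeholders for your own NAS):

```shell
# Discover and log in to an iSCSI target from inside the VM (open-iscsi tools).
iscsiadm -m discovery -t sendtargets -p 192.0.2.10:3260
iscsiadm -m node -T iqn.2005-10.org.freenas.ctl:vmdata -p 192.0.2.10:3260 --login

# The LUN then appears as a regular block device (e.g. /dev/sdb) and is
# backed up / replicated on the NAS side, not through XO.
lsblk
```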

                                            I know most people probably already know this, and all environments are different (I manage one that requires a 7 TiB local disk, at least for the time being; the plan is to migrate it to a NAS once the software vendor supports it), but it's worth noting any time I see the 2 TiB limit come up: ideally it should be architected around so the VMs stay nimble.

                                            I do something similar with a pretty massive SMB share, and TrueNAS can back this up at whatever speed the WAN can handle; in my case 2 gigabits, and it'll maintain that 2-gigabit upload for 8+ hours without slowing down. (And I'm confident even 10 gigabit would be possible with this box.)
