XCP-ng

    10Gb backup only managing about 80MB/s

      utopianfish @nikade

      @nikade OK, thanks.. that begs the question: where can I get a cheap 10Gb switch? Not SFP; it would need to be RJ45 connections.

        AlbertK @utopianfish

        @utopianfish

        Mikrotik CRS304-4XG-IN or CRS312-4C+8XG-RM

        https://mikrotik.com/products/group/switches

          tjkreidl Ambassador @utopianfish

          @utopianfish Or look for deals in places like amazon.com, bestbuy.com, or even eBay.com.

            tjkreidl Ambassador @nikade

            @nikade Did the same. VLANs are great! We did use separate NICs for iSCSI storage, but the PMI and VM traffic was handled easily by the dual 10 Gb NICs, even with several hundred XenDesktop VMs hosted among three servers (typically around 8- VMs per server).

              Pilow @tjkreidl

              @tjkreidl I have the same results as you: ~80MB/s for backups over a 2x10Gb network bond.

              The only way I managed to get over this 'limit' is by using XO proxies, on another VLAN on this SAME bond.
              Don't ask me why, but the bandwidth tests (when you click TEST on a remote) tripled compared to the remote on the same VLAN as the XOA.

              Isolating the backup network is a best practice anyway, but it also gets you more bandwidth? Strange.

              My backups can now reach 100/110 MB/s (yeah, it only tripled in the test...)

                jshiells @Pilow

                @Pilow I think what you are seeing is the result of tapdisk's single-threaded nature (among other things related to the pool master's Dom0). I would suggest changing concurrency to 4 or 8 and seeing whether the speed AT THE PORT is higher during backups. You may still only see <80 MiB/s PER VM getting backed up, maybe less, but you may end up with 4 or 8 VMs backing up at 40 to 60 MiB/s == higher total bandwidth.
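
                (A back-of-the-envelope sketch of that reasoning, with purely hypothetical numbers; the names and the 60 MiB/s per-stream ceiling below are only illustrative, the real figure depends on the SR, the remote and Dom0 load. If each export stream is capped by tapdisk, the total bandwidth at the port should scale roughly with the job's concurrency setting.)

                ```python
                # Rough model of per-VM vs. aggregate backup throughput.
                # PER_STREAM_CAP_MIB_S is an assumed per-VM ceiling, not a measured value.
                PER_STREAM_CAP_MIB_S = 60.0

                def aggregate_throughput(concurrency: int, per_stream: float = PER_STREAM_CAP_MIB_S) -> float:
                    """Total MiB/s expected at the NIC when `concurrency` VMs back up in parallel."""
                    return concurrency * per_stream

                for c in (1, 4, 8):
                    print(f"concurrency {c}: ~{aggregate_throughput(c):.0f} MiB/s total at the port")
                ```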

                Also note.. your 61.63 MiB/s (note the capital B, for mebibytes per second) == 517.267 Mb/s of network speed (megabits per second).
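
                (For reference, that conversion is just bytes to bits plus the binary/decimal prefix difference; a minimal sketch, with an illustrative helper name:)

                ```python
                # Convert a mebibytes-per-second reading (what XO displays) to megabits per second.
                def mib_s_to_mbit_s(mib_s: float) -> float:
                    return mib_s * 1024 * 1024 * 8 / 1_000_000

                print(f"{mib_s_to_mbit_s(61.63):.1f} Mbit/s")  # ~517 Mbit/s on the wire
                ```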

                  Pilow

                  Okay, I'll up the concurrency. The backup is happening in 35 minutes; I'll report back.

                    Pilow

                    Results from upgrading from concurrency 3 (yesterday) to concurrency 6 today:
                    backup of 9 VMs, processed by XOA (it was a delta, not a full).
                    XOA has 4 vCPUs and 8 GB RAM (tuned in the systemd service to give 7 GB to xo-server and 1 GB to Debian).
                    The remote is a same-LAN (10Gb/s) S3 remote, with 25Gb/s iSCSI storage behind it.


                    On XOA, I can see an RX peak of 195.45 MiB/s on VIF0, transmitting to the S3 remote on VIF2 at lower bandwidth.

                    Simultaneously, on the S3 remote, I can see incoming traffic peaking at 114.34 MiB/s on VIF3 (LAN), transmitted at the same speed to iSCSI (2 active paths; the 2 TX lines each show half of the bandwidth, so they need to be added together).

                    Per-VM speed in the job report is between 37.34 MiB/s and 143.09 MiB/s.
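
                    (A quick way to cross-check figures like these; the values below are examples, not the actual graphs. The two iSCSI path lines have to be summed, and the per-VM speeds only add up to the VIF peak while the transfers actually overlap.)

                    ```python
                    # Illustrative cross-check of the numbers above (example values, not the real graphs).
                    iscsi_path_tx_mib_s = [57.2, 57.1]   # two active paths, roughly half/half
                    print(f"iSCSI total: {sum(iscsi_path_tx_mib_s):.1f} MiB/s")  # should match the ~114 MiB/s LAN RX

                    per_vm_mib_s = [37.34, 143.09]       # slowest and fastest VM speeds from the job report
                    print(f"overlapping VMs: {sum(per_vm_mib_s):.2f} MiB/s")     # only reached while both run at once
                    ```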

                      Pilow

                      The 10Gb card can really be used to its full potential in XCP-ng.

                      This is a graph from live-migrating 4 VMs from one host to another... using the same VLAN on the same bond as the backup transfers.

                      Why can't we have these speeds in backups? 😞

                        olivierlambert Vates 🪐 Co-Founder CEO

                        Because you are comparing apples and carrots. Live migrating a VM means moving RAM between hosts, not moving any data blocks stored on a storage repository (SR) or backup repository (BR). There are MANY more layers involved with blocks. Try to live migrate a VM together with its storage, and you'll see you end up in the ballpark of VM backup speed.

                          Pilow

                          Indeed.

                          What should we expect with SMAPIv3/QCOW2?

                          I won't take it for granted, but might we get out of the current ballpark?

                            olivierlambert Vates 🪐 Co-Founder CEO

                            Impossible to tell yet; more in a few months.
