XCP-ng

    10gb backup only managing about 80Mb

    • nikade Top contributor @utopianfish

      @utopianfish said in 10gb backup only managing about 80Mb:

      @nikade I think the problem is it's using the mgmt interface to do the backup; it's not touching the 10Gb NICs. When I set it under Pools/Advanced/Backup to use the 10Gb NIC as default, the job fails. Setting it back to none, the job is successful with a speed of 80 MiB/s, so it's using the 1Gb mgmt NIC. How do I get the backups to use the dedicated 10Gb link then?

      May I ask why your management interface is not on the 10G nic? There is absolutely no downside to having that kind of setup.

      We used this setup for 7 years on our Dell R630's without any issues at all. We had 2x 10G NICs in our hosts and put the management interface on top of bond0 as a native VLAN.
      Then we just added our VLANs on top of bond0 and voilà, all your interfaces benefit from the 10G NICs.
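      For anyone wanting to reproduce that kind of layout, here is a minimal sketch of the idea in Python driving the xe CLI from dom0 on the pool master. The PIF UUIDs, network names and VLAN tags are placeholders, and wrapping xe in subprocess is only for illustration; the same steps can be done from Xen Orchestra's UI.

        import subprocess

        def xe(*args):
            # Run an xe CLI command in dom0 and return its trimmed output.
            return subprocess.check_output(["xe", *args], text=True).strip()

        # 1. Create a network for the bond, then bond the two 10G PIFs onto it.
        bond_net = xe("network-create", "name-label=bond0-net")
        xe("bond-create", f"network-uuid={bond_net}",
           "pif-uuids=UUID_OF_10G_PIF_1,UUID_OF_10G_PIF_2")  # placeholder UUIDs

        # 2. Move the management interface onto the bond PIF (untagged / native VLAN).
        #    On a pool, pif-list returns one bond PIF per host; reconfigure each host's own PIF.
        bond_pif = xe("pif-list", f"network-uuid={bond_net}", "--minimal")
        xe("host-management-reconfigure", f"pif-uuid={bond_pif}")

        # 3. Add tagged VLAN networks on top of the bond for VM / storage / backup traffic.
        for tag, name in [(10, "vm-traffic"), (20, "storage")]:  # example tags
            vlan_net = xe("network-create", f"name-label={name}")
            xe("pool-vlan-create", f"network-uuid={vlan_net}",
               f"pif-uuid={bond_pif}", f"vlan={tag}")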

    • olivierlambert Vates 🪐 Co-Founder CEO

      That's also what we do in our prod.

    • acebmxer

      I could be wrong, but coming from the VMware world, the management interface didn't transfer much data, if any at all. It was only used to communicate with vSphere and/or with the host, so there's no need to waste a 10Gb port on something that only sees a few KB of data.

      Our previous server had 2x 1Gb NICs for management, 1x 10Gb NIC for the VM network, 2x 10Gb NICs for storage, and 1x 10Gb NIC for vMotion.

    • nikade Top contributor @acebmxer

      @acebmxer said in 10gb backup only managing about 80Mb:

      I could be wrong, but coming from the VMware world, the management interface didn't transfer much data, if any at all. It was only used to communicate with vSphere and/or with the host, so there's no need to waste a 10Gb port on something that only sees a few KB of data.

      Our previous server had 2x 1Gb NICs for management, 1x 10Gb NIC for the VM network, 2x 10Gb NICs for storage, and 1x 10Gb NIC for vMotion.

      Tbh I do the same on our VMware hosts: 2x 10G or 2x 25G, and then the management as a VLAN interface on that vSwitch, as well as the VLANs used for storage, VM traffic and so on.

      I find it much easier to keep the racks clean if we only have 2 connections from each host rather than 4, since it adds up really fast and makes the rack impossible to keep nice and clean when you have 15-20 machines in it, plus storage, switches, firewalls and all the inter-connections with other racks, IP transit and so on.

      Edit:
      Except for vSAN hosts, where the vSAN traffic needs at least 1 dedicated interface, but those are the only exception.

    • utopianfish @nikade

      @nikade My 10Gb NICs are directly connected to my hosts; I don't have a 10Gb switch. The mgmt is connected via switch ports...

    • nikade Top contributor @utopianfish

      @utopianfish I see, that explains a lot.

    • tjkreidl Ambassador @nikade

      @nikade Yeah, that is a far from optimal setup. It will force the data to flow through the management interface before being routed to the storage NICs.
      Running iostat and xentop should show the load. A better configuration IMO would be putting the storage NICs on the switch and using a separate network or VLAN for the storage I/O traffic.
      Storage I/O optimization takes some time and effort. The type, number, and RAID configuration of your storage devices, as well as the speed of your host CPUs, the size and type of memory, and the configuration of your VMs (whether NUMA-aware, for example) all play a role.
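      If you want to confirm which NIC the backup traffic is actually leaving on, a quick complement to iostat/xentop is to sample the interface counters in dom0 while a job runs. A small sketch, assuming Python is available in dom0 and that eth0/eth1 are the management and storage NICs (adjust the names to your setup):

        import time

        def iface_bytes(iface, direction):
            # Read the kernel's cumulative byte counter for one interface.
            with open(f"/sys/class/net/{iface}/statistics/{direction}_bytes") as f:
                return int(f.read())

        def sample(ifaces=("eth0", "eth1"), interval=5):
            before = {i: (iface_bytes(i, "rx"), iface_bytes(i, "tx")) for i in ifaces}
            time.sleep(interval)
            for i in ifaces:
                rx = (iface_bytes(i, "rx") - before[i][0]) / interval / 2**20
                tx = (iface_bytes(i, "tx") - before[i][1]) / interval / 2**20
                print(f"{i}: rx {rx:.1f} MiB/s, tx {tx:.1f} MiB/s")

        while True:
            sample()  # run during a backup and watch which interface the MiB/s show up on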

    • nikade Top contributor @tjkreidl

      @tjkreidl I think the issue is that he's got no 10G switch, hence the direct connection 🙂
      But you live and you learn; the best move would be to pick up a cheap 10G switch and make it right!

    • utopianfish @nikade

      @nikade Ok, thanks. That begs the question: where can I get a cheap 10Gb switch? Not SFP; it would need to have RJ45 connections.

    • AlbertK @utopianfish

      @utopianfish

      Mikrotik CRS304-4XG-IN or CRS312-4C+8XG-RM

      https://mikrotik.com/products/group/switches

    • tjkreidl Ambassador @utopianfish

      @utopianfish Or look for deals in places like amazon.com, bestbuy.com, or even eBay.com.

    • tjkreidl Ambassador @nikade

      @nikade Did the same. VLANs are great! We did use separate NICs for iSCSI storage, but the PMI and VM traffic was handled easily by the dual 10 Gb NICs, even with several hundred XenDesktop VMs hosted among three servers (typically around 8- VMs per server).

    • Pilow @tjkreidl

      @tjkreidl I have the same results as you: ~80 MB/s for backups over a 2x 10Gb network bond.

      The only way I managed to get over this 'limit' is by using XO proxies, on another VLAN on this SAME bond.
      Don't ask me why, but the bandwidth tests (when you click TEST on a remote) tripled compared to the remote on the same VLAN as the XOA.

      Isolating the backup network is better practice anyway, but it also has the benefit of more bandwidth? Strange.

      My backups can now reach 100/110 MB/s (yeah, it only tripled in the test...)

    • jshiells @Pilow

      @Pilow I think what you are seeing is the result of tapdisk's single-threaded nature (among other things related to the master's dom0). I would suggest changing concurrency to 4 or 8 and seeing if the speed AT THE PORT is higher during backups. You may still only see <80 MiB/s PER VM getting backed up, maybe less, but you may end up with 4 or 8 VMs backing up at 40 to 60 MiB/s each == higher total bandwidth.

      Also note: your 61.63 MiB/s (note the capital B for mebibytes per second) is == 517.267 Mb/s of network speed in megabits per second.
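      To put numbers on that, a tiny sanity-check sketch (the per-VM figures under higher concurrency are made up for illustration):

        def mib_s_to_mbit_s(mib_per_s):
            # 1 MiB = 1,048,576 bytes = 8,388,608 bits = 8.388608 megabits.
            return mib_per_s * 8.388608

        print(mib_s_to_mbit_s(61.63))   # ~517 Mb/s on the wire for a single stream
        print(mib_s_to_mbit_s(80))      # ~671 Mb/s, already close to saturating 1 GbE

        # With concurrency 4, each stream may be slower but the total adds up:
        streams = [50, 55, 60, 45]      # hypothetical per-VM MiB/s
        print(mib_s_to_mbit_s(sum(streams)))  # ~1762 Mb/s total at the port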

    • Pilow

      Okay, I'll up the concurrency. The backup is happening in 35 minutes; I'll report back.

    • Pilow

      Results from upgrading from concurrency 3 (yesterday) to concurrency 6 today:
      backup of 9 VMs, processed by XOA (it was a delta, not a full).
      XOA has 4 vCPUs and 8 GB RAM (tuned in the systemd service to give 7 GB to xo-server and 1 GB to Debian).
      The remote is an S3 remote on the same LAN (10 Gb/s), with 25 Gb/s iSCSI storage.

      [screenshot: backup job stats]

      On XOA, I can see a peak RX of 195.45 MiB/s on VIF0, transmitting to the S3 remote on VIF2 at a lower bandwidth.
      [screenshot: XOA VIF traffic]

      Simultaneously, on the S3 remote, I can see incoming traffic of at most 114.34 MiB/s on VIF3 (LAN), transmitting at the same speed on iSCSI (2 active paths; the 2 TX lines show the same bandwidth consumption, half and half, so you need to add them).
      [screenshot: S3 remote VIF traffic]

      Per-VM speed in the job report is between 37.34 MiB/s and 143.09 MiB/s.

    • Pilow

      The 10Gb card can really be used at its full potential in XCP.
      [screenshot: host NIC traffic during live migration]

      This is a graph from live migrating 4 VMs from one host to another, using the same VLAN on the same bond as the backup transfers.

      Why can't we have these speeds in backups? 😞

    • olivierlambert Vates 🪐 Co-Founder CEO

      Because you are comparing apples and carrots. Live migrating a VM is moving RAM between hosts, not moving any data blocks stored on a storage repository (SR) or backup repo (BR). There are MANY more layers involved with blocks. Try to live migrate a VM together with its storage, and you'll see it lands in the ballpark of VM backup speed.

    • Pilow

      Indeed.

      What should we expect with SMAPIv3/QCOW2?

      I won't take it for granted, but shall we get out of the current ballpark?

    • olivierlambert Vates 🪐 Co-Founder CEO

      Impossible to tell yet; more in a few months.
