XCP-ng

    10gb backup only managing about 80Mb

    Scheduled Pinned Locked Moved Backup
    32 Posts 8 Posters 1.6k Views 9 Watching
    • olivierlambertO Offline
      olivierlambert Vates πŸͺ Co-Founder CEO
      last edited by

      Hi,

      You mean 80MiB/s in transfer speed?

      • utopianfishU Offline
        utopianfish @olivierlambert
        last edited by

        @olivierlambert ok here's a bit from the log:

        Start: 2025-09-03 12:00
        End: 2025-09-03 12:00
        Duration: a few seconds
        Size: 624 MiB
        Speed: 61.63 MiB/s

        Start: 2025-09-03 12:00
        End: 2025-09-03 12:00

        so other jobs are showing anywhere between 25 and about 80 MiB/s

        • olivierlambertO Offline
          olivierlambert Vates πŸͺ Co-Founder CEO
          last edited by

          What kind of backups are you using? In general, the bottleneck is within the export speed per disk (or if you do full backup, compression can be a factor)
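
To check the per-disk export speed Olivier mentions, one approach is to time a raw export with the backup target taken out of the picture. A minimal sketch, assuming shell access to the pool master and a halted (or snapshotted) test VM; `<vm-uuid>` is a placeholder:

```shell
# Find the UUID of a VM to test with (use a halted VM or a snapshot
# so the data is consistent and the test is repeatable).
xe vm-list

# Time a full export to /dev/null so only the export/read path is
# measured, not the network or the backup repository.
time xe vm-export vm=<vm-uuid> filename=/dev/null

# Same export with compression enabled, to see how much the
# CPU-bound compression step costs on this host.
time xe vm-export vm=<vm-uuid> filename=/dev/null compress=true
```

If the /dev/null export is already capped around 80 MiB/s, the bottleneck is on the export side rather than the backup network or target.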

          • utopianfishU Offline
            utopianfish @olivierlambert
            last edited by

            @olivierlambert

            Continuous replication.... runs every 12 hrs...

            • nikadeN Offline
              nikade Top contributor @utopianfish
              last edited by

              @utopianfish said in 10gb backup only managing about 80Mb:

              @olivierlambert ok here's a bit from the log:

              Start: 2025-09-03 12:00
              End: 2025-09-03 12:00
              Duration: a few seconds
              Size: 624 MiB
              Speed: 61.63 MiB/s

              Start: 2025-09-03 12:00
              End: 2025-09-03 12:00

              so other jobs are showing anywhere between 25 and about 80 MiB/s

              What CPU are you using? We saw about the same speeds on our older Intel Xeons at 2.4 GHz, and when we switched to newer Intel Xeon Golds at 3 GHz the speeds increased quite a bit; we're now seeing around 110-160 MiB/s after migrating the XO VM.

              • olivierlambertO Offline
                olivierlambert Vates πŸͺ Co-Founder CEO
                last edited by

                I would have asked the same question πŸ˜„

                • utopianfishU Offline
                  utopianfish @olivierlambert
                  last edited by

                  @olivierlambert

                  My hosts are HP EliteDesk G5 SFF with 64 GB RAM. The processor is an Intel Core i5 @ 3.0 GHz, I believe. The NICs are showing connected at 10 Gb.

                  • olivierlambertO Offline
                    olivierlambert Vates πŸͺ Co-Founder CEO
                    last edited by

                    IDK if the BIOS provides many power options, but you could make sure you are turbo boosting when needed, because it's likely your CPU is not only old but also not reaching 3 GHz at all.
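
On the host itself, the actual governor and frequency settings can be inspected from dom0. A hedged sketch using `xenpm` (ships with Xen/XCP-ng; exact output varies by CPU and BIOS settings):

```shell
# Show the current cpufreq governor, the available frequency range,
# and whether turbo mode is enabled.
xenpm get-cpufreq-para

# If a power-saving governor is active, switching to "performance"
# keeps the cores at their highest available frequency.
xenpm set-scaling-governor performance
```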

                    • nikadeN Offline
                      nikade Top contributor @olivierlambert
                      last edited by

                      @olivierlambert said in 10gb backup only managing about 80Mb:

                      I would have asked the same question πŸ˜„

                      Great minds and all that, you know πŸ˜‰

                      @utopianfish check if you have any kind of power options regarding "power saving" or "performance" modes you can change in the BIOS. That could make a big difference as well.

                      • utopianfishU Offline
                        utopianfish @nikade
                        last edited by

                        @nikade I think the problem is it's using the mgmt interface to do the backup; it's not touching the 10GB NICs. When I set it under Pools/Advanced/Backup to use the 10GB NIC as the default, the job fails; setting it back to none, the job succeeds at a speed of 80 MiB/s, so it's using the 1GB mgmt NIC. How do I get the backups to use the dedicated 10GB link, then?
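
The numbers line up with a 1 Gbit/s path. A quick back-of-the-envelope check (plain shell arithmetic, no XCP-ng specifics):

```shell
# Theoretical ceiling of a 1 Gbit/s link, before protocol overhead.
line_rate_bps=1000000000                       # 1 Gbit/s
mib_s=$(( line_rate_bps / 8 / 1024 / 1024 ))   # bits -> bytes -> MiB
echo "1G ceiling: ${mib_s} MiB/s"              # ~119 MiB/s
```

Real-world 80 MiB/s sits comfortably inside that ~119 MiB/s ceiling, so the observed speeds are exactly what a 1G management NIC would deliver; a 10G path would raise the ceiling roughly tenfold.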

                        • olivierlambertO Offline
                          olivierlambert Vates πŸͺ Co-Founder CEO
                          last edited by

                          Use XO to connect to the hosts with their 10G network IP addresses.

                          • nikadeN Offline
                            nikade Top contributor @utopianfish
                            last edited by

                            @utopianfish said in 10gb backup only managing about 80Mb:

                            @nikade I think the problem is it's using the mgmt interface to do the backup; it's not touching the 10GB NICs. When I set it under Pools/Advanced/Backup to use the 10GB NIC as the default, the job fails; setting it back to none, the job succeeds at a speed of 80 MiB/s, so it's using the 1GB mgmt NIC. How do I get the backups to use the dedicated 10GB link, then?

                            May I ask why your management interface is not on the 10G NIC? There is absolutely no downside to that kind of setup.

                            We used this setup for 7 years on our Dell R630's without any issues at all. We had 2x 10G NICs in our hosts and put the management interface on top of bond0 as a native VLAN.
                            Then we just added our VLANs on top of bond0 and voilà, all your interfaces benefit from the 10G NICs.
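
The bond-plus-native-VLAN layout described above can be outlined with the `xe` CLI. A hedged sketch, assuming a shell on the pool master; every `<...>` UUID is a placeholder to look up with `xe pif-list` / `xe network-list`:

```shell
# Create a network for the bond, then bond the two 10G PIFs onto it.
xe network-create name-label=bond0
xe bond-create network-uuid=<bond-network-uuid> \
   pif-uuids=<10g-pif1-uuid>,<10g-pif2-uuid>

# Add a tagged VLAN on top of the bond (e.g. VLAN 20 for storage).
xe network-create name-label=storage
xe pool-vlan-create network-uuid=<storage-network-uuid> \
   pif-uuid=<bond-master-pif-uuid> vlan=20

# Move the management interface onto the bond (untagged/native VLAN).
xe host-management-reconfigure pif-uuid=<bond-master-pif-uuid>
```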

                            • olivierlambertO Offline
                              olivierlambert Vates πŸͺ Co-Founder CEO
                              last edited by

                              That's also what we do in our prod.

                              • A Online
                                acebmxer
                                last edited by acebmxer

                                I could be wrong, but coming from the VMware world, the management interface didn't transfer much data, if any at all. It was only used to communicate with vSphere and/or with the host, so there's no need to waste a 10GB port on something that only sees KBs worth of data.

                                Our previous server had 2x 1GB NICs for management, 1x 10GB NIC for the VM network, 2x 10GB NICs for storage, and 1x 10GB NIC for vMotion.

                                • nikadeN Offline
                                  nikade Top contributor @acebmxer
                                  last edited by nikade

                                  @acebmxer said in 10gb backup only managing about 80Mb:

                                  I could be wrong, but coming from the VMware world, the management interface didn't transfer much data, if any at all. It was only used to communicate with vSphere and/or with the host, so there's no need to waste a 10GB port on something that only sees KBs worth of data.

                                  Our previous server had 2x 1GB NICs for management, 1x 10GB NIC for the VM network, 2x 10GB NICs for storage, and 1x 10GB NIC for vMotion.

                                  Tbh I do the same on our VMware hosts: 2x 10G or 2x 25G, with the management as a VLAN interface on that vSwitch, as well as the VLANs used for storage, VM traffic and so on.

                                  I find it much easier to keep the racks clean if we only have 2 connections from each host rather than 4, since it adds up really fast and makes the rack impossible to keep nice and clean when you have 15-20 machines in it plus storage, switches, firewalls and all the inter-connections with other racks, IP transit and so on.

                                  Edit:
                                  Except for vSAN hosts, where the vSAN traffic needs at least 1 dedicated interface, but those are the only exception.

                                  • utopianfishU Offline
                                    utopianfish @nikade
                                    last edited by

                                    @nikade My 10GB NICs are directly connected between my hosts; I don't have a 10GB switch. The mgmt is connected via switch ports.

                                    • nikadeN Offline
                                      nikade Top contributor @utopianfish
                                      last edited by

                                      @utopianfish I see, that explains a lot.

                                      • tjkreidlT Offline
                                        tjkreidl Ambassador @nikade
                                        last edited by tjkreidl

                                        @nikade Yeah, that is a far from optimal setup. It will force the data to flow through the management interface before being routed to the storage NICs.
                                        Running iostat and xentop should show the load. A better configuration IMO would be putting the storage NICs on the switch and using a separate network or VLAN for the storage I/O traffic.
                                        Storage I/O optimization takes some time and effort. The type, number, and RAID configuration of your storage devices, as well as the speed of your host CPUs, the size and type of memory, and the configuration of your VMs (whether NUMA-aware, for example) all play a role.
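
The load check mentioned above can be run from dom0 while a backup job is in flight. A minimal sketch (`iostat` comes from the sysstat package):

```shell
# Per-device utilization and throughput in MiB, refreshed every 5 s.
iostat -xm 5

# Batch-mode snapshots of per-domain CPU, memory and network usage:
# -b batch output, -d 5 second delay, -i 3 iterations.
xentop -b -d 5 -i 3
```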

                                        • nikadeN Offline
                                          nikade Top contributor @tjkreidl
                                          last edited by

                                          @tjkreidl I think the issue is that he's got no 10G switch, hence the direct connection πŸ™‚
                                          But you live and you learn; the best move would be to pick up a cheap 10G switch and make it right!

                                          • utopianfishU Offline
                                            utopianfish @nikade
                                            last edited by

                                             @nikade ok thanks. That begs the question: where can I get a cheap 10GB switch? Not SFP; it would need to be RJ45 connections.
