How to get 10gbe speeds on guest VMs?
-
New to XCP-ng and having trouble figuring out why guest VMs cannot complete an iperf3 test at 10 Gb/s. The same test on physical servers or on VMware guests works properly, but on XCP-ng guest VMs it maxes out below 3 Gbps. Are there any obvious settings I may be missing? The VIFs I'm testing with currently have an MTU of 9000, which the guests report correctly. I'm questioning the "Pool-wide network associated with ethX" setting, which has an MTU of 1500 and may be overriding my VIF settings. The iperf3 command I'm using is: (iperf3 -c x.x.x.x -p xxxx -t 60 -P 10). Any help would be appreciated.
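In case it helps, here is roughly how I'm checking the MTU at each layer (the xe field names are standard, but eth0 and the network labels are just examples from my setup):

    # On the XCP-ng host: list pool-wide networks and their MTU
    xe network-list params=uuid,name-label,MTU

    # Inside the guest: confirm what the VIF actually reports
    ip link show eth0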
-
Your network must have an MTU of 9000 if you want it to actually be used. Otherwise, on the OpenvSwitch side it won't run at 9000 (it will stay at 1500).
I'll let @fohdeesha deal with the rest
edit: also please tell us more about your setup
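If you want to double-check what the OpenvSwitch side is actually using, something like this on the host should show it (xenbr0 is just an assumed bridge name; yours may differ):

    # MTU of the bridge as Linux sees it
    ip link show xenbr0

    # The same interface from the Open vSwitch side
    ovs-vsctl list interface xenbr0 | grep mtu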
-
@olivierlambert said in How to get 10gbe speeds on guest VMs?:
OpenvSwitch
Found out I was missing setting the MTU of the 10GbE PIFs on my XCP-ng server. Use "xe pif-list" to find the <network-uuid>, then "xe network-param-set uuid=<network-uuid> MTU=9000". I found out the hard way that you need to power off all VMs attached to that network (or simply unplug their NICs) for the change to take effect. After that, I set the MTU of my guest test VM to 9000 and iperf3 testing is now at 10 Gb/s.
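Putting it together as a rough sequence (the params list and eth0 are illustrative; adjust names for your environment):

    # 1. Find the network-uuid behind the 10GbE PIF
    xe pif-list params=uuid,device,network-uuid,MTU

    # 2. Bump the pool-wide network to jumbo frames
    xe network-param-set uuid=<network-uuid> MTU=9000

    # 3. Power off (or unplug the VIFs of) every VM attached to that
    #    network, then bring them back so the new MTU is picked up

    # 4. Inside the guest, match the MTU (eth0 is an example)
    ip link set dev eth0 mtu 9000

    # 5. Re-test (address/port placeholders as in the original post)
    iperf3 -c x.x.x.x -p xxxx -t 60 -P 10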
-
Ah great
-
Hi there,
I have the same issue on BSD (in fact OPNsense). If I run the same test on a Linux VM (Ubuntu 18) alongside it, I get >1 Gb/s, but with more or less the same settings on OPNsense (FreeBSD 13.1) it stays <1 Gb/s. (I set MTU 9000 manually on both VMs.)
Is this a known issue with BSD, or is it worth investigating further?
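For reference, this is the kind of thing I'm checking on the FreeBSD side (xn0 is the usual name for a Xen PV NIC under FreeBSD; adjust for your system):

    # Confirm the current MTU on the Xen netfront interface
    ifconfig xn0

    # Set jumbo frames manually
    ifconfig xn0 mtu 9000

    # Sometimes suggested for xn(4) throughput issues: disabling offloads
    # (worth testing, not a confirmed fix)
    ifconfig xn0 -txcsum -rxcsum -tso -lro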
Cheers.
-
In theory you would want to change the MTU of the parent interface in OPNsense to 9000. In my case testing isn't possible, since I run all VLANs off a single interface (2x 10GbE NIC) and don't want to change the MTU of systems outside my virtual environment. Anyway, with the change mentioned above you can achieve 10GbE speeds between VMs.
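For what it's worth, a VLAN child interface can usually keep a smaller MTU than its parent, so raising the parent alone shouldn't force jumbo frames onto every VLAN. A Linux sketch (the interface and VLAN names are made up):

    # Parent interface at 9000
    ip link set dev eth0 mtu 9000

    # A VLAN child can stay at 1500 for networks outside the virtual environment
    ip link set dev eth0.10 mtu 1500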