When IBM/RedHat "killed" CentOS, the rest of the world took a hint and left. Companies and projects left CentOS in droves as the future of their products were in jeopardy due to the loss of CentOS.
At this point, the damage is done.
@Andrew Thanks for that added detail.
Your success with Wasabi is encouraging. Perhaps Planedrop's performance issues with BackBlaze B2 are related to the specific combination of BackBlaze's S3 implementation and XO.
Things to test:
XO to AWS
XO to Wasabi
XO to BackBlaze
Theoretically, the performance should be the same to all S3 endpoints.
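One way to isolate that would be to time a raw upload to each endpoint outside of XO entirely. A minimal Python sketch with boto3, assuming credentials are already configured (environment variables or ~/.aws/credentials) and using placeholder bucket names and endpoint URLs (the BackBlaze endpoint in particular is region-specific, so check yours):

```python
# Rough throughput check against different S3 endpoints, independent of XO.
# Bucket names and endpoint URLs below are placeholders -- substitute your own.
import os
import time
import boto3

ENDPOINTS = {
    "aws":       {"endpoint_url": None,                                     "bucket": "my-aws-test-bucket"},
    "wasabi":    {"endpoint_url": "https://s3.wasabisys.com",               "bucket": "my-wasabi-test-bucket"},
    "backblaze": {"endpoint_url": "https://s3.us-west-004.backblazeb2.com", "bucket": "my-b2-test-bucket"},
}

TEST_SIZE = 256 * 1024 * 1024  # 256 MiB of random data; arbitrary test size
payload_path = "/tmp/s3-test-blob"

with open(payload_path, "wb") as f:
    f.write(os.urandom(TEST_SIZE))

for name, cfg in ENDPOINTS.items():
    s3 = boto3.client("s3", endpoint_url=cfg["endpoint_url"])
    start = time.monotonic()
    # upload_file() does a managed multipart upload under the hood
    s3.upload_file(payload_path, cfg["bucket"], "throughput-test-blob")
    elapsed = time.monotonic() - start
    mbps = (TEST_SIZE * 8 / 1_000_000) / elapsed
    print(f"{name}: {elapsed:.1f}s ({mbps:.0f} Mbit/s)")
```

If all three come back in the same ballpark, the bottleneck is more likely in how XO drives the S3 API than in the provider itself.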
The past couple of days have been pretty nuts, but I've dabbled with testing this, and in my configuration (XCP-ng 8.3 with all currently released patches) I top out at 15Gb/s with 8 threads on Win 10. Going to 16 threads or beyond doesn't really improve things.
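For anyone who wants to repeat the sweep, something like this is enough to step through stream counts and pull the aggregate out of iperf3's JSON output. It assumes iperf3 (not the older iperf2) on both ends, and the server address is a placeholder:

```python
# Sweep iperf3 parallel stream counts and report aggregate throughput.
# Assumes iperf3 on both ends; 10.0.0.10 is a placeholder server address.
import json
import subprocess

SERVER = "10.0.0.10"   # VM running `iperf3 -s`
DURATION = 20          # seconds per run

for streams in (1, 2, 4, 8, 16):
    out = subprocess.run(
        ["iperf3", "-c", SERVER, "-P", str(streams), "-t", str(DURATION), "-J"],
        capture_output=True, text=True, check=True,
    )
    result = json.loads(out.stdout)
    gbps = result["end"]["sum_received"]["bits_per_second"] / 1e9
    print(f"{streams:>2} stream(s): {gbps:.2f} Gb/s")
```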
Killing core boost and SMT, and setting deterministic performance in the BIOS, added nearly 2Gb/s to single-threaded iperf.
When running iperf and watching htop on the XCP-ng server, I see nearly all cores running at 15-20% for the duration of the transfer. That seems excessive.
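To put a number on that rather than eyeballing htop, a quick loop like the one below averages per-core utilization over the run. This assumes psutil is available wherever you're measuring from, and the 30-second window is arbitrary:

```python
# Sample per-core utilization once per second for a fixed window while the
# transfer is running, then print the average for each core.
import psutil

SAMPLES = 30  # one sample per second for ~30 seconds

totals = None
for _ in range(SAMPLES):
    per_core = psutil.cpu_percent(interval=1.0, percpu=True)
    if totals is None:
        totals = [0.0] * len(per_core)
    totals = [t + c for t, c in zip(totals, per_core)]

for core, total in enumerate(totals):
    print(f"core {core:>3}: {total / SAMPLES:5.1f}% avg")
```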
Iperf on the E3-1230v2...single thread, 9.27Gb/s. Negligible improvement with more threads. Surprisingly, there's a similar hit on CPU usage, though not as bad: 10Gbps of traffic costs about 10% or so. Definitely not as bad as on the Epyc system.
I'll do more thorough testing tomorrow.
@nicols Agreed. I'm pretty sure this is a Xen/Epyc issue.
This evening I'll build a couple of VMs to your config, run iperf, and report back the results.
@nicols give me your VM specs and I'll run the exact same tests. vCPU, RAM, anything else relevant.
@olivierlambert For single-threaded iperf...Yes. Our speeds match 100%, which is half the transfer rate of a single-threaded iperf on 12-year-old Xeon E3 hardware.
I understand that we've had lots of security issues in the past decade, and several steps have been taken to protect and isolate memory inside all virtualization platforms. When I first built my E3-1230 Xeon system for my homelab, VM-to-VM iperfs were like 20Gb/s. Nowadays that's slowed down significantly.
Anyway...I just find it hard to believe that, with as superior a computing platform as Epyc is, single-threaded iperf is so much slower than on 12-year-old entry-level Intel CPUs.
Maybe I should load VMware on this system, see how the same hardware does with a different hypervisor, and report back so we can compare notes.
@olivierlambert With a billion threads.
Anyway...
I'm most definitely a willing subject to help get this resolved. Heck..I'll even give you guys access to the environment to do whatever you want to do. I would just like to see this get fixed.
With that...You guys tell me. What tests do you want run and do you want access to the environment to do your own thing with it?
A note...
I'm running a single 16-core, 32-thread second-gen Epyc.
Nicols is running a dual-proc, 24-core, 48-thread third-gen Epyc.
My base clock rate is 3.0GHz. His is 2.9GHz.
With its improved caching and memory handling, the third-gen Epyc should behave better than my second-gen CPU, but generally speaking, our performance seems to be the same.
@olivierlambert Not really...
In Nicols' first post, a single-threaded iperf got 3.38Gb/s.
Single-threaded for me was 3.32Gb/s.
With two threads I get 5.19Gb/s.
With 20 threads I cap off at 7.02Gb/s.
This performance is about the same with Windows VMs and Debian VMs, so it's not a guest OS issue; it's something in the hypervisor.
I cloned a Win10 VM...Win10 to Win10, same performance as Debian.
This is most definitely something within the networking infrastructure of Xen.
Here's my thread from earlier this year:
https://xcp-ng.org/forum/topic/6916/tracking-down-poor-network-performance/11
Here's a thread referenced in that one:
I'd be curious how this works in VMware.