When IBM/Red Hat "killed" CentOS, the rest of the world took the hint and left. Companies and projects abandoned CentOS in droves because the future of their products was in jeopardy.
At this point, the damage is done.
From my perspective, there's literally money on the ground for any virtualization platform that picks up VDI with Intel. The GPUs are affordable and performant for VDI work. They currently work with OpenShift, and Proxmox is working on support.
@Andrew Thanks for that added detail.
Your success with Wasabi is encouraging. Perhaps Planedrop's performance issues with Backblaze B2 are related to the specific combination of S3 implementations between Backblaze and XO.
Things to test:
XO to AWS
XO to Wasabi
XO to Backblaze B2
Theoretically, the performance should be the same across all S3 endpoints.
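To make that comparison concrete, a rough throughput probe could time an identical PUT against each endpoint. Here's a minimal sketch using boto3; the endpoint URLs, regions, bucket name, and credentials are all placeholders to substitute with your own:

```python
# Rough throughput probe: time an identical PUT against each S3 endpoint.
# Endpoints, regions, bucket, and credentials below are placeholders.
# Requires boto3 (pip install boto3).
import time
import boto3

PAYLOAD = b"\0" * (256 * 1024 * 1024)  # 256 MiB test object

ENDPOINTS = {
    "AWS":       (None, "us-east-1"),                    # boto3 default endpoint
    "Wasabi":    ("https://s3.wasabisys.com", "us-east-1"),
    "Backblaze": ("https://s3.us-west-000.backblazeb2.com", "us-west-000"),  # region-specific
}

for name, (url, region) in ENDPOINTS.items():
    s3 = boto3.client(
        "s3",
        endpoint_url=url,
        region_name=region,
        aws_access_key_id="YOUR_KEY",         # placeholder
        aws_secret_access_key="YOUR_SECRET",  # placeholder
    )
    start = time.monotonic()
    s3.put_object(Bucket="your-test-bucket", Key="throughput-probe", Body=PAYLOAD)
    elapsed = time.monotonic() - start
    print(f"{name}: {len(PAYLOAD) / elapsed / 1e6:.1f} MB/s")
```

This only measures a single large PUT, so it won't capture XO's actual backup behavior (multipart uploads, concurrency), but it would at least show whether the raw endpoints differ.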
@olivierlambert As mentioned in another thread...Intel Flex GPUs seem primed for this. NVIDIA is closed and license-greedy. AMD seems a little lost and wandering. Intel has said, "No licensing...just use it," but their cards require some development work.
It should be relatively easy to incorporate the Intel Flex GPUs, but I'm not sure whether newer kernels are required. That might be where the wheels fall off for now.
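If kernel support is the sticking point, a quick sanity check is comparing the running kernel against whatever minimum Intel's driver needs. A minimal sketch; the 6.2 floor is purely an assumed placeholder, so check Intel's driver docs for the real requirement:

```python
# Quick check: does the running kernel meet an assumed minimum for the
# Intel Flex (i915) driver? The 6.2 floor is a placeholder assumption,
# not Intel's documented requirement.
import platform

ASSUMED_MIN = (6, 2)  # hypothetical minimum kernel for upstream Flex support

release = platform.release()
major, minor = (int(x) for x in release.split(".")[:2])

if (major, minor) >= ASSUMED_MIN:
    print(f"Kernel {release}: likely new enough")
else:
    print(f"Kernel {release}: older than assumed minimum "
          f"{ASSUMED_MIN[0]}.{ASSUMED_MIN[1]}, so Flex support would need backports")
```

As far as I know, XCP-ng 8.x dom0 ships a 4.19-based kernel, which is why backports might be where this gets complicated.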
Apologies if I've missed this...but has XOSTOR been made compatible with 8.3? Last I checked, it was listed as 8.2 only.
@sanjay
Definitely in agreement here. Getting Flex integration into XCP-ng is a no-brainer to me. Hopefully Vates is working on it. If not, maybe there's a way to fund or prioritize the development?
The past couple of days have been pretty nuts, but I've dabbled with testing this. In my configuration, XCP-ng 8.3 with all currently released patches, I top out at 15 Gb/s with 8 threads on Win 10. Going to 16 threads or beyond doesn't really improve things.
Disabling core boost and SMT and setting deterministic performance in the BIOS added nearly 2 Gb/s to single-threaded iperf.
When running iperf and watching htop on the XCP-ng server, I see nearly all cores running at 15-20% for the duration of the transfer. That seems excessive.
iperf on the E3-1230v2: single thread, 9.27 Gb/s, with negligible improvement from more threads. Surprisingly, a similar hit on CPU usage, though not as bad: 10 Gbps of traffic hits about 10% or so. Definitely not as bad as on the Epyc system.
I'll do more thorough testing tomorrow.
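For anyone who wants to repeat the same sweep, something like this is what I have in mind; a minimal sketch assuming iperf3 on both ends and a server (placeholder address below) already running `iperf3 -s`:

```python
# Sweep iperf3 parallel stream counts against a server and report Gb/s.
# Assumes iperf3 is installed on both ends and the server (placeholder
# address below) is running `iperf3 -s`.
import json
import subprocess

SERVER = "192.0.2.10"  # placeholder: your iperf3 server

for streams in (1, 2, 4, 8, 16):
    out = subprocess.run(
        ["iperf3", "-c", SERVER, "-P", str(streams), "-t", "10", "-J"],
        capture_output=True, text=True, check=True,
    )
    result = json.loads(out.stdout)
    bps = result["end"]["sum_received"]["bits_per_second"]
    print(f"{streams:2d} streams: {bps / 1e9:.2f} Gb/s")
```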
@nicols Agreed. I'm pretty sure this is a Xen/Epyc issue.
This evening I'll build a couple of VMs matching your config, run iperf, and report back the results.
@nicols Give me your VM specs and I'll run the exact same tests: vCPU, RAM, anything else relevant.
@olivierlambert For single-threaded iperf...yes, our speeds match 100%. That's half the transfer rate of single-threaded iperf on 12-year-old Xeon E3 hardware.
I understand that we've had lots of security issues in the past decade, and several steps have been taken to protect and isolate memory inside all virtualization platforms. When I first built my E3-1230 Xeon system for the homelab, VM-to-VM iperfs were around 20 Gb/s. Nowadays that's significantly slower.
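For reference, you can see which of those mitigations a given Linux kernel reports as active straight from sysfs; a minimal sketch to run in dom0 or a Linux guest (note that on XCP-ng the hypervisor-level mitigations belong to Xen itself, so `xl dmesg` in dom0 is the more authoritative place to look):

```python
# List the CPU vulnerability mitigations the kernel reports as active.
# Reads the standard Linux sysfs interface (kernel 4.15+).
from pathlib import Path

VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

for entry in sorted(VULN_DIR.iterdir()):
    print(f"{entry.name}: {entry.read_text().strip()}")
```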
Anyway...I just find it hard to believe that, on as superior a computing platform as Epyc, single-threaded iperf is so much slower than on 12-year-old entry-level Intel CPUs.
Maybe I should load VMware on this same hardware, see how a different hypervisor does, and report back to compare notes.