Posts made by JamesG
-
RE: Nvidia P40s with XCP-ng 8.3 for inference and light training
I just added a P4 to one of my hosts for exactly this. My servers can only handle low-profile cards, so the P4 fits. Not the most powerful of GPUs, but I can get my feet wet.
-
RE: Epyc VM to VM networking slow
Speeds with these latest 8.3 updates are still slower than a 13-year-old Xeon E3-1230.
-
RE: XO deploy multiple VMs observation/issue
While not critical...It would be nice to have the same dialog for system disks as we do for system names in the multiple-VMs section under "Advanced." It seems like a really easy thing to add that would prevent someone from having to go back and rename disks later. The system is already creating a name, and it already gives us the ability to name the disks for single VMs, so why not let us name the disks like the systems for multiples?
-
RE: XO deploy multiple VMs observation/issue
Okay...Updated to the latest commit. Still behaves the same.
-
RE: XO deploy multiple VMs observation/issue
@DustinB I'm on ee6fa, which is apparently 11 commits behind. I'm updating now and will retry a multiple-VM deployment to see what happens.
My understanding is that the "name" of the disk doesn't really matter so much; that's more for us humans. The system mainly works off of UUIDs, which are unique.
I'll post back later.
-
XO deploy multiple VMs observation/issue
Using XO from the sources (and presumably XOA) to deploy multiple VMs, there's a function to set the names for the VMs. I wish there were the same capability to name the disks.
I usually label the disks the same as the host, with _0 for the first (boot) drive and _X for any subsequent drives (if any).
So using XO, if I try to create three hosts with the advanced/multiple feature, I can set the host naming: Sys-VM-1, Sys-VM-2, Sys-VM-3, and I would like to define their respective system drives as Sys-VM-1_0, Sys-VM-2_0, and Sys-VM-3_0. Instead, I get three disks all named whatever I specified in the main VM disk creation dialog (Sys-VM-1_0). Yes...You can go back later and rename them, but it would be nice to do it all from the main dialog and not have to go back and clean up.
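In the meantime, the rename-after-deploy cleanup is easy to script. Here's a minimal sketch using the XenAPI Python bindings; the host URL, credentials, and VM names are placeholders, and the rename only touches the human-facing name-label, never the UUID:

```python
# Minimal sketch of the rename-after-deploy cleanup using the XenAPI
# Python bindings (pip install XenAPI). The host URL, credentials, and
# VM names are placeholders for your environment.
import XenAPI

session = XenAPI.Session("https://xcp-host.example")
session.xenapi.login_with_password("root", "password")
try:
    for name in ["Sys-VM-1", "Sys-VM-2", "Sys-VM-3"]:
        for vm in session.xenapi.VM.get_by_name_label(name):
            for vbd in session.xenapi.VM.get_VBDs(vm):
                # Skip CD drives; only rename actual disks.
                if session.xenapi.VBD.get_type(vbd) != "Disk":
                    continue
                vdi = session.xenapi.VBD.get_VDI(vbd)
                device = session.xenapi.VBD.get_userdevice(vbd)
                # Sys-VM-1_0 for the boot disk, Sys-VM-1_1 for the next, etc.
                # This only changes the name-label; the VDI UUID is untouched.
                session.xenapi.VDI.set_name_label(vdi, f"{name}_{device}")
finally:
    session.xenapi.session.logout()
```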
-
RE: Epyc VM to VM networking slow
@Seneram If you search the forum, you'll find other topics that discuss this. In January/February 2023 I reported it myself because I was trying to build a cluster that needed high-performance networking and found that the VMs couldn't do it. While researching the issue then, I seem to recall seeing other topics from a year or so prior to that.
Just because this one thread isn't two years old doesn't mean this is the only topic reporting the issue.
-
RE: Epyc VM to VM networking slow
While I'm very happy to see this getting some attention now, I am a bit disappointed that it took so long; this has been reported for easily two years or more. Hopefully it will be resolved fairly soon.
That said...If you need high-speed networking in Epyc VMs now, SR-IOV can be your friend. Using ConnectX-4 25Gb cards, I can hit 22-23Gb/s with guest VMs. Obviously SR-IOV brings along a whole other set of issues, but it's a way to get fast networking today.
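For anyone who wants to try it, here's a rough sketch of the setup flow on XCP-ng, driving the stock xe commands from Python. The NIC device name and network label are placeholders, the exact steps vary by card, and a host reboot is typically needed before the VFs appear:

```python
# Rough sketch of the SR-IOV setup, driving xe from Python. The NIC
# device (eth2) and network label are placeholders; check the SR-IOV
# docs for your specific card.
import subprocess

def xe(*args: str) -> str:
    """Run an xe command on the XCP-ng host and return its trimmed output."""
    return subprocess.run(["xe", *args], check=True,
                          capture_output=True, text=True).stdout.strip()

# 1. Create a network to carry the SR-IOV virtual functions.
net_uuid = xe("network-create", "name-label=sriov-25g")

# 2. Find the PIF for the ConnectX-4 port (eth2 is a placeholder).
pif_uuid = xe("pif-list", "device=eth2", "params=uuid", "--minimal")

# 3. Enable SR-IOV on that PIF, backed by the new network.
#    A host reboot may be needed before the VFs actually appear.
sriov_uuid = xe("network-sriov-create",
                f"network-uuid={net_uuid}", f"pif-uuid={pif_uuid}")
print(f"SR-IOV network ready: {sriov_uuid}")
```

From there, attach a VM's VIF to the sriov-25g network and it gets its own VF, bypassing the software datapath.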
-
RE: nVidia Tesla P4 for vgpu and Plex encoding
From my perspective, there's literally money on the ground for any virtualization platform to pick up VDI with Intel. The GPUs are affordable and performant for VDI work. They currently work with OpenShift, and Proxmox is working on it.
-
RE: nVidia Tesla P4 for vgpu and Plex encoding
@olivierlambert As mentioned in another thread...Intel Flex GPUs seem primed for this. Nvidia is closed and license-greedy. AMD seems a little lost and wandering. Intel has said, "No licensing...Just use it," but they require some development.
It should be relatively easy to incorporate the Intel Flex GPUs, but I'm not sure whether the newer kernels are required. That might be where the wheels fall off for now.
-
XOSTOR to 8.3?
Apologies if I've missed this...But has XOSTOR been made compatible with 8.3? Last I checked, it seemed to be listed as 8.2 only.
-
RE: Intel Flex GPU with SR-IOV for GPU accelarated VDIs
@sanjay
Definitely in agreement here. Getting Flex integration with XCP-ng is a no-brainer to me. Hopefully Vates is working on it. If not...Maybe there's a way to fund or prioritize the development?
-
RE: Epyc VM to VM networking slow
The past couple of days have been pretty nuts, but I've dabbled with testing this. In my configuration, XCP-ng 8.3 with all currently released patches, I top out at 15Gb/s with 8 threads on Win 10. Going to 16 threads or beyond doesn't really improve things.
Disabling core boost and SMT and setting deterministic performance in the BIOS added nearly 2Gb/s on single-threaded iperf.
When running iperf and watching htop on the XCP-ng server, I see nearly all cores running at 15-20% for the duration of the transfer. That seems excessive.
Iperf on the E3-1230v2...Single thread, 9.27Gb/s, with negligible improvement from more threads. Surprisingly, there's a similar hit on CPU utilization, though not as bad: 10Gb/s of traffic hits about 10% or so. Definitely not as bad as on the Epyc system.
I'll do more thorough testing tomorrow.
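For anyone who wants to reproduce the thread-count sweep, this is roughly how I'd script it; a sketch assuming iperf3 is already running as a server (iperf3 -s) in the target VM, with 10.0.0.2 as a placeholder address:

```python
# Sketch of a thread-count sweep against an iperf3 server (start
# "iperf3 -s" in the target VM first). 10.0.0.2 is a placeholder.
import json
import subprocess

SERVER = "10.0.0.2"

for threads in (1, 2, 4, 8, 16):
    # -P sets the number of parallel streams, -J emits JSON output.
    out = subprocess.run(
        ["iperf3", "-c", SERVER, "-P", str(threads), "-J"],
        check=True, capture_output=True, text=True).stdout
    bps = json.loads(out)["end"]["sum_received"]["bits_per_second"]
    print(f"{threads:>2} threads: {bps / 1e9:.2f} Gb/s")
```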
-
RE: Epyc VM to VM networking slow
@nicols Agreed. I'm pretty sure this is a Xen/Epyc issue.
This evening I'll build a couple of VMs to match your config, run iperf, and report back the results.
-
RE: Epyc VM to VM networking slow
@nicols Give me your VM specs (vCPU, RAM, anything else relevant) and I'll run the exact same tests.
-
RE: Epyc VM to VM networking slow
@olivierlambert For single-threaded iperf...Yes. Our speeds match 100%. Which is half the transfer rate of a single-threaded iperf on 12-year-old Xeon E3 hardware.
I understand that we've had lots of security issues in the past decade and several steps have been taken to protect and isolate memory inside all virtualization platforms. When I first built my E3-1230 Xeon system for the homelab, VM to VM iperfs were like 20Gb/s. Nowadays that's slowed down significantly.
Anyway...I just find it hard to believe that, with as superior a computing platform as Epyc is, single-threaded iperf is so much slower than on 12-year-old entry-level Intel CPUs.
Maybe I should load VMware on this system, see how it does, and report back: same hardware, different hypervisor, and compare notes.
-
RE: Epyc VM to VM networking slow
@olivierlambert With a billion threads.
Anyway...
I'm most definitely a willing subject to help get this resolved. Heck...I'll even give you guys access to the environment to do whatever you want to do. I would just like to see this get fixed.
With that...You guys tell me: what tests do you want run, and do you want access to the environment to do your own thing with it?
-
RE: Epyc VM to VM networking slow
A note...
I'm running a single 16-core, 32-thread second-gen Epyc.
Nicols is running a dual-proc, 24-core, 48-thread third-gen Epyc. My base clock rate is 3.0GHz; his is 2.9GHz.
With its improved caching and memory handling, the third-gen Epyc should behave better than my second-gen CPU, but generally speaking, our performance seems to be the same.