Best posts made by JamesG
-
RE: Nvidia P40s with XCP-ng 8.3 for inference and light training
I just added a P4 to one of my hosts for exactly this. My servers can only handle low-profile cards, so the P4 fits. It's not the most powerful of GPUs, but I can get my feet wet.
-
RE: nVidia Tesla P4 for vgpu and Plex encoding
From my perspective, there's literally money on the ground for any virtualization platform to pick up VDI with Intel. The GPUs are affordable and performant for VDI work. They currently work with OpenShift, and Proxmox is working on support.
-
RE: Centos 9 . why nobody use this OS?!
When IBM/Red Hat "killed" CentOS, the rest of the world took the hint and left. Companies and projects left CentOS in droves because the future of their products was in jeopardy due to the loss of CentOS.
At this point, the damage is done.
-
RE: Epyc VM to VM networking slow
These latest 8.3 update speeds are still slower than a 13-year-old Xeon E3-1230.
-
RE: XOA/XO from Sources S3 backup feature usage/status
@Andrew Thanks for that added detail.
Your success with Wasabi is encouraging. Perhaps Planedrop's performance issues with Backblaze B2 come down to the specific combination of S3 implementations between Backblaze and XO.
Things to test:
XO to AWS
XO to Wasabi
XO to Backblaze
Theoretically, the performance should be the same to all S3 endpoints. A quick way to compare them is sketched below.
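Something like this could time the same upload against each endpoint (a minimal sketch with boto3; the bucket name is hypothetical and the endpoint URLs should be checked against your region):

```python
# Rough comparison of S3 upload throughput across providers.
# Assumes credentials are configured the usual way (environment
# variables or ~/.aws/credentials) and the bucket exists at each endpoint.
import io
import os
import time

import boto3

ENDPOINTS = {
    "AWS": None,  # boto3's default endpoint
    "Wasabi": "https://s3.wasabisys.com",
    "Backblaze B2": "https://s3.us-west-004.backblazeb2.com",  # region-specific
}
BUCKET = "xo-backup-test"  # placeholder bucket name
PAYLOAD = os.urandom(64 * 1024 * 1024)  # 64 MiB of incompressible data

for name, url in ENDPOINTS.items():
    s3 = boto3.client("s3", endpoint_url=url)
    start = time.monotonic()
    s3.upload_fileobj(io.BytesIO(PAYLOAD), BUCKET, "throughput-test.bin")
    elapsed = time.monotonic() - start
    print(f"{name}: {len(PAYLOAD) / elapsed / 1e6:.1f} MB/s")
```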
Latest posts made by JamesG
-
RE: Intel Flex GPU with SR-IOV for GPU accelerated VDIs
@olivierlambert While VDI is maybe not as vital as it once was...I'm experimenting with multimedia work in XCP-ng. Having a VM with GPU offloading of codec encoding would be nice. It's a pretty big CPU hit to make that go.
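The kind of offload I mean, roughly (an untested sketch assuming an ffmpeg build with VA-API support and the GPU's render node visible inside the VM; the device path and file names are placeholders):

```python
# Transcode on the GPU instead of the CPU via VA-API.
import subprocess

cmd = [
    "ffmpeg",
    "-vaapi_device", "/dev/dri/renderD128",  # render node exposed to the VM
    "-i", "input.mp4",                       # placeholder source file
    "-vf", "format=nv12,hwupload",           # move frames into GPU memory
    "-c:v", "h264_vaapi",                    # hardware H.264 encoder
    "output.mp4",
]
subprocess.run(cmd, check=True)
```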
-
RE: Intel Flex GPU with SR-IOV for GPU accelerated VDIs
@olivierlambert Ideally you need to be somewhere into kernel 6. 6.12 sticks out in my head, but I'm not positive when support got fully integrated.
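For what it's worth, checking the running kernel against that guess is trivial:

```python
# Compare the running kernel version against a minimum.
import platform

MINIMUM = (6, 12)  # my best recollection, not a confirmed requirement
release = platform.release()  # e.g. "6.12.9-amd64"
current = tuple(int(p) for p in release.split("-")[0].split(".")[:2])
print(f"kernel {release}: {'ok' if current >= MINIMUM else 'too old'}")
```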
-
RE: Intel Flex GPU with SR-IOV for GPU accelerated VDIs
@olivierlambert If I remember right, you should be able to see 62 VFs on that card. There might be a tool needed to define how many VFs are present, like on a NIC.
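On Linux the usual mechanism is sysfs, same as for NICs. A minimal sketch (the PCI address is a placeholder; find the real one with lspci):

```python
# Query and set SR-IOV virtual function counts through sysfs.
# Needs root to write; the PCI address below is hypothetical.
from pathlib import Path

PCI_ADDR = "0000:03:00.0"  # placeholder address of the Flex card
dev = Path(f"/sys/bus/pci/devices/{PCI_ADDR}")

total = int((dev / "sriov_totalvfs").read_text())  # hardware maximum
print(f"device supports up to {total} VFs")

# Writing a count to sriov_numvfs creates that many VFs.
(dev / "sriov_numvfs").write_text("8")
```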
-
RE: Epyc VM to VM networking slow
@Forza said in Epyc VM to VM networking slow:
Would sr-iov with xoa help backup speeds?
If you specify the SR-IOV NIC, it will be wire-speed.
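Easy enough to verify with iperf3 between two VMs on the SR-IOV NIC (a sketch; assumes `iperf3 -s` is already running on the target, whose address here is a placeholder):

```python
# Measure VM-to-VM throughput with iperf3 and parse its JSON output.
import json
import subprocess

TARGET = "10.0.0.2"  # placeholder address of the other VM
out = subprocess.run(
    ["iperf3", "-c", TARGET, "-t", "10", "-J"],  # -J = JSON output
    capture_output=True, text=True, check=True,
).stdout

bps = json.loads(out)["end"]["sum_received"]["bits_per_second"]
print(f"{bps / 1e9:.2f} Gbit/s")
```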
-
RE: Intel Flex GPU with SR-IOV for GPU accelerated VDIs
@olivierlambert From Intel's page:
"With up to 62 virtual functions based on hardware-enabled single-root input/output virtualization (SR-IOV) and no licensing fees, the Intel Data Center GPU Flex 140 delivers impeccable quality, flexibility, and productivity at scale."
-
RE: Intel Flex GPU with SR-IOV for GPU accelerated VDIs
Generally speaking...Just because you haven't had direct requests doesn't mean the feature isn't desired.
It's easy enough to look over the current feature list, assume the capability isn't there because it isn't listed, and move on to find the next viable solution.
Proxmox and OpenShift seem to be killing it in this space with Intel Flex GPUs.
-
RE: Intel Flex GPU with SR-IOV for GPU accelerated VDIs
I can't believe that there's not much interest.
Anyway...As far as I know these are pretty well supported in newer kernels. I think you need to be fairly deep in kernel 6. Given that XCPng/XenServer is currently running on kernel 4 with a bunch of backports, this might be a little problematic.
Last I knew, Intel is not charging licensing fees for using vGPUs the way Nvidia does.
With a working driver in XCP-ng and no licensing fees, this could be a really cost-effective VDI platform.
-
RE: Nvidia P40s with XCP-ng 8.3 for inference and light training
I just added a P4 to one of my hosts for exactly this. My servers can only handle low-profile cards, so the P4 fits. It's not the most powerful of GPUs, but I can get my feet wet.
-
RE: Epyc VM to VM networking slow
These latest 8.3 update speeds are still slower than a 13-year-old Xeon E3-1230.
-
RE: XO deploy multiple VMs observation/issue
While not critical...It would be nice to have the same dialog for system disks as we do for system names in the multiple VMs section under "advanced." It seems like a really easy thing to add that would save someone from having to go back and rename disks later. The system is already creating a name, and it already lets us name the disks for single VMs, so why not let us name the disks the same way we name the systems for multiples?