2022 was again an exciting year, with numerous innovations in XCP-ng and Xen Orchestra as well as good and helpful discussions in the forum. I am excited about what the new year will bring and wish everyone a good start to 2023.
Best posts made by gskger
- A wonderful new year for everybody
- RE: Can I just say thanks?
I second that! While not a commercial user, I really like the community and the active participation of the Vates team, helping novice homelab users with patience and commercial users with in-depth knowledge alike. Keep rocking!
- RE: Backup reports on Microsoft Teams
You could at least send the backup reports (requires the `backup-reports` and `transport-email` plugins on XOA) to a Microsoft Teams channel of your choice (Channel - More options - Get email address).
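Before pointing the plugin at it, the channel address can be tested with any mail tool - a minimal sketch using `swaks`; the SMTP server, credentials and channel address below are placeholders:

```
# send a test mail to the Teams channel address through the same
# SMTP server the transport-email plugin will use
swaks --server smtp.example.com:587 --tls \
      --auth LOGIN --auth-user xoa@example.com --auth-password 'secret' \
      --from xoa@example.com \
      --to "my-channel.abc123@emea.teams.ms" \
      --header "Subject: XO backup report test" \
      --body "If this shows up in the channel, the plugin can use this address."
```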
- RE: XCP-ng 8.2 updates announcements and testing
@gduperrey Updated my two-host playlab without a problem. Installing and/or updating the guest tools (now reporting `7.30.0-11`) on some mainstream Linux distros worked, as did the usual VM operations in the pool. Looks good
- RE: XCP-ng 8.2 updates announcements and testing
@bleader Update worked well on my two-node homelab and everything looks and works normally after the reboot. I did some basic stuff like VM and storage migration, but nothing in-depth. Let's see how things work out.
- RE: XCP-ng 8.2 updates announcements and testing
@bleader Updated my homelab without any issues.
- RE: XCP-ng 8.2 updates announcements and testing
@stormi Did not even know the problem existed. Anyway, I added a new (second) DNS server (9.9.9.9) to the DNS server list via `xsconsole` and rebooted the host (XCP-ng 8.2.0 fully patched).
Before the update: DNS 9.9.9.9 did not persist; only the previous settings were shown.
After the update: DNS 9.9.9.9 persisted across the reboot and is listed together with the previous settings.
Deleting DNS 9.9.9.9 worked as well, so the `xsconsole` update worked for me.
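For anyone who prefers checking this from the host shell instead of `xsconsole`, a minimal sketch (UUIDs and addresses will differ on your hosts):

```
# DNS servers configured on the management interface
xe pif-list management=true params=uuid,DNS

# what dom0 actually resolves against
cat /etc/resolv.conf

# after a reboot, both should still list the added server, e.g. 9.9.9.9
```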
- RE: XCP-ng 8.2 updates announcements and testing
@stormi Updated my two-host playlab (8.2.0 fully patched; the third host currently serves as a Covid-19 home office workstation) with no errors. Rebooted and ran the usual tests (create, live migrate, copy and delete a Linux and a Windows 10 VM, as well as create/revert snapshots with and without RAM). Fooled myself with a `VM_LACKS_FEATURE` error on the Windows 10 VM until I realized that I had forgotten to install the guest tools - I need more sleep. Will try a restore after tonight's backup.
Edit: restore from backup worked as well.
- A great and happy new year
A great and happy new year 2022 to everybody! Another year has passed and the XCP-ng community continues to rock! Stay optimistic and healthy (never hurts).
- Nvidia P40s with XCP-ng 8.3 for inference and light training
Being curious about Large Language Models (LLMs) and machine learning, I wanted to add GPUs to my XCP-ng homelab. Finding the right GPU in June 2024 was not easy, given the numerous options and constraints, including a limited budget. My primary setup consists of two HP ProDesk 600 G6 machines running my 24/7 XCP-ng cluster with shared storage. My secondary setup features a set of Dell R210 II and Dell R720 servers, which I use for memory-intensive tasks. The Dell R720 can hold up to two full-sized GPUs, which led to my first requirement: the GPU must fit into the Dell R720.
With R720 compatibility as a requirement, gaming GPUs (RTX 2080, 3080/3090, 4080/4090) were not an option. Additionally, they are expensive and do not all come with a lot of VRAM. But I admit that they are more powerful than what I came up with.
Since I want to test different LLMs with various parameter sizes, my minimum requirement was 24GB of VRAM. That narrowed the GPU options again and ruled out otherwise compatible low-power (~70W) GPUs like the Nvidia RTX A2000 (12GB) or the Nvidia Tesla T4 (16GB).
My budget limit for getting started was around €300 for one GPU. That narrowed my search down to the Nvidia Tesla P40, a Pascal architecture GPU that cost around $5,699 when it was released in 2016. In Europe, the P40 is available on Ebay for around €300 to €500, and I was very lucky to get two P40s for around €510 in total. Seeing two P40s on Ebay in the US for $299 with no delivery option to Europe was a painful experience, though.
However, I had to make two compromises with the P40 that may become a problem in the future. First, the P40 lacks Tensor Cores, which accelerate deep learning training far beyond plain FP32 compute. Second, the P40 is limited by its CUDA compute capability of 6.1, which is lower than that of newer GPUs like the H100 (9.0). At some point, software tools might stop supporting the P40.
To install a second GPU, I had to swap the Dell R720 riser card #3 from two PCIe x8 slots with a 150W power connector to one PCIe x16 slot with a 225W power connector. Like the K80/M40/M60/P100, the P40 has an 8-pin EPS power connector, so you need a special power cable, which can be sourced from Ebay. Using the standard Dell general-purpose GPU cable risks damaging the GPU or the motherboard. Yesterday the last part arrived, so today is install day.
The process of swapping riser #3, installing the two P40 GPUs, and connecting the power cables was straightforward. During boot, the server checks the PCI devices and updates its inventory, which can take a few minutes. After the initial fan ramp-up, the fan speed dropped back to normal, and the Dell R720 idles at about 126W with both GPUs installed.
Next step was installing and updating XCP-ng 8.3 beta, which was as easy as installing the GPUs. Adding the host to XO from source and activating PCI passthrough in the host's advanced view required a reboot, but after that I could set up an Ollama VM to run LLMs and another Open WebUI VM to chat with them. With 48GB of VRAM, I can run `llama3-70b` with some headroom at about 6 tokens/sec, while `llama3-8b` is much smaller and answers at 23 tokens/sec on this setup.
So what are the next steps? On the one hand, I want to set up a development environment for Python and API-based usage of LLMs (not only local LLMs, but also cloud-based LLMs like ChatGPT or Claude). That will be fun, since I have zero experience with that. On the other hand, I will set up more GPU-supported services like Perplexica, AUTOMATIC1111 or Whisper. Apart from that, I will also try to improve my prompt engineering skills and learn about LLM multi-agent frameworks.
The best thing about this setup is that XCP-ng 8.3 beta provides a robust foundation for running Large Language Models (LLMs) and other AI workloads on one machine. Looking forward to the release candidate!
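For anyone who wants to reproduce the tokens/sec numbers above, this is roughly what I ran inside the Ollama VM - a sketch, assuming Ollama's official install script and model tags:

```
# install Ollama inside the VM (official install script)
curl -fsSL https://ollama.com/install.sh | sh

# pull and run a model; --verbose prints the eval rate (tokens/sec) after each answer
ollama pull llama3:70b
ollama run llama3:70b --verbose

# the same model can also be queried over the local HTTP API,
# which is what Open WebUI (or your own scripts) talk to
curl http://localhost:11434/api/generate \
     -d '{"model": "llama3:8b", "prompt": "Why is the sky blue?", "stream": false}'
```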
Latest posts made by gskger
- RE: Nvidia P40s with XCP-ng 8.3 for inference and light training
@Vinylrider Cool modification, but definitely not for the faint-hearted. Is it difficult to cut through the metal? Interestingly, in your setup the end with the 4x black wires connects to the riser card. With my setup/cable, it's the 5x black wire end that plugs into the riser card. Hm, I saw that on YouTube but never gave it much thought until now.
Since my R720s can't natively monitor the temperature of these GPUs to adjust the fan speeds, I'm planning to create a script that reads both CPU and GPU temperatures and dynamically controls the fan speed through iDRAC. My R720s, equipped with two Intel Xeon E5-2640 v2 processors (TDP of 95W), don't typically run too hot though, even under load.
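Something along these lines is what I have in mind - just a rough sketch built on the widely shared (but unofficial) Dell IPMI raw commands for 12th-gen servers. The iDRAC address and credentials are placeholders, and it assumes it runs somewhere that can see the GPUs via `nvidia-smi` and reach the iDRAC over the network:

```
#!/bin/bash
# Rough sketch: pick a fan duty cycle from the hottest GPU temperature
# and push it to the R720 via IPMI. Unofficial raw commands - use at your own risk.
IDRAC=192.168.1.120; USER=root; PASS=calvin
IPMI="ipmitool -I lanplus -H $IDRAC -U $USER -P $PASS"

# take manual control of the fans (raw 0x30 0x30 0x01 0x01 hands control back to iDRAC)
$IPMI raw 0x30 0x30 0x01 0x00

while sleep 30; do
  # hottest GPU as reported by the driver
  TEMP=$(nvidia-smi --query-gpu=temperature.gpu --format=csv,noheader,nounits | sort -n | tail -1)
  if   [ "$TEMP" -ge 70 ]; then DUTY=0x3c   # 60%
  elif [ "$TEMP" -ge 60 ]; then DUTY=0x2d   # 45%
  else                          DUTY=0x1e   # 30%
  fi
  # set all fans to the chosen duty cycle (hex percent)
  $IPMI raw 0x30 0x30 0x02 0xff $DUTY
done
```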
Here's a quick temperature chart I've put together for my setup, showing CPU and GPU temperatures at various fan speeds (adjusted via iDRAC) - both at idle and under load:
With the automatic fan control at 45%, both CPU and GPU temperatures remain comfortably below 60°C, even under load.
- RE: NVIDIA Tesla M40 for AI work in Ubuntu VM - is it a good idea?
@CodeMercenary Did some tests with the A2000 and, as expected, the 12GB of VRAM is the biggest limitation. Used vanilla installations of Ollama and ComfyUI with no tweaking or optimization. Especially in Stable Diffusion, the A2000 is about three times faster than the P40, but that is to be expected. I have added some results below.
Stable Diffusion tests
| GPU | Settings | Speed | Per image |
| --- | --- | --- | --- |
| A2000 | 1024x1024, batch 1, iterations 30, cfg 4.0, euler | 1.4 s/it | 1.4 s/it |
| A2000 | 1024x1024, batch 4, iterations 30, cfg 4.0, euler | 2.5 s/it | 0.6 s/it |
| P40 | 1024x1024, batch 1, iterations 30, cfg 4.0, euler | 2.8 s/it | 2.8 s/it |
| P40 | 1024x1024, batch 4, iterations 30, cfg 4.0, euler | 12.1 s/it | 3.0 s/it |
Inference tests
| Model | A2000 | P40 |
| --- | --- | --- |
| qwen2.5:14b | 21 tokens/sec | 17 tokens/sec |
| qwen2.5-coder:14b | 21 tokens/sec | 17 tokens/sec |
| llama3.2:3b-Q8 | 50 tokens/sec | 40 tokens/sec |
| llama3.2:3b-Q4 | 60 tokens/sec | 48 tokens/sec |
During heavy testing, the A2000 reached 70°C and the P40 reached 60°C, both with the Dell R720 set to automatic fan control.
- RE: XCP-ng 8.3 updates announcements and testing
@gduperrey Updated some Dell R720s with GPUs and a Dell R730. The update worked without any problems and the VMs operate as expected. Will update this post if that changes during day-to-day operation. Great work!
- RE: NVIDIA Tesla M40 for AI work in Ubuntu VM - is it a good idea?
@CodeMercenary I got my hands on a Nvidia RTX A2000 12GB (around €310 used on Ebay), which might be an option depending on what you want to do. It is a dual-slot, low-profile GPU with 12GB of VRAM, a maximum power consumption of 70W and active cooling. With a compute capability of 8.6 (P40: 6.1, M40: 5.2) it is fully supported by `ollama`. While 12GB is only 50% of one P40 with 24GB of VRAM, it runs small LLMs nicely and at a high tokens-per-second rate. It can almost run the Llama3.2 11b vision model (`11b-instruct-q4_K_M`) on the GPU, with only 3% offloaded to the CPU. I will start testing this card during the weekend and can share some results if that would help.
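A quick way to see that split, for what it's worth, is ollama's built-in process listing (output format from recent ollama versions):

```
# shows how a loaded model is placed, e.g. "100% GPU" or "3%/97% CPU/GPU"
ollama ps
```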
- RE: NVIDIA Tesla M40 for AI work in Ubuntu VM - is it a good idea?
@CodeMercenary I didn't know there was a midboard hard drive option for the R730xd - cool. You could always install a couple of Nvidia T4s, but they only come with 16GB of VRAM and are much more expensive than the P40s. For reference, I've added a top view of the R720 with two P40s installed.
- RE: Passthru of Graphics card
@manilx Maybe this post on Intel iGPU passthrough gives some ideas? You will probably lose the video output on the Protectli when you assign the iGPU to a VM.
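The rough CLI steps behind that approach, in case the link ever breaks - a sketch based on XCP-ng's documented xen-pciback method; `0000:00:02.0` is the typical Intel iGPU address, but check with `lspci` first:

```
# find the iGPU's PCI address in dom0
lspci | grep -i vga

# hide it from dom0 so it can be passed through (reboot required)
/opt/xensource/libexec/xen-cmdline --set-dom0 "xen-pciback.hide=(0000:00:02.0)"

# attach the device to the VM after the reboot
xe vm-param-set uuid=<vm-uuid> other-config:pci=0/0000:00:02.0
```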
- RE: NVIDIA Tesla M40 for AI work in Ubuntu VM - is it a good idea?
@CodeMercenary The M40 is a server card and (physically) compatible with the R730, so no extra cooling is required (or possible). The downside is that the R730 will most likely still run all fans at full blast regardless of the actual power consumption, since the server cannot read the GPU's temperature. But there are scripts to manage the fan speeds based on server or GPU temperature. And once you have ollama installed, you can ask it how to write that code.
- RE: NVIDIA Tesla M40 for AI work in Ubuntu VM - is it a good idea?
@CodeMercenary Probably not insane if you want to learn to use ollama or other LLM frameworks for inference. But the M40 is an ageing GPU with a low compute capability (5.2), so over time it might no longer be supported by platforms like ollama, vLLM, llama.cpp or aphrodite (I did not check whether they all actually support that GPU, but Ollama does support the M40). I doubt that you will get acceptable performance for Stable Diffusion (image generation) or training/fine-tuning. But what can you expect for $90?
The card has a power consumption (TDP) of 250W, which is compatible with the x16 PCIe slot on riser #2. You have to be extra careful with the power cable, as it is not a standard cable. While most would suggest 1100W power supplies for the Dell R730 to be on the safe side, I run two P40s with 750W power supplies in a Dell R720. But I also power limit the cards to 140W, with little effect on performance, and I have light workloads and no batch processing.
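Setting that limit is a one-liner with `nvidia-smi`. Note that it does not survive a reboot, so it belongs in a startup script (the `-i 0` index targets the first GPU):

```
# enable persistence mode so the limit sticks while the driver stays loaded
nvidia-smi -pm 1
# cap the first card at 140W (the accepted range depends on the card model)
nvidia-smi -i 0 -pl 140
```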
- RE: XCP-ng 8.3 betas and RCs feedback
@olivierlambert People might be a little crazy, but they also trust the test and RC lifecycle of XCP-ng, which has proven to be very reliable thanks to the dedication of the Vates team and also the community. But I agree, for production use it's better to wait for a GA announcement. Anyway, congratulations on this important milestone in the development of XCP-ng.
- RE: XCP-ng 8.3 betas and RCs feedback
@bleader There is an `xcp-ng-8.3.0.iso` on the ISO repository. Is that the release of XCP-ng 8.3? Looking forward to an official announcement.