Posts made by TS79
-
RE: USB Passthrough speed issue
@olivierlambert Any idea who can be tagged here to ask internally? (I assume you meant someone from the Vates team.)
-
RE: USB Passthrough speed issue
Tried on Ubuntu 24.04 with a 6.x kernel, same result: the host and XCP-ng/XOA show the device as xhci at 5000Mbps, while the guest VM shows it as ehci at 480Mbps.
I don't have any time to fiddle, and some of this seems hardware-specific, but some brief searching on keywords like Linux, EHCI, XHCI, etc. surfaces a few possibilities: the kernel driver being loaded wrongly, guest UEFI vs BIOS (where BIOS initialises the device as USB 2.0), host BIOS settings (disabling legacy USB support - but this may endanger keyboard/mouse inputs), or host IOMMU settings.
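If I do find time, the next things I'd check inside the guest are which controller the VM is actually presented with and which driver claims it - standard Linux commands, just a suggestion:
lspci -nn | grep -i usb         # does the VM see an EHCI (USB 2.0) or xHCI (USB 3.x) controller?
dmesg | grep -iE 'xhci|ehci'    # which kernel driver claimed the controller at boot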
-
RE: USB Passthrough speed issue
Hi @Joe_dev
I'm no expert on this, but one thing I noticed is that on your host, the driver shows as "xhci" (which is the USB 3.x controller interface standard), whereas in the VM the driver is showing as "ehci" (which is the USB 2.x interface) and therefore has the 480Mbps bandwidth limitation.
Sadly I've got little to no experience with USB or PCIe passthrough (haven't had to use them yet), but hopefully this can point someone in the right direction to troubleshoot. My guess is, either the VM's OS has a limitation, or there's a VM setting at XCP-ng level, or the XCP-ng host itself has a setting or device passthrough challenge. Sorry I can't be more helpful.
EDIT: I just plugged a USB 3.0 storage device into my XCP-ng host, went to the VM settings and added a VUSB device from the PUSB. Within Xen Orchestra, the USB device shows as 5000Mbps speed, but within the VM it shows as 480Mbps. VM's OS is Ubuntu 22.04.5. I'm going to test on a newer Ubuntu 24.04 version - will reply on this thread.
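For anyone wanting to reproduce the check inside a Linux guest, lsusb (from usbutils) shows the negotiated speed per device:
lsusb -t    # prints the bus tree with the driver in use (ehci-pci vs xhci_hcd) and the speed (480M vs 5000M)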
-
RE: Windows Server 2025 on XCP-ng
Conhost process is related to command-line / console apps. It could be that something (either a server role in Win Server 2025, or perhaps the Citrix/XCP-ng guest tools, or at worst malware) is stuck in a loop and spawning these multiple instances.
If you absolutely need to salvage this particular OS install, perhaps hunt through the process IDs to find what is spawning them, or set up some debug/trace tools to determine the same. Or try removing roles/software one at a time to see which one, if any, stops the conhost spawning.
EDIT: I must have missed your previous post where you've nailed it down to the ADDC role. If you typically install the role via the GUI, switch to PowerShell to see if the same problem persists - or vice versa, if you typically use PowerShell, switch to the GUI.
Beyond that, I have no insights except to point at the current "Preview" status of Win2025. I'm not familiar with which OS components (not to mention server roles, frameworks, etc.) Microsoft have re-used and which are entirely new.
Also, sadly, conhost process issues have been a sporadic problem in Windows for decades...
-
RE: SR NFS Creation Error 13
@stevewest15 Just did the same (well, using the ronivay script to update) - still had the same error adding NFS storage to the host. Noticed the following oddities:
- I only typed in the IP address of the NFS server, then immediately selected NFS version 4.1 before clicking anything else
- After hitting the "search" icon to the right of the NFS server's IP address, the only Path option was /mnt. This is NOT expected, as in TrueNAS Scale the NFS export is set to /mnt/datapool/NFS-XEN1
- Clicking the "search" icon to the right of the Subdirectory field didn't show any results, so I entered datapool/NFS-XEN1 manually
- This kicked up an SR creation error
- I then set the NFS version to 'Default' and clicked Create again - this kicked up a different error, specifically saying that NFS version 3 failed
- I changed back to NFS 4.1, clicked Create again, and this time it worked...
Very sporadic behaviour, and not a problem I've had until recently. Many previous months of testing with NFS have all added perfectly (once I had the TrueNAS permissions set correctly).
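One thing that might help narrow it down next time: testing the export manually from the host console before retrying in XO (the IP below is just a placeholder for my TrueNAS box):
showmount -e 192.168.1.200                                          # should list /mnt/datapool/NFS-XEN1
mkdir -p /mnt/nfstest
mount -t nfs -o vers=4.1 192.168.1.200:/mnt/datapool/NFS-XEN1 /mnt/nfstest
umount /mnt/nfstest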
-
RE: SR NFS Creation Error 13
@stevewest15 Hey dude, thanks for feeding back all that info. Is your new host part of a pool or is it a standalone host?
The fact that when you browse paths, you can see the 2 existing SR IDs means that NFS is working - at least to read the folder.
Let's ignore what ChatLLM said about the error - but it is a fair point. If the host is part of a pool, all physical networking needs to be identical across the hosts (for example, ETH0 is for LAN, ETH1 is for storage, etc.)
Interesting point: I just tried to add an NFS storage to my homelab host (standalone), and I'm having the same problem on TrueNAS Scale - I create the NFS share, set the permissions correctly, but when I go to add it in XO, I also cannot see folders correctly and it won't let me add!
@olivierlambert - perhaps a bug in recent XO(A) updates? I deploy & update mine from source using the ronivay script, onto Ubuntu 22.04 LTS. Not sure when the potential bug has surfaced though, as I don't often try add new NFS storage at home...
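Next time I hit it, I'll also watch the storage manager log on the host while clicking Create, which should show the exact mount attempt and error:
tail -f /var/log/SMlog    # on the XCP-ng host; SR create/attach attempts and their NFS mount errors land here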
-
RE: Please review - XCP-ng Reference Architecture
@john-c @nikade - I had to Google search for TrueSecure, as I hadn't heard of it before.
Seems good in that it's a first-party solution, and security is generally a good idea, but it's not really something for my use case as a homelabber.
It mentions storage encryption, which to me immediately complicates things like deduplication, compression, and delta backups / replication.
TrueSecure seems to be positioned as a tool to achieve security compliance for strict standards like NIST / FIPS / government security regulations.
Still, good to know it exists, and I'll be reading more about it for potential future advice!
-
RE: SR NFS Creation Error 13
@stevewest15 I pasted this error log into ChatLLM and it spat out a key error here:
"The error log you provided indicates that there was an issue during the import of a virtual machine (VM) in XCP-ng. The specific error is related to the creation of a network, and the relevant part of the log is:
[error||8816 /var/lib/xcp/xapi||backtrace] network.create R:1662dcd1a2fb failed with exception Server_error(INVALID_VALUE, [ bridge; xapi1 ])
This error suggests that there was an attempt to create a network with a bridge named xapi1, but the value provided for the bridge was invalid.
This could be due to several reasons:
Bridge Name Conflict: The bridge name xapi1 might already be in use or reserved, causing a conflict.
Incorrect Bridge Configuration: The bridge configuration might be incorrect or not properly set up in the network settings of the host.
Network Configuration Issue: There might be a broader issue with the network configuration on the host, such as missing or misconfigured network interfaces.
VLAN Configuration: If VLANs are involved, ensure that the VLAN ID is correctly configured and that the network interface supports VLAN tagging.
To resolve this issue, you can try the following steps:
Check Existing Bridges: Use the command brctl show on the host to list all existing bridges and verify if xapi1 already exists.
Review Network Configuration: Check the network configuration files on the host to ensure that the bridge xapi1 is correctly defined and not conflicting with other network settings.
Verify VLAN Settings: If VLANs are used, ensure that the VLAN ID is correctly set and that the network interface is configured to handle VLANs.
Consult Documentation: Review the XCP-ng documentation for any specific requirements or limitations regarding network bridge names and configurations.
If these steps do not resolve the issue, you may need to provide additional context or configuration details for further troubleshooting."
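For completeness: XCP-ng uses Open vSwitch by default, so brctl show usually won't list the bridges. These are the commands I'd use on the host instead:
xe network-list params=name-label,bridge    # xapi networks and the bridge each one maps to (e.g. xapi1)
ovs-vsctl list-br                           # bridges as seen by Open vSwitch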
-
RE: SR NFS Creation Error 13
Another spitball idea:
I did notice too that you have folders in the /mnt/Tank path: "VMs" and "XCP-NG". I assume the "XCP-NG" folder is your XO remote (as I recognise the xo-config / xo-pool / xo-vm folders from backup jobs), so I assume that the "VMs" folder is your VDI SR?
If those assumptions are correct, when you add the storage to the host, I feel like your 'Create a new SR' info is missing the "VMs" text in the subdirectory - from the screenshot you posted, it looks like the SR is trying to create in the /mnt/Tank path, not the /mnt/Tank/VMs path?
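If it helps to cross-check outside the UI, this is roughly what the equivalent CLI call would look like from the host - the server IP and SR name here are placeholders, but note the serverpath including the VMs subdirectory:
xe sr-create type=nfs shared=true content-type=user \
  name-label="TrueNAS VMs" \
  device-config:server=192.168.1.200 \
  device-config:serverpath=/mnt/Tank/VMs \
  device-config:nfsversion=4.1    # if your XCP-ng version accepts 4.1 here; otherwise 4 or 3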
But I'm obviously not entirely sure of your intention and expectations, or whether this host is part of a pool or a standalone host - all of which has an impact on NFS storage and the resultant paths.
-
RE: SR NFS Creation Error 13
Hi @stevewest15 - just going to quick-fire some ideas here as I've fallen afoul of this in the past.
I can see you have NFS restricted to 'Authorised Networks' and 'Authorised Hosts'.
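One quick way to verify from the new host's console which source address it will use towards TrueNAS, and whether the export is even visible from there (placeholder IP below):
ip route get 192.168.10.50     # shows the interface and source IP the host will use to reach the NFS server
showmount -e 192.168.10.50     # lists the exports the NFS server is willing to show this host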
Are you 100% sure that the new host is in the correct IP subnet and that the host is defined in the host groups?
-
RE: Please review - XCP-ng Reference Architecture
@nikade Thanks again for your input, much appreciated.
-
RE: Please review - XCP-ng Reference Architecture
@olivierlambert Thank you - all makes sense
-
RE: Please review - XCP-ng Reference Architecture
@olivierlambert Brief question please - would it make sense to install XOA on a dedicated computer (either 'from source' on Debian/Ubuntu, or as an only VM on a standalone XCP-ng host) so that it's managing pools but isn't adding load to the pool's compute / storage / network resources? Is there any recommendation here from Vates?
-
RE: n100 based hosts? AMD 5800h?
@Greg_E if the motherboard supports PCIe lane bifurcation, these cards work well: https://www.amazon.co.uk/GLOTRENDS-Platform-Bifurcation-Motherboard-PA41/dp/B0BHWN7WKD
If there's no bifurcation support on your mobo, then you need a PCIe card with a PCIe switch on it - they're much more expensive but typically solve the problem. I've used this one: https://www.amazon.co.uk/GLOTRENDS-PA40-Adapter-Bifurcation-Function/dp/B0CCNL7YD8
Just remember to add heatsinks and if possible, additional active cooling. I ended up wedging a rubber-edged Noctua 80mm fan inside my DIY NAS to blow directly onto the NVMEs and dropped them from 60-70 degrees C down to 30-40 degrees.
-
RE: n100 based hosts? AMD 5800h?
Hope you don't lose too much sleep thinking about it! There are so many right ways of doing it
My short'n'sweet advice: keep it as simple as possible while providing what you actually need.
Full resilience at every level to tackle every potential fault often brings more complexity than it's worth. Hence why I've boiled my homelab down to a single host, all VMs stored on local NVME, with regular backups and replicas. Worst case: boot up another Lenovo host, restore, and carry on. Even when I used 3x Lenovo hosts in a pool, I found that the shared storage performance was not worth needing 4 hosts sucking electricity
-
RE: n100 based hosts? AMD 5800h?
Hi @Greg_E. I've setup a few homelabs with XCP-ng using older and newer mini PCs, so thought I'd share some of my experiences.
First pass, I used the Lenovo Tiny M710q PCs, bought for around £100 each on eBay. They had either the i5-6400T or i5-6500T processor. I added 32GB of Crucial RAM, added the SATA drive tray for a boot drive, and added a 1TB NVMe in each for storage. Since I don't use Wifi on these, I removed the M.2 wifi card and added in a cheap 2.5GbE NIC (https://www.amazon.co.uk/gp/product/B09YG8J7BP)
XCP-ng 8.2.1 works perfectly, no customisation or challenges. I did see the exact same storage performance trends as you, and see that @CJ has already correctly pointed out the limitation in the current storage API (SMAPIv1).
I've also built a homelab with the Trigkey G5 N100 mini PCs. Again, XCP-ng 8.2.1 works perfectly on the N100's four E-cores. This G5 model has dual 2.5GbE NICs, which is perfect for giving VMs a 2.5GbE link to the world and a separate 2.5GbE link for the host to use for storage. Be aware, if you split networking this way, Xen Orchestra needs to be present on both networks (management to talk to the XCP-ng hosts over HTTPS, and storage to talk to NFS and/or CIFS for backups/replication).
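For anyone copying this layout: the second NIC on each host just needs its own IP on the storage subnet, which can also be done from the host CLI roughly like this (UUID and addresses are illustrative):
xe pif-list device=eth1 params=uuid,host-name-label                                      # find the PIF UUID of the second NIC
xe pif-reconfigure-ip uuid=<pif-uuid> mode=static IP=10.0.20.11 netmask=255.255.255.0    # give it a storage-network address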
I've not measured the power draw much, but typically the Lenovos are using around 15-25W, and the Trigkey G5s about 10-18W. Fan noise on both is very low - I have them on a shelf in my desk, so I sit next to them all day. My daily driver is a dead-silent Mac Mini M2, so I'm very aware of surrounding noise, and there's nearly none.
The only challenge I had with the N100 was that Windows VMs seemed to think they only had a clock speed of 800MHz - so performance was poor. I did not get around to trying any performance settings in the BIOS to force higher clock speeds: in my view this would trigger additional power usage, unwanted additional heat, and additional fan noise.
If you build a homelab with 3 XCP-ng hosts, slap a 1TB NVMe in each and trial XOSTOR as an alternative to network shared storage. In my case, I went down to running my workloads on a single Lenovo M710q, stored locally on NVMe, with Xen Orchestra (a VM on the Lenovo) backing up and replicating VMs to an NFS host (another Trigkey G5 with Ubuntu Server, a 4TB NVMe, and Ubuntu-native NFS).
Typical network performance during backups / DR is around 150-200MB/sec on the 2.5GbE.
Hope that helps!
-
RE: Please review - XCP-ng Reference Architecture
@nikade Thanks for your comments and thoughts. We're repurposing existing HP DL380 servers for the hosts, and I was going to try to repurpose our Nimble AF40 arrays, but they only do iSCSI, which means thick provisioning, which creates a capacity challenge for us (some of our VMs have been provisioned with 2-4TB virtual disks but are only using 100-300GB... so recreating smaller disks and data-cloning would be tedious but necessary).
TrueNAS is my 'gold prize', assuming it provides enough uptime and performance. Our IOPS and throughput requirements aren't huge; they only go above 500MB/sec and a few thousand IOPS during backup jobs.
Replicating XOA is definitely a 'default'. But from my lab tests, redeploying and restoring the config is quick too, so I'm not too fussed about 'losing' XOA. I'd back up the config to on-premises 'remotes' and to cloud-based object storage.
Much appreciate your time and feedback, thank you!
-
RE: Please review - XCP-ng Reference Architecture
@billcouper Excellent spot - thanks for your thoughts and inputs! Based on the load during backup / replication jobs, I've been considering where best to put XOA. The default deployment method puts it on the pool, but I've considered deploying a standalone instance on a desktop (with 10GbE of course) - need to see how that would work in terms of seeing shared storage and pool local storage.
Honestly, TrueNAS is the most likely. We're winding down the on-prem footprint quite aggressively, so investment in beautiful things like Pure AFA's is unlikely. But I will look into those anyway, if just to be informed on the options - thanks again.
Really appreciate your input
-
RE: Introduce yourself!
Hi. I'm a cloud solutions architect, with around 25 years of working experience in servers, storage, networking (your typical infrastructure stuff) and about 20 years of virtualisation. I started up a homelab many years ago, and through (too) many evolutions, I've ended up with Lenovo M710q mini PCs running XCP-ng, with another mini PC providing NFS storage (with backup and replication to cater for problems and failures).
Absolutely love XCP-ng and am promoting it wherever I can. I've architected and kicked off a project at my employer to replace VMware with XCP-ng, so I'm keen to use the forum to read other people's real-world experiences with storage and host specs, hurdles to avoid, and any tips & tricks.
Looking forward to interacting with the community more and more.