Posts
-
RE: On-board serial port compatibility?
@LuisRiveraSig Since the device is not needed for the Xen console or Dom0, you can try PCI pass-through to the VM, or use a USB device and pass it to the VM, and just use the needed driver in the VM directly.
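If you try the PCI pass-through route, here is a minimal sketch of the usual XCP-ng way to hand a device to a VM (the UUID and PCI address are placeholders):
```sh
# Find the PCI address (BDF) of the serial device in Dom0
lspci | grep -i serial

# Attach it to the VM (VM must be halted; 0000:03:00.0 is a placeholder)
xe vm-param-set uuid=<vm-uuid> other-config:pci=0/0000:03:00.0
```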
-
RE: On-board serial port compatibility?
@LuisRiveraSig What are you trying to use the serial ports for?
- XCP console?
- Dom0 serial port (i.e. a UPS monitor)?
- DomU/VM?
-
RE: Wide VMs on XCP-ng
@plaidypus I am not 100% sure what the correct answer is for the default XCP configuration. I think the basic answer is: no. Xen/XCP does not care which cores it uses for your VM, so on average your performance will be a little worse than never crossing NUMA nodes but better than always interleaving them. Some systems will fare better or worse than others.
The Xen/XCP hypervisor does have a NUMA-aware scheduler. There are two basic modes. One is hard CPU pinning, where you specify which cores a VM (domain) uses; this forces the VM to run only on the cores it is assigned. The other is to let Xen/XCP do its own work, where it tries to schedule a VM's (domain's) cores within a single CPU pool. The problem with this is that the default config puts all cores (and hyper-threads) into a single default pool. There are options to try to enable best-effort NUMA assignment, but I believe it is not set that way by default.
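For the hard-pinning mode, a minimal sketch using the xe CLI (the UUID and core list are placeholders; the VM needs a restart to pick this up):
```sh
# Pin a VM's vCPUs to physical cores 0-3 only
xe vm-param-set uuid=<vm-uuid> VCPUs-params:mask=0,1,2,3
```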
You can configure the CPUs of a NUMA node into an individual pool (see the sketch below). A VM can then be given affinity for a single pool (soft CPU pinning). This keeps most of the work on that single node, as you want.
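A sketch of the pool approach with the xl tool (the pool name is whatever the split creates on your system; check xl cpupool-list):
```sh
# Split the default pool into one cpupool per NUMA node (run in Dom0)
xl cpupool-numa-split
xl cpupool-list

# Move a running domain into the pool for NUMA node 1
xl cpupool-migrate <domain-name> Pool-node1
```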
The links listed before have good information about NUMA and CPU pinning. Below are some more:
Here is an older link about Xen on NUMA machines.
Here is a link about Xen CPU pools.
Here is a link about performance improvements on an AMD EPYC CPU (mostly related to AMD cache design). There are also APIs in the guest tools to allow the VM to request resources based on NUMA nodes.
If you start hard-limiting where and how VMs can run, you may break migration and HA for your XCP pool.
-
RE: Wide VMs on XCP-ng
@plaidypus Yes, it just works. The guest VM does not see NUMA info and just sees one node, even if the config is set for multi-socket.
You can't assign more cores than you have actual threads, but you can assign as many as you have to one VM. There will be some performance penalty, as the cores in use may access memory from another node.
If you have dual 16-core CPUs with HT enabled, then you could assign 64 cores to a VM.
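A sketch of handing all 64 threads to one VM with xe (the UUID is a placeholder; the VM must be halted, and VCPUs-max has to be raised before VCPUs-at-startup):
```sh
# Allow up to 64 vCPUs, then have the VM start with all of them
xe vm-param-set uuid=<vm-uuid> VCPUs-max=64
xe vm-param-set uuid=<vm-uuid> VCPUs-at-startup=64
```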
-
RE: On-board serial port compatibility?
@LuisRiveraSig What chipset is used for the serial ports? My guess is that it's a special MOXA serial chipset (non-16550) and requires a special driver that XCP/Xen does not natively support.
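If it's a PCI card, you can identify the controller from Dom0 (a quick check; the grep terms are just guesses):
```sh
lspci | grep -i -e serial -e moxa
```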
-
RE: Lab setup for testing
@jimfini I'm happy to help if you want to try it. After the driver install/reboot, did you refresh your network interfaces in XO? You can look at `dmesg` on XCP to see the driver loading. Run `ip link` to see the interfaces.
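A quick check from the XCP console (the driver name here assumes the r8152-based module from the other post):
```sh
# Confirm the driver loaded and the NIC shows up
dmesg | grep -i r8152
ip link show
```
-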
RE: XCP-ng 8.3 updates announcements and testing
@gduperrey I have several hosts updated and running. I'm happy to see 8.3 updates at parity with 8.2.
-
RE: XCP-ng 8.2 updates announcements and testing
@gduperrey Updates installed and working on busy hosts/pools.
-
RE: Lab setup for testing
@jimfini I think that USB Ethernet adapter is using the RTL8156 chipset.
Please try adding my `r8152-module-alt` USB driver and see if it works: download page. USB Ethernet adapters are not the best devices for XCP.
Let the community know if this driver works for you, as it's new and has not been well tested.
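Assuming the driver ships as an RPM (check the download page for the actual steps, as the packaging may differ):
```sh
# Install the alternate r8152 driver package on the host, then reboot
yum install ./r8152-module-alt*.rpm
reboot
```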
-
RE: Some VMs Booting to EFI Shell in Air-gapped XOA Instance
@kagbasi-ngc What OS is on the VMs that boot to the EFI Shell?
-
RE: High Fan Speed Issue on Lenovo ThinkSystem Servers
@gduperrey (unrelated to this fan issue) I loaded the new standard 8.2 testing kernel on my NUC11 and it seems to boot a little faster and also no longer complains about some APIC devices.
-
RE: Guest VM UEFI NVRAM not saved / not persistent
@stormi said:
> With such simple reproduction steps, I wonder why we haven't more reports about it.
Because the default VM install uses BIOS boot...?
(edit) This is only(?) a Debian Linux UEFI boot on an XCP pool issue.
-
RE: Guest VM UEFI NVRAM not saved / not persistent
@stormi Simple install: using an XCP 8.2.1 pool (current updates), build a new VM, use the Generic Linux UEFI template on shared pool storage, and install Debian 11 from CD as a UEFI boot. No custom config needed.
The guest VM boots correctly using the special EFI/debian startup. The VM can reboot correctly. The VM can be powered off and on and boot correctly as long as it stays on the same host in the pool. It also works correctly on a single-host system, even after a host reboot.
If you start the VM on a different host, or live migrate it to a different host in the pool and then reboot it, it fails to boot correctly because it does not have the correct EFI boot variables. Booting the VM on the original host (after the guest has been used on a different host) does not fix the issue.
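For reference, the migration step of the repro can also be done from the CLI (a sketch; the UUID and host name are placeholders):
```sh
# Live migrate the VM to another host in the pool, then reboot it there
xe vm-migrate uuid=<vm-uuid> host=<other-host> live=true
```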
You can view the EFI boot config vars with the command `efibootmgr`. You can see the before (good) and after (non-working) output:
Before:
```
BootCurrent: 0004
Timeout: 0 seconds
BootOrder: 0004,0001,0000,0003,0002
Boot0000* UiApp
Boot0001* UEFI Misc Device
Boot0002* UEFI PXEv4 (MAC:D6C87DE081C8)
Boot0003* EFI Internal Shell
Boot0004* debian
```
After:
```
Boot0000
Boot0001
Boot0002
Boot0003
Boot0004
```
Running `grub-install` on the debian VM will 'fix' the EFI problem until the VM moves again.
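A sketch of that workaround from inside the guest, assuming Debian's standard EFI layout:
```sh
# Re-register the 'debian' EFI boot entry (run as root in the guest)
grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=debian
update-grub
```
-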
RE: Guest VM UEFI NVRAM not saved / not persistent
@stormi I'll build a new VM and test it.
-
RE: Mouse stops responding in XO console (XCP-ng 8.3, Win11 24H2)
@WayneSherman I have Windows running on XCP 8.3 with XT 9.4 and things have been fine. Also, I have never seen this issue on my 8.2 hosts.
I have seen this mouse issue on a different XCP 8.3 test machine with Windows guests (Win10 and Server 2022). It seemed to happen when I made network changes on the host for the guest. I thought the guest had locked up, but it was just the mouse (the keyboard still worked). I did not do any additional testing then, but I can leave it running for a while and see what happens.
I don't see this as an XO problem as I use the same XO for other hosts/guests that don't have a problem.
-
RE: Replication retention & max chain size
@McHenry Correct, my DR server is not part of the main pool. The main pool has several hosts and shared storage (NFS), and if one host fails there's room for the guest VMs on another host in the pool. If the main storage totally fails, the VMs could be run on the DR system and then migrated back to the main pool when it's operational.
The DR server is a form of an active backup as it does not share storage with the main pool (it has its own SSD RAID6). There are other real backups (offsite S3 and separate OS level backups).
-
RE: OmniOS / Illumos / Solaris as a guest - not working
@Buckle8711 I see that too. QEMU is only started with two interfaces passed to the guest. I don't know if that's because of the lack of tools or native drivers or some other config issue.
@olivierlambert XCP/Xen/Qemu expert question: Why does qemu start with only two interfaces when more are configured in XO?
```
qemu-dm-7 -machine pc-i440fx-2.10,accel=xen ... \
  -device e1000,netdev=tapnet1,mac=5a:7f:c3:ba:04:d6,addr=5,rombar=0 -netdev tap,id=tapnet1,fd=7 \
  -device e1000,netdev=tapnet0,mac=8e:68:02:55:da:c7,addr=4,rombar=0 -netdev tap,id=tapnet0,fd=8 ...
```
Where are the third and fourth? I see other VMs with more. Is it a template or vm-param issue?
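One way to compare what XAPI has configured vs. what qemu was started with (the VM name is a placeholder):
```sh
# List the VIFs configured for the VM and whether each is attached
xe vif-list vm-name-label=<vm-name> params=device,MAC,currently-attached
```
-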
RE: XCP-ng 8.3 updates announcements and testing
@stormi Installed on several test and pre-production machines.
-
RE: XCP-ng 8.2 updates announcements and testing
@gduperrey Installed and running on active pools.