Best posts made by Andrew
-
RE: XCP-ng 8.2 updates announcements and testing
@stormi Microcode updated on affected Gen11 i7. Running normally.
-
RE: XCP-ng 8.2 updates announcements and testing
@gduperrey I jumped in all the way by mistake... I updated the wrong host, so I just did them all. Older AMD, Intel E3/E5, NUC11, etc. So far, so good. Adding, migrating, and backing up VMs all work as usual. Guest tools are good too, but mine are mostly Debian 7-11. Everything is working as usual so far.
-
RE: Can I just say thanks?
I agree and I'll say it again: Thanks! It's not just Linux/Xen stuff. It is XCP-ng and XO together that make everything work as a cohesive vertical open-source solution (some nice buzzwords). Thanks to the Vates team and community that have built and support it. I look forward to continued improvement and innovation!
-
RE: XCP-ng 8.2 updates announcements and testing
@bleader Updates are running on several old and new Intel machines (including the microcode update). Working fine so far. Rolling Pool Reboot is a helpful feature.
-
RE: XCP-ng 8.2 updates announcements and testing
@bleader I installed it on a bunch of busy hosts. All are fine, but none used PCI passthrough. The Rolling Pool Reboot in XO was very helpful.
-
Ability to delete XO task logs. Thanks!
Thanks for adding the ability to delete XO task logs! (XO commit f6e6e)
-
RE: XCP-ng 8.3 updates announcements and testing
@stormi Installed on several test and pre-production machines.
-
RE: XCP-ng 8.2 updates announcements and testing
@bleader I installed it on many 8.2 machines. On one I did get a warning:
Cleanup   : xen-libs-4.13.5-9.40.3.xcpng8.2.x86_64   14/14
warning: %posttrans(microcode_ctl-2:2.1-26.xs29.5.xcpng8.2.x86_64) scriptlet failed, exit status 1
Non-fatal POSTTRANS scriptlet failure in rpm package 2:microcode_ctl-2.1-26.xs29.5.xcpng8.2.x86_64
Verifying : xcp-ng-release-8.2.1-13.x86_64   1/14
But it did not seem to have any effect, and nothing extra showed up in the yum.log.
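For anyone wanting to double-check that the non-fatal scriptlet warning was harmless, this is roughly what I'd run (a sketch; the package name matches the update above):
rpm -V microcode_ctl           # verify the package's installed files are intact
dmesg | grep -i microcode      # confirm the kernel actually loaded CPU microcode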
-
RE: XCP-ng 8.2 updates announcements and testing
@bleader Updated and running on newer and older Intel machines. Running normally so far.
Latest posts made by Andrew
-
RE: Some VMs Booting to EFI Shell in Air-gapped XOA Instance
@kagbasi-ngc What OS is on the VMs that boot to the EFI Shell?
-
RE: High Fan Speed Issue on Lenovo ThinkSystem Servers
@gduperrey (unrelated to this fan issue) I loaded the new standard 8.2 testing kernel on my NUC11 and it seems to boot a little faster and also no longer complains about some APIC devices.
-
RE: Guest VM UEFI NVRAM not saved / not persistent
@stormi said:
With such simple reproduction steps, I wonder why we don't have more reports about it.
Because the default VM install uses BIOS boot... ?
(edit) This seems to be only(?) an issue with Debian Linux UEFI boot on an XCP pool.
-
RE: Guest VM UEFI NVRAM not saved / not persistent
@stormi Simple install: using an XCP 8.2.1 pool (current updates), build a new VM from the Generic Linux UEFI template on shared pool storage, and install Debian 11 from CD as UEFI boot. No custom config needed.
Guest VM boots correctly using the special EFI/debian startup. VM can reboot correctly. VM can be powered off and on and boot correctly as long as it stays on the same host in the pool. It also works correctly on a single host system even after the host reboot.
If the VM is started on a different host, or live migrated to a different host in the pool and then rebooted, it fails to boot correctly because it no longer has the correct EFI boot variables. Booting the VM on the original host (after the guest has been used on a different host) does not fix the issue.
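For reference, the move can be driven from dom0 with the xe CLI; roughly (a sketch, with <vm-uuid> and <other-host> as placeholders):
xe vm-shutdown uuid=<vm-uuid>
xe vm-start uuid=<vm-uuid> on=<other-host>   # start on a different host in the pool
# the guest now drops to the EFI Shell instead of booting debian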
You can view the EFI boot config vars with the command efibootmgr. You can see the before (good) and after (non-working) output:
Before:
BootCurrent: 0004
Timeout: 0 seconds
BootOrder: 0004,0001,0000,0003,0002
Boot0000* UiApp
Boot0001* UEFI Misc Device
Boot0002* UEFI PXEv4 (MAC:D6C87DE081C8)
Boot0003* EFI Internal Shell
Boot0004* debian
After:
Boot0000
Boot0001
Boot0002
Boot0003
Boot0004
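From dom0 it should also be possible to see whether the variables survived on the host side, using the varstore tools that (I believe) ship with varstored on XCP-ng; an assumption on my part that this is present on 8.2:
varstore-ls <vm-uuid>   # list the EFI variables varstored holds for the guest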
Running grub-install on the Debian VM will 'fix' the EFI problem until the VM moves again.
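On Debian that boils down to something like this from inside the guest (a sketch; the paths assume a default Debian 11 UEFI install):
grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=debian   # recreates the EFI boot entry
update-grub
-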
RE: Guest VM UEFI NVRAM not saved / not persistent
@stormi I'll build a new VM and test it.
-
RE: Mouse stops responding in XO console (XCP-ng 8.3, Win11 24H2)
@WayneSherman I have Windows running on XCP 8.3 with XT 9.4 and things have been fine. Also I have not seen this issue (ever) on my 8.2 hosts.
I have seen this mouse issue on a different XCP 8.3 test machine with windows guests (Win10 and Server 2022). It seemed to happen when I made network changes on the host for the guest. I thought the guest locked up but it was just the mouse (KB still worked). I did not do any additional testing then, but I can leave it running for a while and see what happens.
I don't see this as an XO problem as I use the same XO for other hosts/guests that don't have a problem.
-
RE: Replication retention & max chain size
@McHenry Correct, my DR server is not part of the main pool. The main pool has several hosts and shared storage (NFS) and if one host fails there's room for guest VMs on another host in the pool. If main storage totally fails then the VMs could be run on the DR system and then migrated back to the main pool when operational.
The DR server is a form of active backup, as it does not share storage with the main pool (it has its own SSD RAID6). There are other real backups (offsite S3 and separate OS-level backups).
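If a replica ever had to be started for real, XO blocks starting replicated VMs by default to protect the chain; as far as I know the block can be lifted from the CLI roughly like this (a sketch; <vm-uuid> is the replica's uuid):
xe vm-param-remove uuid=<vm-uuid> param-name=blocked-operations param-key=start   # allow the replica to start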
-
RE: OmniOS / Illumos / Solaris as a guest - not working
@Buckle8711 I see that too. QEMU is started with only two interfaces passed to the guest. I don't know if that's because of the lack of tools or native drivers, or some other config issue.
@olivierlambert XCP/Xen/Qemu expert question: Why does qemu start with only two interfaces when more are configured in XO?
qemu-dm-7 -machine pc-i440fx-2.10,accel=xen ... -device e1000,netdev=tapnet1,mac=5a:7f:c3:ba:04:d6,addr=5,rombar=0 -netdev tap,id=tapnet1,fd=7 -device e1000,netdev=tapnet0,mac=8e:68:02:55:da:c7,addr=4,rombar=0 -netdev tap,id=tapnet0,fd=8 ...
Where are the third and fourth? I see other VMs with more. Is it a template or vm-param issue?
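What I'd check next from dom0 (a sketch; <vm> and <vm-uuid> are placeholders):
xe vif-list vm-name-label=<vm> params=device,MAC     # are all four VIFs defined on the XAPI side?
xe vm-param-get uuid=<vm-uuid> param-name=platform   # any device-model quirks inherited from the template?
-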
RE: XCP-ng 8.3 updates announcements and testing
@stormi Installed on several test and pre-production machines.