Just installed the latest kernel from Rocky, and a live migrate works on the two dev servers I have tried so far:
4.18.0-477.21.1.el8_8.x86_64 #1 SMP Tue Aug 8 21:30:09 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
-
RE: Live migrate of Rocky Linux 8.8 VM crashes/reboots VM
-
RE: Latest Centos 6 kernel not booting.
Yes that's it - thank you
I should have noticed the initramfs line was missing in grub.conf.
Strange, for some reason the new kernel install did not create a new initramfs file in /boot.
I've just updated an old Centos 6 PV VM and the kernel update creates the new initramfs file and boots the new kernel OK.
Maybe I'll remove the new kernel on the bad VM and install it again as there may be other problems with it.
The other 4 VMs I updated this morning have an initramfs file for the new kernel. And I'll try HVM mode.
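If the kernel update fails to create the initramfs again, it can be regenerated by hand with dracut (a sketch; the kernel version below is the one from this thread, substitute whatever is actually installed, and make sure grub.conf has a matching initrd line):

```shell
# Kernel version from this thread - adjust to the installed kernel
KVER=2.6.32-754.24.2.el6.centos.plus.x86_64

# Rebuild the initramfs for that kernel (-f overwrites any partial file)
dracut -f /boot/initramfs-${KVER}.img ${KVER}

# Verify it was created
ls -l /boot/initramfs-${KVER}.img
```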
-
RE: Latest Centos 6 kernel not booting.
Converting back to PV allows me to boot the old kernel again:
2.6.32-754.23.1.el6.centos.plus.x86_64
I'll change the new kernel command from:
kernel /vmlinuz-2.6.32-754.24.2.el6.centos.plus.x86_64 ro root=/dev/mapper/VolGroup-lv_root rd_NO_LUKS LANG=en_US.UTF-8 rd_NO_MD console=hvc0 KEYTABLE=us rd_LVM_LV=VolGroup/lv_swap SYSFONT=latarcyrheb-sun16 rhgb crashkernel=auto quiet rd_LVM_LV=VolGroup/lv_root rd_NO_DM
to
kernel /vmlinuz-2.6.32-754.24.2.el6.centos.plus.x86_64 ro root=/dev/mapper/VolGroup-lv_root rd_NO_LUKS LANG=en_US.UTF-8 rd_NO_MD console=hvc0 KEYTABLE=us rd_LVM_LV=VolGroup/lv_swap SYSFONT=latarcyrheb-sun16 crashkernel=auto noreboot rd_LVM_LV=VolGroup/lv_root rd_NO_DM
In PV mode all kernels shut down before the console appears.
In HVM mode the new kernel displays:
Probing edd=off to disable
then shuts down after a few seconds, and the older kernels shut down with just a blank screen.
Adding edd=off to the kernel line stops "Probing edd=off to disable" appearing, but the boot stops as before.
Maybe I'll try another Centos 6 VM to see if it does the same or if it's just a problem with this VM.
-
RE: Latest Centos 6 kernel not booting.
Well, eagerfpu=off makes no difference, and converting to HVM brings up the grub boot menu, but the new kernel shuts down like before and the other kernels just hang, so the VM is now completely unbootable.
I'll try to convert back to a PV.
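For reference, a sketch of the xe commands that switch a VM between HVM and PV on XCP-ng (the UUID is a placeholder, and PV-bootloader=pygrub is an assumption for a pygrub-booted guest like this one):

```shell
# Placeholder UUID - find the real one with: xe vm-list params=uuid,name-label
VM_UUID=00000000-0000-0000-0000-000000000000

# Convert the VM to HVM (BIOS boot order)
xe vm-param-set uuid=$VM_UUID HVM-boot-policy="BIOS order"

# Convert back to PV: clear the HVM boot policy and set a PV bootloader
xe vm-param-set uuid=$VM_UUID HVM-boot-policy=""
xe vm-param-set uuid=$VM_UUID PV-bootloader=pygrub
```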
If that doesn't work, I have a pre-convert copy to boot from.
-
RE: Latest Centos 6 kernel not booting.
@r1 Thanks - I'll try that. It's a PV because that's how it was created by default back in 2012.
-
RE: Latest Centos 6 kernel not booting.
Thanks for the info, but I have booted this VM on kernel 2.6.32-754.2.1 and all since then.
I'll try setting eagerfpu=off.
-
Latest Centos 6 kernel not booting.
Hi,
I have just yum updated my Centos6 VMs and the new kernel:
2.6.32-754.24.2.el6.centos.plus.x86_64
does not boot. On XCP-ng Center the VM goes green, moves to a server, and then shuts down back to red.
The virtualization state is: Paravirtualization (PV). If I edit grub.conf and change back to the previous kernel it boots OK.
I am running XCP 8.0 with the latest updates.
Please let me know if you need any further info.
Regards
Kevin
-
RE: XCP-ng 7.6 RC1 available
I have this version of the guest-tools on the VM
#rpm -aq|grep xen
xe-guest-utilities-xenstore-7.10.0-1.x86_64
-
RE: XCP-ng 7.6 RC1 available
I don't know if this helps, but I have also been doing some tests on 7.5, and it looks like VMs that are using a large percentage of their memory have problems migrating.
eg Centos 7 HVM with Atlassian bitbucket java process using about 80%
From top:
PID  USER     PR NI VIRT    RES  SHR  S %CPU %MEM TIME+   COMMAND
1874 atlbitb+ 20 0  3695600 1.3g 6440 S  0.3 42.8 0:52.40 java
1891 atlbitb+ 20 0  4154320 1.1g 8124 S  1.3 36.5 6:40.68 java
Live migration stops at 99% and stays there till you cancel the migration and I have seen it go to 100% and the VM crash.
You then need to restart the toolstack on the receiving server or
/var/log/xcp-rrdd-plugins.log
fills up with these messages:
xcp-rrdd-squeezed: [ warn|xen1-3|0 ||rrdd-plugins] Couldn't find cached dynamic-max value for domain 66, using 0
Shut down the 2 java processes and it migrates OK.
I have also seen the same problem with migration of virtual disks between iSCSI SRs
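A sketch of the recovery steps described above for the receiving host (the log message is the one quoted in this post; run the restart only on the host, not inside a guest):

```shell
# Check whether the squeezed warnings that indicate the stuck state are piling up
grep "Couldn't find cached dynamic-max" /var/log/xcp-rrdd-plugins.log | tail -n 5

# Restart the toolstack on the receiving host to clear the stuck migration
# state (restarts xapi and friends; running VMs are not affected)
xe-toolstack-restart
```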