Issues with disk performance of a migrated VM
-
Hi all,
I am running some test migrations and we are seeing slow disk performance on a VM migrated from VMware to XCP-ng. Has anyone else experienced this?
-
Hi... what OS? Also, did you load the guest tools in the VM? Did you remove anything VMware related before the migration?
-
Yes, I removed VMware Tools prior to the migration.
It's Windows Server 2022. I already experience the issue while starting the VM; all file-related stuff is slow...
-
And you installed the XS or XCP-ng guest tools? What performance levels are you seeing?
-
No, not yet, I haven't had the chance; booting the VM is already taking 15 minutes. I installed a fresh VM and it boots in less than a minute, on the same storage.
-
I found the issue. I compared the settings of a newly created VM and one we migrated. It seems the migrated VM was running on BIOS firmware instead of UEFI.
I changed the newly created VM to BIOS and saw the same loss in speed.
I changed the migrated VM to UEFI and the speed is normal now. Any idea what could cause the performance drop in BIOS mode?
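For anyone else checking this, here is roughly how I compared and switched the firmware mode from the host CLI (a rough sketch; the UUID is a placeholder and the parameter name is taken from the XCP-ng docs, so double-check it against your version):

# Show the current boot parameters, including the firmware mode (uuid is a placeholder)
xe vm-param-get uuid=<vm-uuid> param-name=HVM-boot-params

# Switch the VM to UEFI firmware (or firmware=bios to go the other way)
xe vm-param-set uuid=<vm-uuid> HVM-boot-params:firmware=uefi

Note that a Windows guest originally installed in BIOS/MBR mode usually also needs its system disk converted to GPT (for example with mbr2gpt inside the guest) before it will boot in UEFI mode.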
-
Are you certain the issue is specifically disk performance? I have plenty of both UEFI and BIOS based VMs and they all perform fine when it comes to disk speeds, so I don't think that is the core issue.
Guest tools are super important to install so make sure to get those going.
Was this VM setup as BIOS on the ESXi side or UEFI? You want to be sure those match either way or it'll cause issues.
I've migrated a number of Windows VMs from ESXi and haven't run across this. It's been a few months (and therefore a few updates) since I did so though, so maybe something has changed.
-
The source VM was also in BIOS mode; most of our VMs are on UEFI. I am doing some other migration tests as well.
I know the guest tools will make a big difference too. The first test VM we did performs normally right now; I will do some benchmarking against VMware tomorrow.
Thanks for the help so far. I am happy I found the issue for this case. I still don't understand why it makes such a difference, but it won't hold us back from migrating.
-
The emulation is vastly different between BIOS and UEFI. For example, in UEFI an NVMe device is emulated, which can be fast (for an emulated device); in BIOS it's a very old emulated device.
None of that should matter if the PV drivers are correctly installed.
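A quick way to check that from the host, as a rough sketch (the UUID is a placeholder; exact parameter availability can vary by XCP-ng/XenServer version):

# Reports the PV driver version the guest agent exposes; empty output usually means the tools aren't running
xe vm-param-get uuid=<vm-uuid> param-name=PV-drivers-version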
-
@olivierlambert I have now migrated 3 Windows VMs and they all perform normally.
-
I know this is a bit of a necro bump, but I've been pulling my hair out trying to figure out why only guest Windows 11 VMs are having extremely poor disk performance while guest Linux VMs seem fine.
On an older R710 and a newer(ish) Ryzen 5 PC, I'm getting AWESOME disk performance with XCP-ng on Windows, Linux and BSD guest VMs, but for some reason, only on this R730 motherboard, I cannot work out what causes the Windows 11 VMs to be dramatically slower.
The disk performance test scenarios have varied widely, but the most noticeable are an 8x SSD RAID0 and an 8x SSD ZFS striped pool. On all other machines, I'm getting over 800MB/s when testing in Windows 11 with CrystalDiskMark (CDM), but only on the R730 with Windows 10 and 11 (23H2 and 24H2 mix) VMs do I see about 300MB/s on average. Oddly, Linux (Debian, Alpine and some other) guest VMs are consistently benchmarking between 900-1400MB/s.
Running fio tests directly on the host gives even better results (in the 3400-3700MB/s range), so I know it's not hardware/cabling/controller related. I have it narrowed down to only Windows VMs on the R730 (with the H730 controller).
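For reference, this is roughly the kind of fio run I've been doing on the host (a sketch; the test file path, size and queue depth are just what I happened to use, adjust them to your setup):

# Sequential 1M reads with direct I/O against a test file on the storage in question (path is a placeholder)
fio --name=seqread --filename=/mnt/test/fio.bin --rw=read --bs=1M --size=8G \
    --ioengine=libaio --direct=1 --iodepth=32 --numjobs=1 --runtime=60 --time_based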
Since this post was mostly related to this issue, I am giving it a bump, but I may start a new thread if there isn't a simple or well-known solution that I'm missing?
-
I'm adding @dinhngtu in the loop in case we can make sense of that, but it's really strange indeed.