Memory in VM half as fast after migration of VM
-
@andreas Hi Andreas,
After testing it on my side, I can confirm I reproduce the issue.
I will discuss it at dev level and get back to you.
-
@darkbeldin
Okay, thanks.
-
@darkbeldin said in Memory in VM half as fast after migration of VM:
@andreas Hi Andreas,
After testing it on my side, I can confirm I reproduce the issue.
I will discuss it at dev level and get back to you.
This seems quite an important find. Please let us know how this goes.
-
So I was doing some testing before reporting to the dev team, and I see a behavior I would like you to check whether you can reproduce.
My clean VM reports like this:
yachy@ubuntuyachy:~$ redis-benchmark -r 1000000 -n 2000000 -t get,set,lpush,lpop -P 16 -q
SET: 156152.41 requests per second
GET: 168180.28 requests per second
LPUSH: 156421.08 requests per second
LPOP: 159757.17 requests per second
That's my reference. When I migrate to another host, it reports like this:
yachy@ubuntuyachy:~$ redis-benchmark -r 1000000 -n 2000000 -t get,set,lpush,lpop -P 16 -q
SET: 55718.07 requests per second
GET: 58683.72 requests per second
LPUSH: 55742.91 requests per second
LPOP: 54775.01 requests per second
If I reboot, it goes back to the original numbers, but if I migrate back to the original host without rebooting, it reports like this:
redis-benchmark -r 1000000 -n 2000000 -t get,set,lpush,lpop -P 16 -q
SET: 138092.94 requests per second
GET: 153151.08 requests per second
LPUSH: 147004.78 requests per second
LPOP: 148115.23 requests per second
So not quite as good as the reference, but way better than right after migration.
As I want to be thorough before reporting, could you check whether you reproduce this?
So:
- migrate to another host
- run the benchmark
- migrate back to the original host
- run the benchmark again (one way to script this with the xe CLI is sketched below)
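A rough sketch, untested as written, with placeholders you would substitute (<VM_UUID>, <HOST1>, <HOST2> are just stand-ins for your own VM UUID and host names):
xe vm-migrate vm=<VM_UUID> host=<HOST2>
redis-benchmark -r 1000000 -n 2000000 -t get,set,lpush,lpop -P 16 -q
xe vm-migrate vm=<VM_UUID> host=<HOST1>
redis-benchmark -r 1000000 -n 2000000 -t get,set,lpush,lpop -P 16 -q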
Thanks for your help.
-
@darkbeldin
Hello
I installed a clean new XCP-ng 8.2 on 2 identical PCs, named host1 and host2, then updated to the latest patches with "yum update".
Then I installed an Ubuntu 20.04 virtual machine with a static 4 GB of memory and the guest tools.
Installed redis-server.
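For anyone following along, on a stock Ubuntu 20.04 that is roughly (redis-benchmark comes along with the package):
sudo apt update && sudo apt install -y redis-server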
Then I did the test
on host1
root@ramtest:/home/andreas# redis-benchmark -r 1000000 -n 2000000 -t get,set,lpush,lpop -P 16 -q
SET: 243368.20 requests per second
GET: 261917.23 requests per second
LPUSH: 257499.67 requests per second
LPOP: 264830.50 requests per second
Then I migrated to host2 and got lower speed:
root@ramtest:/home/andreas# redis-benchmark -r 1000000 -n 2000000 -t get,set,lpush,lpop -P 16 -q
SET: 92055.60 requests per second
GET: 95297.09 requests per second
LPUSH: 95570.31 requests per second
LPOP: 95401.64 requests per second
Then back on host1 I got almost the same speed as at first:
root@ramtest:/home/andreas# redis-benchmark -r 1000000 -n 2000000 -t get,set,lpush,lpop -P 16 -q
SET: 238010.23 requests per second
GET: 253100.48 requests per second
LPUSH: 259100.92 requests per second
LPOP: 259134.50 requests per second
-
@andreas OK, so migrating back to the original host gives us a small perf hit, but clearly not what we see when we migrate to another host.
I will report it like that. Thanks for the test, Andreas.
-
@darkbeldin
Okay
Did more tests:
Started on host1, normal speed.
Migrated to host2.
Ran the test, got lower speed.
Restarted the VM.
Ran the test on host2.
Got normal speed.
Migrated to host1.
Ran the test, got lower speed.
Migrated to host2: normal speed.
So it seems to be something that happens after first migrating to another host.
I have a third PC, exactly the same, that I could install on
and see what happens if I move the VM to host3 after moving to host2,
but I have to do it tomorrow; I do not have time now.
-
@andreas Yes, I tested it, no need to do it: migrating to a third host results in half perf, same as the first migration.
-
@andreas OK, so after discussing it with the dev team, the issue has been identified.
The trouble is linked to TSC management in the VM.
You can work around the issue by setting on the VM:
xe vm-param-set uuid=<VM_UUID> platform:tsc_mode=2
But be aware we cannot recommend this setting for a production VM.
The TSC clock won't be emulated at all if you enable this setting, so you might see some weird time behavior during migration.
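To double-check that the flag is set, something like this should work (same placeholder UUID as above; note that platform parameters are read when the domain is built, so the VM needs a restart to pick up the change):
xe vm-param-get uuid=<VM_UUID> param-name=platform
-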
@darkbeldin Okay, thanks.
I did test this and it worked.
-
@darkbeldin Hello. So for those of us in production, does this problem affect rolling pool upgrades?
If so, how do we fix this and update our pools without needing to explicitly shut down the VMs in the pool?
-
@darkbeldin Is memory access actually slower, or is it a timing issue with the statistics?
-
@andrew Sorry guys, I'm not sure I understand the issue well enough to answer. @olivierlambert has a much better understanding of it; I think it's better to ask him.
-
It's a very long story. The real impact isn't that big in real usage, and it depends on so many factors that it's hard to know at any given moment whether you are really affected or not.
The core issue is related to the TSC clock. Time/tick regularity on hardware is a REAL mess, even on the same hardware. Xen's default mode tries to use the TSC without emulation for your VMs, but sometimes the TSC does weird things, and Xen is able to preserve the behavior in the guest by emulating it.
This emulation costs performance, and that's already on the very same hardware. Now imagine live migrating to another machine, with another CPU and motherboard, even of the exact same model: the TSC frequency can't be exactly the same, so there's some variation.
To keep a perfectly constant/consistent clock in the VM, Xen's default TSC mode (1) detects those changes and "hides" them from the guest with some emulation (if needed).
Mode 2 is "no emulation whatsoever" (and mode 0 is "always emulate"). I'm not exactly sure about the risk of switching to mode 2 in production. If you want to test it and check chrony/ntpd logs, I'm interested in the results.
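To see what the guest ended up with, assuming a Linux guest, something like:
cat /sys/devices/system/clocksource/clocksource0/current_clocksource
chronyc tracking
The first shows which clocksource the kernel picked; the second shows chrony's current offset and drift estimates.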
-
Here is an old paper from VMware, but with a good recap of the various timers and their complexity: https://nextcloud.vates.fr/index.php/s/WHk64gHTK4iaJAP
-
@olivierlambert said in Memory in VM half as fast after migration of VM:
Mode 2 is "no emulation whatsoever" (and mode 0 is "always emulate"). I'm not exactly sure about the risk of switching to mode 2 in production. If you want to test it and check chrony/ntpd logs, I'm interested in the results.
We use local NTP servers, so we could use chrony and the like to sync (we already do). But is TSC only about timesync, or is it about other stuff, like Linux kernel internals depending on some stability of it?
I found this article https://superuser.com/questions/393969/what-does-clocksource-tsc-unstable-mean that discusses TSC a little. It seems we can have an unstable TSC on multicore systems, and the kernel should handle it anyway?
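A quick way to check whether the kernel has flagged the TSC and fallen back to another clocksource (again assuming a Linux guest):
cat /sys/devices/system/clocksource/clocksource0/available_clocksource
dmesg | grep -i clocksource
-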
Frankly, I wouldn't speculate on the potential risk; I prefer to answer that I don't know. I might read up on it later, when I can, to get a better idea.
-
@olivierlambert said in Memory in VM half as fast after migration of VM:
Frankly, I wouldn't speculate on the potential risk; I prefer to answer that I don't know. I might read up on it later, when I can, to get a better idea.
And... why does the emulation make it slower after a move? Is this a bug in the emulation?
-
No, it's not a bug. Emulating a TSC clock takes resources; emulation in general is bad for performance.
-
@olivierlambert said in Memory in VM half as fast after migration of VM:
No, it's not a bug. Emulating a TSC clock takes resources; emulation in general is bad for performance.
So what happens is that no emulation is used when the VM is booted, but emulation gets activated when migrating to another host?
And not enabling TSC emulation would mean that the TSC behaviour/properties might change when migrating, which in turn has some possible (what?) effect on the guest VM?