So when you live migrate a VM, Xen will dynamically decrease its memory to the minimum. If you didn't plan the dynamic min correctly, to a value that's OK for your system, you might run into memory problems in the guest operating system (e.g. if your system can't survive with less than 4GiB RAM and the dynamic min is set to 3GiB, this will cause problems!). So planning is important.
There are also some edge cases in migration where you have to tell Xen about memory management when you grow/shrink the amount of RAM. But it generally works fine if you took care of the dynamic min, as I said.
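If it helps, here's a rough sketch of how to check and raise the dynamic min with the xe CLI (the UUID and the sizes are placeholders, adapt them to your VM):

# Inspect the current dynamic memory range of the VM
xe vm-param-get uuid=<vm-uuid> param-name=memory-dynamic-min
xe vm-param-get uuid=<vm-uuid> param-name=memory-dynamic-max

# Raise the dynamic min to 4GiB so a migration never balloons below that
xe vm-memory-dynamic-range-set uuid=<vm-uuid> min=4GiB max=8GiB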
Thank you very much for your valuable advice. I will never run any third-party application directly on XCP-ng. I meant inside a VM running on the XCP-ng hypervisor, like Xen Orchestra does.
So, I need to create a VM, set the correct network, and then run nmap -sT -P0 -p 443 xo-domain to test the connection.
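For what it's worth, -P0 is the older spelling of -Pn (skip host discovery), and if the port is reachable the output should contain something like:

PORT    STATE SERVICE
443/tcp open  https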
You attempted an operation on a VM that was not in an appropriate power state at the time; for example, you attempted to start a VM that was already running. The parameters returned are the VM's handle, and the expected and actual VM state at the time of the call.
vm: 2fc2b09b-2249-164a-9f60-e408d9c3db82 (Cert test-host_2021-04-07T16:19:20.356Z)
The last portion states that I can only run this command on a running VM.
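A quick way to confirm the power state before retrying, using the UUID from the error above:

# Check the VM's current power state (this command requires "running")
xe vm-param-get uuid=2fc2b09b-2249-164a-9f60-e408d9c3db82 param-name=power-state

# Start the VM first if it's halted
xe vm-start uuid=2fc2b09b-2249-164a-9f60-e408d9c3db82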
I created a new VM using Terraform and ran the suggested command. The result is that the VM operates normally.
The question now is: what is causing this to happen? I don't believe this was an issue for me before.
I can't look at the dmesg today as I'm home with a cold...
I hope you get well soon 🙂
I did experiment with xl cpupool-numa-split but this did not generate good results for multithreaded workloads. I believe this is because VMs get locked to use only as many cores as there are in each NUMA domain.
Indeed, a VM in a pool gets locked to use only the cores of that pool, and its maximum number of vCPUs is the number of cores in the pool. It is useful if you need to completely isolate a VM.
You need to be careful when benchmarking these things, because the memory allocation of a running VM is not moved, but the vCPU will still run on the pinned node. I don't remember exactly whether cpupools behaved differently from simple pinning in that case, though. I do remember that hard-pinning a guest's vCPUs definitely did not move its memory; you could only change this before booting.
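For anyone wanting to reproduce this, these are roughly the xl commands involved (the VM and pool names here are just examples, check xl cpupool-list for the real ones):

# Split cpupools along NUMA node boundaries
xl cpupool-numa-split

# List the resulting pools and their CPUs
xl cpupool-list

# Move a VM into one of the per-node pools
xl cpupool-migrate myvm Pool-node0

# Alternative: hard-pin vCPUs without cpupools (the VM's memory is NOT moved)
xl vcpu-pin myvm all 0-7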
@olivierlambert That is wonderful news. We will actually be closer to a 1.2:1 or 1.1:1 ratio of vCPUs per physical core; our hosts are old beasts, but we don't work them hard. We had some 10 or 12 VMs on one while it was running VMware (DURING our migration, so it was running more than it normally did) and it was consuming maaaaaaybe 30% of the available CPU power. There really aren't that many of us.
We are probably not going to purchase XCP-ng Pro Support, but we absolutely WILL be purchasing a Xen Orchestra subscription at the Starter tier. I think it hits all the features we need, it gives us a real support contract for our virtualization solution, and it is our great honor to support an open-source project!
Thanks for answering my question. Hopefully it's helpful in answering someone else's question out there on the internet.
You're very welcome. My solution to a similar problem was to set up a couple of internal systems as NTP servers, so that I always had something with the right time and static IP addresses, and then pointed everything that needed NTP at them.
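In case it's useful, a minimal sketch of what /etc/chrony.conf could look like on such an internal NTP server (the upstream servers and the subnet are placeholders):

# Upstream sources while the internet is reachable
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
# Keep serving time to clients even if upstream is lost
local stratum 10
# Subnet allowed to query this server
allow 192.168.0.0/16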
I have the same problem with XCP-ng 8.2. I'm trying to get MxGPU working on an HPE ML380p Gen8 with an E5-2620 v2. After inserting pci=realloc pci=assign-busses, the server cannot boot. Below is the point in the boot process where it crashes.
The log in the images seems to match a known issue: "choose an explicit smt=(bool) setting. See XSA-297".
It's pci=assign-busses that prevents the boot, but without it "modprobe gim" does not insert the module. Even when using a USB disk to avoid the PCI disk, the system crashes during startup. The BIOS firmware is quite recent (2019), the latest available. Has anyone resolved this issue? (attachment: crash-assign-busses.jpg)
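For the XSA-297 warning specifically, XCP-ng ships a helper for editing the Xen and dom0 boot parameters; assuming it works the same as on recent releases, something like this should set an explicit SMT value (check your own setup before applying):

# Show the current Xen boot parameters
/opt/xensource/libexec/xen-cmdline --get-xen

# Set an explicit SMT value to silence the XSA-297 warning
/opt/xensource/libexec/xen-cmdline --set-xen smt=false

# dom0 kernel parameters (pci=realloc etc.) are managed separately
/opt/xensource/libexec/xen-cmdline --set-dom0 pci=realloc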