Hello,
I'm a new developer on XCP-ng; I'll be working on the Xen side to improve performance.
I'm a recent graduate of the University of Versailles Saint-Quentin with a specialty in parallel computing and HPC, and I have a strong interest in operating systems.
Hello
So, you can of course do some configuration by hand to alleviate some of the cost this architecture has on virtualization.
But as you can imagine, the scheduler will move vCPUs around and sometimes break L3 locality if it moves one to a remote core.
I asked someone more informed than me about this, and he said that running a vCPU somewhere is always better than waiting to run it locally, so locality tuning is only useful under specific conditions (having enough resources).
You can use the cpupool functionality to isolate a VM on a specific NUMA node.
But it's only interesting if you really want more performance, since it's a manual process and can be cumbersome.
You can also pin vCPUs to specific physical cores to keep L3 locality, but that only works well if you have a small number of VMs running on those particular cores. So yes, it might be a small gain (or even a loss).
There are multiple ways to pin vCPUs, most of them with xl, but if you want the pinning to persist across VM reboots you need to use xe. This matters especially if you want to pin a VM to a node and need its memory allocated on that node, since memory allocation can only be done at boot time. Pinning vCPUs after boot using xl can create problems if you pin them to one node while the VM's memory is allocated on another node.
You can see the VM NUMA memory information with the command xl debug-key u; xl dmesg.
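To get an overview of the host NUMA topology (which CPUs and how much memory belong to each node) before deciding on any pinning, something like this should work, assuming your xl version supports the numa flag:
xl info -n # Prints host info plus the topology, including the CPU-to-node mapping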
With xl:
Pin a vCPU:
xl vcpu-pin <Domain> <vcpu id> <cpu id>
e.g. xl vcpu-pin 1 all 2-5 to pin all the vCPUs of VM 1 to cores 2 to 5.
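To check the result (the domain id 1 here is just the one from the example above), this should show where each vCPU runs and its affinity:
xl vcpu-list 1 # Lists each vCPU, the CPU it currently runs on, and its affinity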
With CPUPool:
xl cpupool-numa-split # Creates one cpupool per NUMA node
xl cpupool-migrate <VM> <Pool>
(CPUPool only works for guests, not dom0)
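If you want to check which pools were created and which CPUs they contain before migrating a VM (the pool name below is only illustrative, check the real names on your host after the split):
xl cpupool-list # Shows the pools and the CPUs they contain
xl cpupool-migrate <VM> Pool-node1 # Moves the VM to the pool covering node 1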
And with xe:
xe vm-param-set uuid=<UUID> VCPUs-params:mask=<mask> # To add a pinning
xe vm-param-remove uuid=<UUID> param-name=VCPUs-params param-key=mask # To remove the pinning
The mask above is a list of CPU ids separated by commas, e.g. 0,1,2,3.
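As a concrete example (the UUID being of course the one of your VM), pinning a VM to cores 2 to 5 and checking that the parameter is set:
xe vm-param-set uuid=<UUID> VCPUs-params:mask=2,3,4,5
xe vm-param-list uuid=<UUID> | grep VCPUs-params # The mask should appear among the VCPUs parameters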
Hope this is useful; I will add it to the XCP-ng documentation soon.
No, there is none. You should have no problem using the Debian 10 template for any VM.
The Ryzen 7 2700X is not equipped with a GPU. IIRC only G-suffixed Ryzen are equipped with one.
Exactly as @tony said, the option --delete-dom0 just removes the option from the boot command line given to dom0. If you need to remove a single device, you just need to remove it from the list when calling --set-dom0. I'll take a look at the doc; you can modify it too, we are always open to contributions.
/opt/xensource/libexec/xen-cmdline --delete-dom0 xen-pciback.hide
should do the trick to remove the devices from the pci-assignable list at the next reboot.
After rebooting, dom0 will take control of the device.
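For example, if you only want to stop hiding one device but keep another hidden (the BDF below is purely illustrative, and --get-dom0 availability may depend on your version), something like this should work:
/opt/xensource/libexec/xen-cmdline --get-dom0 xen-pciback.hide # Check the current list of hidden devices
/opt/xensource/libexec/xen-cmdline --set-dom0 "xen-pciback.hide=(0000:05:00.0)" # Keep only the devices you still want hidden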
What I have seen on Microsoft forums would seem to indicate that Windows 10 only supports up to 2 sockets and 256 cores.