@plaidypus I am not 100% sure what the correct answer is for the default XCP configuration. I think the basic answer is: no. Xen/XCP does not care what cores it uses for your VM, so on average your performance will be a little worse than if the VM never crossed NUMA nodes, but better than if it were always interleaved across nodes. Some systems will fare better or worse than others.
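If you want to see where your vCPUs are actually landing with the default configuration, you can check from dom0 with the standard xl commands (output format varies a bit by Xen version):

    # Show host topology, including NUMA nodes and per-node memory
    xl info -n

    # Show which physical CPU each VM's vCPUs are currently running on,
    # plus their hard/soft affinity
    xl vcpu-list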
The Xen/XCP hypervisor does have a NUMA-aware scheduler. There are two basic modes. One is hard CPU pinning, where you specify which cores a VM (domain) uses; this forces the VM to run only on the cores it is assigned. The other is to let Xen/XCP do its own work, where it tries to schedule a VM's (domain's) vCPUs within a single CPU pool. The problem with this is that the default config puts all cores (and HT siblings) into a single default pool. There are options to try to enable best-effort NUMA placement, but I believe it is not set that way by default.
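As a rough sketch of the hard-pinning side, run from dom0 (the VM name, UUID, and CPU numbers below are placeholders; substitute the cores that actually belong to one NUMA node on your host):

    # xl-level hard pinning: restrict all vCPUs of a domain to physical CPUs 0-7
    xl vcpu-pin my-vm all 0-7

    # XCP/XenServer-level equivalent: store a CPU mask on the VM record
    # (applied when the VM is next started)
    xe vm-param-set uuid=<vm-uuid> VCPUs-params:mask=0,1,2,3,4,5,6,7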
You can configure the CPUs of a NUMA node into an individual pool (see the sketch below). A VM can then be given an affinity for that single pool (soft CPU pinning), which keeps most of its work on that one node, as you want.
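Something along these lines should do it at the xl level (VM and pool names are placeholders, and I believe xapi does not manage cpupools you create this way, so test how it interacts with the rest of your XCP pool first):

    # Split the single default pool into one CPU pool per NUMA node
    xl cpupool-numa-split

    # Confirm the new pools and which physical CPUs each one contains
    # (the generated pool names vary by Xen version)
    xl cpupool-list -c

    # Move a VM into the pool for node 0; its vCPUs will then only be
    # scheduled on that node's CPUs
    xl cpupool-migrate my-vm Pool-node0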
The links listed before have good information about NUMA and CPU pinning. Below are some more:
Here is an older link about Xen on NUMA machines.
Here is a link about Xen CPU pools.
Here is a link about performance improvements on an AMD EPYC CPU (mostly related to AMD cache design).
There are also APIs in the guest tools to allow the VM to request resources based on NUMA nodes.
If you start hard-limiting where and how VMs can run, you may break migration and HA for your XCP pool.