No, the logs from the hypervisor as well as from dom0 didn't lead me to this.
The guest OS generated some errors while I was trying to use those NICs, and after searching the Internet those messages led me to a RHEL support case where adding this parameter to the kernel was a confirmed solution. On the XCP-NG forum there was also a thread that mentioned this option for a passthrough device, but it was related to a GPU if I recall correctly.
Anyway, this helped me with the Intel NICs.
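In case it's useful to anyone else: on a RHEL-type guest a kernel parameter can be added roughly like this (I'm using `some_parameter=value` as a placeholder, since the actual option depends on the support case for your hardware):

```
# Append the parameter to the kernel command line for all installed kernels
grubby --update-kernel=ALL --args="some_parameter=value"
reboot
# After the reboot, confirm the parameter is active:
cat /proc/cmdline
```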
Still no luck with the Marvell FastLinQ NICs, but there the problem is more related to the driver included in the guest OS kernel. I'm still trying to solve it.
I haven't tested this with Mellanox cards yet.
I'm trying to squeeze my lab environment as much as I can to save on electricity... prices have gone crazy lately.
I'm trying to use StarWind VSAN (the free license covers 3 nodes and HA storage), and those NICs, together with a SATA controller and an NVMe SSD, are what I needed to pass through to the VM. Server nodes are so powerful today that even a single-socket server can handle quite a lot.
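For reference, the passthrough itself follows the usual XCP-NG procedure; a sketch below with example PCI addresses (check your own with `lspci`):

```
# On the XCP-NG host: hide the devices (NICs, SATA controller, NVMe) from dom0
/opt/xensource/libexec/xen-cmdline --set-dom0 "xen-pciback.hide=(0000:01:00.0)(0000:02:00.0)(0000:03:00.0)"
reboot

# With the StarWind VM halted, attach the hidden devices to it
xe vm-param-set uuid=<vm-uuid> other-config:pci=0/0000:01:00.0,0/0000:02:00.0,0/0000:03:00.0
```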
I did some tests today with two LUNs presented over multipath iSCSI, and this may work quite nicely.
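Verifying it from the initiator side is straightforward; a rough sketch, with made-up portal IPs:

```
# Discover and log in to the target on both StarWind portals
iscsiadm -m discovery -t sendtargets -p 10.10.10.1
iscsiadm -m discovery -t sendtargets -p 10.10.20.1
iscsiadm -m node --login

# Each LUN should show two active paths
multipath -ll
```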
One instance of StarWind VSAN on one XCP-NG node, one on the second, and communication over dedicated NICs... HA and replication on interfaces connected directly between the StarWind nodes, without a switch.
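Since the replication links are direct cables, tiny point-to-point subnets are enough; on a Linux guest it could look something like this (addresses and interface name are made up):

```
# Node 1, replication link (direct cable, no switch)
ip addr add 172.16.1.1/30 dev eth2
# Node 2, other end of the same cable
ip addr add 172.16.1.2/30 dev eth2
```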
I don't need much in terms of computing power and storage space... It might be a perfect solution for two-node cluster setups, which could fit many small customers or labs like mine.
Of course this requires more testing... but my lab is the perfect place to experiment.