Hello, @planedrop
This is what has been on my mind recently. I have a home lab setup with a few nodes, 3 to be precise.
I bought refurbished Supermicro boards with Xeon-D processors, chassis, and all the other components needed for home server builds. Since software-defined storage is the technology of choice for many people nowadays, each of my nodes has a Broadcom 9300-8i HBA with SATA SSDs connected to it.
There is one thing, though, that keeps itching: the lack of visibility into what is happening with the storage drives and the HBAs under XCP-NG. MegaRAID Storage Manager does not work with 9300-series HBAs, so there is no tooling to monitor the HBA or anything connected to it. SAS3IRCU is pretty useless, as it provides little beyond a listing of the HBAs; to get more out of that utility I would have to run IR rather than IT firmware on the HBA, which defeats the whole idea of running an SDS stack.
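For anyone who hasn't tried it, this is roughly the whole interaction (commands are from Broadcom's sas3ircu utility; the adapter index 0 is just an example):

```
# Enumerate SAS3 controllers; about the only useful output on IT firmware
sas3ircu list

# Per-adapter details; the IR-centric commands need IR firmware to do anything
sas3ircu 0 display
```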
Running smartmontools manually is not foolproof either: I will inevitably forget to check the stats and miss a drive failure. And if I build mdadm or even OpenZFS volumes to stay on top of things, I am still the one watching the server health myself, and of course something will always get in the way.
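To be fair, smartmontools ships the smartd daemon, which can automate the checks; a minimal /etc/smartd.conf sketch (the e-mail address is a placeholder) might look like the one below. But then I am the one building and maintaining that setup on every node, which is exactly what I want to avoid:

```
# /etc/smartd.conf: monitor all detected drives with the default attribute set,
# run a short self-test nightly at 02:00 and a long one on Saturdays at 03:00,
# mail on any failure (-M test sends a test mail on startup to verify delivery)
DEVICESCAN -a -o on -S on -s (S/../.././02|L/../../6/03) -m admin@example.com -M test
```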
So my thought process was this: install XCP-NG on a PCIe M.2 drive and don't create any SRs from the SSDs connected to the HBA. Instead, the first VM on the host will be a TrueNAS instance, and the drives, or even the whole HBA as you suggested yesterday, will be passed through to the TrueNAS VM. TrueNAS has all the hardware monitoring built in, and it is a "set it and forget it" thing: everything is automated and taken care of by TrueNAS. Moreover, for NAS fleets with fewer than 50 drives in total you can use TrueCommand to put all the monitoring tasks on autopilot. Plus you get TrueNAS's first-rate automated ZFS array health monitoring.
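If I am reading the XCP-NG docs right, the HBA passthrough would go roughly like this (the PCI address 0000:03:00.0 and the VM UUID are placeholders; look yours up first):

```
# Find the HBA's PCI address (the 9300-8i shows up as a Broadcom/LSI SAS3008)
lspci | grep -i sas

# Hide the device from dom0 so it can be handed to a guest; reboot afterwards
/opt/xensource/libexec/xen-cmdline --set-dom0 "xen-pciback.hide=(0000:03:00.0)"

# After the reboot, assign the device to the TrueNAS VM
xe vm-param-set uuid=<truenas-vm-uuid> other-config:pci=0/0000:03:00.0
```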
Once that part is done, create an NFS or iSCSI share on the NAS VM, then point XCP-NG at it and add a new SR. All the storage traffic stays inside the host, flowing over the internal virtual switch, and never touches the physical network unless the SR is accessed by other XCP-NG hosts. This way we get a hypervisor and a NAS running inside the same node, with a better, more capable storage repository sitting on a ZFS-protected volume in the NAS.
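For the NFS variant, the SR creation should be a one-liner (the server IP, export path, and name-label here are made-up examples):

```
# Create a shared NFS SR backed by the TrueNAS VM's export
xe sr-create type=nfs shared=true content-type=user \
  name-label="truenas-nfs" \
  device-config:server=10.0.0.2 \
  device-config:serverpath=/mnt/tank/xcp-sr
```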
What I wanted to discuss is whether there are any gotchas in my thinking, and whether you or others here can spot any oversights in my "design". Please share your ideas and views.