@jmara Thank you for the input. All pods are now running, with a couple of caveats.
Prior to executing the installation, I updated the image name to ghcr.io/vatesfr/xenorchestra-csi:edge in the manifests.
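In case it helps anyone else, that edit was just a find-and-replace of the image reference, roughly like the following (the deploy/*.yaml path is an assumption on my part; point it at wherever the project's manifests actually live):
sed -i 's|ghcr.io/vatesfr/xenorchestra-csi-driver:edge|ghcr.io/vatesfr/xenorchestra-csi:edge|g' deploy/*.yaml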
After running the install, however, the DaemonSet was still referencing ghcr.io/vatesfr/xenorchestra-csi-driver:edge, so I had to manually edit its image name to ghcr.io/vatesfr/xenorchestra-csi:edge.
Once the DaemonSet was edited, the node pods restarted and transitioned to Running.
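For anyone who prefers a one-liner over kubectl edit, something like this should be equivalent (the DaemonSet name matches the node pod prefix below, but the container name xo-csi-plugin is a guess; check it against the DaemonSet spec first):
kubectl -n kube-system set image daemonset/csi-xenorchestra-node xo-csi-plugin=ghcr.io/vatesfr/xenorchestra-csi:edge
kubectl -n kube-system rollout status daemonset/csi-xenorchestra-node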
The controller pod, however, was still trying to pull ghcr.io/vatesfr/xenorchestra-csi-driver:edge and never reached Running.
To fix that, I edited the image name in the Deployment as well, from ghcr.io/vatesfr/xenorchestra-csi-driver:edge to ghcr.io/vatesfr/xenorchestra-csi:edge.
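Same approach for the controller Deployment (again, xo-csi-plugin is a placeholder container name; adjust it to whatever the Deployment actually uses):
kubectl -n kube-system set image deployment/csi-xenorchestra-controller xo-csi-plugin=ghcr.io/vatesfr/xenorchestra-csi:edge
kubectl -n kube-system rollout status deployment/csi-xenorchestra-controller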
So, after editing both the DaemonSet and the Deployment, all pods transitioned to Running:
kgp -nkube-system | grep csi*
csi-xenorchestra-controller-b5b695fb-ts4b9 3/3 Running 0 4m8s
csi-xenorchestra-node-27qzg 3/3 Running 0 6m21s
csi-xenorchestra-node-4bflf 3/3 Running 0 6m20s
csi-xenorchestra-node-8tb5m 3/3 Running 0 6m20s
csi-xenorchestra-node-t9m78 3/3 Running 0 6m20s
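To confirm which images the two workloads are actually running now (assuming the DaemonSet and Deployment names match the pod prefixes above):
kubectl -n kube-system get daemonset csi-xenorchestra-node -o jsonpath='{.spec.template.spec.containers[*].image}'
kubectl -n kube-system get deployment csi-xenorchestra-controller -o jsonpath='{.spec.template.spec.containers[*].image}'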