Proxmox and TrueNAS Scale: passing through an LSI card

I have installed Proxmox and want to pass through the Broadcom/LSI card to set up ZFS in TrueNAS. I have enabled IOMMU in GRUB with GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on" and added the vfio modules to /etc/modules. I am passing through all functions. I then spin up a VM and the server restarts. I get TASK ERROR: start failed: QEMU exited with code 1, and VNC fails to load. I watched the Craft Computing video, which helped me get video passthrough working on a previous server, but it isn't working for this. Any help on what I am missing? My goal is passing the whole backplane to TrueNAS for storage.
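For anyone following along, the IOMMU setup described above looks roughly like this on an Intel board (the module list is the standard Proxmox one; on newer kernels vfio_virqfd has been merged into vfio, and AMD boards use amd_iommu=on instead):

```shell
# /etc/default/grub -- enable IOMMU on an Intel system
# GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

# /etc/modules -- load the VFIO modules at boot:
#   vfio
#   vfio_iommu_type1
#   vfio_pci
#   vfio_virqfd

# Apply the changes and reboot
update-grub
update-initramfs -u -k all
reboot
```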

Ok. I got TrueNAS installed by using q35, UEFI, and disabling Secure Boot so the installation media would work, and by passing the Broadcom card through with all functions checked.
I think the card is passed through, as it is showing in TrueNAS, BUT no drives are showing (I have 7 Seagate Exos drives installed). Do I need to flash the card to IT mode? I am using the HL15 full build.
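For reference, the VM settings described (q35 machine type, OVMF/UEFI, no pre-enrolled Secure Boot keys, passing the HBA with all functions) can be applied from the Proxmox shell with something like the sketch below. The VM ID 100, storage name local-lvm, and PCI address 0000:01:00.0 are placeholders; find your card's address with lspci:

```shell
# Find the HBA's PCI address first, e.g.:
#   lspci | grep -i -e lsi -e broadcom -e sas

# q35 machine type plus OVMF (UEFI) firmware
qm set 100 --machine q35 --bios ovmf

# EFI vars disk without pre-enrolled Secure Boot keys,
# so unsigned install media will boot
qm set 100 --efidisk0 local-lvm:1,efitype=4m,pre-enrolled-keys=0

# Pass the device through; pcie=1 requires the q35 machine type.
# Omitting the ".0" function suffix passes all functions of the card,
# like the "All Functions" checkbox in the GUI.
qm set 100 --hostpci0 0000:01:00,pcie=1
```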

** I have also verified IOMMU is enabled in Proxmox, and the vfio modules and similar (all 4) are enabled in /etc/modules and verified working.
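For anyone wanting to repeat that verification, a few standard checks (Intel systems report DMAR lines; AMD reports AMD-Vi):

```shell
# Confirm the kernel actually enabled the IOMMU
dmesg | grep -e DMAR -e IOMMU

# Confirm the vfio modules are loaded
lsmod | grep vfio

# List IOMMU groups to see which devices share a group with the HBA
find /sys/kernel/iommu_groups/ -type l
```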

Hey @Goose! Just double-checking: were the drives visible in Proxmox before you did the work to pass through the HBA? Since you mentioned the full build, can I assume you are trying to pass through the onboard SAS3008 HBA connected to slots 1-9 through 1-15? Does anything in TrueNAS's dmesg look suspicious?

So I think I may have figured this portion out. In case anyone else is trying to run TrueNAS on Proxmox: do the above to enable IOMMU.
I had to pass through both the SATA controller and the Broadcom LSI device.
I ran
ls -al /sys/block/sd*
and it showed which device ID each drive was connected to.
I then ran another command
to get the device name, which I passed through.
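To spell out that mapping step: each /sys/block/sd* entry is a symlink whose target embeds the PCI address of the controller the disk hangs off. A small sketch of pulling that address out of the link text (the sample path below is made up for illustration; yours will differ):

```shell
# Sample symlink target as printed by `ls -al /sys/block/sda`
# (illustrative path only)
link="../devices/pci0000:00/0000:00:01.0/0000:01:00.0/host0/target0:0:0/0:0:0:0/block/sda"

# The controller is the last domain:bus:device.function segment before hostN
addr=$(echo "$link" | grep -o '[0-9a-f]\{4\}:[0-9a-f]\{2\}:[0-9a-f]\{2\}\.[0-9a-f]' | tail -n 1)
echo "$addr"   # -> 0000:01:00.0
```

That address is what you hand to the VM's hostpci setting (or pick from the dropdown in the GUI).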

This is my first foray into TrueNAS and ZFS; I have set up a Proxmox 3-node cluster with Ceph before. If I am giving out misinformation or setting myself up for failure, please correct me. I fix teeth by day and break computers by night.

If both devices are in the same IOMMU group, I think you can add "pcie_acs_override=downstream,multifunction" to GRUB_CMDLINE_LINUX_DEFAULT to force devices into their own groups. No guarantees, but it won't hurt to try either.
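If you want to see group membership before and after applying the override, a common sysfs walk looks like this (standard /sys layout; run as root on the Proxmox host):

```shell
# /etc/default/grub with the ACS override added, then update-grub && reboot:
# GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on pcie_acs_override=downstream,multifunction"

# Print each PCI device alongside its IOMMU group number
for dev in /sys/kernel/iommu_groups/*/devices/*; do
    group=$(basename "$(dirname "$(dirname "$dev")")")
    echo "group $group: $(basename "$dev")"
done | sort -V
```

Worth noting the usual caveat: the ACS override tells the kernel to trust isolation the hardware may not actually provide, so it is a workaround rather than a fix.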