Rocket 750 port multiplier chips to SFF-8087 connectors mapping question

I’m looking for anyone who might know how the 40 SATA lanes from the 8 port multiplier chips on the Rocket 750 are allocated to the 10 SFF-8087 connectors, and figured this forum is my best bet for info. I can’t seem to find a block diagram or a technical doc describing which ports are connected to which chips, and AI isn’t being helpful (rather the opposite).

Intuition and logic suggest that PM1 has 4 lanes to Port1 and 1 lane to Port2, PM2 has 3 lanes to Port2 and 2 to Port3, etc. AI is telling me that the 5th lane on each chip goes to Port9 and Port10, which, from a signal integrity and board layout perspective, is kinda nuts.

Does anyone have any concrete info on how the PMs are mapped to the ports? And more specifically to which SATA connectors on the breakout cables, although I’m aware I’ll likely have to test continuity per cable and label them individually. But even something rough like ‘the side of Port 2 closest to Port 1’ would be super helpful.

My situation: a random-read-heavy workload with occasional large sequential writes. I want to use 2.5-inch SATA SSDs for this, and they’ll be housed in IcyDock 8-bay ExpressCages (MB038SP-B, if it matters). I’ll be setting up 5-drive parity arrays (eventually 9-drive, but with the 9th drive on a motherboard port), and I’d like to ensure that the 5 drives in any given array sit on 5 different PM chips to get the best speed on those big sequential writes.

On a side note, aside from losing bandwidth, are there any issues with running an R750 in an x4 PCIe slot instead of x8?

Thanks in advance!

What OS are you planning to use for this project? You are aware that modern Linux does not support the Rocket 750? You’ll be limited to a distro based on a 5.x (or older) kernel, or to community projects trying to compile the custom Highpoint drivers against modern kernels.

Actually, from a signal integrity and board layout perspective it might be easier to route the fifth channel from each chip so they land together on two of the SFF-8087 ports, rather than cascading them across the ports the way you suggest. I don’t think Highpoint published this level of detail for the card.

You should be able to determine this from the kernel logs (lsscsi/dmesg), assuming you have access to an older Linux, like a Live USB of Ubuntu 20.04, and one of the 2.5-inch SSDs:

  • Once booted, install the build tools needed for the driver install:
    sudo apt install build-essential linux-headers-generic
  • Install the driver package from Highpoint. (Do you have this?)
  • Plug the test SSD into Port 1, Channel 1 on the Rocket 750.
  • Boot the system and run lsscsi or check dmesg | grep -i sata.
  • The OS will report the drive’s location in a host:bus:target:lun format. More importantly, the kernel logs will show the drive initializing under a specific SATA link and Port Multiplier ID.
  • Shut down, move the SSD to Port 1, Channel 2, and repeat. (This verifies whether the channels on a single port share a PM).
  • Next, move the SSD to Port 2, Channel 1. Check the logs again to see if the Port Multiplier ID changes.
  • Finally, test one of the channels on Port 9 or 10 to see if it reports back to one of the earlier Port Multipliers (which would prove the “overflow” layout theory).
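The steps above can be sketched as a grep against the libata log lines. The excerpt below is a made-up example of what they look like (device IDs, revision strings, and the drive model are illustrative, not captured from a real Rocket 750): devices behind a port multiplier enumerate as ataN.MM, where N is the host link (i.e. which PM chip) and MM is the port behind that multiplier, with .15 being the control port where the PM announces itself.

```shell
# Hypothetical dmesg excerpt -- illustrative only, not from a real Rocket 750.
cat <<'EOF' > /tmp/dmesg-sample.txt
ata5.15: Port Multiplier 1.2, 0x1b4b:0x9715 r160, 5 ports, feat 0x1/0x1f
ata5.00: ATA-10: Samsung SSD 870 EVO 1TB, SVT02B6Q, max UDMA/133
ata5.00: configured for UDMA/133
EOF

# List each distinct link.port seen in the log; drives sharing the same ataN
# prefix are behind the same port multiplier chip.
grep -oE '^ata[0-9]+\.[0-9]+' /tmp/dmesg-sample.txt | sort -u
```

So as you move the test SSD from connector to connector, the ataN prefix changing (or not) tells you whether you’ve crossed onto a different PM chip.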

The bandwidth of PCIe 2.0 x4 is about 2000 MB/s, so if a SATA SSD can do 500-600 MB/s, you’re capping yourself at the performance of about 3.5 SSDs, even spreading the disks across the port multipliers.
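The arithmetic, for anyone checking (the per-SSD figure is an assumed typical SATA number, not a measurement):

```shell
# PCIe 2.0 is 5 GT/s per lane with 8b/10b encoding -> ~500 MB/s usable per lane.
lanes=4
per_lane=500   # MB/s per lane, after encoding overhead
ssd=550        # MB/s, assumed sequential speed of one SATA SSD
link=$((lanes * per_lane))
echo "x4 link: ${link} MB/s = $((link / ssd)) SSDs saturated, $((link % ssd)) MB/s left over"
```

That works out to roughly 3.6 SSDs’ worth of link bandwidth for the whole card.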

Hoping to use Windows 11 (yes, odd use case). I’ll be checking whether the card is recognized and the 2016 drivers work over this weekend. If not, it’s just $50 for an experiment; I’m mainly trying to avoid a SAS HBA and expander(s).

I don’t think Highpoint published this level of detail for the card.

If they did, I can’t find it anywhere. It seems this card had a 3-year lifespan and then everyone moved on. Looking at the physical traces I can see on the board, it appears that chips 1-4 are connected to ports 1-5, and chips 5-8 to ports 6-10. Might have a look at the Marvell datasheet for the 88SM9715 chips.

You should be able to determine this from the kernel logs (lsscsi/dmesg), assuming you have access to an older Linux, like a Live USB of Ubuntu 20.04, and one of the 2.5-inch SSDs:

I can toss it onto a spare (damaged) motherboard for this, and since I have several spare 2.5 inch SSDs of various models and manufacturers, I can likely test several ports/breakout cables at once, good tip, thanks.

The bandwidth of PCIe 2.0 x4 is about 2000 MB/s, so if a SATA SSD can do 500-600 MB/s, you’re capping yourself at the performance of about 3.5 SSDs, even spreading the disks across the port multipliers.

Currently just running JBOD (and running out of drive letters, so RAID is happening in the near future for sure), so as long as the bandwidth is equal to or greater than ~500 MB/s, i.e. one 6 Gb/s SATA link, I’ll be tickled pink, as I’ll likely only be writing to one array at a time, or worst case from one array to another.

Thanks for the response, really hoping this all ‘just works’, as 40 ports for 10 watts is pretty awesome.

It is possible that SFF-8087 ports 5 and 6 are the overflow ports, given the positions of the chips on the side and back. It being a multi-layer board, it’s hard to tell.

I ran the card on a Windows 10 system for a while, though not with 40 drives. It worked OK for my use case, but I wasn’t pushing it with any massive read/write operations beyond an initial load of a pool with StableBit DrivePool. As with Linux, you will have to track down a version of the driver to use it in Windows; I have one dated March 2016, versioned 1.2.4.0. Official support stopped with Windows 8 in 2012, although some private patches were released after that. Windows 11 might recognize the card with the generic AHCI driver, but that driver is really not intended for port multipliers.

Under load, that Marvell chipset does tend to drop ports, so you could have 5 drives drop out all at once, which could corrupt an array. Any array on the card that had data I didn’t want to lose to random total corruption, I would be sure to have a backup of.

If your use case is WORM (write once, read many), like chia plots or nearline media storage/playback, I guess this might work OK. I think “everyone moved on” because, outside of backup, most use cases required one or more of:

  • stability
  • performance
  • SAS

Not sure what software you are planning to use on Windows 11 (Storage Spaces? ReFS?), but if you are satisfied with the read speed of one SSD, you might want to look at duplicating file systems rather than parity file systems. You could use something like DrivePool plus SnapRAID on Windows, or MergerFS plus SnapRAID on Linux. That would take up more space, but avoid the perils of multiple drive “failures” corrupting array parity. (Unraid would be a solid contender in this category, but it doesn’t support the card.)
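For a sense of what the SnapRAID half of that looks like, a minimal snapraid.conf on Windows is only a handful of lines (the drive letters and paths here are placeholders, not a recommendation for your layout):

```
# Hypothetical layout: D: and F: hold data, E: is dedicated to parity.
parity E:\snapraid.parity

# Keep content files (the array metadata) on at least two different drives.
content C:\snapraid\snapraid.content
content D:\snapraid.content

data d1 D:\
data d2 F:\
```

You then run `snapraid sync` after writes and `snapraid scrub` periodically; for a mostly write-once workload, the “parity is only valid as of the last sync” caveat barely bites.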

It sounds like you already have the Rocket 750, but if it doesn’t work out and you still want to hook up 40 drives, I’d look at getting an LSI 9211-8i or 9300-8i and pairing it with a SAS expander. SAS expanders are different from port multipliers, and that combo has mostly universal support in modern OS versions. A pair of 9x00-24i’s isn’t the only alternative to a Rocket 750.

I’m going to do some experimenting this weekend and will post my results for posterity. I have the 2016 driver, which supposedly worked in Server 2016, so Windows 10 should be fine; I’m hoping 11 is too. Ports 5 and 6 would make more sense if all the 5th lanes went there.

Not sure what software you are planning to use on Windows 11

Storage Spaces. Unlike others, I haven’t had any issues with it, and the ability to simply plug all the drives of an array into another Windows PC and have it recognized is a nice bonus.

If your use case is WORM

Yeap, Plex/media server, and is also the HTPC for the upstairs, plus seeding torrents, plus backups for the other PCs, etc. What started as just an HTPC years ago has evolved quite a bit. Media is mostly on two HDD arrays, but the torrents really need to be on SSDs for the random read nature. Performance requirements are super low, aside from bursts of sequential write activity, as reads total under 150 MB/s peak, averaging well below that.

I’d look at getting an LSI 9211-8i or 9300-8i and pairing it with a SAS expander

Too power hungry, and the fewer fans the better. I was looking at a 9400-16i (Lenovo 430) with an HP 9-port SAS expander, which should come in between 20 and 25 watts, and is what I’ll likely go with if the 750 doesn’t work out.


Well, it’s not the question you asked, but I’d still discourage the use of Storage Spaces in an environment where what is on it is the only copy, or the only backup copy. I don’t know if “backups for the other PCs” means just OS images or precious family photos. There are so many potential issues with Storage Spaces:

  • Write amplification (degradation) from journaling against the RAID 5 write hole
  • Lack of checksum parity (on a parity mismatch the system does not know which copy is truth; a scrub will silently overwrite parity with the bad “new” value on bit rot)
  • Port multiplier intolerance (to monitor drive health properly, SS really needs direct access to the drive, not through a PM; PM issues can corrupt the array)
  • Rebuild penalties (even with SSDs, rebuilding a failed drive requires a lot of work from an unoptimized engine, and besides writing the actual data it is also writing the journal logs, eating up TBW on all the SSDs in the array)

At least that’s my understanding. You certainly may have more direct experience, but personally I’d run from this for anything intended to be a serious part of my home infrastructure and not just an experiment.

Write amplification (degradation) from journaling against the RAID 5 write hole

Not worried about it, as PC backups go to spinning rust, the files on the SSDs are generally written once and then just read multiple times.

Lack of checksum parity

Every file is hashed and the hash is saved, so if there’s an issue I’ll know about it, at least. Hasn’t popped up yet.
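A hash-manifest routine like that is only a few lines of shell (the file names and paths below are placeholders; on Windows the equivalent would be certutil or PowerShell’s Get-FileHash):

```shell
# Build a manifest of SHA-256 hashes, then verify the data against it later.
mkdir -p /tmp/hashdemo && cd /tmp/hashdemo
printf 'write once, read many\n' > sample.bin

sha256sum sample.bin > manifest.sha256   # save the hash alongside the data
sha256sum -c manifest.sha256             # later: prints "sample.bin: OK" if intact
```

If a file ever rots, the verify step flags it instead of silently returning bad data.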

Port multiplier intolerance

That’s an issue under every OS, AFAIK, in some way, shape, or form. The only way around it is not to use port multipliers at all, but my SAS options also involve tradeoffs; I’m still weighing and researching that alternative.

Rebuild Penalties

Needing to rebuild a failed drive is, itself, the biggest penalty. Personally I’m more concerned about the performance on adding a new drive to the array, but that’s less about the OS or software and more an issue of cache exhaustion.

Backups aren’t an issue. Important stuff is backed up to a separate drive on each computer (if available, 2 laptops only have a single drive), which is backed up to an SSD on the ‘server’ (I hate calling it a server, but it is what it is), which SSD is then backed up to one of the HDD arrays. Best case 4 copies on 4 disks in 2 locations, worst case 3 copies on 3 disks in 2 locations, with the ones on the server getting a checksum hash. My stepson and I trade portable hard drives every so often with our backup files so there’s an offsite copy too, even if it might be 6 months out of date.

I’m guessing most here are concerned with performance, redundancy, availability, etc. of very large data stores. Me, not so much. Having one very large drive array instead of 4+ smaller drives makes organization and sharing simpler and easier, which was my original reason for using it for media files. As for the SSDs, I’d rather not even use RAID, as just JBOD would be (and has been to date) fine for my use, however running out of drive letters in Windows is a thing, and Storage Spaces is a lot more straightforward than switching to Linux (especially migrating 100+ TB of media).


Cool. It sounds like you know the choices you are balancing. I just know that for me, on Windows with the Rocket 750, I was losing the occasional file to NTFS CRC errors or worse, so I moved to something more resilient. But our use cases are completely different; you aren’t asking for that resiliency.

Yeah, I just need many more SATA ports, essentially. That’s easily solved with SAS, but then I have an expander taking up a slot I don’t have (though I could work around that) or hanging loose in the case; the R5 XL I’m using is already rather cramped and maxed out in terms of space, plus I’ll need to somehow shoehorn in a second Corsair +5V load balancer. Then there’s the need to remove the heat from the SAS card/expander, so likely yet another fan, meaning fan noise I want to avoid along with a slightly larger electricity bill. And there are apparently spindown/sleep issues to deal with on the LSI HBAs.
