Additional SATA connectors

As I am pleased with the HL-15 and its support for 15 drives, two SATA DOMs, and NVMe carrier cards, I wanted to look for a PCIe board that could support additional SATA ports.

One goal for this HL-15 is to be my main storage device within my homelab. There will be a smaller MicroTower (with 4 Drives) to replicate the most important data (as my second copy).

While checking the motherboard manual for all the connectors, I noticed the following (here is a visual reference for the motherboard):
HL-15 motherboard

  • The HL-15 uses 15 drives - 7 SAS drives and 8 SATA drives, or 15 SATA drives
    • The left side has the main SATA connectors via ports I-SATA 0-3 and I-SATA 4-7 (which are 8 ports used by the HL-15 backplane).
    • The black port at the bottom of the image is L-SAS 0-7 (which provides the 8 SAS ports used by the HL-15 board).
  • The orange ports to the right of the main SATA connectors are the SATA DOM ports, labeled S-SATA0 and S-SATA1. (These 2 bring the total SATA ports to 10.)
  • Next to the SAS connectors are 2 plugs labeled as JNVME1 and JNVME2. The motherboard manual mentions an OCuLink cable can be used for these ports.

Has anyone used or tried these OCuLink ports?
When I found a cable on the Supermicro site, I could not confirm the part is compatible with the motherboard (X11SPH-NCTF) via the Supermicro website.

The HL-15 will replace my old NAS; the older unit had a physical RAM limit of 32 GB. To supplement the RAM, I used two 480 GB 2.5" SSDs as a "cache vdev" (L2ARC).

With people designing and/or printing carriers for 2.5" drives, I am planning to 3D print one for myself (using the 4 holes on the back of the HL-15 to mount the SSD holder). To use the holder, I will need a PCIe expansion card (to connect the drives).

Has anyone used (or could anyone recommend) a good PCIe expansion card? I found a couple of cards:

  1. BEYIMEI PCIE SATA card 16 ports which uses the ASM1064 Chip
  2. MZHOU PCIe 16-port SATA III Controller Expansion card which uses the JMB575 + ASM1064 chips

I am not necessarily sold on these specific cards, but each supports 16 SATA ports.

The expansion card (having as many ports as possible) will allow me to ingest older hard drives (to consolidate the data within the HL-15).

Looking forward to suggestions, feedback, etc.

These cards look weird. AFAIK only HBA (Host Bus Adapter) cards are intended to directly connect extra drives to your MB. These look like expanders; if they are, they need an HBA as the brains.

You mentioned you had two SSDs as a cache but then linked a 16-port item. How many drives do you need to add (now and in the future)?

What software are you running? unRAID is known to need cards flashed to IT mode so it can see each drive individually.

What kind of drives will you add? SATA or SAS, HDD or SSD?

Your detailed answer could change the recommendation from "you need a single-port HBA" to "you need a card of x generation/ports because any less will suffer speed bottlenecks".

For instance, a single 6 Gbps link shared amongst 16 drives is around 50 MB/s for each drive when all are accessed at once. Kinda hindering what most people intend when adding so many drives. Modern HDDs go 200-250 MB/s, so you are losing out on roughly 80% of your array's speed. And that's for HDDs; SSDs are even faster and pricier, so wasting money in a way.

In my case, I'm planning to add up to 8x Samsung 870 EVO for a total combined ~4 GB/s, so I need 8 ports that support ~500 MB/s each, in a PCIe slot that supports that too.
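If it helps, here is a quick back-of-the-envelope sketch of that math in Python. The link speeds, drive counts, and the ~80% encoding factor are rough assumptions matching the numbers above, not measurements:

```python
# Rough per-drive bandwidth when many drives share one uplink.
# Assumptions: 6 Gbps SATA link, ~80% usable after 8b/10b encoding.
def per_drive_mb_s(link_gbps: float, drives: int, efficiency: float = 0.8) -> float:
    usable_mb_s = link_gbps * 1000 / 8 * efficiency  # Gbps -> usable MB/s
    return usable_mb_s / drives

print(per_drive_mb_s(6, 16))   # ~37 MB/s each, i.e. the "around 50 MB/s" ballpark

# 8x SATA SSDs at ~500 MB/s each want roughly 4 GB/s in total,
# which is about what a PCIe 3.0 x4 link (~985 MB/s usable per lane) can carry.
print(8 * 500 / 1000)          # 4.0 GB/s
```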

Since the actual card you end up buying will likely last 5-10+ years, I would choose SAS ports like the SFF-8643, instead of SATA ports.

I went with the LSI 9300-16i; the LSI 9201-8i would be even better since it runs cooler and is less overkill for my needs, but I couldn't find one NOT from China for a good price.

What exactly do you need and why?

:beers:


I linked the 16-port item because I was not sure whether getting the SATA connectors or another HBA would be best.

I know the current backplane in the HL-15 will be maxed out, as this backplane is using two of the motherboard controllers. The 8 SATA drives use the Intel® C622 controller, which maxes out at 6 Gbps per port. The other 7 SAS (or SATA) drives use the Broadcom® 3008 SW controller (depending on the drive type), with a max throughput of 12 Gbps per port.

I figured a new PCIe card will be needed following a similar configuration of no more than 6 to 8 drives.

My assumption is the bandwidth will be limited by the PCIe slot, the card itself, and the type of drives. I have 3 unused PCIe slots on the motherboard (a rough sketch of the slot math follows the list):

  • PCIe 3.0 x16 (x16 or x8) SLOT 6 has a 16 GB/s max
  • PCIe 3.0 x8 (x0 or x8) SLOT 5 has a 8 GB/s max
  • PCIe 3.0 x4 (in x8 slot) SLOT 2 has a 4 GB/s max
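To sanity-check which of those slots is enough for a given card, a small sketch (the ~985 MB/s usable per PCIe 3.0 lane is the usual rule-of-thumb figure; the drive counts below are just examples):

```python
# Approximate usable bandwidth of a PCIe 3.0 slot, split across attached drives.
PCIE3_LANE_MB_S = 985  # ~usable MB/s per PCIe 3.0 lane (128b/130b encoding)

def mb_s_per_drive(lanes: int, drives: int) -> float:
    return lanes * PCIE3_LANE_MB_S / drives

print(mb_s_per_drive(4, 8))   # SLOT 2 (x4 electrical) with 8 drives: ~490 MB/s each
print(mb_s_per_drive(8, 8))   # SLOT 5 (x8) with 8 drives: ~985 MB/s each
```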

I installed the carrier card in SLOT 3, as this will allow me to use the card at its max speed.

What software are you running?

  • Plan A
    • I wanted to try the suggestion posted by a number of the 45Drives team to use the Houston interface with ZFS. I was going to export my existing ZFS pool from TrueNAS Core and import the drives into the HL-15.
    • The existing TrueNAS ZFS pool is 13 drives: 10 spinning drives and 3 SSDs, with 2 of the SSDs in use. The extra SSD was acting as a spare.
    • As the HL-15 can handle more drives, I plan to add 5 more spinning drives (total of 15).
    • As I have 10 SATA SSDs still in their original packaging (Samsung 860 EVO drives), I was going to use as many as I can within the HL-15. These may become a new ZFS pool; I have not made a final decision.
  • Plan B
    • If Plan A does not perform well, then I would install TrueNAS Core on the HL-15.
    • The number of drives will be the same as in Plan A.

I do not plan to use unRAID.

I also do not plan to add any graphics card, as the unit is for storage.

My ol' TrueNAS server uses a "Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev05)"; this card has 12 drives connected in total. I believe the card was flashed into IT mode (as it came with a Chenbro NR1200 via eBay). As this card is PCIe Gen 2 x8, the card max is either 4 or 5 GB/s. I was hoping for recommendations of better cards that are PCIe Gen 3. If not, then I can save money and use this card with fewer drive connections.
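For what it's worth, the Gen 2 vs Gen 3 gap can be estimated like this (a sketch; the encoding factors are the standard PCIe figures, everything else is the card specs quoted above):

```python
# PCIe Gen 2 uses 8b/10b encoding, Gen 3 uses 128b/130b, hence "4 GB/s usable vs 5 GB/s raw".
def pcie_usable_gb_s(gen: int, lanes: int) -> float:
    raw_gt_s   = {2: 5.0, 3: 8.0}[gen]           # GT/s per lane
    efficiency = {2: 8 / 10, 3: 128 / 130}[gen]  # encoding efficiency
    return raw_gt_s * efficiency * lanes / 8     # bits -> bytes

print(pcie_usable_gb_s(2, 8))   # ~4.0 GB/s for the old SAS2008 card (Gen 2 x8)
print(pcie_usable_gb_s(3, 8))   # ~7.9 GB/s for a Gen 3 x8 HBA
```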

The spinning drives will be the largest ZFS pool.
The 2.5" SSDs will probably be a separate ZFS pool.
The NVMe drives via the carrier card will probably be individual drives.

I do understand that I will not reach the theoretical max between my Proxmox cluster and my laptop/desktop. The goal here is to achieve something better in total speed, drives supported, and total bytes stored than my current TrueNAS server. I wanted to add some additional drives to get more throughput from the server (without needing a separate physical server).

@pcHome ,
My biggest question would be - How are you going to power those extra drives?
15 spinning drives and 10 SSDs would, I think, need more power than the 5V rail can supply. I thought @Hutch-45Drives mentioned somewhere that the 5V rail had a max draw of 20 A. Using a base of 1.2 A per HDD: 1.2 A * 15 = 18 A. Not much room left for more drives. I think most HBA cards still draw power from a Molex connector.
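A rough way to check that budget (the 20 A rail limit and 1.2 A per HDD are the figures above; the per-SSD draw is my own assumption for illustration):

```python
# 5 V rail budget sketch. 20 A limit and 1.2 A/HDD are the figures quoted above;
# ~0.8 A per 2.5" SATA SSD is an assumed value for illustration only.
RAIL_LIMIT_A = 20.0
HDD_5V_A = 1.2
SSD_5V_A = 0.8  # assumption

def rail_headroom_a(hdds: int, ssds: int) -> float:
    return RAIL_LIMIT_A - (hdds * HDD_5V_A + ssds * SSD_5V_A)

print(rail_headroom_a(15, 0))    # 15 HDDs alone: 2.0 A left
print(rail_headroom_a(15, 10))   # plus 10 SSDs: -6.0 A, i.e. over budget on these numbers
```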

The HL-15 has components that can be upgraded.

There are open PCIe slots to use another HBA card, etc.

The power supply can be upgraded depending on the power budget for another PCIe card and the SSDs that will be mounted via the printable cage.

The most recent post shows 5 SSDs (RAID-Z1). I am hoping the design can be modified (or a new one posted) to support 6 SSDs (RAID-Z2).
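For comparison, the simplified usable-capacity math for those two layouts (ignoring ZFS overhead):

```python
# Simplified raidz math: usable data drives = vdev width - parity drives.
def data_drives(width: int, parity: int) -> int:
    return width - parity

print(data_drives(5, 1))   # 5-wide RAID-Z1 -> 4 data drives, survives 1 failure
print(data_drives(6, 2))   # 6-wide RAID-Z2 -> 4 data drives, survives 2 failures
```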

There are other ATX-style PSUs that provide more amps.
