Starting to build the BOM

Hi all…

I have some bits… and I’m starting to look at what I want to replace and what to reuse.

Chassis to be the HL15, with 12 x 12TB Seagate IronWolf NAS drives.
2 x 480GB 2.5" Adata SSDs.
1 x 250GB M.2 NVMe - OS - TrueNAS Electric Eel.

My current chassis is a Fractal Design Node 804; at 8 drives it’s completely full. My current MB also only has 1 slot for an LSI controller and 1 for the dual-port IBM X520 fiber card.

Options are to order the chassis with the Supermicro MB and CPU, which gives me lots of PCIe slots, or keep my 4-core but faster i5 processor and simply find a MB locally in South Africa. That might end up cheaper vs a new MB and new CPU…

My HDD controllers are currently:
2 x IBM ServeRAID M1015 (46M0861, LSI SAS9220-8i) SAS/SATA PCIe RAID controller cards. One is in the NAS atm; I still need to flash the 2nd one.

I additionally have:

  • 1 x Intel I226 dual-port 2.5GbE PCIe RJ45 network card (100/1000/2500Mbps)
  • 1 x X520-DA2 10GbE network card + 2 x Finisar FTLX8571D3BCV-IT (E65689-001) SFP+ modules

suggestions…

G

What are your use cases? Do you need lots of PCIe lanes? Do you just need the system to do what it does currently, but with more hard drives, or do you have other requirements like adding a GPU or running more CPU intensive workloads? Do you need/want IPMI and other features of a server grade motherboard? Do you like building PCs or do you just want something to work out of the box? How future proof do you want it to be?

The full build’s entry level Xeon 3204 may have a slightly lower passmark score than your i5-7400, but you may not notice that if what you are mainly doing is storage, not lots of containers and VMs. How is CPU utilization in your current setup? The thing about the full build is it will give you a lot more headroom for upgrades as far as processors that support way more compute and RAM with the LGA3647 socket vs the LGA1151. It comes at the cost of US$1000 though (plus whatever import tax). You also get the built in HBA and 10G networking, so you don’t have to mess with your HBAs and NICs. You could potentially sell all that and get a little bit of money to put towards the new system.

Both LGA1151 and LGA3647 are aging platforms at this point though. So, even though you’re getting much more potential from the X11SPH than your current B250M board, both platforms are some 8 years old. I don’t know about the availability there of other CPUs for the socket, but you could look at trading the Bronze 3204 for a Silver 4108 or 4208 for a bit more compute.

To me you have roughly these options (the last one probably isn’t something you’re considering, but I don’t know your use cases):

1 Keep the motherboard and CPU, sell the two M1015 HBAs and buy a used LSI 9300-16i or 9305-16i, so you can run all the hard drives from the PCIe x16 slot and your NIC in the x4 slot
2 Research more modern consumer motherboard platforms. I have a bit of an AMD bias, at least in the AM5 socket era, but Intel 10th/11th gen might be fine. You probably would still want to trade the two M1015s for one “16i” HBA, but you would have more compute headroom from the CPUs available for the socket, if that’s something you need. You can probably get used or new AM4 motherboards and CPUs fairly cheap now, or look at some low-end AM5 builds with B650 motherboards and perhaps a Ryzen 5 7600 for something current but inexpensive
3 Get the full build. As stated above, this is a solid motherboard that has the HBA and networking integrated on the board, but it is a server motherboard, so it is more expensive, as are the CPUs.
4 Go full compute: get a high-end Epyc workstation motherboard and CPU

One thing I wouldn’t do is keep the CPU and just switch the motherboard to something with only PCIe and no legacy PCI slots.

You seem to be frugal and not necessarily in need of the latest tech. If this were me, I would probably just buy the chassis, or chassis + PSU, but not the full build, and I would buy an LSI 930x-16i. I would migrate the internals from the Node 804, replacing the HBAs, and get that working for a while. After that I would decide whether I wanted to upgrade the motherboard, because I don’t think my choice would be the X11SPH anyway. Not because it’s a bad motherboard, but because you can get so much more performance per dollar from a modern platform like AM5 or the latest Xeon and Epyc sockets.

If all that was TLDR, sorry.

Hi there

Thanks for the ideas… appreciate it.
Didn’t think about a 16i controller… thing is, I just got the 2nd one, and I’m very unlikely to be able to sell them locally.
The exchange rate and import costs are not our friend this side, so I’m looking at how/where to spend $$$.

The plan was primarily to only get the case, backplane and cables.

I have a PSU. The thinking was then the 2 x LSI controllers on a new board, with the old CPU and RAM.
Then see what NICs the new board has; if 2.5GbE then I’m sorted, otherwise I have the card.
The 10GbE fiber card will def go back in.

I’m an IT architect (infrastructure, platform) specialising in data management systems, and as such I slice and dice the 20TB of data repeatedly, trying out new tech.

Other than that it’s our main document, photo, etc. store, and the Plex media server.

I have a local Proxmox cluster with a Ceph cluster for fast local storage, and then use S3 and NFS from my TrueNAS as secondary storage for the K8s environment.

G

I don’t think you’re going to find anything more than 1G networking from that era on consumer boards. There might be one consumer board with a 10G port and a small handful of server boards (and that probably wouldn’t understand negotiating down to 2.5G).

You could probably find enough PCIe slots/lanes on a Z270 ATX motherboard. I guess this was before all the PCIe was being taken up with M.2 slots. I would have said that’s not the component I would invest money in, but in reality you should only be spending US$50 or so on a Z270, so it’s OK. One question I don’t know the answer to, but that you might want to, is: will the M1015 negotiate down to x4 PCIe lanes, or does it require x8? I don’t think that one of the M1015s will have to run at x4, but it’s possible if you’re using four PCIe cards in the system. Some HBAs will negotiate down and some don’t.

The HBAs will be a bottleneck if you start throwing SSDs in the backplane slots, but SATA HDDs should be ok.
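To put rough numbers on that, here’s a quick sketch. The per-drive figures are assumptions (~550 MB/s per SATA SSD, ~250 MB/s best case per HDD), so treat it as ballpark only:

```python
# Rough throughput check for one M1015 in a PCIe 2.0 x8 slot (~4 GB/s usable).
# Per-drive figures are assumptions: ~550 MB/s for a SATA SSD,
# ~250 MB/s best case for a 7200 rpm HDD.

PCIE2_X8_GBPS = 4.0   # approximate usable bandwidth of a PCIe 2.0 x8 slot
SSD_GBPS = 0.55       # assumed sequential throughput per SATA SSD
HDD_GBPS = 0.25       # assumed best-case sequential throughput per SATA HDD

for name, per_drive in (("SATA SSDs", SSD_GBPS), ("SATA HDDs", HDD_GBPS)):
    total = 8 * per_drive  # one M1015 feeds 8 bays
    verdict = "bottlenecked by the slot" if total > PCIE2_X8_GBPS else "fine"
    print(f"8x {name}: ~{total:.1f} GB/s vs {PCIE2_X8_GBPS:.0f} GB/s slot -> {verdict}")
```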

What PSU do you have?

Need to verify… but I think it’s a Corsair CV Series CV550.

The 2 x SSDs in there at the moment are mostly just the app pool.

All data is on the tank pool.

I have more of the same SSDs, so I can put in a log or cache mirror vdev.

G

I don’t think that will work safely.

Based on this;


It looks like there are only two 4-pin Molex connectors and they are on the same cable. I think you can get SATA-power-to-Molex adapters, but I’ve read they can cause fires. All the numbers get a bit fuzzy, of the “it depends” variety, but the Molex connectors and wires aren’t really rated for more than 10 amps continuous. 15 drives will pull more than that after spin-up, so you really need the power for the backplane to come through at least two separate cables from the PSU. The power distribution board for the backplane and fans actually has four Molex for the input, but you can get away with connecting only two, with more current being drawn over those cables. Also, hard drives pull quite a bit of power at spin-up, and your setup may or may not be able to stagger the spin-up. I’m not sure if the +12V 44 amps is enough to handle the spin-up of 15 drives along with the other +12V requirements of the system.

Even if you don’t have 15 drives now, I think you need to plan for it.
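Here’s the back-of-the-envelope version, if it helps. The per-drive currents are assumed typical 3.5" HDD figures, not numbers from the IronWolf datasheet, so it’s a sketch rather than a spec:

```python
import math

# Back-of-the-envelope +12 V estimate for a fully populated HL15.
# The per-drive currents are assumed typical 3.5" HDD figures, not taken
# from the IronWolf datasheet; Molex limits vary with connector/wire gauge.

DRIVES = 15                 # plan for a full backplane
SPINUP_A_PER_DRIVE = 2.0    # assumed +12 V draw per HDD during spin-up
RUNNING_A_PER_DRIVE = 0.7   # assumed +12 V draw per HDD once spinning
CABLE_LIMIT_A = 10.0        # assumed safe continuous current per PSU cable
PSU_12V_RAIL_A = 44.0       # the CV550's advertised +12 V rating

spinup_total = DRIVES * SPINUP_A_PER_DRIVE    # worst case: no staggered spin-up
running_total = DRIVES * RUNNING_A_PER_DRIVE  # steady state

print(f"Spin-up, all at once: ~{spinup_total:.0f} A of the {PSU_12V_RAIL_A:.0f} A "
      f"rail, before the CPU, fans, etc.")
print(f"Steady state: ~{running_total:.1f} A -> spread over at least "
      f"{math.ceil(running_total / CABLE_LIMIT_A)} separate PSU cables")
```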

I know my PSU has 3 SATA power connections, each cable with 4 SATA points and a Molex at the end. As said, I need to confirm if that is the Corsair in there; I replaced it a while back and can’t remember if I updated my records. (Had a look through the glass quickly, there is a BeQuiet in there at the moment.)

I have 3 of these cables, 2 in the NAS atm; I currently run the end of the cable into a splitter for fans.

The plan is to only have 12 HDDs (might make that 11, but with 12 x 12TB I end up with 100TB usable) and the 2 SSDs… The 15th slot I want to keep free as a point to connect a replacement drive if needed; think of it as my hot-spare slot.
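Rough sanity check on that 100TB figure, assuming a single 12-wide RAIDZ2 vdev (which may not be the final layout):

```python
# Rough sanity check on "12 x 12TB ~= 100TB usable".
# Assumes a single 12-wide RAIDZ2 vdev -- swap in the actual layout.

TB = 10**12   # drives are sold in decimal terabytes
TIB = 2**40   # ZFS tools report binary tebibytes

drives, size_tb, parity = 12, 12, 2            # 12 x 12TB in RAIDZ2
after_parity_tb = (drives - parity) * size_tb  # 120 TB of data capacity
after_parity_tib = after_parity_tb * TB / TIB  # ~109 TiB
rough_usable_tib = after_parity_tib * 0.95     # very rough allowance for ZFS overhead

print(f"After parity: {after_parity_tb} TB (~{after_parity_tib:.0f} TiB)")
print(f"Roughly usable: ~{rough_usable_tib:.0f} TiB")
```

So ~100TB usable is in the right ballpark for one 12-wide RAIDZ2; two 6-wide RAIDZ2 vdevs would land closer to 96TB after parity.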

I realise I might need to increase the PSU to a 750W unit…

G

Curious, does anyone have a picture showing how an external LSI controller / SATA cables plug into the backplane?

Wouldn’t mind a picture of the PSU plugging into the backplane also.

G

Images exist of the supposed underside of the backplane, but they are taken by the same people who photograph the Loch Ness Monster, Bigfoot, and UFOs. :wink:

Ok, seriously …

There are two different boards. There is a power distribution board mounted to the side of the case next to the PSU that takes four Molex connectors in and has a consolidated 20-pin cable out that supplies power to the backplane and the fans. There are what appear to be low-profile molex connectors on the bottom of the backplane, but you aren’t really supposed to use those directly, if they can even be unplugged.

Then, under the backplane are four SFF-8643 connectors for the SAS cables. In your case you will be ordering Data Cable Set C “(4x) SFF 8643 → SFF 8087 [Mini-SAS-HD (Backplane) to Mini-SAS]”. The cables will come already plugged into the backplane for you, so you shouldn’t need to get under there on your first build. In the future–say you did get a 9305-16i–to get under the backplane you need to take out a crossbar and the metal drive cage, and then remove 16 screws holding the backplane down. You don’t necessarily have to remove the faceplate with the fans to do this, but it is probably easier. You also would need to remove the rack ears to get to two screws for the crossbar.

The SFF-8643 connectors are at the front underside of the backplane, ie pointing towards the front fans. In one of the last pictures above I am tilting the board up from the front against the mid case fans. In another I have unplugged the SAS cables and flipped the board over front-to-back.


hmmm,

I’m looking to get a MB that has 3 x8 PCIe slots, as that’s what’s needed by the LSI controllers and the 10GbE SFP+ card. The 2.5GbE card fits into a normal x1 PCIe slot.

Trying to understand the data cabling a bit better. At the moment my card gives me 2 ports that each split to 4 SATA drive connections… Are we saying I will get 4 cables (2 per card), supplied by 45Drives, that go directly into my 2 cards?

Love the idea of being able to personalise the front face, but at $250, eish, no… Will have to get a sticker made that can go on the front face with openings where needed for airflow.

G

You will ditch the SATA forward breakout cables. When you order, the configurator has a section for which data cable set you want;

You will select Set C because your M1015s take SFF-8087 connectors. You don’t have to buy anything additional elsewhere, and this is included in the basic price of the HL15. When they build your unit they will install four cables like this;

with the SFF-8643 ends already plugged into the backplane and the SFF-8087 ends zip-tied in the motherboard chamber for shipping. You will snip the ties and connect those ends to the SFF-8087 ports on your HBAs. Each cable carries the data for four drives. (In the HL15, since there are only 15 drives, one cable will only be carrying data for three drives.)

If for some reason you really wanted to purchase your cables locally and install them yourself you would need to contact info@45homelab.com about this as a special request.

Was just wondering what happens with my 1-to-4 cables…

OK, so these cables will go from the backplane into my cards.

:slight_smile:

G

Busy looking at an LSI 9305-16i… it and the 8i are both PCIe x8 cards…

As I already have 2 x 8i cards… and I know that future-wise I need to get a new MB…
I’m thinking it might be a simpler/better idea to just get a MB with enough PCIe x8 slots…

2 x x8 for the LSI cards; this way each card/8 drives has access to x8 lanes vs 16 drives on one card on x8 lanes…
1 x x8 for the 10GbE / X520-DA2 card, and
1 x x1 for the 2.5GbE / I226 card.

G

The M1015s are PCIe 2.0. The 9305-16i is PCIe 3.0.

x8 of PCIe 2.0 is 4 GB/s and x8 of PCIe 3.0 is 8 GB/s. So the overall bandwidth of two older cards vs the one newer card is the same, but you’re using up 2 slots instead of one. But also, a single HDD isn’t going to do more than 0.25 GB/s best case, and your Seagates probably less. So, 8 of your HDDs are only going to use half the PCIe bandwidth of an M1015 at a full concurrent workload, and 16 HDDs would only use half the PCIe bandwidth of a 9305-16i at a full concurrent workload.

So, I’d base your decision on other factors. The PCIe isn’t going to be a bottleneck in either scenario. If you can keep the M1015s and find a motherboard with enough slots, that’s fine, but the 9305 would need one less slot, assuming the motherboard you get has PCIe 3.0 or later, which I think they all do from the generation you were discussing.
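If it helps to see that arithmetic written out, here’s a quick sketch (assuming ~250 MB/s best case per HDD, which your IronWolfs likely won’t sustain):

```python
# The same arithmetic as a quick sketch, assuming ~250 MB/s best case per HDD.

HDD_GBPS = 0.25  # assumed best-case sequential throughput per HDD

options = {
    "2x M1015 (PCIe 2.0 x8 each)": {"slot_gbps": 2 * 4.0, "slots": 2},
    "1x 9305-16i (PCIe 3.0 x8)":   {"slot_gbps": 8.0,     "slots": 1},
}

drive_gbps = 16 * HDD_GBPS  # 16 HDDs flat out is only ~4 GB/s

for name, o in options.items():
    print(f"{name}: {o['slot_gbps']:.0f} GB/s of PCIe over {o['slots']} slot(s); "
          f"16 HDDs top out around {drive_gbps:.0f} GB/s")
```

Either way the drives run out of steam well before the PCIe does.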
