Do the Molex connectors draw equal power on the HL15 PDB?

Does each of the four Molex connectors on the power distribution board power a subset of the backplane, or are all the Molex inputs combined to feed the total drive and fan load?

I.e., is it something like:
J1 - bays 1-1 to 1-4
J2 - bays 1-5 to 1-8
J3 - bays 1-9 to 1-12
J4 - bays 1-13 to 1-15 and fans?

The reason I’m asking is: assuming a fully loaded unit with 15 drives of the same model, will one Molex tend to draw less power and be a better candidate to daisy-chain power for another drive off of? Does this change if the fans are powered by the mobo rather than through the PDB?

If so, which one? The one at the bottom?

All the Molex connectors are tied together on the PCB of the distribution board, meaning all of the inputs share the load of all the drives.
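
For a rough sense of what that shared loading means in numbers, here's a back-of-the-envelope sketch. The per-drive wattage and the 12 V-only simplification are assumptions for illustration, not HL15 or drive-spec figures:

```python
# Back-of-the-envelope comparison: segregated vs. shared Molex loading.
# All wattage figures below are illustrative assumptions, not measured values.

DRIVES = 15
MOLEX_INPUTS = 4
WATTS_PER_DRIVE = 8.0   # assumed steady-state draw for a 3.5" HDD
RAIL_VOLTS = 12.0       # ignoring the 5 V rail split for simplicity

total_watts = DRIVES * WATTS_PER_DRIVE

# Hypothetical segregated layout (4 + 4 + 4 + 3 drives per input):
segregated = [4, 4, 4, 3]
print("Segregated (hypothetical):",
      [f"{n * WATTS_PER_DRIVE / RAIL_VOLTS:.2f} A" for n in segregated])

# Actual HL15 PDB behaviour per the answer above: all inputs are tied
# together, so each Molex carries roughly an equal share of the total load.
shared_amps = total_watts / MOLEX_INPUTS / RAIL_VOLTS
print(f"Shared (actual): ~{shared_amps:.2f} A per Molex input")
```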


Thanks @Hutch-45Drives.

The use of multiple cables is more of a safety thing, so no one cable is working too hard. If I had to put money on it, that's also why they didn't just use one cable with 4 Molex plugs.

And people often forget that the drive end of the plug is only supplying one drive and is well within its limits, while the PSU side is dealing with ALL the load of every other plug and splitter-fed device on that cable.
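
As a quick illustration of that point, here's a sketch of how current stacks up on the PSU-side plug when drives are chained off one cable (the per-drive amperage is an assumed figure, not a spec):

```python
# How current accumulates along a daisy-chained Molex cable.
# The plug nearest the PSU carries the sum of everything downstream.
# Per-drive amperage is an assumed figure for illustration only.

AMPS_PER_DRIVE_12V = 0.7   # assumed 12 V draw per 3.5" drive while spinning

def psu_side_current(drives_on_cable: int) -> float:
    """Total 12 V current the PSU-side connector must carry."""
    return drives_on_cable * AMPS_PER_DRIVE_12V

for n in (1, 2, 4, 8):
    print(f"{n} drive(s) on the chain -> {psu_side_current(n):.1f} A at the PSU-side plug")
```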

This person didn't take that into consideration like 45 Drives did. It's also why I advise against all these splitters people use and why I prefer backplanes.

[image]


Thanks. I understood from other posts about the PDB that it's a "safety thing". I was just wondering, with 4 Molex connectors and 15 drives, whether the power draw was segregated into a group of drives per Molex or combined for all the drives across all the Molex inputs, because if it were segregated, presumably one Molex would only be powering 3 drives. Hutch confirmed that's not the way it works, though.

I just need to add power for one additional 3.5" drive, as I'm physically migrating an existing 16-drive ZFS pool into an HL15. I've ordered custom 2x Molex and Molex-to-SATA cables from CableMod. Buying 15 larger drives and doing a data migration isn't something I want to tackle right now.


Yeah, regular RAID arrays and ZFS pools are a lot of work to resize and such. It's why I use Unraid on my HL15. I do have an NVMe ZFS pool in my portable Unraid server.

I often recommend PCPartPicker as a guesstimate tool, but we've crossed paths enough that I'm confident you've already thought that out. Just adding it for anyone else who may stumble into this thread.