Drives for the HL

I think I’ll need caddies, but I’m not entirely sure. I’ve got about 20 brand-new(ish) 900GB 10K IBM drives from eBay. They are 9vx066-039 2.5-inch SAS drives. They should fit the backplane and just need caddies, yes?

I found the STL files (Printables), but note that they’re for SSDs. The drives I have are 2.5" (referenced above), but I’m not sure which of the STL files would be most appropriate for them.

@doodlemania - The Printables file is designed to adapt a 2.5" drive with a height of 7mm. If your SAS drives are thicker than 7mm, then they are not compatible with that caddy.

These backplanes support both SATA and SAS.


@doodlemania There are two caddies available - one for 7mm and one for 15mm. I printed both and test-fitted them with a ~7mm SATA SSD and a 14mm SAS SSD - both fit well.

My only nit-pick is the 7mm caddy has about 1mm of space between the face of the drive and the support, so it’s a little loose, but I suspect that when inserted it lines up fine. I may print a drive cage piece to test that theory.

Your SAS drive should fit in the 15mm caddy. Measure the thickness of the drive and if it’s ~15mm / 0.60" or less, it should be fine.


I wonder what capacity drives would be best for $/GB as well as performance and raw capacity for the HL15… do we go with 8TB drives or 16TB? Also, if we have 10Gb networking, are SSD caches supported in ZFS, or is caching RAM-based?

Depending on your tolerance for spend, I suspect you can go as big or small as you want in terms of size. I’m going to experiment with several HBAs, as I opted for just the case/power supply version. Since I want to optimize for random reads (lots of VMs on my little compute cluster), I think two HBAs will be best, paired with a motherboard that has plenty of PCIe bandwidth. As for networking, sure, 10Gb is nice, but I’ve yet to saturate my 1Gb when booting all my VMs (it does come close). For me, stupid amounts of RAM are critical, as I plan on running TrueNAS on top of this precious kit. I found in a previous incarnation of my lab that cramming as much RAM into the server as possible really made ALL of my spinning metal feel like SSDs. At one point, I had 384 GB of RAM cooking… that’s entire VHDs in memory.


@Lavavex Depending on how you build your RAID, you can cap out a 10Gb network with 15 HDDs.

Typically an HDD can handle 100-150MB/s sequential transfers. If you build a RAIDZ2 (RAID6 equivalent) with 15 HDDs, you get 13 data drives and 2 parity drives, so 13 × 100MB/s = 1300MB/s, whereas a 10Gb/s network is capped at about 1250MB/s.

If you are looking for higher random I/O and higher IOPS, the more vdevs you have the better with ZFS. Each vdev contributes roughly one disk’s worth of IOPS, so multiple vdevs will give you a lot more IOPS, but will cut down on usable storage depending on what type of RAID you use.
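To make that trade-off concrete, here’s a rough back-of-envelope sketch for 15 bays. The 8TB drive size, ~100MB/s streaming, ~150 IOPS per disk, and the three example layouts are all just illustrative assumptions; real ZFS numbers vary with record size, padding, and fragmentation:

```python
# Rough comparison of ZFS layouts for a 15-bay chassis (illustrative only).
DRIVE_TB = 8        # assumed drive size
STREAM_MB_S = 100   # assumed per-disk sequential throughput
DISK_IOPS = 150     # assumed per-disk random IOPS

layouts = {
    "1 x 15-wide RAIDZ2": {"vdevs": 1, "disks_per_vdev": 15, "redundancy": 2},
    "3 x 5-wide RAIDZ1":  {"vdevs": 3, "disks_per_vdev": 5,  "redundancy": 1},
    "7 x 2-way mirrors":  {"vdevs": 7, "disks_per_vdev": 2,  "redundancy": 1},  # 14 disks, 1 spare
}

for name, l in layouts.items():
    data_disks = l["vdevs"] * (l["disks_per_vdev"] - l["redundancy"])
    usable_tb = data_disks * DRIVE_TB                 # raw usable capacity
    stream = data_disks * STREAM_MB_S                 # sequential ceiling, MB/s
    iops = l["vdevs"] * DISK_IOPS                     # ~1 disk of IOPS per vdev
    print(f"{name:22s} ~{usable_tb}TB usable, ~{stream}MB/s seq, ~{iops} IOPS")
```

Roughly speaking, the single wide RAIDZ2 gives the most usable space and enough sequential speed to fill a 10Gb link, while splitting into more vdevs trades capacity for random IOPS.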

ZFS by default will use your spare RAM as a read cache (the ARC) to increase performance, but it also supports LOG and CACHE vdevs, which would consist of fast storage such as SSDs or NVMe drives.
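If you’re curious how much of your RAM the ARC is already using before adding a CACHE (L2ARC) device, a quick check on a Linux host with OpenZFS looks something like this (it just reads the kernel’s arcstats file):

```python
# Print OpenZFS ARC size and hit ratio on Linux.
# Assumes /proc/spl/kstat/zfs/arcstats exists (OpenZFS on Linux).
def read_arcstats(path="/proc/spl/kstat/zfs/arcstats"):
    stats = {}
    with open(path) as f:
        for line in f.readlines()[2:]:      # skip the two header lines
            name, _type, value = line.split()
            stats[name] = int(value)
    return stats

arc = read_arcstats()
size_gib = arc["size"] / 2**30
max_gib = arc["c_max"] / 2**30
hits, misses = arc["hits"], arc["misses"]
hit_ratio = hits / (hits + misses) * 100 if hits + misses else 0.0
print(f"ARC: {size_gib:.1f} GiB used of {max_gib:.1f} GiB max, "
      f"hit ratio {hit_ratio:.1f}%")
```

Roughly speaking, a high hit ratio with the ARC pinned at its max is the “more RAM helps” case; a workload the ARC can’t hold is where an L2ARC device starts to make sense.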

This article explains the differences between the types of vdevs in ZFS.


Sweet. Once I get drives and switch over to Prox/Houston, this is going to be a fun time of learning and transferring data…


I have opted for 10 x 8TB refurbished HGST drives from eBay and a mishmash of SATA SSDs I have lying around. I will have Proxmox boot off a RAID 1 pair of 500GB NVMe SSDs.

Will you run Proxmox off an on-motherboard pair of M.2s, or dedicate whole drives from the enclosure to your setup?

I will run it off a pair of M.2 SSDs on the motherboard.

Why a pair? Is this a mirrored setup for redundancy or speed, or both?

For redundancy, as I would like to not lose my virtual machines to an NVMe drive failure. Of course I will be making backups, but it never hurts since NVMe drives are so cheap these days.


I’m going to be installing Seagate Exos X20 SATA drives into my HL15s. I’ve previously seen 275MB/sec out of these drives individually, and I’m pairing them with 9500-16i HBAs and ConnectX-5 100GbE NICs.

I haven’t decided where to go for NVMe. The H13SSL-N has a couple of Gen4 M.2 slots, which will likely be good for the OS, and then I need to decide between MCIO-attached NVMe, using a Gen5 x16 slot and finding a bifurcation card, or using a Gen5 x16 slot with a Gen4 x16 bifurcation card for now.
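Out of curiosity, a quick sanity check on where the bottleneck lands with that combo. The per-drive figure is the 275MB/sec quoted above; the NIC and HBA ceilings are rough theoretical numbers I’m assuming (9500-16i as PCIe Gen4 x8), not measurements:

```python
# Back-of-envelope: aggregate throughput of 15 Exos-class drives vs the
# rough ceilings of a 100GbE NIC and a PCIe Gen4 x8 HBA (assumed figures).
DRIVES = 15
PER_DRIVE_MB_S = 275        # outer-track figure quoted above
NIC_MB_S = 100_000 / 8      # 100Gb/s ~= 12,500 MB/s line rate
HBA_PCIE_MB_S = 8 * 1970    # Gen4 x8 ~= 15,760 MB/s, ignoring protocol overhead

hdd_total = DRIVES * PER_DRIVE_MB_S
print(f"15 drives:   ~{hdd_total} MB/s")
print(f"100GbE NIC:  ~{NIC_MB_S:.0f} MB/s")
print(f"Gen4 x8 HBA: ~{HBA_PCIE_MB_S} MB/s")
# On these assumptions the spinning disks are the limit; the NVMe side is
# what would actually make use of the 100GbE links.
```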

Seagate Exos drives are pretty solid. I have also been using them in my current NAS and pretty much anything else I could stick them in that needs high capacity. It’s very rare that I have a failure.


Am I a crazy person for wanting to scoop up a few of these guys?

Seagate 12TB (eBay)


We use the Seagate Exos drives for our enterprise server and love them.
