Performance challenge tied to: 45HomeLab - Doubling the Transfer Speeds of the HL15

Hey everyone!

I'm assuming everyone here is coming straight from my newly released video "45HomeLab - Doubling the Transfer Speeds of the HL15". If you haven't checked it out, make sure to head here to watch it so you have the full context for this thread!

Alright, so let’s get to it. In my video I threw down a challenge for all you 45 homelabbers out there! I had some fun getting SMB multichannel configured on the HL15 to make use of that second 10Gbit connection on the server, and was able to hit some really great speeds.

I spent about 20 minutes tweaking a few parameters to get a really great speed, but I deliberately left room for more tuning; there is a lot more to get out of it…

So, with that in mind – I am challenging y'all to beat my performance with your HL15. The winner will get some really great 45Drives swag shipped out to them!

There are specific criteria to be eligible for the official prizes, but even if you don't meet them all, don't worry: we encourage you to take part anyway! And if you don't have everything needed for this challenge, don't fret – there will be lots of other opportunities to take part and win some cool stuff!

Here are the rules and the parameters to be eligible to receive the winner's award and 45Drives swag:

  • The server used in the challenge must be an HL15 chassis

    • The electronics are your choice. All CPUs, RAM and motherboards are accepted.
  • The server must be running on 10Gb/s networking

    • The challenge here comes from making use of multiple 10Gb connections to exceed the 10Gb/s performance limit of a single interface
    • SMB multichannel or bonding are both acceptable ways to achieve your numbers (see the Samba sketch after this list)
  • The storage used for the tests must be 15 HDDs or fewer.

    • There shall be no use of dual-actuator HDDs (Seagate MACH.2, etc.)
    • There shall be no use of NVMe or SSDs in the form of caching (bcache, OpenCAS, LVM cache, etc.)
  • ZFS must be used for the file system

    • There shall be no use of helper VDEVs such as SLOG, L2ARC, or special VDEVs
    • All VDEV arrangements are valid, including simple stripes. (In the video, we used RAIDZ2.) Bonus points if you can match or beat my performance with some form of RAID protection
  • The benchmarks must be run on a client accessing an SMB share backed by the ZFS pool of 15 HDDs or fewer

    • Test 1 – CrystalDiskMark with a queue depth of 32, a single thread, a 1M block size, and a 1GiB file size (SEQ1M Q32T1; instructions below); record the performance with screenshots
    • Test 2 – Create a 10GiB file (instructions below), then:
      copy the 10GiB file FROM your client's local drive TO the HL15 ZFS SMB share and record the performance with screenshots
      copy the 10GiB file TO your client's local drive FROM the ZFS SMB share and record the performance with screenshots
  • Include all tuning parameters used to achieve the result

    • For bonus points: include the before-tuning numbers and the after-tuning numbers.
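
Since SMB multichannel is the headline technique here, a minimal sketch of what enabling it can look like on the Samba side (my illustration, not the exact config from the video; the addresses and speeds are placeholders for your own NICs):

    [global]
        # Enable SMB3 multichannel (Samba 4.4 and later)
        server multi channel support = yes

        # Advertise both 10Gb interfaces with speed/capability hints;
        # replace the placeholder addresses with your own
        interfaces = "10.0.1.10;capability=RSS,speed=10000000000" "10.0.2.10;capability=RSS,speed=10000000000"
        bind interfaces only = yes

Once the server advertises multiple interfaces, clients negotiate the extra channels automatically; on a Windows client you can verify with Get-SmbMultichannelConnection in PowerShell.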

How to add a submission to the challenge:

Once you have something ready to submit, take screenshots of all tests in progress showing the performance numbers.

We have created a bash script just for this challenge, which you can find here: https://scripts.45homelab.com/perf-challenge.sh

Note: If you do not meet all of the requirements for the rules listed above, this script may fail on some of the steps, but it should still complete and create an output file.

Follow these instructions to pull it down:

Open up your HL15’s terminal and make sure you are logged in as root.

From the terminal run: curl -LO https://scripts.45homelab.com/perf-challenge.sh

Next, make it executable: chmod +x perf-challenge.sh

Finally, run the script: ./perf-challenge.sh

It will output a file to /tmp/test_env.txt

Gather up your screenshots and the test_env.txt file and we will follow up on this post with a place for you to upload them.


CrystalDiskMark instructions:

You can download a clean copy of CrystalDiskMark from guru3d here: Crystal DiskMark 8.0.4 Download

Once downloaded and installed, open CrystalDiskmark64.exe. By default, the first test should be very close to the proposed test: it is most likely set to SEQ1M Q8T1, and we want to set it to SEQ1M Q32T1. We do this by going to the Settings header and clicking Settings in the dropdown.
[screenshot: CrystalDiskMark Settings menu]

Next, change the queue count for test 1 to 32 and make sure it matches the screenshot below:
[screenshot: Settings dialog with Test 1 set to 32 queues, 1 thread]

Click OK.

From here, you must select your network-attached SMB share as the target. By default, it will point to the C:\ drive. Click the drop-down menu and click "Select Folder".
[screenshot: target drive drop-down with the "Select Folder" option]

Choose the full path to the SMB share.
[screenshot: selecting the full path to the SMB share]

You are now ready to run the first test. There is no need to run “ALL” for the purposes of this challenge. We are simply looking for the results from the first SEQ1M Q32T1 test.

10GiB File creation on Windows:

NOTE: If you want to achieve GB/s-or-more speeds that show the full performance of your HL15 array, you will need a local NVMe drive in your workstation. If you do not have one, you can instead create a temporary RAM disk using this tool: ImDisk Toolkit download | SourceForge.net. The RAM disk will act as a great temporary substitute drive to copy the file from to the HL15 SMB share, and then back again from the share.

Open Windows Terminal (or CMD if it is not installed).

"cd" into the local directory you wish to put the new file in. In my example below, I'm using the G:\ drive:
[screenshot: terminal after changing to the G:\ drive]

Next, we create our 10GiB file:

fsutil file createnew testfile.img 10737418240

This will instantly create the file in the directory you are sitting in, and now we can use this file for our transfer to and from our HL15 SMB share.
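
As an aside of my own (the official instructions assume a Windows client): if your client happens to run Linux, the equivalent instant-allocation command would be:

    fallocate -l 10G testfile.img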

So, there we have it! The gauntlet has been thrown down and I can't wait to see what everyone comes up with. Good luck, everyone. I encourage everyone taking part to talk about the parameters and tunings you are trying out! I will definitely monitor the thread and help out with questions :blush:

4 Likes

Just a follow-up as well: all submissions can be sent to info@45homelab.com

Set the subject to: Performance challenge submission

Attach the required images/documentation

Hope the competition is still running after I get my HL15

2 Likes

Just firing mine up, I’m no pro but I love a friendly challenge… I’m in…

7 Likes

Fun challenge, love it, but are you going to publish the results? And btw, is there a deadline?
I don't have an HL15 (want one!), but I'll try to beat the performance anyway on my homelab hardware!

3 Likes

Hey!

We will definitely publish the results. We are hoping that the community members working on the challenge will share with each other the different things they're trying in order to improve performance.

I will definitely be a guide :slight_smile: However, I do want to encourage people to get their hands dirty and experiment. If I see people going off-road toward things that are unhelpful or detrimental, I will definitely let y'all know.

There is no hard deadline right now. I want to give people time to get their submissions in and anyone waiting on their HL15 to have some time to get it set up and try!

I definitely encourage you, and anyone else who might not have an HL15, to do it!! While it won't be eligible for the metal, Protocase-made "Performance Challenge" award, there may still be some swag involved for people who hop in, take part, and post about it :slight_smile:

2 Likes

Heh, I would totally join the challenge… but in my case, could we do a challenge for who gets the least bandwidth? Because I’m not sure a Raspberry Pi can pump through 20 Gbps :crazy_face:

7 Likes

Can we utilize 2TB RAM DISKS, or is that cheating?

1 Like

Hey Jeff! For the Windows client, a RAM disk is fair game, my friend! It will just make sure the bottleneck lies solely on the HL15 side. So go nuts. Obviously no RAM disk on the HL15, heh - 15 HDDs or fewer with no helper VDEVs :upside_down_face:

I heard RGB makes it go faster…

9 Likes

I will need to grab a few more drives for this competition, but I do have a 25G adapter in mine. I should probably also put some drive adapters in there, but this is a "Lab" system, right? :slight_smile:

10 Likes

I don't have a pre-built system, as I used hardware I already had. I have a feeling I'd be cheating, haha.

2 Likes

@Glitch3dPenguin
I think your unit will work:

The server used in the challenge must be an HL15 chassis

  • The electronics are your choice. All CPUs, RAM and motherboards are accepted.

2 Likes

@Glitch3dPenguin , Bring it ALL!
Let’s see if someone can make the backplane the bottleneck.

2 Likes

Haha well maybe I’ll have to slap some of my extra hardware in and see what I can do. I have a buddy that works at an enterprise SI and I can get some pretty spicy hardware from him for dirt cheap.

2 Likes

I've seen the videos on your channel to prove it!

1 Like

The "don't do this" list? That's literally a checklist of how I built my system! It's basically a solid contiguous brick of high-speed storage. So what you're saying is my system is right out? :smiley:

I can currently do 3.5 gigabytes/sec on Windows with no special tuning; just drag and drop files in Windows Explorer… but I'm using two metadata special devices and 12 dual-actuator hard drives.

And that is via a single 100GbE interface.

Does this mean I could do a pure SAS SSD array? I could throw in 15 Samsung PM1633s; those are pretty cheap on the secondary market. Pretty sure I could get close to 50 gigabit on a good Windows machine across a couple of HBAs. Maybe. Each drive can do close to 1 gigabyte/sec.

For the NIC, instead of 1x100GbE, I could maybe do 2x40GbE PHYs configured as an 8x10GbE logical LAG to really thread the needle on the rules here? haha.

If we toss out SMB and do it via 100G Omni-Path, I'm getting sub-1ms latencies all day long… at which point any kind of HDD is too slow. Though I could maybe do something with those PM1633s… the problem is I can't plug those directly into my EPYC motherboard the way I am with the dual-actuator HDDs, and I'll probably need two HBAs to potentially push past 50 gigabit.

5 Likes

All that knowledge and you’re trying to sidestep the rules already??? :slight_smile: haha.

The fact that all of the "don't do this" stuff are things you did in your production server is actually a sign that you're doing something right :slight_smile: The challenge is built to take away all of the obvious advantages that we know make a ZFS array sing! hehe.

The purpose of this challenge centers on the additional tuning we can do once we've made all of the right architecture choices… and to make that tuning more visible at 10G, it made sense to remove all of the big advantages.

For example, hitting 1.7GB/s with 25/40/100GbE on a 15-drive array is fairly trivial. Attempting to use software bonding or SMB multichannel on the on-board 10Gb NICs to split the load across two NICs is less trivial (see the bonding sketch below).
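
For anyone taking the bonding route, a minimal sketch using NetworkManager's nmcli (the interface names are placeholders, balance-rr is just one mode choice, and your switch configuration has to cooperate):

    # Create the bond and enslave both 10Gb NICs (names are placeholders)
    nmcli connection add type bond con-name bond0 ifname bond0 mode balance-rr
    nmcli connection add type bond-slave ifname enp1s0f0 master bond0
    nmcli connection add type bond-slave ifname enp1s0f1 master bond0
    nmcli connection up bond0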

Big picture - we want to see what additional performance people can get from things like NIC tuning (buffers, interrupt mapping, drivers, etc.), ZFS tuning, system tuning, and disk tuning (kernel parameters: scheduler, read-ahead, nr_requests, vm.swappiness, etc.). A few illustrative examples follow below.
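
To make those categories concrete, here are a few illustrative knobs of the kind I mean. The values are placeholders rather than my go-to numbers, and the device name is an assumption you would swap for your own:

    # NIC tuning: larger socket buffers for sustained 10Gb+ transfers
    sysctl -w net.core.rmem_max=67108864
    sysctl -w net.core.wmem_max=67108864

    # System tuning: favor the page cache, avoid swapping under load
    sysctl -w vm.swappiness=10

    # Disk tuning, per device ("sda" is a placeholder)
    echo mq-deadline > /sys/block/sda/queue/scheduler
    echo 4096 > /sys/block/sda/queue/read_ahead_kb
    echo 256 > /sys/block/sda/queue/nr_requests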

Can't wait to see what some people come up with. I will definitely share some of my go-to tunings for different types of arrays on HDD, SSD, and NVMe, especially ones geared towards specific workloads.

1 Like

Hey guys, so I just posted an update to this challenge if anyone is interested in taking part! You can find the video here: https://www.youtube.com/watch?v=7sRDI4WHvDo&t=

So, the same steps apply for entering the performance challenge. We will still be using the same tests to qualify for entry into the challenge.

What is changing however, is the requirements for entering the challenge!

Here are the updated requirements for entry:

  • ZFS must be used
  • 15 HDDs or fewer
  • Any VDEV combination is valid
  • Support VDEVs are allowed
  • There will be entry categories for each network tier:

    • 10G
    • 25G
    • 40G
    • 100G
  • SMB multichannel is encouraged
  • Dual-actuator drives are valid but will be graded on a curve :slight_smile:

Any additional information on how to enter and what is required can be found at the top of this post.

1 Like