Excited that the HL15 is here, and looking forward to setting it up. I purchased the full build out, and acquired 15 of the Western Digital 10TB WD Red Plus 7200 CMR drives for this. My intended use is to have dual 10GbE NICs from the HL15 into a dedicated switch, and 2 dedicated xcp-ng hypervisor computers also linked at 10GbE into said switch. We’ll have 1 other tie breaker workstation added so the failover works well. The VMs don’t have a lot of use or traffic, and this is definitely an overkill build for what we’re doing. Most of them sit idle all day.
The exception to this is that we run a backup server that writes threaded backups to different storage containers. They have a proprietary format that allows deletion of older backups while maintaining daily change-level integrity, but this requires re-hashing and updating the containers (including the database that's also stored in the container), which is currently a very time-consuming process due to read/write speeds. I'm hoping to greatly improve this.
Total storage in use today is around 30TB, but I’m expecting that could double in the next 3 years.
Is anyone able to offer some best practice suggestions on how to set this up as a storage target for this? I’m comfortable with Ubuntu, and I think I understand the Houston UI doesn’t work with the latest LTS release of 22.04, is that correct?
I’m definitely looking for parity here, but I’ve never worked with ZFS. How should I set up my ZFS pool for a good performance mix, but still allowing for some simultaneous drive failure?
Would adding an NVMe SSD here for ZFS metadata be prudent? From my reading, I understand that keeping the index/metadata on an SSD greatly improves performance. Is there such a thing as an SSD cache? I'm not sure how many slots the HL15 supports, but maybe one of each? A large boot drive partitioned to hold the OS, with the rest housing the metadata… then the other for caching writes?
If I’ve missed the reading on this, please let me know. I looked around the forums and tried to find a comprehensive guide, but I’m not seeing it. I have read a ton of threads that have my head spinning for sure. I appreciate any advice anyone can provide.
Hi @JohnLCS and welcome to the Forum.
First off, how to configure the storage depends heavily on your use case. If you need high IOPS, a mirror setup would be best.
If you are doing a lot of sequential transfers then RaidZ2 would work fine.
Our typical configuration for our enterprise servers would be RaidZ2 with the 15 drives. If you need more performance beyond that, we would look at adding NVMe/SSD drives for a special vdev for metadata, and/or SSD/NVMe drives for a read cache and a write log. It heavily depends on your workload, but generally RaidZ2 with 15 drives is enough to saturate a 10G connection.
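To make the RaidZ2 suggestion concrete, here is a minimal sketch of creating such a pool. The pool name `tank` and the short device names are placeholders (in practice you would use stable `/dev/disk/by-id/` paths), and the property values shown are common defaults rather than 45Drives recommendations:

```shell
# Sketch only: create one 15-wide RaidZ2 vdev in a pool named "tank".
# Device names are placeholders -- substitute your actual /dev/disk/by-id paths.
zpool create \
  -o ashift=12 \                 # align to 4K physical sectors
  -O compression=lz4 \           # cheap inline compression
  -O atime=off \                 # skip access-time updates on reads
  tank raidz2 \
  sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl sdm sdn sdo

# Verify the layout and usable capacity:
zpool status tank
zpool list tank
```

With RaidZ2, any two of the fifteen drives can fail simultaneously without data loss, at the cost of two drives' worth of capacity.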
Thanks for the reply. Yes, so the 10G saturation is good, and will make pulling data off of the array that much quicker. I'm not sure how else to characterize the on-HL15 speeds, and where to find improvement. The backup software will revisit all of the compressed data and reindex it. I'm not sure that it's totally decompressing it, but it's making changes to the data on disk and updating the index. This can be very time consuming, so I'd like to get as much performance here as we can. What I don't know is whether that's best done with a special vdev for metadata, a cache for reads and writes, or both. The software manufacturer said that the faster the storage is, the faster the reindex operations will be. Obviously we also want the VMs to be performant too.
Is there a flow chart or checklist I can look at to help make these decisions? When I have that, are there guides on how to set this up @45Drives or elsewhere online?
Hi @JohnLCS, we do not have guides on how to configure or deploy an HL15 beyond the guide you got on the HL15 website.
Normally for these kinds of recommendations, we would involve a storage architect who would gather the information on your use case and determine the best use for you.
A special vdev holds the pool's metadata, so it speeds up listing and finding files on the server. If you have a large directory structure with a lot of little files, this is where a special vdev will help.
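As a hedged illustration of adding a special vdev to an existing pool (the pool name `tank`, the dataset name, and the NVMe device names are all placeholders), note that the special vdev holds pool-critical metadata and must itself be redundant:

```shell
# Sketch: add a mirrored pair of NVMe drives as a metadata special vdev.
# "tank" and the device names are placeholders.
# The special vdev must be redundant -- if it is lost, the whole pool is lost.
zpool add tank special mirror nvme0n1 nvme1n1

# Optionally route small file blocks to the special vdev as well (per dataset):
zfs set special_small_blocks=64K tank/backups
```

The `special_small_blocks` property is what lets a metadata vdev also absorb the many tiny writes that come with large directory trees of small files.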
An L2ARC and a SLOG are devices we recommend when looking to handle more bursty workloads and get more throughput if the RAID can't keep up. We would always recommend maxing out your RAM before adding an L2ARC drive where possible. The L2ARC extends the in-RAM read cache (the ARC): it keeps recently and frequently used data on a fast SSD, so when you need to load a recent file the server knows exactly where to go instead of looking for it on the disks. A SLOG is different: it is a dedicated device for the ZFS intent log, which absorbs synchronous writes so they can be acknowledged quickly before being flushed to the main pool.
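For completeness, here is a sketch of attaching both device types to an existing pool. Again, `tank` and the NVMe device names are placeholders, and whether a SLOG helps at all depends on whether your workload actually issues synchronous writes:

```shell
# Sketch: add an L2ARC (read cache) and a mirrored SLOG to pool "tank".
# Device names are placeholders.
zpool add tank cache nvme2n1                # L2ARC: extends the ARC read cache
zpool add tank log mirror nvme3n1 nvme4n1   # SLOG: absorbs synchronous writes

# Watch per-vdev activity to see whether the cache/log devices are being used:
zpool iostat -v tank 5
```

Unlike a special vdev, an L2ARC device can be lost without harming the pool, which is why it does not need to be mirrored.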
There are plenty of guides online for setting up ZFS and vdevs.
If you would like more in-depth assistance, we do offer application time through 45Drives; you could purchase some time and we would be able to help you further.