TrueNAS Tuning - Sync, ZIL, SLOG

Hi All!

I have recently completed my entire NAS rebuild with my HL15 and have finally migrated all my data to the new pool I set up. Now I am looking to start flipping some switches and turning some dials in TrueNAS to get the fastest read/write performance I can out of my hardware. I figured I’d discuss this with the community to try to gain a better understanding of my system.


Current Setup:

  • TrueNAS SCALE
  • 1 vdev - 6 disks - RAIDZ1 (I assume the ZIL lives on the pool itself, since I have not added a SLOG yet)
  • 10GbE Networking
  • Xeon E5-2620 v4 on an ASUS X99-WS Board (Reused some Older hardware)
  • 64GB DDR4 RAM
  • Sync writes enabled on the pool
  • LZ4 Compression
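
For reference, those settings can be double-checked from a shell (using tank as a placeholder pool name):

    # Show sync and compression settings for the pool and its datasets
    zfs get -r sync,compression tank

    # Show the vdev layout and pool health
    zpool status tank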

I can read at a max of about 3.4 Gb/s (continuous, 15 GB file)

I can write at a max of about 2.1 Gb/s (continuous, 15 GB file)


I feel that this could be higher. I know I can manually raise the amount of memory the ARC will use beyond the 50% of RAM it defaults to. I also know I can disable sync.
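
For example (a sketch, assuming SCALE’s Linux OpenZFS module; the value is in bytes, must be written as root, and resets on reboot):

    # Current ARC maximum in bytes; 0 means the built-in default
    cat /sys/module/zfs/parameters/zfs_arc_max

    # Allow the ARC to grow to 48 GiB (48 * 1024^3 bytes)
    echo 51539607552 > /sys/module/zfs/parameters/zfs_arc_max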

However, my biggest question is whether a mirrored SATA SSD pool would be fast enough to act as a SLOG. My goal is to hit at least 5 Gb/s, and I feel that my setup should allow for this with the correct tuning.


As a disclosure, I have hella backups of my system. I am not worried about data loss. I run no databases, VMs, or services on my NAS that would be hurt by in-flight data loss. Turning sync off is an option for me. BUT if I do that, is the system still using the ZIL/SLOG?
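
From what I’ve read, with sync=disabled the ZIL is bypassed entirely, so a SLOG would sit idle; toggling it is a one-liner either way (tank is a placeholder pool name):

    # Treat all writes as asynchronous for the whole pool
    zfs set sync=disabled tank

    # Revert to the default behavior
    zfs set sync=standard tank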

Thanks for any pointers!

2 Likes

I’ll try to be more clear; I didn’t do a very good job. Post to follow.

2 Likes

Wow, is this really the case? If so, I misunderstood that when setting up my array. If this is true, could I split my 6 disks into 2 RAIDZ1 vdevs in the same pool?

((3-disk vdev) + (3-disk vdev)) to double the speed?
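
I.e., something like this at pool-creation time? (Device names below are placeholders, and I understand the existing vdev can’t be split in place; the pool would have to be destroyed and re-created.)

    # Sketch: one pool striped across two 3-disk RAIDZ1 vdevs
    zpool create tank \
      raidz1 /dev/sda /dev/sdb /dev/sdc \
      raidz1 /dev/sdd /dev/sde /dev/sdf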

Also, no: I am using 7,200 RPM HDDs. Thanks for the reply.

1 Like

This is me trying to simplify what I tried to say earlier - hopefully clearer.

For RAIDZ1, reads are usually faster than writes.
For sequential reads/writes, stripe width increases performance.
For random reads/writes you’re limited to the IOPS of a single disk (earlier I said “a single vdev can be no faster than a single disk”).

A ZFS mirror (with 2 disks) should give roughly 2x read performance.

For faster writes and reads, use multiple vdevs and stripe them. The more vdevs you stripe together, the faster your pool gets: both read and write throughput and IOPS aggregate across the vdevs.
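
As a sketch (pool and device names are placeholders), a pool can also be grown later by striping in another vdev, ideally matching the existing vdev geometry:

    # Stripe an additional 3-disk RAIDZ1 vdev into an existing pool
    zpool add tank raidz1 /dev/sdg /dev/sdh /dev/sdi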

2 Likes

Thank you for this clarification! This was super helpful and gives me a better understanding of how to increase r/w performance! I’ll need to go back and see if I can squeeze out a little extra performance.

Do you think that 2 SATA drives would be fast enough for a SLOG, or should I really only be using NVMe drives for that?

How big are those drives? RAIDZ1 is a pretty controversial and notorious choice for HDDs. The biggest sticking point is that if/when a drive fails, the resilver time is days, if not weeks in some cases. During that time the other drives are under higher-than-normal stress, and you’re already down your redundant drive.

Tom Lawrence has a few good videos worth a watch, specifically on using more smaller vdevs rather than one large vdev.

1 Like

But as a friendly PSA, remember RAID is not a backup.

I am using 14TB HDDs.

Yep, and that’s why I have multiple separate backups. I have already learned that lesson in my life.

1 Like

@Glitch3dPenguin,

I’d say with your setup and a 10 Gb connection, NVMe is the preferred route. If you were on a plain 1 Gb connection, SATA would be fine.

For a good write-up comparing NVMe, SAS, and SATA for ZIL/SLOG, check out:

Mirrored is preferred because if a special vdev fails, your zpool fails with it.
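
As a sketch (placeholder device names), attaching a mirrored log device looks like:

    # Add a mirrored SLOG to the pool
    zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1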

Now for homelabbing vs. the real world: enterprise drives have capacitors so any writes buffered in DRAM can complete in case of a power failure. Consumer drives lack this protection, so in-flight writes in DRAM can be lost on power loss.

Do you know you need synchronous writes (VMs, databases, or such), or are you just trying to improve write speeds to the pool?

1 Like

Absolutely, though I’d still prefer not having to restore a whole pool from backup if I can avoid it.

Now, just abiding by that doesn’t always work on its own, though. As in my case, one of my backup targets is a RAIDZ2 pool, which is where I want my redundancy. Backups still need fault tolerance.

1 Like

Thanks for the response and the STH post! I am going to read over it! Additionally, I appreciate all the helpful information.

To answer your question, I am just trying to improve write performance. I do not store any VM data on this NAS; its main uses are archival data storage, Plex media storage, and one share that Immich stores backup media to.

@Glitch3dPenguin ,
A SLOG probably won’t help in your use case. It’s not a write cache for increasing write speed; it’s basically a log file, and it is only read back after a power failure.

I tend to think of writes in terms of UDP/TCP, in that TCP connections use more steps than UDP. Asynchronous writes happen sort of like UDP: best effort, first come first served, etc., with no response given that the transaction has been recorded. Synchronous writes (like TCP) require an additional step: a reply confirming the transaction has been written to disk before more transactions are accepted.
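
A rough way to feel that difference on a dataset (a sketch; the path is a placeholder, and since zeroes compress to almost nothing under LZ4, treat the numbers as relative only):

    # Async write: blocks are acknowledged once buffered, before they hit disk
    dd if=/dev/zero of=/mnt/tank/test/async.bin bs=1M count=4096

    # Sync write: oflag=sync forces each block through the ZIL/SLOG path first
    dd if=/dev/zero of=/mnt/tank/test/sync.bin bs=1M count=4096 oflag=sync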

45Drives has some good info as well:

I realize everybody’s interest, reasons, and time for homelabbing differ, but I recommend setting up a SLOG (and killing it later). It’s easy to set up. Run some write tests and compare speeds. It’s a good way to get familiar with special vdevs, and a good way to start really understanding the ZFS filesystem that underlies TrueNAS or whatever your storage system is.
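
The add/remove cycle is only a couple of commands (a sketch; pool and device names are placeholders), and log vdevs can be removed live:

    # Add a single log device for testing
    zpool add tank log /dev/nvme0n1

    # ...run write benchmarks with and without it...

    # Remove it again when done
    zpool remove tank /dev/nvme0n1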

1 Like

I recently had a conversation with TrueNAS enterprise sales/engineers, as we’re looking at making a purchase at work. For VMware datastores, they did recommend a SLOG. IIRC it was a 16 GB overprovisioned SAS SSD, despite our pool being SATA SSDs.

I brought up the question of “What happens if that SLOG dies? Don’t we lose the pool?” Surprisingly, they said we would not, and that the pool would continue on just fine, as a SLOG can be thought of as an intent log, not a cache.

Well, I was debating whether to use some of my NVMe drives as SLOG/L2ARC devices or to use them as storage… but the “you should do it and test it for the experience so you learn how ZFS works” argument is convincing.

Can you recommend a good testing method to benchmark the differences?

fio or iozone. It takes some time and effort, but if you really want to dig into how your system performs with and without one, it’s worth it. Also, P1600X Optane drives are pretty cheap for what they are, and they make perfect SLOG drives. You can also look at other Optane drives for L2ARC/special devices, since most consumer NVMe drives aren’t all that fast once you hammer them with high-IO operations. There are other high-IO options out there depending on your budget and willingness to track them down (generally second-hand enterprise drives).
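
A starting point for a sync-write test might look like this (a sketch; the directory and sizes are placeholders):

    # Sequential sync writes (O_SYNC), the path a SLOG actually accelerates
    fio --name=slog-test --directory=/mnt/tank/test \
        --rw=write --bs=128k --size=4G --numjobs=1 \
        --ioengine=libaio --direct=1 --sync=1 --group_reporting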


…wtf? something’s bugged.

Ty for the heads up about using the Optane drives; I was just starting to think through those options.

No idea on the 5-month thing. Took a look this week, and it looks like prices on P1600Xs have actually gone up a bit, but you can still find the 58 GB versions for under $40 on eBay from Newegg. Two of those would make perfect SLOG drives unless you happen to be running a 100 Gb network.

Well, problem solved! I was not convinced I was getting the speed I should have with the number of disks I had. So this is what I did:

  1. Added 4 SSDs in their own pool (trying to eliminate any disk bottleneck).
  2. Moved and read some files from the test SSD pool just to see if the performance was any better. (It wasn’t.)
  3. Ran iperf3 tests to the NAS from 3 other 10GbE devices I had (roughly the commands below). Even with iperf3 taking the disks out of the equation, I was still getting the same speeds.
  4. Took a guess and figured it was my network card that was the issue. (It was!)
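
For anyone repeating the network test, the iperf3 side is roughly this (the NAS IP is a placeholder):

    # On the NAS
    iperf3 -s

    # From each client; -P 4 runs four parallel streams
    iperf3 -c 192.168.1.50 -P 4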

So the NIC I WAS using was an older Chelsio T420-BT that came out of an older server; I moved it to this build since the card was unused. I have since bought and installed a Supermicro AOC-STGN-i2S and it SCREAMS!

Victory!

4 Likes