Alongside the HL15, I noticed lots of buzz from 45Drives around Ceph. The main theme of the discussions was demystifying it and encouraging people that it’s not as out of reach as it may seem.
Now, I’ve been following 45Drives for some time through many different podcasts and forums. So I may be mixing up the context of discussions between their enterprise and homelab talking points.
From my current understanding, Ceph is not unlike Gluster in that it’s best (or only applicable) in multi-node setups. But I can’t help wondering if this is one of the myths 45Drives is trying to debunk.
Is Ceph viable on a single node such as the HL15?
If so, can anyone help point me in the right direction to some required reading/viewing?
I use the Ceph integration in Proxmox across 3 nodes. I’m planning on adding my HL15 as a fourth node (and eventually a fifth).
I tried Gluster before, but Ceph is a far more capable solution IMHO. I’m not sure I will ever need to use ZFS again; no snapshot replication is required with Ceph.
Don’t forget that Ceph also provides RadosGW, which is an S3-compatible API. I think there are NFS and iSCSI gateways in there somewhere too. And don’t forget CephFS, a POSIX-compatible file system.
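To make the S3 bit concrete: once RadosGW is up, any standard S3 client can talk to it. Here’s a minimal sketch in Python with boto3; the endpoint URL, bucket name, and keys are placeholders for whatever your own RGW setup (e.g. a user created with `radosgw-admin user create`) hands you.

```python
import boto3

# Hypothetical RadosGW endpoint and credentials -- swap in your own.
s3 = boto3.client(
    "s3",
    endpoint_url="http://hl15.local:7480",   # RGW listens on 7480 by default
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Buckets and objects behave like regular S3, just backed by your Ceph pools.
s3.create_bucket(Bucket="homelab-backups")
s3.upload_file("notes.txt", "homelab-backups", "notes.txt")
print([b["Name"] for b in s3.list_buckets()["Buckets"]])
```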
Thanks Bill! That is encouraging, to say the least. I’ve come across it mentioned in Proxmox forums and threads as well. However, those setups seem to be clustered rather than standalone nodes.
I may just need to wrap my head around it some more, as I still have more general questions than I do focused ones at this point. I’m almost certain a stickied post or thread regarding Ceph on the HL15 would be welcomed as more people join the forums.
Hi @orix. With Ceph it is recommended to have a minimum of 3 nodes in your cluster. Without going into too much detail, this is so that there are no split-brain decisions, as there must be an odd number of decision-makers to form a majority.
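If it helps, the math behind that recommendation is just majority voting. A tiny illustration (nothing Ceph-specific, just counting) of why an even monitor count adds no extra fault tolerance:

```python
# Quorum needs a strict majority to agree, so the number of monitor
# failures a cluster can ride out is (n - 1) // 2.
def failures_tolerated(monitors: int) -> int:
    """How many monitors can fail while a strict majority still survives."""
    return (monitors - 1) // 2

for n in range(1, 6):
    print(f"{n} monitor(s): tolerates {failures_tolerated(n)} failure(s)")
# 1 -> 0, 2 -> 0, 3 -> 1, 4 -> 1, 5 -> 2  (even counts buy you nothing extra)
```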
Although it is possible to build a single-node Ceph cluster (or one with any number of nodes you wish), it is not advised to do so.
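For anyone curious what “possible but not advised” looks like, the usual trick is telling CRUSH to spread replicas across OSDs instead of hosts. The rough sketch below just wraps the standard ceph CLI from Python; the pool name “hl15pool” is made up, and this is strictly a lab experiment, not something to trust with data you care about.

```python
# Sketch: point a pool at a CRUSH rule whose failure domain is "osd" so a
# single host can hold all replicas. Assumes a cluster already bootstrapped
# on one node and a pool named "hl15pool" (hypothetical).
import subprocess

def ceph(*args: str) -> None:
    subprocess.run(["ceph", *args], check=True)

# Replicated rule that places copies across OSDs rather than hosts
ceph("osd", "crush", "rule", "create-replicated", "single-host", "default", "osd")

# Use that rule and relax replication so one box can satisfy it
ceph("osd", "pool", "set", "hl15pool", "crush_rule", "single-host")
ceph("osd", "pool", "set", "hl15pool", "size", "2")       # two copies, same host
ceph("osd", "pool", "set", "hl15pool", "min_size", "1")
```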
Ceph is generally designed for high availability and multi-user throughput, whereas a single server using, let’s say, a ZFS filesystem is much better at handling single-user throughput.
For a HomeLab environment, I would recommend using something like ZFS for your RAID.
This is honestly all gibberish to me. If you can have a 3-or-more-node storage server in your homelab, that’s just a lab now; kinda overkill for what most would consider a homelab.
Thanks for this @Hutch-45Drives! I had a suspicion that this was the case. ZFS is still perfect for my use cases, but I’m always willing to try something new.
@Lavavex I’m not sure I entirely agree here. Multi-node storage servers are the core of some homelabs. No two homelabs will be alike or share the same requirements, as we are all learning something different. I’ve had vSAN running for VMware certs at work on a few small 1U servers. Homelabs are not production environments and allow us to try new things we may not have the ability to at work. I’ve seen Docker swarms across several NUCs, VMware/StarWind vSAN across several hosts of different capacities, and homelabs with several physical switches and routers for CCNA and other Cisco certs.
I don’t have any prior experience with Ceph, but I have come across it on several occasions while researching how I could migrate my VMs and self-hosted services to a high-availability setup. So it is definitely something I want to learn more about!
But as has been mentioned in this thread, one needs a minimum of 3 nodes, and I am lacking spare hardware. I’m currently evaluating various budget options for building such a cluster, and even came across this video by none other than @geerlingguy!
Indeed! And there are also some great videos on Ceph clustering with TinyMiniMicro x86 PCs, like this one by apalrd’s adventures: https://www.youtube.com/watch?v=Vd8GG9twjRU
Awesome! I’ll toss that video on in a bit. I blame Jeff for single-handedly causing the Pi shortage these last few years, and refuse to accept any factual evidence that it’s not true!!! Haha.
@geerlingguy, that also looks very, very promising! I’ll soon have a few DL20 G9s that need something to do, as they lack the expansion capabilities (1U) for my original plans. They are, however, low on power consumption (~30W idle with ESXi) and have nice Intel SSDs, so this may be a perfect use case. Thanks for the link!