Ceph help with 3 host Erasure coded 4+2 pool

I thought I’d try to post here, as I think I saw some info about this in a 45Drives post or YouTube video; I can’t recall. Do we have any Ceph experts?
I’m not sure what is relevant, so if you need more info or detail, let me know.
I’m using Proxmox with Ceph 19.2.2 installed on 3 hosts, with 4 OSDs per host.
I have set up an erasure-coded pool with 4+2.
I have created a CRUSH rule to use:
rule ec_pool_3host {
        id 2
        type erasure
        step set_chooseleaf_tries 50
        step set_choose_tries 150
        step take default class hdd
        step choose indep 3 type host
        step chooseleaf indep 2 type osd
        step emit
}
I’ve also tried chooseleaf firstn 2 type osd

I need 6 chunks placed. My goal is to distribute these across the 3 hosts where each host has a max of 2 chunks, so if any one host goes down I still have all PGs with 4 chunks available. I understand it’s risky and would not self-heal in that situation.

However, after placing data in the pool and working with it, I found that there are:

1 PG that is using 3 OSDs from host 1,
4 PGs using 3 OSDs from host 2, and
1 PG using 3 OSDs from host 3.

It seems to have almost worked, and I don’t understand why it did not place them with a max of 2 OSDs per host.
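
For reference, here is roughly how the placement of a single PG can be checked; <pgid> is a placeholder for one of the affected PG ids, and ceph osd tree shows which host each of the listed OSDs sits under:

        ceph pg map <pgid>
        ceph osd tree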

Is there a rule I could put in place to prevent Ceph from using 3 OSDs per host?

Hey, you would want to modify your EC rule so that you keep the “step choose indep 3 type host” line and replace the “step chooseleaf indep 2 type osd” line with the new line I have below, which tells Ceph to pick 3 hosts and then 2 OSDs in each host:


        step choose indep 3 type host
        step choose indep 2 type osd
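
For context, the full rule would then look roughly like this (keeping the name and id from the post above):

        rule ec_pool_3host {
                id 2
                type erasure
                step set_chooseleaf_tries 50
                step set_choose_tries 150
                step take default class hdd
                step choose indep 3 type host
                step choose indep 2 type osd
                step emit
        }

One way to make the edit is to pull the CRUSH map out, decompile it, change the rule, recompile, and inject it back; the file names below are just examples, and the crushtool --test line is an optional sanity check that maps 6 chunks with rule id 2 before anything touches the cluster:

        ceph osd getcrushmap -o crush.bin
        crushtool -d crush.bin -o crush.txt
        # edit the rule in crush.txt, then recompile
        crushtool -c crush.txt -o crush-new.bin
        crushtool -i crush-new.bin --test --rule 2 --num-rep 6 --show-mappings
        ceph osd setcrushmap -i crush-new.bin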

You will also want to change the min_size of the pools using this rule. The default for a 4+2 pool is a min_size of 5, so if you took one full host down for repairs that would put you down to 4 chunks, causing I/O to stop.

You can do this by running this command on the pools using this rule:

ceph osd pool set <poolname> min_size 4

Then check it with this:

ceph osd pool get <poolname> min_size
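
Once the rule change is in and the PGs have finished remapping, a rough way to confirm that no PG keeps more than 2 chunks on a single host is to walk the pool’s PGs and count the hosts behind each one’s up set. This is only a sketch: it assumes jq is installed, that you substitute your pool name for <poolname>, and that the JSON field names (pg_stats, up, crush_location.host) match what your release prints:

        # For each PG in the pool, print how many of its OSDs sit on each host
        for pg in $(ceph pg ls-by-pool <poolname> -f json | jq -r '.pg_stats[].pgid'); do
            echo -n "$pg: "
            for osd in $(ceph pg map "$pg" -f json | jq -r '.up[]'); do
                ceph osd find "$osd" -f json | jq -r '.crush_location.host'
            done | sort | uniq -c | tr '\n' ' '
            echo
        done

Any count higher than 2 next to a host name means that PG still has 3 chunks on one host.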