I thought I’d try posting here, as I think I saw some info about this in a 45Drives post or YouTube video, but I can’t recall which. Do we have any Ceph experts?
I’m not sure what is relevant, so if you need more info or detail, let me know.
I’m running Proxmox with Ceph 19.2.2 on 3 hosts, with 4 OSDs per host.
I have set up an erasure-coded pool with k=4, m=2.
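For reference, a 4+2 profile and pool on this kind of setup gets created along these lines (the profile name, pool name, and PG count below are just placeholders, not necessarily what I used):

ceph osd erasure-code-profile set ec42 k=4 m=2 crush-device-class=hdd
ceph osd pool create ecpool 128 erasure ec42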
I have created a CRUSH rule for it to use:
rule ec_pool_3host {
    id 2
    type erasure
    step set_chooseleaf_tries 50
    step set_choose_tries 150
    step take default class hdd
    step choose indep 3 type host
    step chooseleaf indep 2 type osd
    step emit
}
I’ve also tried "step chooseleaf firstn 2 type osd" for that last chooseleaf step.
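In case it matters, my understanding is that a hand-written rule like this is normally loaded by decompiling and recompiling the CRUSH map and then pointing the pool at it, roughly like this (pool name is a placeholder; this is the general procedure, not an exact transcript of what I ran):

ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# edit crushmap.txt to add the rule above
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new
ceph osd pool set ecpool crush_rule ec_pool_3host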
I need 6 chunks placed. My goal is to distribute them across the 3 hosts with a maximum of 2 chunks per host, so that if any one host goes down, every PG still has 4 chunks available. I understand this is risky and would not self-heal in that situation.
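From what I’ve read, a rule like this can be dry-run against a saved copy of the CRUSH map with crushtool to see which OSD sets it would actually pick for 6 chunks (rule id 2 as above; crushmap.bin is just a local copy of the map):

ceph osd getcrushmap -o crushmap.bin
crushtool -i crushmap.bin --test --rule 2 --num-rep 6 --show-mappings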
However, after placing data in the pool and working with it, I found that there is:
1 PG using 3 OSDs from host 1,
4 PGs using 3 OSDs from host 2, and
1 PG using 3 OSDs from host 3.
It seems to have almost worked, and I don’t understand why it didn’t keep every PG to a max of 2 OSDs per host.
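For reference, this is the kind of check that shows the placement (pool name again a placeholder):

ceph pg ls-by-pool ecpool    # shows the UP/ACTING OSD sets per PG
ceph osd tree                # maps the OSD IDs back to hosts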
Is there a rule I could use to prevent Ceph from placing 3 OSDs from the same host in a single PG?