Hi all
Hoping someone can help. I'm looking for a Helm chart that installs a Ceph-backed CSI provisioner onto a K8s cluster.
My Ceph cluster/storage originates from my Proxmox cluster, and it is mounted/visible on the K8s hosting nodes as /cephfs. I want to “expose” these mount points, or ideally the Ceph volumes directly (without needing to mount the storage onto /cephfs on the host nodes), to my K8s environment as consumable locations for PVCs.
NOTE: I do not want to run another Ceph cluster on my K8s cluster, which seems to be what 90% of the examples out there want me to do.
G
Eventually found the following:
Is there a Helm chart that installs a CSI driver to access storage presented by a Ceph source not on the Kubernetes cluster?
Yes, there are Helm charts available for this purpose. The most common solution is the Ceph CSI driver, which allows Kubernetes to consume storage from an external Ceph cluster.
Ceph CSI Helm Chart
The official Ceph CSI project provides Helm charts that can connect your Kubernetes cluster to an external Ceph cluster. The main options are:
1. ceph-csi-rbd (for block storage)
- Connects to Ceph RBD (RADOS Block Device)
- Provides ReadWriteOnce volumes
- Best for databases and traditional applications
2. ceph-csi-cephfs (for filesystem storage)
- Connects to CephFS (the Ceph filesystem)
- Provides ReadWriteMany volumes
- Best for shared storage across multiple pods
Installation
You can install from the official Ceph CSI Helm repository:
Bash
# Add the Ceph CSI Helm repository
helm repo add ceph-csi https://ceph.github.io/csi-charts
helm repo update

# Install the RBD CSI driver
helm install ceph-csi-rbd ceph-csi/ceph-csi-rbd \
  --namespace ceph-csi-rbd \
  --create-namespace

# Or install the CephFS CSI driver
helm install ceph-csi-cephfs ceph-csi/ceph-csi-cephfs \
  --namespace ceph-csi-cephfs \
  --create-namespace
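Out of the box the charts don't point at any cluster, so in practice you pass a small values file at install time. Here is a minimal sketch for the RBD chart; the fsid and monitor addresses are placeholders you would replace with the output of `ceph fsid` and your actual mon endpoints on the Proxmox side (csiConfig is the values key where the ceph-csi charts take the external cluster's connection details):
YAML
# values-rbd.yaml: example only; substitute your own fsid and monitors
csiConfig:
  - clusterID: "b9127830-b0cc-4e34-aa47-9d1a2e9949a8"  # output of `ceph fsid`
    monitors:
      - "192.168.1.11:6789"
      - "192.168.1.12:6789"
      - "192.168.1.13:6789"
Then append -f values-rbd.yaml to the helm install command above.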
Key Configuration
You’ll need to configure the chart with your external Ceph cluster details:
- Ceph monitors - IP addresses/hostnames of your Ceph mon nodes
- Ceph credentials - keyring/secret for authentication
- Pool name - for RBD volumes
- Filesystem name - for CephFS volumes
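The credentials go into a Kubernetes Secret that the driver reads. A sketch for RBD, assuming a Ceph client named kubernetes created on the Proxmox side (e.g. with `ceph auth get-or-create client.kubernetes`); the userID/userKey field names are what ceph-csi expects, while the Secret name and namespace here are just examples:
YAML
apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret        # example name, referenced by the StorageClass
  namespace: ceph-csi-rbd
stringData:
  userID: kubernetes          # Ceph client name without the "client." prefix
  userKey: <key from `ceph auth get-key client.kubernetes`>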
After installation, you create StorageClass resources that reference your external Ceph cluster, and then use PVCs to provision volumes.
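For example, a StorageClass and PVC for RBD might look like the sketch below; the clusterID must match the one in csiConfig, and the pool (here kubernetes, an assumed name) must already exist on the external Ceph cluster:
YAML
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: "b9127830-b0cc-4e34-aa47-9d1a2e9949a8"  # same fsid as in csiConfig
  pool: kubernetes                                   # existing RBD pool on the external cluster
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi-rbd
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: ceph-csi-rbd
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ceph-rbd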
The official documentation and chart repository can be found at https://github.com/ceph/ceph-csi