ceph csi

image

# v3.13
quay.io/cephcsi/cephcsi:canary

registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.13.0
registry.k8s.io/sig-storage/csi-provisioner:v5.1.0
registry.k8s.io/sig-storage/csi-resizer:v1.13.1
registry.k8s.io/sig-storage/csi-snapshotter:v8.2.0

# v3.10
quay.io/cephcsi/cephcsi:v3.10-canary

registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.9.1
registry.k8s.io/sig-storage/csi-provisioner:v3.6.2
registry.k8s.io/sig-storage/csi-resizer:v1.9.2
registry.k8s.io/sig-storage/csi-snapshotter:v6.3.2

resource

basic

  • step1
# 18 server: /home/dongwei/k8s/ceph-resource/v3.10-work/deploy/cephfs/kubernetes
kubectl create -f csidriver.yaml
kubectl create -f csi-provisioner-rbac.yaml
kubectl create -f csi-nodeplugin-rbac.yaml
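
csidriver.yaml registers the CSIDriver object with the API server; a quick sanity check (object names assumed to match the upstream ceph-csi manifests, and the ceph namespace used in the steps below):

# the CSIDriver object created by csidriver.yaml
kubectl get csidriver cephfs.csi.ceph.com
# the provisioner/nodeplugin service accounts created by the RBAC manifests
kubectl -n ceph get serviceaccount | grep cephfs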
  • step2

    kubectl create -f csi-config-map.yaml

    check with ceph mon dump

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: ceph-csi-config
      namespace: ceph
    data:
      config.json: |-
        [
          {
            "clusterID": "xxx",
            "monitors": [
              "172.20.7.xxx:6789",
              "172.20.7.xxx:6789",
              "172.20.7.xxx:6789"
            ]
          }
        ]
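
    The clusterID here is the Ceph cluster fsid; both it and the monitor list can be read straight from the cluster:

    # the fsid is the value for clusterID
    ceph fsid
    # the monitor addresses fill the monitors list
    ceph mon dump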
  • step3

    kubectl create -f ceph-conf.yaml

    apiVersion: v1
    kind: ConfigMap
    data:
      ceph.conf: |
        [global]
        auth_cluster_required = cephx
        auth_service_required = cephx
        auth_client_required = cephx

      # keyring is a required key and its value should be empty
      keyring: |
    metadata:
      name: ceph-config
      namespace: ceph
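
    To confirm the ConfigMap landed as written:

    kubectl -n ceph get configmap ceph-config -o yaml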
  • step4

    kubectl create -f kms-config.yaml
    kubectl create -f csi-cephfsplugin-provisioner.yaml
    kubectl create -f csi-cephfsplugin.yaml
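
kms-config.yaml is only consulted when volume encryption is enabled; an empty config is enough when encryption is not used (the ConfigMap name is assumed to match the upstream ceph-csi manifests):

apiVersion: v1
kind: ConfigMap
metadata:
  name: ceph-csi-encryption-kms-config
  namespace: ceph
data:
  # empty KMS config: no encryption backends configured
  config.json: |-
    {}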

provision

A subvolumegroup must exist before the PVC is provisioned. If the subvolumegroup named in the ceph-csi-config ConfigMap is missing from the Ceph cluster, PVC creation fails and the PVC stays in the Pending state.
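
When that happens, the provisioning error surfaces in the claim's events (assuming the PVC lives in the ceph namespace used throughout):

kubectl -n ceph describe pvc <pvc-name>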

We do NOT need to manually mount CephFS on the worker node host. The Ceph CSI driver (deployed as a DaemonSet called csi-cephfsplugin) handles all mounting automatically on each worker node on behalf of your pods.
The CSI node plugin performs a staged mount on the host at a path like /var/lib/kubelet/plugins/cephfs.csi.ceph.com/…, then bind-mounts it into the target pod's filesystem. This is entirely managed by the CSI driver; you never touch /etc/fstab or run mount manually.
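
On a worker node, both the staged mount and the per-pod bind mounts are visible with standard tooling:

# CephFS kernel mounts managed by the csi-cephfsplugin DaemonSet
mount | grep ceph
# or, filtered by filesystem type
findmnt -t ceph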

dynamic provision

  • step1

    ceph fs subvolumegroup create <fsName> csi # creates the subdirectory volumes/csi under the CephFS root: <mountpoint>/volumes/csi/
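
    The group can be verified from the Ceph side:

    ceph fs subvolumegroup ls <fsName>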
  • step2

    kubectl create -f secret.yaml

    check with ceph auth get client.dongwei and ceph auth get client.admin

    apiVersion: v1
    kind: Secret
    metadata:
      name: csi-cephfs-secret
      namespace: ceph
    stringData:
      # Required for statically provisioned volumes, 🚨 this uses userID
      userID: dongwei
      userKey: <>

      # Required for dynamically provisioned volumes, 🚨 this uses adminID
      adminID: admin
      adminKey: <>

      # Encryption passphrase
      encryptionPassphrase: test_passphrase

    Required secrets for provisioning: Admin credentials are required for provisioning new volumes

    • adminID: ID of an admin client
    • adminKey: key of the admin client

    Required secrets for statically provisioned volumes: User credentials with access to an existing volume

    • userID: ID of a user client
    • userKey: key of a user client
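
    The key values themselves come from the Ceph auth database, as shown below:

    # admin key for dynamic provisioning
    ceph auth get-key client.admin
    # user key for statically provisioned volumes
    ceph auth get-key client.dongwei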
  • step3

    kubectl create -f storageclass.yaml
    kubectl create -f pvc.yaml
    kubectl create -f pod.yaml
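
    storageclass.yaml is not shown above; a minimal sketch, assuming the clusterID/fsName from the csi-config-map and the csi-cephfs-secret created in step2:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: csi-cephfs-sc
    provisioner: cephfs.csi.ceph.com
    parameters:
      clusterID: <clusterID>
      fsName: cephfs-1
      # the admin secret from step2, wired in for provisioning, expansion and node staging
      csi.storage.k8s.io/provisioner-secret-name: csi-cephfs-secret
      csi.storage.k8s.io/provisioner-secret-namespace: ceph
      csi.storage.k8s.io/controller-expand-secret-name: csi-cephfs-secret
      csi.storage.k8s.io/controller-expand-secret-namespace: ceph
      csi.storage.k8s.io/node-stage-secret-name: csi-cephfs-secret
      csi.storage.k8s.io/node-stage-secret-namespace: ceph
    reclaimPolicy: Delete
    allowVolumeExpansion: true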

static provision

  • step1

    ceph fs subvolumegroup create <fsName> testGroup
    ceph fs subvolume create <fsName> testSubVolume testGroup --size=1073741824 # size in bytes: 1073741824 B = 1024 MiB = 1 GiB

    # check
    ceph fs subvolume getpath <fsName> testSubVolume testGroup
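
    The subvolumes in the group can also be listed:

    ceph fs subvolume ls <fsName> testGroup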
  • step2

    kubectl create -f secret.yaml

    check with ceph auth get client.dongwei

    apiVersion: v1
    kind: Secret
    metadata:
      name: csi-cephfs-secret-static
      namespace: ceph
    stringData:
      # Required for statically provisioned volumes, 🚨 this uses userID
      userID: dongwei
      userKey: <>

      # Encryption passphrase
      encryptionPassphrase: test_passphrase
  • step3

    kubectl create -f pv.yaml

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: cephfs-static-pv
    spec:
      accessModes:
      - ReadWriteMany
      capacity:
        storage: 1Gi
      csi:
        driver: cephfs.csi.ceph.com
        nodeStageSecretRef:
          # node stage secret name (the static secret created in step2)
          name: csi-cephfs-secret-static
          # node stage secret namespace where the above secret is created
          namespace: ceph
        volumeAttributes:
          # optional file system to be mounted
          "fsName": "cephfs-1"
          # Required options from storageclass parameters need to be added in volumeAttributes
          "clusterID": "ba68226a-672f-4ba5-97bc-22840318b2ec"
          "staticVolume": "true"
          "rootPath": /volumes/testGroup/testSubVolume
        # volumeHandle can be anything; it need not be the same as the
        # PV name or volume name. Kept the same for brevity
        volumeHandle: cephfs-static-pv
      persistentVolumeReclaimPolicy: Retain
      volumeMode: Filesystem

    rootPath can also be set to /volumes/csi

  • step4

    kubectl create -f pvc.yaml

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: cephfs-static-pvc
      namespace: ceph
    spec:
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 1Gi
      storageClassName: ""
      volumeMode: Filesystem
      # volumeName should be the same as the PV name
      volumeName: cephfs-static-pv
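
    A pod can then consume the claim; a minimal sketch (pod name and image are placeholders):

    apiVersion: v1
    kind: Pod
    metadata:
      name: cephfs-static-demo
      namespace: ceph
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sleep", "3600"]
        volumeMounts:
        # the CephFS subvolume appears at /data inside the container
        - name: data
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: cephfs-static-pvc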

best practices

changing the clusterID

When the clusterID changes, the old PVs and PVCs must be deleted.

# force-delete the PV by clearing its finalizers
kubectl patch pv pvc-xxx -p '{"metadata":{"finalizers":null}}' --type=merge
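
The same finalizer patch unsticks a PVC that refuses to delete (the ceph namespace is an assumption):

# force-delete a stuck PVC
kubectl -n ceph patch pvc <pvc-name> -p '{"metadata":{"finalizers":null}}' --type=merge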