image

```
v3.13
```
resource
basic
step1

```
18 server: /home/dongwei/k8s/ceph-resource/v3.10-work/deploy/cephfs/kubernetes
```
step2
```shell
kubectl create -f csi-config-map.yaml
```

check by:

```shell
ceph mon dump
```

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ceph-csi-config
  namespace: ceph
data:
  config.json: |-
    [
      {
        "clusterID": "xxx",
        "monitors": [
          "172.20.7.xxx:6789",
          "172.20.7.xxx:6789",
          "172.20.7.xxx:6789"
        ]
      }
    ]
```
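The `clusterID` and `monitors` values must match what the cluster itself reports. A minimal sketch (not from the original) of extracting both from `ceph mon dump` output; the sample dump below is hypothetical and assumes the Nautilus-style `[v2:…,v1:…]` address format. On a real cluster, pipe `ceph mon dump` in instead of the sample string.

```shell
# Hypothetical `ceph mon dump` output (format assumed, values illustrative):
dump='epoch 1
fsid ba68226a-672f-4ba5-97bc-22840318b2ec
0: [v2:172.20.7.1:3300/0,v1:172.20.7.1:6789/0] mon.a
1: [v2:172.20.7.2:3300/0,v1:172.20.7.2:6789/0] mon.b
dumped monmap epoch 1'

# clusterID for csi-config-map.yaml is the cluster fsid
cluster_id=$(printf '%s\n' "$dump" | awk '/^fsid/ {print $2}')

# the "monitors" entries are the v1 (msgr1) addresses on port 6789
monitors=$(printf '%s\n' "$dump" | grep -o 'v1:[0-9.]*:6789' | sed 's/^v1://')

echo "$cluster_id"
echo "$monitors"
```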
step3

```shell
kubectl create -f ceph-conf.yaml
```

```yaml
apiVersion: v1
kind: ConfigMap
data:
  ceph.conf: |
    [global]
    auth_cluster_required = cephx
    auth_service_required = cephx
    auth_client_required = cephx
  # keyring is a required key and its value should be empty
  keyring: |
metadata:
  name: ceph-config
  namespace: ceph
```
step4

```shell
kubectl create -f kms-config.yaml
kubectl create -f csi-cephfsplugin-provisioner.yaml
kubectl create -f csi-cephfsplugin.yaml
```
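The contents of `kms-config.yaml` are not shown above. A minimal sketch of what it might contain when volume encryption is not used, modeled on the ceph-csi example manifests; the ConfigMap name `ceph-csi-encryption-kms-config` is an assumption taken from those examples:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ceph-csi-encryption-kms-config
  namespace: ceph
data:
  config.json: |-
    {}
```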
provision
A subvolumegroup must be created before provisioning the PVC. If the subvolumegroup named in the ceph-csi-config ConfigMap is missing from the Ceph cluster, PVC creation will fail and the PVC will stay in the Pending state.

We do NOT need to manually mount CephFS on the worker node host. The Ceph CSI driver (deployed as a DaemonSet called csi-cephfsplugin) handles all mounting automatically on each worker node on behalf of your pods. The CSI node plugin performs a staged mount on the host at a path like /var/lib/kubelet/plugins/cephfs.csi.ceph.com/…, then bind-mounts it into the target pod's filesystem. This is entirely managed by the CSI driver: you never touch /etc/fstab or run mount manually.
dynamic provision
step1
```shell
ceph fs subvolumegroup create <fsName> csi  # creates the subdirectory volumes/csi, i.e. <mountpoint>/volumes/csi/
```
step2
```shell
kubectl create -f secret.yaml
```

check by:

```shell
ceph auth get client.dongwei
ceph auth get client.admin
```

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: csi-cephfs-secret
  namespace: ceph
stringData:
  # Required for statically provisioned volumes, 🚨 this uses userID
  userID: dongwei
  userKey: <>
  # Required for dynamically provisioned volumes, 🚨 this uses adminID
  adminID: admin
  adminKey: <>
  # Encryption passphrase
  encryptionPassphrase: test_passphrase
```

Required secrets for provisioning (admin credentials are required for provisioning new volumes):

- adminID: ID of an admin client
- adminKey: key of the admin client

Required secrets for statically provisioned volumes (user credentials with access to an existing volume):

- userID: ID of a user client
- userKey: key of a user client
step3
```shell
kubectl create -f storageclass.yaml
kubectl create -f pvc.yaml
kubectl create -f pod.yaml
```
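The contents of `storageclass.yaml` and `pvc.yaml` are not shown above. A sketch of what they might look like, modeled on the ceph-csi examples; the object names, `fsName`, and `clusterID` are assumptions, so reuse the values from your own csi-config-map and Secret:

```yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-cephfs-sc
provisioner: cephfs.csi.ceph.com
parameters:
  # same clusterID as in csi-config-map.yaml
  clusterID: ba68226a-672f-4ba5-97bc-22840318b2ec
  fsName: cephfs-1
  csi.storage.k8s.io/provisioner-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph
  csi.storage.k8s.io/node-stage-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/node-stage-secret-namespace: ceph
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-cephfs-pvc
  namespace: ceph
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-cephfs-sc
```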
static provision
step1
```shell
ceph fs subvolumegroup create <fsName> testGroup
ceph fs subvolume create <fsName> testSubVolume testGroup --size=1073741824  # bytes: 1073741824/1024/1024 = 1024 MB
# check
ceph fs subvolume getpath <fsName> testSubVolume testGroup
```
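The `--size` flag above is in bytes. A worked check of the 1 GiB figure used in the command:

```shell
# 1 GiB in bytes, and the same value converted back to MiB
size_bytes=$((1024 * 1024 * 1024))
echo "$size_bytes"                    # 1073741824
echo "$((size_bytes / 1024 / 1024))"  # 1024
```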
step2

```shell
kubectl create -f secret.yaml
```

check by:

```shell
ceph auth get client.dongwei
```

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: csi-cephfs-secret-static
  namespace: ceph
stringData:
  # Required for statically provisioned volumes, 🚨 this uses userID
  userID: dongwei
  userKey: <>
  # Encryption passphrase
  encryptionPassphrase: test_passphrase
```
step3

```shell
kubectl create -f pv.yaml
```

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cephfs-static-pv
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 1Gi
  csi:
    driver: cephfs.csi.ceph.com
    nodeStageSecretRef:
      # node stage secret name
      name: csi-cephfs-secret
      # node stage secret namespace where the above secret is created
      namespace: ceph
    volumeAttributes:
      # optional file system to be mounted
      "fsName": "cephfs-1"
      # Required options from storageclass parameters need to be added in volumeAttributes
      "clusterID": "ba68226a-672f-4ba5-97bc-22840318b2ec"
      "staticVolume": "true"
      "rootPath": /volumes/testGroup/testSubVolume
    # volumeHandle can be anything; it need not be the same as the PV name
    # or volume name. Kept the same for brevity.
    volumeHandle: cephfs-static-pv
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
```

rootPath can also be set to /volumes/csi.
step4
```shell
kubectl create -f pvc.yaml
```

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-static-pvc
  namespace: ceph
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: ""
  volumeMode: Filesystem
  # volumeName should be the same as the PV name
  volumeName: cephfs-static-pv
```
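To consume the statically bound PVC, a pod mounts it like any other claim. A sketch (not from the original; the pod name, image, and mount path are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: csi-cephfs-demo-pod
  namespace: ceph
spec:
  containers:
    - name: web-server
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /var/lib/www
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: cephfs-static-pvc
```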
best practices
changing the clusterID

The old PVs and PVCs need to be deleted.

```
force delete the PV
```
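"Force delete" here typically means clearing the finalizer on a PV stuck in Terminating. A sketch (not from the original; the PV name is the one from the static example above, so adjust as needed):

```shell
# Remove the protection finalizer so the API server can drop the object,
# then delete without waiting. Use with care: this skips normal cleanup.
kubectl patch pv cephfs-static-pv -p '{"metadata":{"finalizers":null}}'
kubectl delete pv cephfs-static-pv --grace-period=0 --force
```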