Terminology
NSFS
NSFS (short for Namespace-Filesystem) is a capability to use a shared filesystem (mounted in the endpoints) for the storage of S3 buckets, while keeping a 1-1 mapping between Object and File.
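For illustration only (the mount path and the s3-user-dingofs alias configured later on this page are placeholders): an object written through S3 shows up as a regular file under the bucket's directory on the shared filesystem.
s3-user-dingofs cp ./report.csv s3://dingofs-bucket-1/2024/report.csv
ls -l /<nsfs-mount>/dingofs-bucket-1/2024/report.csv   # the same data, stored as one file per object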
reference
- Supported FS backend types are GPFS, CEPH_FS, and NFSv4; the default is POSIX.
code
image
make noobaa    # includes: 1. make build, 2. make base, 3. make noobaa
If you modify the code, run:
make base      # build noobaa-base:dingofs
make noobaa    # build noobaa-core:dingofs-v1.0
docker tag noobaa-core:dingofs-v1.0 harbor.zetyun.cn/dingofs/noobaa-core:dingofs-v1.0.x
docker push harbor.zetyun.cn/dingofs/noobaa-core:dingofs-v1.0.x
quay artifact
docker pull quay.io/noobaa/noobaa-builder:master-20250623
workflow
build image
.github/workflows/manual-full-build.yaml
Deploy
curl -LO https://github.com/noobaa/noobaa-operator/releases/download/v5.18.4/noobaa-operator-v5.18.4-linux-amd64.tar.gz
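Then install the CLI from the archive (assuming the tarball contains the noobaa-operator binary, which is typically placed on the PATH as noobaa):
tar -xzf noobaa-operator-v5.18.4-linux-amd64.tar.gz
chmod +x noobaa-operator                        # binary name inside the archive may differ
sudo mv noobaa-operator /usr/local/bin/noobaa
noobaa version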
install
default sc
default-sc-dingofs-noobaa.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: default-sc-dingofs-noobaa
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi.dingofs.com
allowVolumeExpansion: true
reclaimPolicy: Retain
parameters:
  csi.storage.k8s.io/provisioner-secret-name: dingofs-secret-noobaa
  csi.storage.k8s.io/provisioner-secret-namespace: dingofs
  csi.storage.k8s.io/node-publish-secret-name: dingofs-secret-noobaa
  csi.storage.k8s.io/node-publish-secret-namespace: dingofs
  pathPattern: "${.pvc.namespace}-${.pvc.name}"
mountOptions:
  - diskCache.diskCacheType=2
  - block_cache.cache_store=disk
  - disk_cache.cache_dir=/dingofs/client/data/cache/0:10240   # e.g. "/data1:100;/data2:200"
  - disk_cache.cache_size_mb=102400                           # MB
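Apply the StorageClass and check that it is marked as the cluster default:
kubectl apply -f default-sc-dingofs-noobaa.yaml
kubectl get sc default-sc-dingofs-noobaa   # the NAME column should show "(default)"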
prepare
sudo ctr -n k8s.io images pull dockerproxy.zetyun.cn/docker.io/noobaa/noobaa-core:master-20250623
sudo ctr -n k8s.io images pull dockerproxy.zetyun.cn/docker.io/noobaa/noobaa-operator:5.18.4
sudo ctr -n k8s.io images pull quay.io/sclorg/postgresql-15-c9s:latest
install
kubectl create ns noobaa
kubectl config set-context --current --namespace noobaa

# use internal postgres
noobaa install --noobaa-image=dockerproxy.zetyun.cn/docker.io/noobaa/noobaa-core:master-20250623 --operator-image=dockerproxy.zetyun.cn/docker.io/noobaa/noobaa-operator:5.18.4 --db-image=quay.io/sclorg/postgresql-15-c9s:latest --namespace=noobaa

# use external postgres
noobaa install --noobaa-image=dockerproxy.zetyun.cn/docker.io/noobaa/noobaa-core:master-20250623 --operator-image=dockerproxy.zetyun.cn/docker.io/noobaa/noobaa-operator:5.18.4 --postgres-url="postgres://postgres:noobaa123@10.220.32.18:5432/nbcore" --namespace=noobaa

# use dingofs image
noobaa install --noobaa-image harbor.zetyun.cn/dingofs/noobaa-core:dingofs-v1.0 --operator-image dockerproxy.zetyun.cn/docker.io/noobaa/noobaa-operator:5.18.4 --namespace=noobaa --debug-level all
status
noobaa status --show-secrets
uninstall
noobaa uninstall --cleanup

# clean expired data
kubectl exec -it <noobaaFS-debug-pod> -n dingofs -- bash
cd /dfs/noobaa-debug-pv-xxx
rm -rf noobaa-db-noobaa-db-pg-0
rm -rf noobaa-noobaa-default-backing-store-noobaa-pvc-xxx
upgrade
noobaa upgrade --noobaa-image <noobaa-image-path-and-tag> --operator-image <operator-image-path-and-tag>

# image update
sudo ctr -n k8s.io images pull harbor.zetyun.cn/dingofs/noobaa-core:dingofs-v1.0.3

# e.g.
noobaa upgrade --noobaa-image harbor.zetyun.cn/dingofs/noobaa-core:dingofs-v1.0.3 --operator-image dockerproxy.zetyun.cn/docker.io/noobaa/noobaa-operator:5.18.4 --debug-level all
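To confirm the new image was rolled out (assuming the core pod is managed by the noobaa-core StatefulSet, as in a default install):
kubectl -n noobaa get statefulset noobaa-core -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
kubectl -n noobaa get pod noobaa-core-0 -o jsonpath='{.status.containerStatuses[*].image}{"\n"}'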
pod
noobaa-core-0    2/2    Running    0    2m5s
default resource
noobaa-core-0
Limits:
  cpu:     999m
  memory:  4Gi
Requests:
  cpu:     999m
  memory:  4Gi
noobaa-db-pg-0
Limits:
  cpu:     500m
  memory:  4Gi
Requests:
  cpu:     500m
  memory:  4Gi
noobaa-default-backing-store-……
Limits:
  cpu:     100m
  memory:  400Mi
Requests:
  cpu:     100m
  memory:  400Mi
noobaa-endpoint-……
Limits:
  cpu:     999m
  memory:  2Gi
Requests:
  cpu:     999m
  memory:  2Gi
noobaa-operator-…
Limits:
  cpu:     250m
  memory:  512Mi
Requests:
  cpu:     250m
  memory:  512Mi
endpoint
Endpoints are deployed as a Deployment with autoscaling, so the minCount/maxCount values should be used to set a range for the autoscaler; this is typically how you increase the system's S3 throughput. Prefer increasing the number of endpoints over increasing the resources of each endpoint.
replica
kubectl patch noobaa noobaa --type merge --patch '{
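A fuller sketch of this patch, assuming the endpoint range lives under spec.endpoints in the NooBaa CR (the counts are illustrative):
kubectl patch noobaa noobaa -n noobaa --type merge --patch '{
  "spec": {
    "endpoints": {
      "minCount": 2,
      "maxCount": 4
    }
  }
}'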
bootstrap
/noobaa_init_files/noobaa_init.sh init_endpoint
noobaa-db-pg-0
volume
service
NAME    TYPE    CLUSTER-IP    EXTERNAL-IP    PORT(S)    AGE
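List the services with:
kubectl get svc -n noobaa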
volume
noobaa-db-noobaa-db-pg-0
system info
NOOBAA_SECRET=$(kubectl get noobaa noobaa -n noobaa -o json | jq -r '.status.accounts.admin.secretRef.name')
Mgmt UI
kubectl get secret noobaa-admin -n noobaa -o json | jq '.data|map_values(@base64d)'
aws-cli
alias s3='AWS_ACCESS_KEY_ID=3gR5xxxxxxxxOx AWS_SECRET_ACCESS_KEY=zu229aAJxxxxxxxxxxxxxxxxxcfX aws --endpoint https://10.xxx.xx.18:30478 --no-verify-ssl s3'
NSFS
1. create NSFS resource
noobaa namespacestore create nsfs dingofs --pvc-name='noobaa-nsfs-pvc'
nsfs-dingofs-pvc.yaml
apiVersion: v1
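A minimal sketch of nsfs-dingofs-pvc.yaml, assuming the default-sc-dingofs-noobaa StorageClass from above and an illustrative 100Gi request:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: noobaa-nsfs-pvc
  namespace: noobaa
spec:
  accessModes:
    - ReadWriteMany          # NSFS expects a shared filesystem mount
  storageClassName: default-sc-dingofs-noobaa
  resources:
    requests:
      storage: 100Gi         # illustrative size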
delete
noobaa namespacestore delete dingofs-nsfs
list
noobaa namespacestore list
2. create bucket (optional)
This step is mainly for exposing an existing directory in the filesystem as a bucket. If the directory was already created with s3-user-dingofs mb s3://dingofs-bucket-1, this step is unnecessary (running it would report that the bucket already exists); afterwards, simply operate on the bucket with s3-user-dingofs.
# map the filesystem directory dingofs-bucket-1 to a bucket
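A sketch of the mapping command using the NooBaa bucket_api, pointing the bucket at the dingofs namespacestore created in step 1 (the path value is an assumption):
noobaa api bucket_api create_bucket '{
  "name": "dingofs-bucket-1",
  "namespace": {
    "write_resource": { "resource": "dingofs", "path": "dingofs-bucket-1/" },
    "read_resources": [ { "resource": "dingofs", "path": "dingofs-bucket-1/" } ]
  }
}'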
status
noobaa bucket status <bucketName>
list bucket
noobaa api bucket_api list_buckets '{}'
get_bucket_policy
noobaa api bucket_api get_bucket_policy '{"name": "<bucketName>"}'
delete bucket
noobaa api bucket_api delete_bucket '{"name": "<bucketName>"}'
add bucket policy
# a bucket access policy can only be set with the admin account
policy.json
{
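A sketch of what policy.json could contain, assuming the jenia@noobaa.io account and the dingofs-bucket-1 bucket used elsewhere on this page (the Sid value is arbitrary):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowDingofsUser",
      "Effect": "Allow",
      "Principal": { "AWS": ["jenia@noobaa.io"] },
      "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
      "Resource": ["arn:aws:s3:::dingofs-bucket-1", "arn:aws:s3:::dingofs-bucket-1/*"]
    }
  ]
}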
Sid (Statement ID) serves as an optional identifier for individual policy statements, allowing for easier management and referencing of specific rules within a larger policy. It is a way to give a unique name to each permission set within the bucket policy, which can be helpful when dealing with multiple statements or when debugging policy issues.
3. create account
noobaa api account_api create_account '{
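A sketch of the full create_account call based on the NooBaa NSFS account fields, using the jenia@noobaa.io account referenced below; the uid/gid, new_buckets_path and default_resource values are assumptions for this environment:
noobaa api account_api create_account '{
  "email": "jenia@noobaa.io",
  "name": "jenia",
  "has_login": false,
  "s3_access": true,
  "default_resource": "dingofs",
  "nsfs_account_config": {
    "uid": 1000,
    "gid": 1000,
    "new_buckets_path": "/",
    "nsfs_only": true
  }
}'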
- check account
noobaa api account_api list_accounts '{}'

# check a specific account
noobaa api account_api read_account '{"email":"jenia@noobaa.io"}'
4. config s3 client
alias s3-user-dingofs='AWS_ACCESS_KEY_ID=UanhxxxxxxggJP AWS_SECRET_ACCESS_KEY=ptsSJUYVCxxxxxxxxxxxSF8ltV aws --endpoint https://10.220.32.18:30478 --no-verify-ssl s3'
5. operate
aws s3
# create bucket
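A few typical operations with the s3-user-dingofs alias defined above (bucket, object, and file names are illustrative):
s3-user-dingofs mb s3://dingofs-bucket-1
s3-user-dingofs cp ./local-file.txt s3://dingofs-bucket-1/local-file.txt
s3-user-dingofs ls s3://dingofs-bucket-1
s3-user-dingofs rm s3://dingofs-bucket-1/local-file.txt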
mc
mc alias set <aliasName> <entrypoint> <ak> <sk> --insecure
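A few follow-up mc commands under that alias (names are illustrative; --insecure matches the self-signed endpoint above):
mc mb <aliasName>/dingofs-bucket-1 --insecure
mc cp ./local-file.txt <aliasName>/dingofs-bucket-1/ --insecure
mc ls <aliasName>/dingofs-bucket-1 --insecure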
Best Practices
External PostgreSQL DB (TBD)
binary install
sudo dnf install -y https://download.postgresql.org/pub/repos/yum/reporpms/EL-9-x86_64/pgdg-redhat-repo-latest.noarch.rpm
sudo dnf -qy module disable postgresql
sudo dnf install -y postgresql17-server
sudo /usr/pgsql-17/bin/postgresql-17-setup initdb
sudo systemctl enable postgresql-17
sudo systemctl start postgresql-17
docker install
# 17 version
docker run --name noobaa-postgres \
  -e POSTGRES_PASSWORD=<> \
  --restart always \
  --network host \
  -d dockerproxy.zetyun.cn/docker.io/postgres:latest

# 15 version
docker run --name noobaa-postgres \
  -e POSTGRESQL_ADMIN_PASSWORD=<> \
  --restart always \
  --network host \
  -d quay.io/sclorg/postgresql-15-c9s:latest

# enter psql
psql -U postgres

# create nbcore database
CREATE DATABASE nbcore WITH LC_COLLATE = 'C' TEMPLATE template0;

# check
\list

# db url
postgres://postgres:<mysecretpassword>@<ip>:5432/nbcore

When using the postgres 15 image, you should configure the variables POSTGRESQL_USER, POSTGRESQL_PASSWORD, and POSTGRESQL_DATABASE, or the environment variable POSTGRESQL_ADMIN_PASSWORD, or both.
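For example, a sketch of starting the postgres 15 image with those user-level variables (values are placeholders):
docker run --name noobaa-postgres \
  -e POSTGRESQL_USER=noobaa \
  -e POSTGRESQL_PASSWORD=<password> \
  -e POSTGRESQL_DATABASE=nbcore \
  --restart always \
  --network host \
  -d quay.io/sclorg/postgresql-15-c9s:latest
# note: the image creates this database with its default collation; to keep
# LC_COLLATE = 'C', use POSTGRESQL_ADMIN_PASSWORD and the manual CREATE DATABASE above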
Troubleshooting
Load balancing for the entrypoint
TODO
Verify specifying the default SC at install time with --db-storage-class and --pv-pool-default-storage-class
Test a default SC that does not set storageclass.kubernetes.io/is-default-class: "true"