k8s deployment: fixed PVC configuration

k8s version: v1.19.7

Following the official documentation (Deploying an OceanBase Cluster in a Kubernetes Environment - V4.3.0 - OceanBase Database documentation), I deployed a cluster in k8s, connected to the database, and wrote data successfully. However, the data storage directories created on the server are randomly generated: after deleting the cluster and creating it again, the new cluster cannot load the data written by the previous one (the data has been persisted to disk) and instead creates new randomly named storage directories. How can I configure the cluster to use fixed storage PVCs, so that a new cluster can load the previously written data?
As shown in the screenshot:

[image]

PV and PVC configuration:

Key modification 1: change the reclaim policy to Retain (so the PV is not deleted)

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain   # changed from Delete to Retain
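
Note: Retain only stops the volume from being deleted. Once its claim is removed, a PV goes to the Released phase and still carries a stale spec.claimRef pointing at the old PVC, so a re-created PVC with the same name will not bind to it until that reference is cleared (for example with kubectl edit pv ob-data-pv). A minimal illustration of the field in question, assuming the ob-data-pv / ob-data-pvc names used below:

# What a Retain PV looks like after its PVC has been deleted (illustrative).
# Removing the whole spec.claimRef block makes the PV Available again, so a
# newly created ob-data-pvc can bind to it.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ob-data-pv
spec:
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: ob-data-pvc
    namespace: oceanbase
    uid: <uid-of-the-deleted-pvc>   # stale reference; placeholder value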

PV definitions


ob-data-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: ob-data-pv
spec:
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-path
  local:
    path: /home/services/obdata/data   # dedicated directory, must be created in advance
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - yun238

ob-redo-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: ob-redo-pv
spec:
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-path
  local:
    path: /home/services/obdata/redo   # dedicated directory, must be created in advance
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - yun238

ob-log-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: ob-log-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-path
  local:
    path: /home/services/obdata/log   # dedicated directory, must be created in advance
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - yun238
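
With statically created PVs like the three above, any pending claim whose storage class and size match could bind to them. If the pairing needs to be deterministic from both sides, the PV can also reserve its claim up front via spec.claimRef, mirroring the volumeName set on the PVCs defined next. A minimal sketch for the data volume, assuming the same names and namespace as in this post (repeat analogously for the redo and log volumes):

# Optional pre-binding: reserve ob-data-pv for the claim named ob-data-pvc.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ob-data-pv
spec:
  # capacity, accessModes, storageClassName, local.path and nodeAffinity as above
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: ob-data-pvc
    namespace: oceanbase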

PVC definitions


ob-data-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ob-data-pvc
  namespace: oceanbase   # must be in the same namespace as the OBCluster
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 30Gi
  volumeName: ob-data-pv   # bind to the fixed PV


ob-redo-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ob-redo-pvc
  namespace: oceanbase
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 30Gi
  volumeName: ob-redo-pv


ob-log-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ob-log-pvc
  namespace: oceanbase
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 10Gi
  volumeName: ob-log-pv


local-path-provisioner ConfigMap:

Key modification 2: do not delete the data directory in teardown (the original script ran rm -rf "$VOL_DIR").

kind: ConfigMap
apiVersion: v1
metadata:
  name: local-path-config
  namespace: local-path-storage
data:
  config.json: |-
    {
      "nodePathMap":[
        {
          "node":"yun238",
          "paths":["/home/services/obdata"]
        },
        {
          "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
          "paths":["/home/services/obdata"]
        }
      ]
    }
  setup: |-
    #!/bin/sh
    set -eu
    mkdir -m 0777 -p "$VOL_DIR"   # unchanged: create the data directory
  teardown: |-
    #!/bin/sh
    set -eu
    echo "Skip deleting volume directory: $VOL_DIR"   # only log, do not delete the data
  helperPod.yaml: |-
    apiVersion: v1
    kind: Pod
    metadata:
      name: helper-pod
    spec:
      priorityClassName: system-node-critical
      tolerations:
        - key: node.kubernetes.io/disk-pressure
          operator: Exists
          effect: NoSchedule
      containers:
      - name: helper-pod
        image: quay.io/prometheus/busybox:latest
        imagePullPolicy: IfNotPresent

OBCluster configuration:

apiVersion: oceanbase.oceanbase.com/v1alpha1
kind: OBCluster
metadata:
  name: obcluster
  namespace: oceanbase
  annotations:
    "oceanbase.oceanbase.com/independent-pvc-lifecycle": "true"   # true: PVCs can be kept after the cluster is deleted
    # "oceanbase.oceanbase.com/mode": "standalone" or "service"   # standalone: bootstrap a single-node cluster on 127.0.0.1 that cannot communicate with other nodes; service: create a separate K8s Service for each OBServer and use the Service ClusterIP as the OBServer's communication IP
    # "oceanbase.oceanbase.com/single-pvc": "true"   # true: create and bind one combined PVC per OBServer Pod (three PVCs are created by default)
spec:
  clusterName: obcluster
  clusterId: 1
  userSecrets:
    root: sc-sys-root
    proxyro: sc-sys-proxyro
    monitor: sc-sys-monitor
    operator: sc-sys-operator
  topology:
    - zone: zone1
      replica: 1
      nodeSelector:
        kubernetes.io/hostname: yun238
    - zone: zone2
      replica: 1
      nodeSelector:
        kubernetes.io/hostname: yun239
    - zone: zone3
      replica: 1
      nodeSelector:
        kubernetes.io/hostname: yun240
  observer:
    image: quay.io/oceanbase/oceanbase-cloud-native:4.3.5.4-104000042025090916
    resource:
      cpu: 2
      memory: 8Gi
    storage:
      dataStorage:
        storageClass: local-path
        size: 30Gi
      redoLogStorage:
        storageClass: local-path
        size: 30Gi
      logStorage:
        storageClass: local-path
        size: 10Gi

    # storage:
    #   # reference the fixed PVC: ob-data-pvc
    #   dataStorage:
    #     existingClaimName: ob-data-pvc   # key point: the name of the pre-created PVC
    #     size: 30Gi
    #   # reference the fixed PVC: ob-redo-pvc
    #   redoLogStorage:
    #     existingClaimName: ob-redo-pvc
    #     size: 30Gi
    #   # reference the fixed PVC: ob-log-pvc
    #   logStorage:
    #     existingClaimName: ob-log-pvc
    #     size: 10Gi

    # storage:
    #   # data storage: pin to a fixed path via an annotation and reuse the pre-created PV
    #   dataStorage:
    #     storageClass: local-path
    #     size: 30Gi
    #     annotations:
    #       rancher.io/local-path: "/home/services/obdata/data"   # same path as the pre-created PV
    #   # redo log storage: same approach
    #   redoLogStorage:
    #     storageClass: local-path
    #     size: 30Gi
    #     annotations:
    #       rancher.io/local-path: "/home/services/obdata/redo"
    #   # log storage: same approach
    #   logStorage:
    #     storageClass: local-path
    #     size: 10Gi
    #     annotations:
    #       rancher.io/local-path: "/home/services/obdata/log"
  monitor:
    image: quay.io/oceanbase/obagent:4.2.2-100000042024011120
    resource:
      cpu: 1
      memory: 1Gi
  parameters:
    - name: system_memory
      value: 2G
    - name: obconfig_url
      value: 'http://svc-ob-configserver.oceanbase.svc:8080/services?Action=ObRootServiceInfo&ObCluster=obcluster'

OB does not support this. A new cluster cannot load another cluster's data.

I haven't deployed it on k8s yet; I'll give it a try later. Does it put higher demands on the hardware?
