OB 4.0 deployment on K8s fails

【Environment】Test environment
【OB or other component】
【Version】
【Problem description】A clear description of the problem
【Steps to reproduce】Operations performed before and after the problem appeared
【Symptoms and impact】

Testing an OB 4.0 deployment on K8s failed, following this doc: https://www.oceanbase.com/docs/community-observer-cn-10000000000901210

【Attachments】

Container logs:



The agent log shows no network problems.

The observer log reports a network error at caller=server/http.go:95:func1; I'm not sure what this means.
Below is a screenshot of the management node working normally.


Container log of the manager:

rbac-proxy log:
Command-line screenshot:

[root@master deploy]# kubectl describe pod sapp-ob-cloud-zaob-zone1-0 -n obcluster
Name: sapp-ob-cloud-zaob-zone1-0
Namespace: obcluster
Priority: 0
Node: work01/192.168.19.130
Start Time: Wed, 14 Dec 2022 11:05:56 +0800
Labels: app=sapp-ob-cloud
index=0
subset=zone1
Annotations: cni.projectcalico.org/containerID: ae04f926c236f1a7592d6d2c589d581e181a4aa73ed68edd0cfdf25238a864c1
cni.projectcalico.org/podIP: 10.233.119.40/32
cni.projectcalico.org/podIPs: 10.233.119.40/32
Status: Running
IP: 10.233.119.40
IPs:
IP: 10.233.119.40
Containers:
observer:
Container ID: docker://e7d1674df1cb86247cc23a4cd550c0e426094f5bb493c3e2c101176f13bdb1f2
Image: oceanbasedev/oceanbase-cn:v3.1.4-10000092022071511-snapshot-08172042
Image ID: docker-pullable://oceanbasedev/oceanbase-cn@sha256:f20aa5c81c6dbed4fd4fa05a3e154eb7a45348352d125f504d147ae7a7e97529
Ports: 19001/TCP, 2881/TCP, 2882/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
State: Running
Started: Wed, 14 Dec 2022 11:06:00 +0800
Ready: False
Restart Count: 0
Limits:
cpu: 2
memory: 4Gi
Requests:
cpu: 2
memory: 4Gi
Readiness: http-get http://:19001/api/ob/readiness delay=0s timeout=1s period=2s #success=1 #failure=3
Environment:
Mounts:
/home/admin/data_file from data-file (rw)
/home/admin/data_log from data-log (rw)
/home/admin/log from log (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6gx6l (ro)
obagent:
Container ID: docker://26a8143ea3aa0141ac1cbc7f319da5726d807d4ee6ae5660b02cff2e8fe282f8
Image: oceanbase/obagent:1.2.0
Image ID: docker-pullable://oceanbase/obagent@sha256:ac37a475b3c8ac88ed80f231adc8b079eda8c112dd23f5ec5b1056b83dada025
Port: 8088/TCP
Host Port: 0/TCP
State: Running
Started: Wed, 14 Dec 2022 11:06:03 +0800
Ready: True
Restart Count: 0
Readiness: http-get http://:8088/metrics/stat delay=0s timeout=1s period=2s #success=1 #failure=3
Environment:
Mounts:
/home/admin/obagent/conf from obagent-conf-file (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6gx6l (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
data-file:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: sapp-ob-cloud-zaob-zone1-0-data-file
ReadOnly: false
data-log:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: sapp-ob-cloud-zaob-zone1-0-data-log
ReadOnly: false
log:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: sapp-ob-cloud-zaob-zone1-0-log
ReadOnly: false
obagent-conf-file:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: sapp-ob-cloud-zaob-zone1-0-obagent-conf-file
ReadOnly: false
kube-api-access-6gx6l:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional:
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: topology.kubernetes.io/zone=zone1
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message


Warning FailedScheduling 6m51s default-scheduler 0/3 nodes are available: 3 persistentvolumeclaim "sapp-ob-cloud-zaob-zone1-0-data-file" not found.
Warning FailedScheduling 6m50s default-scheduler 0/3 nodes are available: 3 persistentvolumeclaim "sapp-ob-cloud-zaob-zone1-0-data-log" not found.
Warning FailedScheduling 6m47s default-scheduler 0/3 nodes are available: 3 persistentvolumeclaim "sapp-ob-cloud-zaob-zone1-0-log" not found.
Normal Scheduled 6m34s default-scheduler Successfully assigned obcluster/sapp-ob-cloud-zaob-zone1-0 to work01
Normal CreatedPod 6m51s statefulapp-controller create Pod sapp-ob-cloud-zaob-zone1-0
Normal Pulling 6m33s kubelet Pulling image "oceanbasedev/oceanbase-cn:v3.1.4-10000092022071511-snapshot-08172042"
Normal Pulling 6m31s kubelet Pulling image "oceanbase/obagent:1.2.0"
Normal Pulled 6m31s kubelet Successfully pulled image "oceanbasedev/oceanbase-cn:v3.1.4-10000092022071511-snapshot-08172042" in 2.360159039s
Normal Started 6m31s kubelet Started container observer
Normal Created 6m31s kubelet Created container observer
Normal Pulled 6m28s kubelet Successfully pulled image "oceanbase/obagent:1.2.0" in 2.387843357s
Normal Created 6m28s kubelet Created container obagent
Normal Started 6m28s kubelet Started container obagent
Warning Unhealthy 6m28s kubelet Readiness probe failed: Get "http://10.233.119.40:8088/metrics/stat": dial tcp 10.233.119.40:8088: connect: connection refused
Warning Unhealthy 92s (x156 over 6m28s) kubelet Readiness probe failed: HTTP probe failed with statuscode: 500
[root@master deploy]#
Related ping screenshots:


Private network:

No latency on the public NIC either (there is no network latency between observers).

Thanks for the detailed information. When deploying, did you use the config files from the GitHub repo referenced in the doc directly, or did you customize anything?
From the error messages, the three PVCs sapp-ob-cloud-zaob-zone1-0-log, sapp-ob-cloud-zaob-zone1-0-data-log, and sapp-ob-cloud-zaob-zone1-0-data-file were not created successfully, which is why the observer container is unhealthy.
Is the storage configured via local-path? If so, check which directory it points to; the claims may also be failing to provision because the disk is out of space.
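To verify this, the claim states and the provisioner's target directory can be checked directly. A command sketch (the namespace and claim names are taken from the describe output above; the local-path config map name and default directory are assumptions based on the upstream local-path-provisioner defaults):

```
# Are the claims Bound or still Pending?
kubectl get pvc -n obcluster

# Events on one stuck claim usually say why provisioning failed
kubectl describe pvc sapp-ob-cloud-zaob-zone1-0-data-file -n obcluster

# Where does local-path put its volumes, and is there space left?
kubectl get configmap local-path-config -n local-path-storage -o yaml
df -h /opt/local-path-provisioner   # default path unless reconfigured
```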

Thanks for the question. It looks like NFS isn't installed, so the mount failed.



The NFS mount is working.

kind: Pod
apiVersion: v1
metadata:
name: sapp-ob-cloud-zaob-zone1-0
namespace: obcluster
labels:
app: sapp-ob-cloud
index: '0'
subset: zone1
annotations:
cni.projectcalico.org/containerID: dee2d9461cdaafd7d5fb3a3d7ecc02b836e46e12221717d325b1fa469d9592d4
cni.projectcalico.org/podIP: 10.233.119.120/32
cni.projectcalico.org/podIPs: 10.233.119.120/32
spec:
volumes:
- name: data-file
persistentVolumeClaim:
claimName: sapp-ob-cloud-zaob-zone1-0-data-file
- name: data-log
persistentVolumeClaim:
claimName: sapp-ob-cloud-zaob-zone1-0-data-log
- name: log
persistentVolumeClaim:
claimName: sapp-ob-cloud-zaob-zone1-0-log
- name: obagent-conf-file
persistentVolumeClaim:
claimName: sapp-ob-cloud-zaob-zone1-0-obagent-conf-file
- name: kube-api-access-h5pt7
projected:
sources:
- serviceAccountToken:
expirationSeconds: 3607
path: token
- configMap:
name: kube-root-ca.crt
items:
- key: ca.crt
path: ca.crt
- downwardAPI:
items:
- path: namespace
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
defaultMode: 420
containers:
- name: observer
image: 'oceanbasedev/oceanbase-cn:v3.1.4-10000092022071511-snapshot-08172042'
ports:
- name: cable
containerPort: 19001
protocol: TCP
- name: mysql
containerPort: 2881
protocol: TCP
- name: rpc
containerPort: 2882
protocol: TCP
resources:
limits:
cpu: '2'
memory: 4Gi
requests:
cpu: '2'
memory: 4Gi
volumeMounts:
- name: data-file
mountPath: /home/admin/data_file
- name: data-log
mountPath: /home/admin/data_log
- name: log
mountPath: /home/admin/log
- name: kube-api-access-h5pt7
readOnly: true
mountPath: /var/run/secrets/kubernetes.io/serviceaccount
readinessProbe:
httpGet:
path: /api/ob/readiness
port: 19001
scheme: HTTP
timeoutSeconds: 1
periodSeconds: 2
successThreshold: 1
failureThreshold: 3
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: Always
- name: obagent
image: 'oceanbase/obagent:1.2.0'
ports:
- name: monagent
containerPort: 8088
protocol: TCP
resources: {}
volumeMounts:
- name: obagent-conf-file
mountPath: /home/admin/obagent/conf
- name: kube-api-access-h5pt7
readOnly: true
mountPath: /var/run/secrets/kubernetes.io/serviceaccount
readinessProbe:
httpGet:
path: /metrics/stat
port: 8088
scheme: HTTP
timeoutSeconds: 1
periodSeconds: 2
successThreshold: 1
failureThreshold: 3
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: Always
restartPolicy: Always
terminationGracePeriodSeconds: 30
dnsPolicy: ClusterFirst
nodeSelector:
topology.kubernetes.io/zone: zone1
serviceAccountName: default
serviceAccount: default
nodeName: work01
securityContext: {}
schedulerName: default-scheduler
tolerations:
- key: node.kubernetes.io/not-ready
operator: Exists
effect: NoExecute
tolerationSeconds: 300
- key: node.kubernetes.io/unreachable
operator: Exists
effect: NoExecute
tolerationSeconds: 300
priority: 0
enableServiceLinks: true
preemptionPolicy: PreemptLowerPriority

The corresponding YAML file.


The disk is mounted successfully, and the data directory contains data.



The whole K8s cluster has ample resources.

[root@master deploy]# more obcluster.yaml
apiVersion: OceanBase Cloud
kind: OBCluster
metadata:
  name: ob-cloud
  namespace: obcluster
spec:
  imageRepo: oceanbasedev/oceanbase-cn
  tag: v3.1.4-10000092022071511-snapshot-08172042
  imageObagent: oceanbase/obagent:1.2.0
  clusterID: 1
  topology:
    - cluster: zaob
      zone:
        - name: zone1
          region: region1
          nodeSelector:
            topology.kubernetes.io/zone: zone1
          replicas: 1
        - name: zone2
          region: region1
          nodeSelector:
            topology.kubernetes.io/zone: zone1
          replicas: 1
        - name: zone3
          region: region1
          nodeSelector:
            topology.kubernetes.io/zone: zone1
          replicas: 1
      parameters:
        - name: log_disk_size
          value: "6G"
  resources:
    cpu: 2
    memory: 4Gi
    storage:
      - name: data-file
        storageClassName: "local-path"
        size: 10Gi
      - name: data-log
        storageClassName: "local-path"
        size: 4Gi
      - name: log
        storageClassName: "local-path"
        size: 3Gi
      - name: obagent-conf-file
        storageClassName: "local-path"
        size: 1Gi
    volume:
      name: backup
      path: /root/data/nfs

The obcluster.yaml file used for deployment.


For resource allocation you can refer to this; the defaults are basically the recommended minimum configuration, and anything smaller may fail to start.

Also, the current problem is that the zone2 and zone3 containers were never created, so some precondition is still unmet. Is a larger disk directory available? Please also run describe on the zone2 and zone3 pods. One more thing to confirm: the config file sets nodeSelector to zone1 for every zone. How are the nodes actually labeled? To spread pods across three nodes, the three nodes need three different labels, with each zone's nodeSelector configured to match.
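The labeling step might look like this (the node names work02 and work03 are assumptions, since only work01 appears in this thread); afterwards each zone's nodeSelector in obcluster.yaml should point at its own zone label:

```
kubectl label node work01 topology.kubernetes.io/zone=zone1
kubectl label node work02 topology.kubernetes.io/zone=zone2
kubectl label node work03 topology.kubernetes.io/zone=zone3

# Verify the labels
kubectl get nodes -L topology.kubernetes.io/zone
```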





Every node has ample resources; the disks are mostly idle.

kind: Pod
apiVersion: v1
metadata:
name: sapp-ob-cloud-zaob-zone3-0
namespace: obcluster
labels:
app: sapp-ob-cloud
index: '0'
subset: zone3
spec:
volumes:
- name: data-file
persistentVolumeClaim:
claimName: sapp-ob-cloud-zaob-zone3-0-data-file
- name: data-log
persistentVolumeClaim:
claimName: sapp-ob-cloud-zaob-zone3-0-data-log
- name: log
persistentVolumeClaim:
claimName: sapp-ob-cloud-zaob-zone3-0-log
- name: obagent-conf-file
persistentVolumeClaim:
claimName: sapp-ob-cloud-zaob-zone3-0-obagent-conf-file
- name: kube-api-access-xf4zh
projected:
sources:
- serviceAccountToken:
expirationSeconds: 3607
path: token
- configMap:
name: kube-root-ca.crt
items:
- key: ca.crt
path: ca.crt
- downwardAPI:
items:
- path: namespace
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
defaultMode: 420
containers:
- name: observer
image: 'oceanbasedev/oceanbase-cn:v3.1.4-10000092022071511-snapshot-08172042'
ports:
- name: cable
containerPort: 19001
protocol: TCP
- name: mysql
containerPort: 2881
protocol: TCP
- name: rpc
containerPort: 2882
protocol: TCP
resources:
limits:
cpu: '2'
memory: 4Gi
requests:
cpu: '2'
memory: 4Gi
volumeMounts:
- name: data-file
mountPath: /home/admin/data_file
- name: data-log
mountPath: /home/admin/data_log
- name: log
mountPath: /home/admin/log
- name: kube-api-access-xf4zh
readOnly: true
mountPath: /var/run/secrets/kubernetes.io/serviceaccount
readinessProbe:
httpGet:
path: /api/ob/readiness
port: 19001
scheme: HTTP
timeoutSeconds: 1
periodSeconds: 2
successThreshold: 1
failureThreshold: 3
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: Always
- name: obagent
image: 'oceanbase/obagent:1.2.0'
ports:
- name: monagent
containerPort: 8088
protocol: TCP
resources: {}
volumeMounts:
- name: obagent-conf-file
mountPath: /home/admin/obagent/conf
- name: kube-api-access-xf4zh
readOnly: true
mountPath: /var/run/secrets/kubernetes.io/serviceaccount
readinessProbe:
httpGet:
path: /metrics/stat
port: 8088
scheme: HTTP
timeoutSeconds: 1
periodSeconds: 2
successThreshold: 1
failureThreshold: 3
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: Always
restartPolicy: Always
terminationGracePeriodSeconds: 30
dnsPolicy: ClusterFirst
nodeSelector:
topology.kubernetes.io/zone: zone1
serviceAccountName: default
serviceAccount: default
securityContext: {}
schedulerName: default-scheduler
tolerations:
- key: node.kubernetes.io/not-ready
operator: Exists
effect: NoExecute
tolerationSeconds: 300
- key: node.kubernetes.io/unreachable
operator: Exists
effect: NoExecute
tolerationSeconds: 300
priority: 0
enableServiceLinks: true
preemptionPolicy: PreemptLowerPriority

This is the zone3 YAML file.

kind: Pod
apiVersion: v1
metadata:
name: sapp-ob-cloud-zaob-zone2-0
namespace: obcluster
labels:
app: sapp-ob-cloud
index: '0'
subset: zone2
annotations:
cni.projectcalico.org/containerID: a7fb6346ea63dd0abb91017d4544cddf80fa797745f540179970143a0b4a58a9
cni.projectcalico.org/podIP: 10.233.119.40/32
cni.projectcalico.org/podIPs: 10.233.119.40/32
spec:
volumes:
- name: data-file
persistentVolumeClaim:
claimName: sapp-ob-cloud-zaob-zone2-0-data-file
- name: data-log
persistentVolumeClaim:
claimName: sapp-ob-cloud-zaob-zone2-0-data-log
- name: log
persistentVolumeClaim:
claimName: sapp-ob-cloud-zaob-zone2-0-log
- name: obagent-conf-file
persistentVolumeClaim:
claimName: sapp-ob-cloud-zaob-zone2-0-obagent-conf-file
- name: kube-api-access-8n4fl
projected:
sources:
- serviceAccountToken:
expirationSeconds: 3607
path: token
- configMap:
name: kube-root-ca.crt
items:
- key: ca.crt
path: ca.crt
- downwardAPI:
items:
- path: namespace
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
defaultMode: 420
containers:
- name: observer
image: 'oceanbasedev/oceanbase-cn:v3.1.4-10000092022071511-snapshot-08172042'
ports:
- name: cable
containerPort: 19001
protocol: TCP
- name: mysql
containerPort: 2881
protocol: TCP
- name: rpc
containerPort: 2882
protocol: TCP
resources:
limits:
cpu: '2'
memory: 4Gi
requests:
cpu: '2'
memory: 4Gi
volumeMounts:
- name: data-file
mountPath: /home/admin/data_file
- name: data-log
mountPath: /home/admin/data_log
- name: log
mountPath: /home/admin/log
- name: kube-api-access-8n4fl
readOnly: true
mountPath: /var/run/secrets/kubernetes.io/serviceaccount
readinessProbe:
httpGet:
path: /api/ob/readiness
port: 19001
scheme: HTTP
timeoutSeconds: 1
periodSeconds: 2
successThreshold: 1
failureThreshold: 3
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: Always
- name: obagent
image: 'oceanbase/obagent:1.2.0'
ports:
- name: monagent
containerPort: 8088
protocol: TCP
resources: {}
volumeMounts:
- name: obagent-conf-file
mountPath: /home/admin/obagent/conf
- name: kube-api-access-8n4fl
readOnly: true
mountPath: /var/run/secrets/kubernetes.io/serviceaccount
readinessProbe:
httpGet:
path: /metrics/stat
port: 8088
scheme: HTTP
timeoutSeconds: 1
periodSeconds: 2
successThreshold: 1
failureThreshold: 3
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: Always
restartPolicy: Always
terminationGracePeriodSeconds: 30
dnsPolicy: ClusterFirst
nodeSelector:
topology.kubernetes.io/zone: zone1
serviceAccountName: default
serviceAccount: default
nodeName: work01
securityContext: {}
schedulerName: default-scheduler
tolerations:
- key: node.kubernetes.io/not-ready
operator: Exists
effect: NoExecute
tolerationSeconds: 300
- key: node.kubernetes.io/unreachable
operator: Exists
effect: NoExecute
tolerationSeconds: 300
priority: 0
enableServiceLinks: true
preemptionPolicy: PreemptLowerPriority

This is the zone2 YAML file.

Looking at the config file, everything is set to zone1, so all pods land on one node. Also, the path used by local-path isn't very large; with the current configuration it cannot hold the data of three nodes.
First fix the labels that the nodeSelector entries reference, and size the resources at least to the minimums stated in the docs, otherwise startup will fail later anyway. Right now the zone2 and zone3 containers haven't even been created, so we haven't reached the process-startup step yet; if resources are too small, the observer process will also fail to start.

I missed that; I only changed one of them.

OK, please change that first. We usually test on a single node, so the sample uses a single value.

All the data volumes are now bound.


The agents on all three nodes have started.

The observer error log:

2022/12/14 15:10:22 observer 675 [running]

2022-12-14T15:10:24.25212+08:00 INFO [7,] caller=server/http.go:95:func1: request: from 192.168.0.118, method GET, path /api/ob/readiness, response: status code 500, latency 58.35µs, comment

2022-12-14T15:10:26.25112+08:00 INFO [7,] caller=server/http.go:95:func1: request: from 192.168.0.118, method GET, path /api/ob/readiness, response: status code 500, latency 65.087µs, comment

2022-12-14T15:10:28.27412+08:00 INFO [7,] caller=server/http.go:95:func1: request: from 192.168.0.118, method GET, path /api/ob/readiness, response: status code 500, latency 70.657µs, comment

2022-12-14T15:10:30.2608+08:00 INFO [7,] caller=server/http.go:95:func1: request: from 192.168.0.118, method GET, path /api/ob/readiness, response: status code 500, latency 60.947µs, comment

2022-12-14T15:10:32.25161+08:00 INFO [7,] caller=server/http.go:95:func1: request: from 192.168.0.118, method GET, path /api/ob/readiness, response: status code 500, latency 56.746µs, comment

2022-12-14T15:10:32.44341+08:00 ERROR [7,] caller=observer/check.go:58:checkerObserver: observer process not running, try restart…

2022-12-14T15:10:33.74826+08:00 INFO [7,] caller=server/http.go:95:func1: request: from 192.168.0.118, method GET, path /api/ob/readiness, response: status code 500, latency 59.147µs, comment

2022-12-14T15:10:34.25344+08:00 INFO [7,] caller=server/http.go:95:func1: request: from 192.168.0.118, method GET, path /api/ob/readiness, response: status code 500, latency 61.924µs, comment

2022-12-14T15:10:36.25073+08:00 INFO [7,] caller=server/http.go:95:func1: request: from 192.168.0.118, method GET, path /api/ob/readiness, response: status code 500, latency 58.77µs, comment

2022-12-14T15:10:38.28176+08:00 INFO [7,] caller=server/http.go:95:func1: request: from 192.168.0.118, method GET, path /api/ob/readiness, response: status code 500, latency 69.05µs, comment

2022-12-14T15:10:40.30433+08:00 INFO [7,] caller=server/http.go:95:func1: request: from 192.168.0.118, method GET, path /api/ob/readiness, response: status code 500, latency 267.218µs, comment

2022-12-14T15:10:42.25108+08:00 INFO [7,] caller=server/http.go:95:func1: request: from 192.168.0.118, method GET, path /api/ob/readiness, response: status code 500, latency 56.719µs, comment

2022-12-14T15:10:42.44486+08:00 INFO [7,] caller=shell/exec.go:97:execute: execute shell command start, command=Command{user=admin, program=sh, cmd=cd /home/admin/oceanbase; ulimit -s 10240; ulimit -c unlimited; LD_LIBRARY_PATH=/home/admin/oceanbase/lib:$LD_LIBRARY_PATH LD_PRELOAD='' /home/admin/oceanbase/bin/observer, timeout=10s}

2022-12-14T15:10:42.69269+08:00 INFO [7,] caller=shell/exec.go:129:execute: execute shell command end, command=Command{user=admin, program=sh, cmd=cd /home/admin/oceanbase; ulimit -s 10240; ulimit -c unlimited; LD_LIBRARY_PATH=/home/admin/oceanbase/lib:$LD_LIBRARY_PATH LD_PRELOAD='' /home/admin/oceanbase/bin/observer, timeout=10s}

2022/12/14 15:10:42 observer 695 [running]

2022-12-14T15:10:44.25272+08:00 INFO [7,] caller=server/http.go:95:func1: request: from 192.168.0.118, method GET, path /api/ob/readiness, response: status code 500, latency 52.252µs, comment
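As a quick aside, a flood of probe lines like the above can be tallied by status code straight from the shell. A small sketch, using two inline sample lines in the format shown in the excerpt:

```shell
# Two sample lines mimicking the agent's readiness log format
log='path /api/ob/readiness, response: status code 500, latency 58.35us
path /api/ob/readiness, response: status code 500, latency 65.087us'

# Tally occurrences of each HTTP status code
printf '%s\n' "$log" | grep -o 'status code [0-9]*' | sort | uniq -c
```

Against a real pod the sample variable would be replaced by the agent's log file.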

This log shows that the observer is already being started. Go into the container and check whether the observer process is running; once the observer is up it updates its ready state, and subsequent requests to the readiness endpoint will succeed.
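One way to do that check (command sketch; the pod name is the zone1 pod from this thread, adjust for the other zones, and the curl call assumes curl exists in the observer image):

```
# Is the observer process alive inside the container?
kubectl exec -n obcluster sapp-ob-cloud-zaob-zone1-0 -c observer -- ps -ef | grep observer

# Once it is up, the readiness endpoint should flip from 500 to 200
kubectl exec -n obcluster sapp-ob-cloud-zaob-zone1-0 -c observer -- \
  curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:19001/api/ob/readiness
```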

Deployed by following the official doc https://www.oceanbase.com/docs/common-oceanbase-database-cn-1000000000218236.


Running the deployment fails with: Error from server (BadRequest): error when creating "obcluster.yaml": OBCluster in version "v1" cannot be handled as a OBCluster: strict decoding error: unknown field "spec.resources.volume"
Ran kubectl explain OBCluster.spec.resources.storage to inspect the API details.
The field really isn't defined there at all... the official doc is seriously off.
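One workaround worth trying (my assumption: this CRD version simply doesn't support the backup volume field) is to drop the rejected block from obcluster.yaml and re-apply. The trimmed spec.resources would then look like this:

```yaml
# spec.resources with the "volume" block (name: backup, path: /root/data/nfs)
# removed, since the CRD rejects the unknown field spec.resources.volume
resources:
  cpu: 2
  memory: 4Gi
  storage:
    - name: data-file
      storageClassName: "local-path"
      size: 10Gi
    - name: data-log
      storageClassName: "local-path"
      size: 4Gi
    - name: log
      storageClassName: "local-path"
      size: 3Gi
    - name: obagent-conf-file
      storageClassName: "local-path"
      size: 1Gi
```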