【Environment】
Test environment
【OB or other components】
libobclient-2.0.1-3.el7.x86_64.rpm
obclient-2.0.1-2.el7.x86_64.rpm
ob-deploy-1.3.3-11.el7.x86_64.rpm
obproxy-ce-3.2.3-2.el7.x86_64.rpm
oceanbase-ce-3.1.3-10100032022041510.el7.x86_64.rpm
oceanbase-ce-devel-3.1.3-10100032022041510.el7.x86_64.rpm
oceanbase-ce-libs-3.1.3-10100032022041510.el7.x86_64.rpm
oceanbase-ce-utils-3.1.3-10100032022041510.el7.x86_64.rpm
【Version】
ocp-3.3.0-ce
【Problem description】
OCP is a management server that runs no actual business workload, yet its hardware requirements are steep (the docs call for 24 CPUs and 64 GB of memory). I currently have it running on an 8C/32G VM with a trimmed-down configuration, and even that is a struggle (the config and runtime stats are attached at the end).
On top of that, the OceanBase cluster OCP needs for its metadb cannot be the existing business OceanBase cluster, and making OCP itself highly available takes several more servers.
The situations we face in our projects are as follows:
- Some projects run on 20+ servers (bare metal, not cloud), each with a high spec; dedicating such machines to OCP wastes a lot of capacity.
- Some projects have only four or five machines, roughly the minimum node count for one cluster; low-spec machines cannot run OCP, and high-spec ones are not cost-effective.
- Database upgrade requirements: some projects have modest workloads that traditional MySQL could handle, but the tender documents make a database upgrade mandatory; the initial scale is tiny, and nodes must be added gradually as the business grows.
My current thinking is that in most cases a single-node OCP deployment is enough. If OCP goes down, restarting it manually is acceptable, as long as the business OceanBase nodes keep running and stay highly available (a tiny watchdog could even automate that restart; a sketch follows).
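As a sketch of that watchdog (my own idea, not an official OCP tool): the script below assumes OCP CE runs as a Docker container named ocp, which is a guess to verify with docker ps, and that OCP answers HTTP on port 8080 as set in the config.yaml below.

ocp_watchdog.py (hypothetical name):

import subprocess
import urllib.error
import urllib.request

OCP_URL = "http://127.0.0.1:8080/"  # OCP HTTP port from the config below
CONTAINER = "ocp"                   # assumed container name; verify with `docker ps`

def ocp_alive(timeout=10):
    # Any HTTP answer counts as alive, even an error status page.
    try:
        urllib.request.urlopen(OCP_URL, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True
    except Exception:
        return False

if __name__ == "__main__":
    if not ocp_alive():
        # Restart only the OCP container; the metadb observer is untouched.
        subprocess.call(["docker", "restart", CONTAINER])

Run it from cron (e.g. * * * * * /usr/bin/python3 /root/ocp_watchdog.py) and the manual restart becomes hands-off.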
Below is the current state of my 8C/32G VM.

ocp-3.3.0-ce config.yaml:
precheck_ignore: true
create_metadb_cluster: true
clean_metadb_cluster: false
ob_cluster:
  name: obcluster
  home_path: /home/admin/oceanbase
  root_password: 'xxxxxx'
  data_path: /data/1
  redo_path: /redo/log1
  sql_port: 2881
  rpc_port: 2882
  zones:
    - name: zone1
      servers:
        - xx.xx.xx.102
  meta:
    tenant: meta_tenant
    user: meta_user
    password: 'xxxxxx'
    database: meta_database
    cpu: 2
    memory: 3
  monitor:
    tenant: monitor_tenant
    user: monitor_user
    password: 'xxxxxx'
    database: monitor_database
    cpu: 4
    memory: 3
obproxy:
  home_path: /home/admin/obproxy
  port: 2883
  servers:
    - xx.xx.xx.102
ssh:
  port: 22
  user: root
  auth_method: pubkey
  timeout: 60
  password: 'xxxxxx'
ocp:
  name: 'ocp'
  process:
    port: 8080
    log_dir: /home/admin/ocp/log
  servers:
    - xx.xx.xx.102
  resource:
    cpu: 4
    memory: 3
  auth:
    user: admin
    password: xxxxxx
  metadb:
    host: xx.xx.xx.102
    port: 2883
    meta_user: meta_user@meta_tenant#obcluster
    meta_password: 'xxxxxx'
    meta_database: meta_database
    monitor_user: monitor_user@monitor_tenant#obcluster
    monitor_password: 'xxxxxx'
    monitor_database: monitor_database
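For what it's worth, once the installer finishes you can confirm the meta/monitor unit specs actually landed by querying the metadb. A minimal sketch with pymysql; note that __all_unit_config and its byte-valued max_memory column are OB 3.x internals that may change across versions, and the host/password are the placeholders from the config above.

check_units.py (hypothetical name):

import pymysql  # pip install pymysql

# Connect as root of the sys tenant through the obproxy from the config.
conn = pymysql.connect(host="xx.xx.xx.102", port=2883,
                       user="root@sys#obcluster", password="xxxxxx",
                       database="oceanbase")
try:
    with conn.cursor() as cur:
        cur.execute("SELECT name, max_cpu, max_memory FROM __all_unit_config")
        for name, max_cpu, max_memory in cur.fetchall():
            print("%s: %s CPU, %.1f GiB" % (name, max_cpu, max_memory / (1 << 30)))
finally:
    conn.close()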
Current machine status (glances output):
oceanbaseocp.novalocal  Uptime: 3 days, 0:15:41

CPU   28.3%  (user 25.1%, system 3.5%, idle 70.3%), 8-core
MEM   98.4%  (total 31.3G, used 30.8G, free 513M)
SWAP   0.0%  (total 0, used 0, free 0)
LOAD  1 min: 13.10, 5 min: 10.64, 15 min: 10.27

NETWORK (Rx/s / Tx/s):  docker0 0b/0b, eth0 213Kb/8Kb, lo 59.0Mb/59.0Mb
DISK I/O (R/s / W/s):   vda 38K/397K, vda1 38K/397K, vdb/vdb1/vdb2 0/0
FILE SYS (Used/Total):  / (vda1) 22.7G/100.0G, /data 153G/192G,
                        _ntainers 22.7G/100.0G, _overlay2 22.7G/100.0G

TASKS 154 (2185 thr), 2 run, 151 slp, 1 oth

 CPU%  MEM%    PID  USER   Command (truncated by glances)
169.6  74.1  21685  root   /home/admin/oceanbase/bin/observe
 15.8   8.7  11320  500    /usr/lib/jvm/java-1.8.0/bin/java
 10.1   0.6  21575  root   /home/admin/obproxy/bin/obproxy -
  4.7   0.0  32439  root   /usr/bin/python /usr/bin/glances
  1.3   0.6  31032  500    obproxy -p2888 -n ocp_obproxy -o
  0.3   0.1   1201  root   /usr/bin/dockerd-current --add-ru
(remaining tasks all at 0.0/0.0: supervisord, dhclient, tuned,
docker-containerd, polkitd, systemd-journald, rsyslogd, sshd, postfix,
systemd, the obproxyd wrapper script, and two shells)
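As the process list shows, the observer alone holds ~74% of RAM, which is what pushes MEM to 98.4% and the load past 10. One lever for a squeezed box like this (my suggestion, not from the OCP docs) is to cap the observer's total footprint with the cluster parameter memory_limit; the 16G below is an assumption sized to leave room for OCP's Java process, obproxy, and the OS on a 32G VM.

cap_observer_memory.py (hypothetical name):

import pymysql

conn = pymysql.connect(host="xx.xx.xx.102", port=2883,
                       user="root@sys#obcluster", password="xxxxxx")
try:
    with conn.cursor() as cur:
        # ~6G is already reserved by the meta/monitor tenant units, so do
        # not go much lower than this or tenant units may fail to allocate.
        cur.execute("ALTER SYSTEM SET memory_limit = '16G'")
finally:
    conn.close()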