Community edition OCP installation error

Test environment: a VirtualBox VM running CentOS 7.6
Community edition OCP 3.3.0-bp2
Running the install command reports an error:
./ocp_installer.sh install -c config.yaml -i ./ocp-installer.tar.gz -o ./ocp.tar.gz
2025-07-08 10:53:11 - INFO - 1 - [ob_precheck.py:28] - ob precheck using command: sudo /tmp/precheck-5336f5ab-f51b-4e85-b120-bad5ca6715d3.sh -m ob
2025-07-08 10:53:11 - ERROR - 1 - [ob_precheck.py:32] - precheck for ocp on host 192.168.56.80 failed
2025-07-08 10:53:11 - INFO - 1 - [ob_precheck.py:35] - ob precheck result: Machine Role: ob
Peer IP List:
Machine Type: PHY
Inspect Mode: FALSE

check CPU count: 8 > 8 … PASS
check total MEM: 33 GB < 64 GB … EXPECT >= 64 GB … FAIL
TIPS: replace another machine with more MEM
check linux version: CentOS Linux release 7.6.1810 (Core) … PASS
check SELinux status: Disabled … PASS
check account [admin] and home dir, exist … PASS
check service [firewalld]: inactive … PASS
check service [firewalld]: disabled … PASS

SUMMARY OF ISSUES IN PRE-CHECK

check total MEM: 33 GB < 64 GB … EXPECT >= 64 GB … FAIL
TIPS: replace another machine with more MEM
,
2025-07-08 10:53:11 - INFO - 1 - [ob_install.py:82] - clean obd dir
2025-07-08 10:53:11 - INFO - 1 - [ob_install.py:91] - install ob using obd
2025-07-08 10:53:11 - INFO - 1 - [ob_install.py:95] - deploy task with config:
obproxy-ce:
  depends:
    - oceanbase-ce
  global:
    home_path: /home/admin/obproxy
    listen_port: 2883
  servers:
    - 192.168.56.80
oceanbase-ce:
  global:
    appname: obcluster
    data_dir: /data/2
    home_path: /home/admin/oceanbase
    mysql_port: 2881
    redo_dir: /data/log2
    root_password: '123456'
    rpc_port: 2882
  server1:
    zone: zone1
  servers:
    - ip: 192.168.56.80
      name: server1
user:
  key_file: ''
  password: '123456'
  port: 22
  timeout: 10
  username: root

2025-07-08 10:53:11 - INFO - 1 - [ob_install.py:100] - deploy obcluster command: obd cluster autodeploy obcluster -c /tmp/ocp_cluster.yaml
2025-07-08 10:53:31 - INFO - 1 - [ob_install.py:104] - deploy obcluster got result Package obproxy-ce-3.2.3 is available.
Package oceanbase-ce-3.1.4 is available.
install obproxy-ce-3.2.3 for local ok
install oceanbase-ce-3.1.4 for local ok
Cluster param config check ok
Open ssh connection ok
Generate obproxy configuration ok
Generate observer configuration ok
obproxy-ce-3.2.3 already installed.
oceanbase-ce-3.1.4 already installed.
+-------------------------------------------------------------------------------------------+
|                                          Packages                                          |
+--------------+---------+-----------------------+------------------------------------------+
| Repository   | Version | Release               | Md5                                      |
+--------------+---------+-----------------------+------------------------------------------+
| obproxy-ce   | 3.2.3   | 2.el7                 | bdd299bda2bdf71fd0fd3f155b6a2e39dffd2be1 |
| oceanbase-ce | 3.1.4   | 10000092022071511.el7 | c5cd94f4f190317b6a883c58a26460a506205ce6 |
+--------------+---------+-----------------------+------------------------------------------+
Repository integrity check ok
Parameter check ok
Open ssh connection ok
Remote obproxy-ce-3.2.3-bdd299bda2bdf71fd0fd3f155b6a2e39dffd2be1 repository install ok
Remote obproxy-ce-3.2.3-bdd299bda2bdf71fd0fd3f155b6a2e39dffd2be1 repository lib check ok
Remote oceanbase-ce-3.1.4-c5cd94f4f190317b6a883c58a26460a506205ce6 repository install ok
Remote oceanbase-ce-3.1.4-c5cd94f4f190317b6a883c58a26460a506205ce6 repository lib check ok
Cluster status check ok
Initializes obproxy work home ok
Initializes observer work home ok
obcluster deployed
Get local repositories and plugins ok
Open ssh connection ok
Load cluster param plugin ok
Check before start obproxy ok
Check before start observer x
[WARN] (192.168.56.80) clog and data use the same disk (/data)
[ERROR] server1(192.168.56.80) lo fail to ping 192.168.56.80. Please check configuration devname

The config file is as follows:

# OCP deploy config
# Note:
# Do not use 127.0.0.1 or hostname as server address
# When a server has both a public ip and a private ip, and the private ip is connectable, use the private ip for a faster connection
# If a vip is configured, it should already be created and bound to the right server and port; the installation script won't do any vip maintenance, it only uses it to connect to the service

# Ignore precheck errors
# It's recommended not to ignore precheck errors
precheck_ignore: true

# Create an obcluster as OCP's metadb
create_metadb_cluster: true

# Clean OCP's metadb cluster when uninstalling
clean_metadb_cluster: true

# Metadb cluster deploy config
ob_cluster:
  name: obcluster
  home_path: /home/admin/oceanbase
  root_password: '123456'
  # The directory for data storage, it's recommended to use an independent path
  data_path: /data/2
  # The directory for clog, ilog, and slog, it's recommended to use an independent path
  redo_path: /data/log2
  sql_port: 2881
  rpc_port: 2882
  zones:
    - name: zone1
      servers:
        - 192.168.56.80
  ## custom obd config for obcluster
  #custom_config:
  #  - key: devname
  #    value: enp0s8

  # Meta user info
  meta:
    tenant: meta_tenant
    user: meta_user
    password: meta_password
    database: meta_database
    cpu: 2
    # Memory configs in GB, 4 means 4GB
    memory: 4

  # Monitor user info
  monitor:
    tenant: monitor_tenant
    user: monitor_user
    password: monitor_password
    database: monitor_database
    cpu: 2
    # Memory configs in GB, 8 means 8GB
    memory: 4

# Obproxy to connect metadb cluster
obproxy:
  home_path: /home/admin/obproxy
  port: 2883
  servers:
    - 192.168.56.80

  ## custom config for obproxy
  # custom_config:
  #   - key: clustername
  #     value: obcluster

  ## Vip is optional; if vip is not configured, one of the obproxy servers' addresses will be used
  # vip:
  #   address: 1.1.1.1
  #   port: 2883

# Ssh auth config
ssh:
  port: 22
  user: root
  # auth method, supports password and pubkey
  auth_method: password
  timeout: 10
  password: '123456'

# OCP config
ocp:
  # ocp container's name
  name: 'ocp'

  # OCP process listen port and log dir on host
  process:
    port: 8080
    log_dir: /tmp/ocp/log
  servers:
    - 192.168.56.80
  # OCP container's resource
  resource:
    cpu: 2
    # Memory configs in GB, 8 means 8GB
    memory: 8
  # Vip is optional; if vip is not configured, one of the ocp servers' addresses will be used
  # vip:
  #   address: 1.1.1.1
  #   port: 8080
  # OCP basic auth config, used when upgrading ocp
  auth:
    user: admin
    password: admin
  # OCP metadb config for ocp installation; if "create_metadb_cluster" is true, this part will be replaced with the configuration of the metadb cluster and obproxy
  metadb:
    host: 192.168.56.80
    port: 2883
    meta_user: meta_user@meta_tenant#obcluster
    meta_password: meta_password
    meta_database: meta_database
    monitor_user: monitor_user@monitor_tenant#obcluster
    monitor_password: monitor_password
    monitor_database: monitor_database

I'd recommend deploying OCP with the obd web graphical (white-screen) installer instead.
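A minimal sketch of that route, assuming obd is already installed on the host (the exact wizard pages vary by obd version):

# Start OBD's web console, then open the printed URL in a browser
# and walk through the OCP deployment wizard interactively.
obd web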

The error is unchanged.

Also, the 3.x line is no longer maintained; deploying a 4.x cluster is recommended. Please post the error you get when running obd web.

Repository integrity check ok
Parameter check ok
Open ssh connection ok
Remote obproxy-ce-3.2.3-bdd299bda2bdf71fd0fd3f155b6a2e39dffd2be1 repository install ok
Remote obproxy-ce-3.2.3-bdd299bda2bdf71fd0fd3f155b6a2e39dffd2be1 repository lib check ok
Remote oceanbase-ce-3.1.4-c5cd94f4f190317b6a883c58a26460a506205ce6 repository install ok
Remote oceanbase-ce-3.1.4-c5cd94f4f190317b6a883c58a26460a506205ce6 repository lib check ok
Cluster status check ok
Initializes obproxy work home ok
Initializes observer work home ok
obcluster deployed
Get local repositories and plugins ok
Open ssh connection ok
Load cluster param plugin ok
Check before start obproxy ok
Check before start observer x
[WARN] OBD-1007: (192.168.56.80) The recommended number of open files is 655350 (Current value: %s)
[WARN] (192.168.56.80) clog and data use the same disk (/data)
[ERROR] server1(192.168.56.80) lo fail to ping 192.168.56.80. Please check configuration devname

This looks like a NIC problem.
Try ping 192.168.56.80 and see whether it gets through.
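For reference, the failing step appears to be an interface-bound ping: obd pings the server address from the configured devname, which here fell back to lo. A quick manual reproduction, assuming that behaviour and that enp0s8 is the interface holding 192.168.56.80 (see the ip a output below):

# Ping the server address bound to a specific interface, roughly what the check does
ping -W 1 -c 1 -I lo 192.168.56.80       # bound to loopback, matches the failing check
ping -W 1 -c 1 -I enp0s8 192.168.56.80   # bound to the host-only NIC; should succeed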

It pings fine locally. I don't know which address the check pings from and to, or where exactly this configuration check lives in the code.

[root@ocpserver oceanbase]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:f2:23:98 brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3
valid_lft 82419sec preferred_lft 82419sec
inet6 fd00::a00:27ff:fef2:2398/64 scope global mngtmpaddr dynamic
valid_lft 86167sec preferred_lft 14167sec
inet6 fe80::a00:27ff:fef2:2398/64 scope link
valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:5f:37:e6 brd ff:ff:ff:ff:ff:ff
inet 192.168.56.80/24 brd 192.168.56.255 scope global enp0s8
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fe5f:37e6/64 scope link
valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:99:b5:e8:b9 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
[root@ocpserver oceanbase]#
My NICs: the second one is NAT, the third one is host-only.

[root@ocpserver oceanbase]# ping 192.168.56.80
PING 192.168.56.80 (192.168.56.80) 56(84) bytes of data.
64 bytes from 192.168.56.80: icmp_seq=1 ttl=64 time=0.048 ms
64 bytes from 192.168.56.80: icmp_seq=2 ttl=64 time=0.378 ms
64 bytes from 192.168.56.80: icmp_seq=3 ttl=64 time=0.019 ms
^C
--- 192.168.56.80 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2006ms
rtt min/avg/max/mdev = 0.019/0.148/0.378/0.163 ms

[root@ocpserver oceanbase]# ip route show
default via 192.168.56.1 dev enp0s8
10.0.2.0/24 dev enp0s3 proto kernel scope link src 10.0.2.15
169.254.0.0/16 dev enp0s3 scope link metric 1002
169.254.0.0/16 dev enp0s8 scope link metric 1003
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
192.168.56.0/24 dev enp0s8 proto kernel scope link src 192.168.56.80
[root@ocpserver oceanbase]#

This looks a bit messy. Is there a NIC setting in the yaml file? Please share the yaml file.

The memory check did not pass.

Good material to learn from :+1: :+1: :+1:

The memory is fine; that check can be skipped.

执行日志.txt (3.9 KB)
config.txt (3.0 KB)

Also, using the ocp-all-in-one-4.3.5-20250319105844.el7.x86_64.tar.gz package with an obd web install, I was able to install OCP successfully; the failing setup keeps reporting the error above.

That should be a problem with your devname configuration; you can change it under the "more configurations" section of the graphical installer.
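For the command-line config posted earlier, the equivalent change would be to uncomment the custom_config block under ob_cluster so the observer binds to the host-only interface instead of lo; a sketch, assuming enp0s8 is the interface carrying 192.168.56.80:

ob_cluster:
  # ... keep the existing settings ...
  custom_config:
    - key: devname
      value: enp0s8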

I never configured that device name.