AnolisOS-8.6 (RHCK) OCP deployment fails: ocp still not ok, check failed

[Environment] Test environment
[OB or other components]
ocp-3.3.0-ce-bp1-x86_64
oceanbase-ce-3.1.4
[Problem description] Deploying OCP on AnolisOS-8.6 fails with: ocp still not ok, check failed
[Reproduction steps]

Deployment command:

./ocp_installer.sh install -c config.yaml -k /root/.ssh/id_rsa -i ./ocp-installer.tar.gz -o ./ocp.tar.gz

Installer output (last portion):

start create database monitor_database
monitordb all sql files:
['monitordb_ddl_2.5.0.sql', 'monitordb_ddl_2.5.1.sql', 'monitordb_ddl_3.1.0.sql', 'monitordb_ddl_3.1.1.sql', 'monitordb_ddl_3.1.2.sql', 'monitordb_ddl_3.1.3.sql', 'monitordb_ddl_3.2.0.sql', 'monitordb_ddl_3.2.1.sql', 'monitordb_ddl_3.3.0.sql', 'monitordb_dml_3.2.0.sql', 'monitordb_dml_3.2.1.sql']
start to load sqls
replace table_group in sql file: monitordb_ddl_2.5.0.sql
sed 's/$VAR_TABLEGROUP_NAME/meta_database/g' ../../sqls/monitordb_ddl_2.5.0.sql > ../../sqls/real/monitordb_ddl_2.5.0.sql
executing real sql script: real/monitordb_ddl_2.5.0.sql
replace table_group in sql file: monitordb_ddl_2.5.1.sql
sed 's/$VAR_TABLEGROUP_NAME/meta_database/g' ../../sqls/monitordb_ddl_2.5.1.sql > ../../sqls/real/monitordb_ddl_2.5.1.sql
executing real sql script: real/monitordb_ddl_2.5.1.sql
replace table_group in sql file: monitordb_ddl_3.1.0.sql
sed 's/$VAR_TABLEGROUP_NAME/meta_database/g' ../../sqls/monitordb_ddl_3.1.0.sql > ../../sqls/real/monitordb_ddl_3.1.0.sql
executing real sql script: real/monitordb_ddl_3.1.0.sql
replace table_group in sql file: monitordb_ddl_3.1.1.sql
sed 's/$VAR_TABLEGROUP_NAME/meta_database/g' ../../sqls/monitordb_ddl_3.1.1.sql > ../../sqls/real/monitordb_ddl_3.1.1.sql
executing real sql script: real/monitordb_ddl_3.1.1.sql
replace table_group in sql file: monitordb_ddl_3.1.2.sql
sed 's/$VAR_TABLEGROUP_NAME/meta_database/g' ../../sqls/monitordb_ddl_3.1.2.sql > ../../sqls/real/monitordb_ddl_3.1.2.sql
executing real sql script: real/monitordb_ddl_3.1.2.sql
replace table_group in sql file: monitordb_ddl_3.1.3.sql
sed 's/$VAR_TABLEGROUP_NAME/meta_database/g' ../../sqls/monitordb_ddl_3.1.3.sql > ../../sqls/real/monitordb_ddl_3.1.3.sql
executing real sql script: real/monitordb_ddl_3.1.3.sql
replace table_group in sql file: monitordb_ddl_3.2.0.sql
sed 's/$VAR_TABLEGROUP_NAME/meta_database/g' ../../sqls/monitordb_ddl_3.2.0.sql > ../../sqls/real/monitordb_ddl_3.2.0.sql
executing real sql script: real/monitordb_ddl_3.2.0.sql
[2022-10-12 13:45:17] run sql alter table ob_hist_sqltext add column statement text after sql_text; got duplicate column error 1060 (42S21): Duplicate column name 'statement', just skip
replace table_group in sql file: monitordb_ddl_3.2.1.sql
sed 's/$VAR_TABLEGROUP_NAME/meta_database/g' ../../sqls/monitordb_ddl_3.2.1.sql > ../../sqls/real/monitordb_ddl_3.2.1.sql
executing real sql script: real/monitordb_ddl_3.2.1.sql
[2022-10-12 13:45:21] run sql alter table ob_hist_sqltext add column sql_type varchar(1024) COMMENT 'SQL的类型' after statement; got duplicate column error 1060 (42S21): Duplicate column name 'sql_type', just skip
replace table_group in sql file: monitordb_ddl_3.3.0.sql
sed 's/$VAR_TABLEGROUP_NAME/meta_database/g' ../../sqls/monitordb_ddl_3.3.0.sql > ../../sqls/real/monitordb_ddl_3.3.0.sql
executing real sql script: real/monitordb_ddl_3.3.0.sql
replace table_group in sql file: monitordb_dml_3.2.0.sql
sed 's/$VAR_TABLEGROUP_NAME/meta_database/g' ../../sqls/monitordb_dml_3.2.0.sql > ../../sqls/real/monitordb_dml_3.2.0.sql
executing real sql script: real/monitordb_dml_3.2.0.sql
replace table_group in sql file: monitordb_dml_3.2.1.sql
sed 's/$VAR_TABLEGROUP_NAME/meta_database/g' ../../sqls/monitordb_dml_3.2.1.sql > ../../sqls/real/monitordb_dml_3.2.1.sql
executing real sql script: real/monitordb_dml_3.2.1.sql
finish to load sqls
SUCCESS.
start create backup databases
start create backup1472
loading sql script: backup_metadb_init.sql
loading sql script: restore_metadb_init.sql
end create backup1472
start create backup2230
loading sql script: backup_metadb_init.sql
loading sql script: restore_metadb_init.sql
end create backup2230
start create backup147x
loading sql script: backup_metadb_init.sql
loading sql script: restore_metadb_init.sql
end create backup147x
start create backup21
loading sql script: backup_metadb_init.sql
loading sql script: restore_metadb_init.sql
end create backup21
end create backup databases
, Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
create_metadb.py:21: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  data = yaml.load(file)
/home/admin/ocp-init/src/ocp-init/generate/gen_dynamic_config_properties.py:14: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  data = yaml.load(file)
/home/admin/ocp-init/src/ocp-init/generate/yml_to_table.py:69: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  data = yaml.load(file)
No handlers could be found for logger "root"

2022-10-12 13:46:15 - INFO - 1 - [metadb_init.py:44] - customize site url: sudo docker run --rm --net=host --entrypoint=mysql reg.docker.alibaba-inc.com/oceanbase/ocp-all-in-one:3.3.0-ce-bp1 -h10.10.53.24 -P2883 -umeta_user@meta_tenant#obcluster -pThtfA25600100 -Dmeta_database -e"update config_properties set value='http://10.10.53.24:8080' where \`key\` = 'ocp.site.url';"
2022-10-12 13:46:17 - INFO - 1 - [metadb_init.py:50] - customize server port: sudo docker run --rm --net=host --entrypoint=mysql reg.docker.alibaba-inc.com/oceanbase/ocp-all-in-one:3.3.0-ce-bp1 -h10.10.53.24 -P2883 -umeta_user@meta_tenant#obcluster -pThtfA25600100 -Dmeta_database -e"update config_properties set value='8080' where \`key\` = 'server.port';"
2022-10-12 13:46:18 - INFO - 1 - [ocp_start.py:38] - prepare log dir on server: 10.10.53.24 with command: sudo mkdir -p /home/admin/ocp/log/{ocp,obproxy/log,obproxy/minidump,obproxy/etc} && sudo chown -R 500:500 /home/admin/ocp/log
2022-10-12 13:46:19 - INFO - 1 - [ocp_start.py:42] - start ocp docker on server: 10.10.53.24 with command: sudo docker run -d --name ocp --cpu-period 100000 --cpu-quota 400000 --memory=8G -e OCP_METADB_HOST=10.10.53.24 -e OCP_METADB_PORT=2883 -e OCP_METADB_USER=meta_user@meta_tenant#obcluster -e OCP_METADB_PASSWORD='ThtfA25600100' -e OCP_METADB_DBNAME=meta_database -e OB_PORT=8080 -e observer_sys_password=f0455d5edb4833fa2e91762e71b61bc35d45cf0b -e observer_sys_password1=ee0e5138c912aed80b683c05303684be347ce81d -v /home/admin/ocp/log:/home/admin/logs --net=host --restart on-failure:5 reg.docker.alibaba-inc.com/oceanbase/ocp-all-in-one:3.3.0-ce-bp1
2022-10-12 13:46:19 - INFO - 1 - [ocp_check.py:23] - query ocp to check...
2022-10-12 13:46:20 - INFO - 1 - [ocp_check.py:30] - ocp still not active
2022-10-12 13:46:25 - INFO - 1 - [ocp_check.py:23] - query ocp to check...
2022-10-12 13:46:25 - INFO - 1 - [ocp_check.py:30] - ocp still not active
2022-10-12 13:46:30 - INFO - 1 - [ocp_check.py:23] - query ocp to check...
2022-10-12 13:46:30 - INFO - 1 - [ocp_check.py:30] - ocp still not active
2022-10-12 13:46:35 - INFO - 1 - [ocp_check.py:23] - query ocp to check...
2022-10-12 13:46:35 - INFO - 1 - [ocp_check.py:30] - ocp still not active
2022-10-12 13:46:40 - INFO - 1 - [ocp_check.py:23] - query ocp to check...
2022-10-12 13:46:40 - INFO - 1 - [ocp_check.py:30] - ocp still not active
2022-10-12 13:46:45 - INFO - 1 - [ocp_check.py:23] - query ocp to check...
2022-10-12 13:46:45 - INFO - 1 - [ocp_check.py:30] - ocp still not active
2022-10-12 13:46:50 - INFO - 1 - [ocp_check.py:23] - query ocp to check...
2022-10-12 13:46:50 - INFO - 1 - [ocp_check.py:30] - ocp still not active
2022-10-12 13:46:55 - INFO - 1 - [ocp_check.py:23] - query ocp to check...
2022-10-12 13:46:55 - INFO - 1 - [ocp_check.py:30] - ocp still not active
2022-10-12 13:47:00 - INFO - 1 - [ocp_check.py:23] - query ocp to check...
2022-10-12 13:47:00 - INFO - 1 - [ocp_check.py:30] - ocp still not active
2022-10-12 13:47:05 - INFO - 1 - [ocp_check.py:23] - query ocp to check...
2022-10-12 13:47:05 - INFO - 1 - [ocp_check.py:30] - ocp still not active
2022-10-12 13:47:10 - INFO - 1 - [ocp_check.py:23] - query ocp to check...
2022-10-12 13:47:10 - INFO - 1 - [ocp_check.py:30] - ocp still not active
2022-10-12 13:47:15 - INFO - 1 - [ocp_check.py:23] - query ocp to check...
2022-10-12 13:47:15 - INFO - 1 - [ocp_check.py:30] - ocp still not active
2022-10-12 13:47:20 - INFO - 1 - [ocp_check.py:23] - query ocp to check...
2022-10-12 13:47:20 - INFO - 1 - [ocp_check.py:30] - ocp still not active
2022-10-12 13:47:25 - INFO - 1 - [ocp_check.py:23] - query ocp to check...
2022-10-12 13:47:25 - INFO - 1 - [ocp_check.py:30] - ocp still not active
2022-10-12 13:47:30 - INFO - 1 - [ocp_check.py:23] - query ocp to check...
2022-10-12 13:47:30 - INFO - 1 - [ocp_check.py:30] - ocp still not active
2022-10-12 13:47:35 - INFO - 1 - [ocp_check.py:23] - query ocp to check...
2022-10-12 13:47:35 - INFO - 1 - [ocp_check.py:30] - ocp still not active
2022-10-12 13:47:40 - INFO - 1 - [ocp_check.py:23] - query ocp to check...
2022-10-12 13:47:40 - INFO - 1 - [ocp_check.py:30] - ocp still not active
2022-10-12 13:47:45 - INFO - 1 - [ocp_check.py:23] - query ocp to check...
2022-10-12 13:47:45 - INFO - 1 - [ocp_check.py:30] - ocp still not active
2022-10-12 13:47:50 - INFO - 1 - [ocp_check.py:23] - query ocp to check...
2022-10-12 13:47:50 - INFO - 1 - [ocp_check.py:30] - ocp still not active
2022-10-12 13:47:55 - INFO - 1 - [ocp_check.py:23] - query ocp to check...
2022-10-12 13:47:55 - INFO - 1 - [ocp_check.py:30] - ocp still not active
2022-10-12 13:48:00 - INFO - 1 - [ocp_check.py:23] - query ocp to check...
2022-10-12 13:48:00 - INFO - 1 - [ocp_check.py:30] - ocp still not active
2022-10-12 13:48:05 - INFO - 1 - [ocp_check.py:23] - query ocp to check...
2022-10-12 13:48:05 - INFO - 1 - [ocp_check.py:30] - ocp still not active
2022-10-12 13:48:10 - INFO - 1 - [ocp_check.py:23] - query ocp to check...
2022-10-12 13:48:10 - INFO - 1 - [ocp_check.py:30] - ocp still not active
2022-10-12 13:48:15 - INFO - 1 - [ocp_check.py:23] - query ocp to check...
2022-10-12 13:48:15 - INFO - 1 - [ocp_check.py:30] - ocp still not active
2022-10-12 13:48:20 - INFO - 1 - [ocp_check.py:23] - query ocp to check...
2022-10-12 13:48:20 - INFO - 1 - [ocp_check.py:30] - ocp still not active
2022-10-12 13:48:25 - INFO - 1 - [ocp_check.py:23] - query ocp to check...
2022-10-12 13:48:25 - INFO - 1 - [ocp_check.py:30] - ocp still not active
2022-10-12 13:48:30 - INFO - 1 - [ocp_check.py:23] - query ocp to check...
2022-10-12 13:48:30 - INFO - 1 - [ocp_check.py:30] - ocp still not active
2022-10-12 13:48:35 - INFO - 1 - [ocp_check.py:23] - query ocp to check...
2022-10-12 13:48:35 - INFO - 1 - [ocp_check.py:30] - ocp still not active
2022-10-12 13:48:40 - INFO - 1 - [ocp_check.py:23] - query ocp to check...
2022-10-12 13:48:40 - INFO - 1 - [ocp_check.py:30] - ocp still not active
2022-10-12 13:48:45 - INFO - 1 - [ocp_check.py:23] - query ocp to check...
2022-10-12 13:48:45 - INFO - 1 - [ocp_check.py:30] - ocp still not active
2022-10-12 13:48:50 - INFO - 1 - [ocp_check.py:23] - query ocp to check...
2022-10-12 13:48:50 - INFO - 1 - [ocp_check.py:30] - ocp still not active
2022-10-12 13:48:55 - INFO - 1 - [ocp_check.py:23] - query ocp to check...
2022-10-12 13:48:55 - INFO - 1 - [ocp_check.py:30] - ocp still not active
2022-10-12 13:49:00 - INFO - 1 - [ocp_check.py:23] - query ocp to check...
2022-10-12 13:49:00 - INFO - 1 - [ocp_check.py:30] - ocp still not active
2022-10-12 13:49:05 - INFO - 1 - [ocp_check.py:23] - query ocp to check...
2022-10-12 13:49:05 - INFO - 1 - [ocp_check.py:30] - ocp still not active
2022-10-12 13:49:10 - INFO - 1 - [ocp_check.py:23] - query ocp to check...
2022-10-12 13:49:10 - INFO - 1 - [ocp_check.py:30] - ocp still not active
2022-10-12 13:49:15 - INFO - 1 - [ocp_check.py:23] - query ocp to check...
2022-10-12 13:49:15 - INFO - 1 - [ocp_check.py:30] - ocp still not active
2022-10-12 13:49:20 - INFO - 1 - [ocp_check.py:35] - ocp still not ok, check failed
Traceback (most recent call last):
  File "install_ocp.py", line 43, in <module>
    run(context)
  File "install_ocp.py", line 33, in run
    install_ocp_pipeline.run()
  File "/root/installer/pipeline.py", line 10, in run
    task.run()
  File "/root/installer/task/ocp_check.py", line 36, in run
    raise Exception("ocp still not ok, check failed")
Exception: ocp still not ok, check failed

Configuration file config.yaml:

# OCP deploy config
# Note:
# Do not use 127.0.0.1 or hostname as server address
# When a server has both public ip and private ip, if private ip is connectable, use private ip for faster connection
# If a vip is configured, it should already be created and bound to the right server and port; the installation script won't do any vip maintenance, it just uses it to connect to the service

# Ignore precheck errors
# It's recommended not to ignore precheck errors
precheck_ignore: true

# Create an obcluster as OCP's metadb
create_metadb_cluster: true

# Clean OCP's metadb cluster when uninstall
clean_metadb_cluster: false

# Metadb cluster deploy config
ob_cluster:
  name: obcluster
  home_path: /home/admin/oceanbase
  root_password: 'ZAQ!xsw2'
  # The directory for data storage, it's recommended to use an independent path
  data_path: /data
  # The directory for clog, ilog, and slog, it's recommended to use an independent path.
  redo_path: /redo
  sql_port: 2881
  rpc_port: 2882
  zones:
    - name: zone1
      servers:
        - xx.xx.xx.24

  # Meta user info
  meta:
    tenant: meta_tenant
    user: meta_user
    password: 'ZAQ!xsw2'
    database: meta_database
    cpu: 2
    # Memory configs in GB, 4 means 4GB
    memory: 4

  # Monitor user info
  monitor:
    tenant: monitor_tenant
    user: monitor_user
    password: 'ZAQ!xsw2'
    database: monitor_database
    cpu: 4
    # Memory configs in GB, 8 means 8GB
    memory: 8

# Obproxy to connect metadb cluster
obproxy:
  home_path: /home/admin/obproxy
  port: 2883
  servers:
    - xx.xx.xx.24
  # Vip is optional; if vip is not configured, one of the obproxy servers' addresses will be used
  # vip:
  #   address: 1.1.1.1
  #   port: 2883

# Ssh auth config
ssh:
  port: 22
  user: root
  # auth method, supports password and pubkey
  auth_method: pubkey
  timeout: 60
  password: 'ZAQ!xsw2'

# OCP config
ocp:
  # ocp container's name
  name: 'ocp'

  # OCP process listen port and log dir on host
  process:
    port: 8080
    log_dir: /home/admin/ocp/log
  servers:
    - xx.xx.xx.24
  # OCP container's resource
  resource:
    cpu: 4
    # Memory configs in GB, 8 means 8GB
    memory: 8
  # Vip is optional; if vip is not configured, one of the ocp servers' addresses will be used
  # vip:
  #   address: 1.1.1.1
  #   port: 8080
  # OCP basic auth config, used when upgrading ocp
  auth:
    user: admin
    password: ZAQ!xsw2
  # OCP metadb config for ocp installation; if "create_metadb_cluster" is set to true, this part will be replaced with the configuration of the metadb cluster and obproxy
  metadb:
    host: xx.xx.xx.24
    port: 2883
    meta_user: meta_user@meta_tenant#obcluster
    meta_password: 'ZAQ!xsw2'
    meta_database: meta_database
    monitor_user: monitor_user@monitor_tenant#obcluster
    monitor_password: 'ZAQ!xsw2'
    monitor_database: monitor_database

[Symptoms and impact]
The installation fails: the OCP container is running and the OB-related processes exist, but the web UI on port 8080 cannot be reached.
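
A minimal way to confirm this state from the host (the address 10.10.53.24 comes from the installer log above; adjust it to your environment):

# is the container up, are OB processes running, is anything listening on 8080?
sudo docker ps --filter name=ocp
ps -ef | grep -E 'observer|obproxy' | grep -v grep
ss -lntp | grep 8080
curl -sI http://10.10.53.24:8080/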
[Attachment]
完整安装过程.txt (full installation log, 51.0 KB)

After entering the OCP container, the log file shows that memory cannot be allocated, even though the host only has a little over 20 GB of memory left; a quick way to compare the requested heap with what is actually available is sketched after the log excerpt below.

[root@oceanbase_ocp ocp]# pwd
/home/admin/logs/ocp
[root@oceanbase_ocp ocp]# tail -n 1000 ocp-server.0.err
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00007fc8b5000000, 51522830336, 0) failed; error='Cannot allocate memory' (errno=12)
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00007fb28d000000, 51522830336, 0) failed; error='Cannot allocate memory' (errno=12)
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00007ef3e9000000, 51522830336, 0) failed; error='Cannot allocate memory' (errno=12)
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00007f4831000000, 51522830336, 0) failed; error='Cannot allocate memory' (errno=12)
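The JVM is trying to commit 51522830336 bytes, roughly 48 GB of heap, far more than the host has left; this is consistent with a heap sized at 70% of the machine's total memory, which the workaround below lowers. A quick way to put the two numbers side by side from the host (a sketch; it assumes grep is available inside the image and uses the container name ocp from the docker run command in the log above):

free -g
sudo docker exec ocp grep -n '1024' /home/admin/ocp-server/bin/ocp-server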

The current workaround is as follows:
Modify the heap size in the ocp-server script inside the image, lowering it from 70% of total memory to 20%:

FROM reg.docker.alibaba-inc.com/oceanbase/ocp-all-in-one:3.3.0-ce-bp1

RUN sed -i 's@1024 \* 7 / 10@1024 * 2 / 10@g' /home/admin/ocp-server/bin/ocp-server && cat /home/admin/ocp-server/bin/ocp-server

Rebuild, overwriting the original image tag:

docker build -f ./Dockerfile -t reg.docker.alibaba-inc.com/oceanbase/ocp-all-in-one:3.3.0-ce-bp1 .
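
To double-check that the patched value actually made it into the rebuilt image, something like the following should print the modified expression (a sketch, assuming grep exists in the image):

sudo docker run --rm --entrypoint=grep reg.docker.alibaba-inc.com/oceanbase/ocp-all-in-one:3.3.0-ce-bp1 -Fn '1024 * 2 / 10' /home/admin/ocp-server/bin/ocp-server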

When the installer is run again, it still reports the following error after multiple checks:

2022-10-13 16:45:45 - INFO - 1 - [ocp_check.py:23] - query ocp to check...
2022-10-13 16:45:45 - INFO - 1 - [ocp_check.py:30] - ocp still not active
2022-10-13 16:45:50 - INFO - 1 - [ocp_check.py:23] - query ocp to check...
2022-10-13 16:45:50 - INFO - 1 - [ocp_check.py:30] - ocp still not active
2022-10-13 16:45:55 - INFO - 1 - [ocp_check.py:23] - query ocp to check...
2022-10-13 16:45:55 - INFO - 1 - [ocp_check.py:30] - ocp still not active
2022-10-13 16:46:00 - INFO - 1 - [ocp_check.py:35] - ocp still not ok, check failed
Traceback (most recent call last):
  File "install_ocp.py", line 43, in <module>
    run(context)
  File "install_ocp.py", line 33, in run
    install_ocp_pipeline.run()
  File "/root/installer/pipeline.py", line 10, in run
    task.run()
  File "/root/installer/task/ocp_check.py", line 36, in run
    raise Exception("ocp still not ok, check failed")
Exception: ocp still not ok, check failed

However, after waiting about ten minutes, the OCP page became accessible.

After discussing with the OceanBase staff: the failure occurs because the installer checks OCP for only three minutes by default, and OCP did not finish starting within those three minutes, so the script failed. It can be solved with the following change:

Create another file, Dockerfile2, changing the maximum wait time from 3 minutes to 1 hour:

FROM reg.docker.alibaba-inc.com/ocp2/ocp-installer:3.3.0-x86_64

RUN sed -i 's@check_wait_time = 180@check_wait_time = 3600@g' /root/installer/task/ocp_check.py && cat /root/installer/task/ocp_check.py

Rebuild, overwriting the original installer image tag:

docker build -f ./Dockerfile2 -t reg.docker.alibaba-inc.com/ocp2/ocp-installer:3.3.0-x86_64 .

Kill the OB-related processes, for example:
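
A sketch, assuming the leftover processes are the observer and obproxy started by the failed install; confirm with ps before killing anything:

ps -ef | grep -E 'observer|obproxy' | grep -v grep
sudo pkill -9 observer
sudo pkill -9 obproxy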

Stop and remove the old container:

docker kill ocp
docker container rm ocp

Clean up the files left over from the failed installation:

rm -rf /home/admin/oceanbase && rm -rf /home/admin/obproxy && rm -rf /home/admin/ocp && rm -rf /data/* && rm -rf /redo/*

# run this in the directory where the installer script was executed
rm -rf ./.obd

Then run the installation again:

./ocp_installer.sh install -c config.yaml -k /root/.ssh/id_rsa -i ./ocp-installer.tar.gz -o ./ocp.tar.gz

The whole process can be summarized as follows:

The first occurrence of "ocp still not ok, check failed" happened because the observer and OCP were installed on the same machine and OCP asked for too much memory (70% of the total), so its share had to be lowered.

The second occurrence of "ocp still not ok, check failed" happened because OCP simply takes a long time to start; extending the wait time was enough.

"ocp still not ok, check failed is because OCP takes a long time to start; just modify the wait time."
Where can this wait time be modified?

There is no way to change this time. If it fails, you can refresh the page and take another look.

In the situation shown in the screenshot below, it has stayed like this after waiting for more than an hour. The port never came up either; looking at the error log inside the container, it only prints a pile of strings.


Hold on, let me contact the OCP folks to take a look for you.

Can you see the ocp-server process inside the container, and has the process been restarted? You can also check /home/admin/logs/ocp/ocp.log inside the container.
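
For reference, both checks can be done from the host without an interactive shell, using the container name ocp from the docker run command above:

sudo docker top ocp                                      # look for the ocp-server java process and how long it has been up
sudo docker exec ocp tail -n 200 /home/admin/logs/ocp/ocp.log
sudo docker logs --tail 100 ocp                          # container-level stdout/stderr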

I have restarted it many times. I suspect the cause may be that it cannot connect through obproxy. See this thread: 为什么加上#demo集群名称登录就失败了? (why does login fail once the #demo cluster name is appended?)
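
One way to test that suspicion directly is to reuse the mysql entrypoint the installer itself uses (a sketch based on the connection string in the installer log above; replace <meta_password> with your own meta password):

sudo docker run --rm --net=host --entrypoint=mysql reg.docker.alibaba-inc.com/oceanbase/ocp-all-in-one:3.3.0-ce-bp1 -h10.10.53.24 -P2883 -u'meta_user@meta_tenant#obcluster' -p'<meta_password>' -Dmeta_database -e 'select 1;'

If this fails with an authentication or reading authorization packet error, the OCP server will not be able to reach its metadb through obproxy with the same credentials either.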

I think your direction is right. I saw the thread of mine that you referenced; I previously ran into the reading authorization packet problem as well, because of a password-change issue.

Actually, changing this timeout does not really solve anything: if there are no other problems, then even when the check times out and the script reports an error, OCP will still come up normally after a while. But seeing the script finish successfully does confirm that the whole procedure is correct.