After starting the OceanBase cluster, the observer process exits abnormally

【 Environment 】Test environment
【 OB or other component 】 V4.3.5
【 Problem description 】I installed an OceanBase cluster through the graphical interface following the official website. After installation completes, the observer process on one random observer node exits abnormally. RPC port communication between nodes is fine, and resources are sufficient.
Please help me find where the problem is.

user:
  username: root
  password: letsg0
  port: 22
oceanbase-ce:
  version: 4.3.5.0
  release: 100000202024123117.el7
  package_hash: 48b61655aaa13e9b01b722928d1979c76b41937e
  10.10.180.166:
    zone: zone1
  10.10.180.135:
    zone: zone2
  servers:
  - 10.10.180.166
  - 10.10.180.135
  global:
    appname: oceanbase
    root_password: /,6Ib%}=yPf1aJh5k:*APaH97@
    ocp_meta_password: CLjslrl0620@_
    mysql_port: 2881
    rpc_port: 2882
    data_dir: /root/data
    redo_dir: /root/redolog
    home_path: /root/oceanbase/oceanbase
    scenario: htap
    cluster_id: 1738996760
    ocp_agent_monitor_password: LBmXmFUspr
    proxyro_password: n15WZsVOtH
    enable_syslog_wf: true
    max_syslog_file_count: 4
    production_mode: false
    memory_limit: 14G
    datafile_size: 2G
    system_memory: 6G
    log_disk_size: 34G
    cpu_count: 8
    datafile_maxsize: 34G
    datafile_next: 3G
obproxy-ce:
  version: 4.3.2.0
  package_hash: b7ae0af3860478f3caecaaec05bd67d0565e4021
  release: 42.el7
  servers:
  - 10.10.180.26
  global:
    prometheus_listen_port: 2884
    listen_port: 2883
    rpc_listen_port: 2885
    home_path: /root/oceanbase/obproxy
    obproxy_sys_password: '*gIyJ@IatO0TlX[p2Jl'
    skip_proxy_sys_private_check: true
    enable_strict_kernel_release: false
    enable_cluster_checkout: false
    rs_list: 10.10.180.166:2881;10.10.180.135:2881
    cluster_name: oceanbase
    observer_root_password: /,6Ib%}=yPf1aJh5k:*APaH97@
  10.10.180.26:
    proxy_id: 4281
    client_session_id_version: 2
  depends:
  - oceanbase-ce
obagent:
  version: 4.2.2
  package_hash: 19739a07a12eab736aff86ecf357b1ae660b554e
  release: 100000042024011120.el7
  servers:
  - 10.10.180.166
  - 10.10.180.135
  global:
    monagent_http_port: 8088
    mgragent_http_port: 8089
    home_path: /root/oceanbase/obagent
    http_basic_auth_password: a1nOC6S9s
    ob_monitor_status: active
  depends:
  - oceanbase-ce
ocp-express:
  version: 4.2.2
  package_hash: 09ffcf156d1df9318a78af52656f499d2315e3f7
  release: 100000022024011120.el7
  servers:
  - 10.10.180.26
  global:
    port: 8180
    admin_passwd: '{WAkHO75T4sa---Ou9'
    home_path: /root/oceanbase/ocpexpress
    ocp_root_password: r%05SbL_
    memory_size: 812M
  depends:
  - obagent
  - oceanbase-ce
  - obproxy-ce


[2025-02-10 16:40:17.760] [DEBUG] - cmd: ['oceanbase']
[2025-02-10 16:40:17.760] [DEBUG] - opts: {}
[2025-02-10 16:40:17.760] [DEBUG] - mkdir /root/.obd/lock/
[2025-02-10 16:40:17.761] [DEBUG] - unknown lock mode 
[2025-02-10 16:40:17.761] [DEBUG] - try to get share lock /root/.obd/lock/global
[2025-02-10 16:40:17.761] [DEBUG] - share lock `/root/.obd/lock/global`, count 1
[2025-02-10 16:40:17.761] [DEBUG] - Get Deploy by name
[2025-02-10 16:40:17.762] [DEBUG] - mkdir /root/.obd/cluster/
[2025-02-10 16:40:17.762] [DEBUG] - mkdir /root/.obd/config_parser/
[2025-02-10 16:40:17.762] [DEBUG] - try to get exclusive lock /root/.obd/lock/deploy_oceanbase
[2025-02-10 16:40:17.763] [DEBUG] - exclusive lock `/root/.obd/lock/deploy_oceanbase`, count 1
[2025-02-10 16:40:17.772] [DEBUG] - Deploy status judge
[2025-02-10 16:40:17.773] [DEBUG] - Get deploy config
[2025-02-10 16:40:17.813] [INFO] Get local repositories and plugins
[2025-02-10 16:40:17.814] [DEBUG] - mkdir /root/.obd/repository
[2025-02-10 16:40:17.815] [DEBUG] - Get local repository oceanbase-ce-4.3.5.0-48b61655aaa13e9b01b722928d1979c76b41937e
[2025-02-10 16:40:17.815] [DEBUG] - try to get share lock /root/.obd/lock/mirror_and_repo
[2025-02-10 16:40:17.815] [DEBUG] - share lock `/root/.obd/lock/mirror_and_repo`, count 1
[2025-02-10 16:40:17.819] [DEBUG] - Get local repository obproxy-ce-4.3.2.0-b7ae0af3860478f3caecaaec05bd67d0565e4021
[2025-02-10 16:40:17.819] [DEBUG] - share lock `/root/.obd/lock/mirror_and_repo`, count 2
[2025-02-10 16:40:17.822] [DEBUG] - Get local repository obagent-4.2.2-19739a07a12eab736aff86ecf357b1ae660b554e
[2025-02-10 16:40:17.822] [DEBUG] - share lock `/root/.obd/lock/mirror_and_repo`, count 3
[2025-02-10 16:40:17.825] [DEBUG] - Get local repository ocp-express-4.2.2-09ffcf156d1df9318a78af52656f499d2315e3f7
[2025-02-10 16:40:17.825] [DEBUG] - share lock `/root/.obd/lock/mirror_and_repo`, count 4
[2025-02-10 16:40:17.829] [DEBUG] - Searching param plugin for components ...
[2025-02-10 16:40:17.829] [DEBUG] - Search param plugin for oceanbase-ce
[2025-02-10 16:40:17.829] [DEBUG] - mkdir /root/.obd/plugins
[2025-02-10 16:40:17.830] [DEBUG] - Found for oceanbase-ce-param-4.3.3.0 for oceanbase-ce-4.3.5.0
[2025-02-10 16:40:17.830] [DEBUG] - Applying oceanbase-ce-param-4.3.3.0 for oceanbase-ce-4.3.5.0-100000202024123117.el7-48b61655aaa13e9b01b722928d1979c76b41937e
[2025-02-10 16:40:18.598] [DEBUG] - Search param plugin for obproxy-ce
[2025-02-10 16:40:18.599] [DEBUG] - Found for obproxy-ce-param-4.3.0 for obproxy-ce-4.3.2.0
[2025-02-10 16:40:18.599] [DEBUG] - Applying obproxy-ce-param-4.3.0 for obproxy-ce-4.3.2.0-42.el7-b7ae0af3860478f3caecaaec05bd67d0565e4021
[2025-02-10 16:40:18.791] [DEBUG] - Search param plugin for obagent
[2025-02-10 16:40:18.792] [DEBUG] - Found for obagent-param-4.2.2 for obagent-4.2.2
[2025-02-10 16:40:18.792] [DEBUG] - Applying obagent-param-4.2.2 for obagent-4.2.2-100000042024011120.el7-19739a07a12eab736aff86ecf357b1ae660b554e
[2025-02-10 16:40:18.885] [DEBUG] - Search param plugin for ocp-express
[2025-02-10 16:40:18.885] [DEBUG] - Found for ocp-express-param-4.1.0 for ocp-express-4.2.2
[2025-02-10 16:40:18.886] [DEBUG] - Applying ocp-express-param-4.1.0 for ocp-express-4.2.2-100000022024011120.el7-09ffcf156d1df9318a78af52656f499d2315e3f7
[2025-02-10 16:40:19.133] [INFO] Open ssh connection
[2025-02-10 16:40:19.140] [DEBUG] - host: 10.10.180.135, port: 22, user: root, password: ******
[2025-02-10 16:40:19.233] [DEBUG] - host: 10.10.180.166, port: 22, user: root, password: ******
[2025-02-10 16:40:19.323] [DEBUG] - host: 10.10.180.26, port: 22, user: root, password: ******
[2025-02-10 16:40:19.532] [DEBUG] - Searching display template for components ...
[2025-02-10 16:40:19.533] [DEBUG] - mkdir /root/.obd/workflows
[2025-02-10 16:40:19.535] [DEBUG] - Call workflow oceanbase-ce-py_script_workflow_display-0.1 for oceanbase-ce-4.3.5.0-100000202024123117.el7-48b61655aaa13e9b01b722928d1979c76b41937e
[2025-02-10 16:40:19.535] [DEBUG] - mkdir /root/.obd/mirror
[2025-02-10 16:40:19.535] [DEBUG] - mkdir /root/.obd/mirror/remote
[2025-02-10 16:40:19.535] [DEBUG] - mkdir /root/.obd/mirror/local
[2025-02-10 16:40:19.536] [DEBUG] - mkdir /root/.obd/optimize/
[2025-02-10 16:40:19.536] [DEBUG] - mkdir /root/.obd/tool/
[2025-02-10 16:40:19.536] [DEBUG] - import display
[2025-02-10 16:40:19.537] [DEBUG] - add display ref count to 1
[2025-02-10 16:40:19.538] [DEBUG] - sub display ref count to 0
[2025-02-10 16:40:19.538] [DEBUG] - export display
[2025-02-10 16:40:19.538] [DEBUG] - plugin oceanbase-ce-py_script_workflow_display-0.1 result: True
[2025-02-10 16:40:19.538] [DEBUG] - Found for oceanbase-ce-py_script_workflow_display-0.1 for oceanbase-ce-0.1
[2025-02-10 16:40:19.538] [DEBUG] - Searching display template for components ...
[2025-02-10 16:40:19.538] [DEBUG] - Call workflow obproxy-ce-py_script_workflow_display-0.1 for obproxy-ce-4.3.2.0-42.el7-b7ae0af3860478f3caecaaec05bd67d0565e4021
[2025-02-10 16:40:19.538] [DEBUG] - import display
[2025-02-10 16:40:19.539] [DEBUG] - add display ref count to 1
[2025-02-10 16:40:19.539] [DEBUG] - sub display ref count to 0
[2025-02-10 16:40:19.539] [DEBUG] - export display
[2025-02-10 16:40:19.539] [DEBUG] - plugin obproxy-ce-py_script_workflow_display-0.1 result: True
[2025-02-10 16:40:19.540] [DEBUG] - Found for obproxy-ce-py_script_workflow_display-0.1 for obproxy-ce-0.1
[2025-02-10 16:40:19.540] [DEBUG] - Searching display template for components ...
[2025-02-10 16:40:19.540] [DEBUG] - Call workflow obagent-py_script_workflow_display-0.1 for obagent-4.2.2-100000042024011120.el7-19739a07a12eab736aff86ecf357b1ae660b554e
[2025-02-10 16:40:19.540] [DEBUG] - import display
[2025-02-10 16:40:19.541] [DEBUG] - add display ref count to 1
[2025-02-10 16:40:19.541] [DEBUG] - sub display ref count to 0
[2025-02-10 16:40:19.541] [DEBUG] - export display
[2025-02-10 16:40:19.541] [DEBUG] - plugin obagent-py_script_workflow_display-0.1 result: True
[2025-02-10 16:40:19.541] [DEBUG] - Found for obagent-py_script_workflow_display-0.1 for obagent-0.1
[2025-02-10 16:40:19.541] [DEBUG] - Searching display template for components ...
[2025-02-10 16:40:19.542] [DEBUG] - Call workflow ocp-express-py_script_workflow_display-0.1 for ocp-express-4.2.2-100000022024011120.el7-09ffcf156d1df9318a78af52656f499d2315e3f7
[2025-02-10 16:40:19.542] [DEBUG] - import display
[2025-02-10 16:40:19.542] [DEBUG] - add display ref count to 1
[2025-02-10 16:40:19.543] [DEBUG] - sub display ref count to 0
[2025-02-10 16:40:19.543] [DEBUG] - export display
[2025-02-10 16:40:19.543] [DEBUG] - plugin ocp-express-py_script_workflow_display-0.1 result: True
[2025-02-10 16:40:19.543] [DEBUG] - Found for ocp-express-py_script_workflow_display-0.1 for ocp-express-0.1
[2025-02-10 16:40:19.543] [DEBUG] - share lock `/root/.obd/lock/mirror_and_repo`, count 5
[2025-02-10 16:40:19.547] [DEBUG] - Searching status plugin for components ...
[2025-02-10 16:40:19.547] [DEBUG] - Searching status plugin for oceanbase-ce-4.3.5.0-100000202024123117.el7-48b61655aaa13e9b01b722928d1979c76b41937e
[2025-02-10 16:40:19.548] [DEBUG] - Found for oceanbase-ce-py_script_status-3.1.0 for oceanbase-ce-4.3.5.0
[2025-02-10 16:40:19.548] [DEBUG] - Call plugin oceanbase-ce-py_script_status-3.1.0 for oceanbase-ce-4.3.5.0-100000202024123117.el7-48b61655aaa13e9b01b722928d1979c76b41937e
[2025-02-10 16:40:19.548] [DEBUG] - import status
[2025-02-10 16:40:19.549] [DEBUG] - add status ref count to 1
[2025-02-10 16:40:19.550] [DEBUG] -- root@10.10.180.166 execute: cat /root/oceanbase/oceanbase/run/observer.pid 
[2025-02-10 16:40:19.596] [DEBUG] -- exited code 0
[2025-02-10 16:40:19.597] [DEBUG] -- root@10.10.180.166 execute: ls /proc/28469 
[2025-02-10 16:40:19.681] [DEBUG] -- exited code 0
[2025-02-10 16:40:19.682] [DEBUG] -- root@10.10.180.135 execute: cat /root/oceanbase/oceanbase/run/observer.pid 
[2025-02-10 16:40:19.725] [DEBUG] -- exited code 0
[2025-02-10 16:40:19.726] [DEBUG] -- root@10.10.180.135 execute: ls /proc/6570 
[2025-02-10 16:40:19.806] [DEBUG] -- exited code 2, error output:
[2025-02-10 16:40:19.806] [DEBUG] ls: cannot access /proc/6570: No such file or directory
[2025-02-10 16:40:19.806] [DEBUG] 
[2025-02-10 16:40:19.807] [DEBUG] - sub status ref count to 0
[2025-02-10 16:40:19.807] [DEBUG] - export status
[2025-02-10 16:40:19.807] [DEBUG] - plugin oceanbase-ce-py_script_status-3.1.0 result: True
[2025-02-10 16:40:19.808] [DEBUG] - Searching status_check plugin for components ...
[2025-02-10 16:40:19.808] [DEBUG] - Searching status_check plugin for general-4.3.5.0--None
[2025-02-10 16:40:19.809] [DEBUG] - Found for general-py_script_status_check-0.1 for general-4.3.5.0
[2025-02-10 16:40:19.809] [DEBUG] - Call plugin general-py_script_status_check-0.1 for oceanbase-ce-4.3.5.0-100000202024123117.el7-48b61655aaa13e9b01b722928d1979c76b41937e
[2025-02-10 16:40:19.809] [DEBUG] - import status_check
[2025-02-10 16:40:19.810] [DEBUG] - add status_check ref count to 1
[2025-02-10 16:40:19.811] [WARNING] 10.10.180.135 oceanbase-ce is not running
[2025-02-10 16:40:19.811] [DEBUG] - sub status_check ref count to 0
[2025-02-10 16:40:19.811] [DEBUG] - export status_check
[2025-02-10 16:40:19.811] [DEBUG] - plugin general-py_script_status_check-0.1 result: False
[2025-02-10 16:40:19.811] [DEBUG] - share lock /root/.obd/lock/mirror_and_repo release, count 4
[2025-02-10 16:40:19.812] [DEBUG] - share lock /root/.obd/lock/mirror_and_repo release, count 3
[2025-02-10 16:40:19.812] [DEBUG] - share lock /root/.obd/lock/mirror_and_repo release, count 2
[2025-02-10 16:40:19.812] [DEBUG] - share lock /root/.obd/lock/mirror_and_repo release, count 1
[2025-02-10 16:40:19.812] [DEBUG] - share lock /root/.obd/lock/mirror_and_repo release, count 0
[2025-02-10 16:40:19.812] [DEBUG] - unlock /root/.obd/lock/mirror_and_repo
[2025-02-10 16:40:19.812] [DEBUG] - exclusive lock /root/.obd/lock/deploy_oceanbase release, count 0
[2025-02-10 16:40:19.812] [DEBUG] - unlock /root/.obd/lock/deploy_oceanbase
[2025-02-10 16:40:19.812] [DEBUG] - share lock /root/.obd/lock/global release, count 0
[2025-02-10 16:40:19.812] [DEBUG] - unlock /root/.obd/lock/global
[2025-02-10 16:40:19.813] [INFO] See https://www.oceanbase.com/product/ob-deployer/error-codes .
[2025-02-10 16:40:19.813] [INFO] Trace ID: a7bee08c-e78a-11ef-9ff5-fa410d32c000
[2025-02-10 16:40:19.813] [INFO] If you want to view detailed obd logs, please run: obd display-trace a7bee08c-e78a-11ef-9ff5-fa410d32c000

We recommend using the obdiag tool to collect logs, and providing the collected logs to us so we can confirm where the problem lies.

Log collection scenario: observer restarting for no apparent reason
Log collection command: obdiag gather scene run --scene=observer.restart

To help us troubleshoot and resolve the issue, please also provide some additional information:

  1. The cluster version, which helps us check whether there are known related issues.
  2. Were any specific operations or tasks performed before the observer process exited? If so, please describe them in detail.
  3. Were there any alerts or error messages when the observer process exited abnormally?
  4. Besides the observer process exiting, does the cluster show any other anomalies, such as performance degradation or abnormal connection counts?

Please collect the logs with the command above and provide the information; this will help us locate the problem more accurately.

Here is the usage help link for the agile diagnostics tool obdiag


The ocp-express component receives little ongoing maintenance and few updates these days; I suggest removing it and redeploying to try.
The log contains the error: 10.10.180.135 oceanbase-ce is not running
If one observer still fails to start, first try obd cluster start xxxx -c oceanbase-ce to pull it up again. If that still doesn't work, go to ~/<cluster name>/oceanbase/log/ on the failed node and provide a copy of the observer log.


OK, 辞霜, I'll remove the ocp-express configuration and redeploy to see how it goes.


There may be a misunderstanding here: as far as I know, removing the OCP Express configuration and redeploying is unlikely to directly fix the observer process exiting abnormally. OCP Express is only used for cluster management and monitoring; it is not a direct cause of observer crashes.

Based on the log you provided, the observer process on 10.10.180.135 did exit abnormally. To troubleshoot and resolve the problem, I suggest the following steps:

1. Collect logs with the obdiag tool

The obdiag tool helps collect log information comprehensively, which makes it easier to locate the problem. You can run the following command:

obdiag gather scene run --scene=observer.restart

2. Check system resources and configuration

Make sure all nodes have sufficient system resources (memory, CPU, disk space, etc.) and that the configuration file contains no obvious errors. In particular, check the following:

  • Memory limit: make sure memory_limit is set reasonably, so the process is not killed for running out of memory.
  • Disk space: make sure the disks holding data_dir and redo_dir have enough free space.
  • Log disk size: make sure log_disk_size is set reasonably, so oversized logs do not exhaust the disk.
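A quick sketch of these resource checks on one node; the paths and sizes come from the config above (memory_limit 14G, log_disk_size and datafile_maxsize 34G), so adjust them to your deployment:

```shell
# Resource sanity check for an observer node.
# DATA_DIR/REDO_DIR match data_dir/redo_dir in the config above; adjust to yours.
DATA_DIR=/root/data
REDO_DIR=/root/redolog

# Available memory vs. the 14G memory_limit in the config
free -g || echo "free not available"

# Free disk vs. datafile_maxsize (34G) and log_disk_size (34G);
# fall back to the root filesystem if the dirs do not exist yet
df -h "$DATA_DIR" "$REDO_DIR" 2>/dev/null || df -h /
```

If the available memory is close to memory_limit, or the data/redo disks are near full, the observer can be killed or refuse to restart.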

3. Check the log files

Look at the observer log files on 10.10.180.135, normally under /root/oceanbase/oceanbase/log/, for any anomalies or error messages. Focus on the following log files:

  • observer.log
  • trace.log
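For example, a minimal sketch of pulling the recent warning/error records; the log path is derived from home_path in the config above, and WDIAG/ERROR are the levels OceanBase itself uses for warning-diagnosis and error records:

```shell
# Show the last 50 warning/error records from the observer log.
# LOG_DIR is derived from home_path in the config above; adjust to yours.
LOG_DIR=/root/oceanbase/oceanbase/log

if [ -f "$LOG_DIR/observer.log" ]; then
  grep -E 'ERROR|WDIAG' "$LOG_DIR/observer.log" | tail -n 50
else
  echo "observer.log not found under $LOG_DIR"
fi
```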

4. Check the system logs

Check the system logs (such as /var/log/messages or /var/log/syslog) for any errors or warnings related to the observer process.
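Since the process disappeared without an obvious crash record, the kernel OOM killer is a common suspect; a minimal check (the exact message wording varies by kernel and distro):

```shell
# Look for OOM-killer records that may have terminated the observer.
# dmesg may need root; /var/log/messages is RHEL/CentOS, /var/log/syslog
# is Debian/Ubuntu. grep -s silences missing-file errors.
dmesg 2>/dev/null | grep -iE 'out of memory|oom[-_ ]?kill' || echo "no OOM records in dmesg"
grep -is 'killed process' /var/log/messages /var/log/syslog || true
```

If the observer PID shows up next to "Killed process", lowering memory_limit or adding memory on the node is the direction to look.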

5. Verify network connectivity

Although you mentioned that RPC communication between nodes is fine, it is still worth re-confirming that the network between all nodes is stable, especially firewall rules and whether the ports are open.

6. Check the OBD deployment logs

Go back over the logs from the OBD deployment to make sure no important errors or warnings were missed. You can view detailed OBD logs with the following command:

obd display-trace <trace_id>

7. Consider upgrading or rolling back

If the problem persists, consider upgrading OceanBase Community Edition to the latest version, or rolling back to a previous stable version, to rule out known bugs.

Summary

Removing the OCP Express configuration and redeploying is not an effective fix for the observer process exiting abnormally. Please follow the steps above to troubleshoot and provide more logs for further analysis. If you need more help, feel free to contact the OceanBase technical support team.

For details, see:

  1. SYS tenant / RS service issues
  2. Insufficient system memory / memory leaks
  3. Connecting to OceanBase Database via ODC
  4. Operations and maintenance with obd
  5. Restarting all OBProxy instances in an OBProxy cluster
  6. Deploying an OceanBase cluster via OCP
  7. Getting task details
  8. Operating on tasks
  9. Querying status
  10. Querying all unfinished tasks

(The assistant's reply ends here. If it did not solve your problem, please continue to post and wait for replies from other community members, thanks!)


Following your suggestion, I removed the ocp-express configuration and redeployed. About 10 minutes after the deployment finished, the same problem occurred again. The problem node's log is attached; please help analyze it.
observer.zip (25.8 MB)


[2025-02-10 17:25:47.043051] WDIAG load_file_to_string (utility.h:672) [29252][TimerWK3][T0][Y0-0000000000000000-0-0] [lt=36][errcode=0] read /sys/class/net/eth0/speed failed, errno 22
[2025-02-10 17:25:47.043098] WDIAG get_ethernet_speed (utility.cpp:584) [29252][TimerWK3][T0][Y0-0000000000000000-0-0] [lt=43][errcode=-4000] load file /sys/class/net/eth0/speed failed, ret -4000

The log shows a problem with NIC eth0 not being recognized; troubleshoot the NIC first.
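For context, errno 22 in the WDIAG lines above is EINVAL, which is what the kernel returns when an interface (typically a virtual NIC) does not report a link speed via sysfs. One way to see what each interface exposes:

```shell
# Dump the speed sysfs attribute for every interface; virtual NICs
# (e.g. virtio in KVM guests) commonly return EINVAL (errno 22) here.
for f in /sys/class/net/*/speed; do
  printf '%s: ' "$f"
  cat "$f" 2>/dev/null || echo "unreadable (likely a virtual NIC)"
done
```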


The NICs in this environment were all on DHCP. I switched them to static and redeployed the cluster, but the problem is still not resolved.

[root@observer2 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether fa:d3:0d:45:44:00 brd ff:ff:ff:ff:ff:ff
    inet 10.10.180.135/24 brd 10.10.180.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::f8d3:dff:fe45:4400/64 scope link 
       valid_lft forever preferred_lft forever

[root@observer2 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0 
DEVICE=eth0
IPV6INIT=yes
BOOTPROTO=static
ONBOOT=yes
TYPE=Ethernet
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPADDR=10.10.180.135
NETMASK=255.255.255.0
GATEWAY=10.10.180.254

Please provide another copy of the observer log for review.


Judging from the log, it's caused by failing to detect the NIC speed. Let me set the NIC speed manually and try again.

The VMs brought up with KVM don't allow setting the NIC speed. Is there any other way? 🙁

Are the clocks on the two machines out of sync? Check with clockdiff.
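A sketch of that clock check (clockdiff ships with iputils and needs ICMP timestamp replies enabled on the peer; the peer IP below is from the config above):

```shell
# Compare the local clock against the peer observer node.
PEER=10.10.180.166   # peer IP from the config; swap when run on the other node

if command -v clockdiff >/dev/null 2>&1; then
  clockdiff "$PEER" || echo "clockdiff failed (peer unreachable or ICMP timestamp blocked)"
else
  # Fallback: compare epoch timestamps over SSH and eyeball the offset
  echo "clockdiff not installed; compare clocks over SSH instead:"
  echo "  date +%s.%N; ssh root@$PEER 'date +%s.%N'"
fi
```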


辞霜, I tore down the original environment and switched to a single-node observer plus one OCP host to verify whether it's a network problem. The observer process still exited. Here is the log; please help analyze it again.
observer.zip (7.4 MB)

Did you deploy ocp-express or OCP? And is your environment using mechanical (HDD) disks?


ocp-express, and yes, HDD disks

Please send a set of logs that covers a cluster restart.


Check whether you have these two logs:

[root@observer log]# ll
total 1431620
drwxr-xr-x 2 root root        23 Feb 11 16:30 alert
-rw-r--r-- 1 root root   4449473 Feb 11 18:16 election.log
-rw-r--r-- 1 root root         0 Feb 11 16:30 election.log.wf
-rw-r--r-- 1 root root 215028979 Feb 11 18:16 observer.log
-rw-r--r-- 1 root root 268439085 Feb 11 16:34 observer.log.20250211163417104
-rw-r--r-- 1 root root 268436222 Feb 11 16:41 observer.log.20250211164149085
-rw-r--r-- 1 root root 268439169 Feb 11 18:07 observer.log.20250211180732823
-rw-r--r-- 1 root root       870 Feb 11 18:07 observer.log.wf
-rw-r--r-- 1 root root 202007194 Feb 11 18:16 rootservice.log
-rw-r--r-- 1 root root         0 Feb 11 16:30 rootservice.log.wf
-rw-r--r-- 1 root root 239158571 Feb 11 18:16 trace.log
[root@observer log]# 

observer.log.zip (21.2 MB)
observer.zip (23.1 MB)

Was this deployed on virtual machines?

Yes.

OK, thanks.