Unable to log in through obproxy after deploying OceanBase Community Edition on Kubernetes

[Environment] Test environment
[OB or other component] obproxy
[Version] obproxy-ce:4.0.0-5
[Problem description]
Connecting directly to each database pod on port 2881 works fine:
$ mysql -h10.20.5.55 -P2881 -uroot@sys -p2025 oceanbase -A -c
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
mysql> show parameters like 'cluster';
+-------+----------+-------------+----------+---------+-----------+-----------+---------------------+----------+---------+---------+-------------------+
| zone  | svr_type | svr_ip      | svr_port | name    | data_type | value     | info                | section  | scope   | source  | edit_level        |
+-------+----------+-------------+----------+---------+-----------+-----------+---------------------+----------+---------+---------+-------------------+
| zone1 | observer | 10.20.5.55  | 2882     | cluster | NULL      | obcluster | Name of the cluster | OBSERVER | CLUSTER | DEFAULT | DYNAMIC_EFFECTIVE |
| zone2 | observer | 10.20.6.226 | 2882     | cluster | NULL      | obcluster | Name of the cluster | OBSERVER | CLUSTER | DEFAULT | DYNAMIC_EFFECTIVE |
| zone3 | observer | 10.20.6.34  | 2882     | cluster | NULL      | obcluster | Name of the cluster | OBSERVER | CLUSTER | DEFAULT | DYNAMIC_EFFECTIVE |
+-------+----------+-------------+----------+---------+-----------+-----------+---------------------+----------+---------+---------+-------------------+
3 rows in set (0.01 sec)

mysql> select user,host,password from mysql.user;
+----------+------+-------------------------------------------+
| user     | host | password                                  |
+----------+------+-------------------------------------------+
| root     | %    | *7c1fcd04aa280db2804c2b0a4b38cd0397d5a57f |
| operator | %    | *6cd5b49bf4f8fa98afd723460f7d96443a1c416b |
| monitor  | %    | *d31ff125d7f75a2268a9ad56fe9451888d5ff8e3 |
| proxyro  | %    | *7c1fcd04aa280db2804c2b0a4b38cd0397d5a57f |
+----------+------+-------------------------------------------+
4 rows in set (0.00 sec)

But connecting through obproxy fails:
$ mysql -h 10.20.6.117 -P31992 -uroot@sys#obcluster -p2025 oceanbase -A -c
mysql: [Warning] Using a password on the command line interface can be insecure.
ERROR 2013 (HY000): Lost connection to MySQL server at 'reading authorization packet', system error: 0

Name:                     svc-obproxy
Namespace:                oceanbase
Labels:
Annotations:
Selector:                 app=obproxy
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       172.20.193.129
IPs:                      172.20.193.129
Port:                     sql  2883/TCP
TargetPort:               2883/TCP
NodePort:                 sql  31992/TCP
Endpoints:                10.20.5.151:2883,10.20.6.210:2883
Port:                     prometheus  2884/TCP
TargetPort:               2884/TCP
NodePort:                 prometheus  32051/TCP
Endpoints:                10.20.5.151:2884,10.20.6.210:2884
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
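To separate a Service/NodePort problem from an obproxy problem, one option is to bypass the NodePort and connect straight to one of the pod endpoints listed above (a sketch; the pod IP is taken from the Endpoints field and may differ in your cluster):

$ # Hit an obproxy pod directly on 2883, skipping the NodePort path.
$ mysql -h10.20.5.151 -P2883 -uroot@sys#obcluster -p2025 oceanbase -A -c

If this fails with the same ERROR 2013, the Service is forwarding correctly and the failure is inside obproxy's login handling.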

The obproxy error log shows:
[root@obproxy-ffd6cdb9f-tvdnr admin]# cat obproxy/log/obproxy_error.log
2025-09-19 14:34:12.727085,helloworld,obcluster:sys:,OB_MYSQL,OB_MYSQL_COM_LOGIN,failed,-4152,33472us,0us,0us,0us,Y0-00007F1C5B8143A0,0,Null value

[Reproduction steps] Operations before and after the problem appeared
[Attachments and logs] It is recommended to use the OceanBase agile diagnostic tool obdiag to collect diagnostic information; see the link (right-click to open):

[SOP Series 22] Fault diagnosis, step one (self-diagnosis and diagnostic information collection)

[Note] A forum assistant based on an LLM and RAG over the open-source docs is in open beta; type [@论坛小助手] in a post to summon it. Feel free to try it!


Regarding your problem, we first need to collect the relevant logs for further analysis. You can use the obdiag tool to gather them, which will help locate the root cause.

Log collection scenario: SQL execution error
Log collection command: obdiag gather scene run --scene=observer.sql_err --env "{db_connect='-h<obproxy_ip> -P2881 -u<username>@<tenant_name> -p<password> -D<database>', trace_id='<trace_id>'}"

Please replace <obproxy_ip>, <username>, <tenant_name>, <password>, <database>, and possibly <trace_id> in the command above with your actual values. If you do not know the trace_id, you can omit it for now.

In addition, to resolve your problem more effectively, please provide the following information:

  1. Which OceanBase version are you running?
  2. Which obproxy version?
  3. Can you provide the complete obproxy error log?
  4. How is obproxy configured? Are there any special network or security-group settings?
  5. What exact command did you use when attempting to log in?

Please follow the steps above and send us the collected log archive. We will analyze the problem further and provide a solution based on that information.

Attached is the usage help link for the agile diagnostic tool obdiag.

$ obdiag gather scene run --scene=observer.sql_err --env “{db_connect=’-h10.20.6.117 -P31992 -uroot@sys#obcluster -p2025 -D’”
[WARN] no such option: -P
[WARN] no such option: -u
[WARN] no such option: -p
[WARN] no such option: -D
[ERROR] The option you provided with -c: /home/ops.wallpaper-stg/.obdiag/config.yml is not exist.
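The warnings suggest the --env string was not received as a single argument: the curly quotes shown above are typographic (likely picked up by copy-pasting from the forum) and do not quote anything in the shell, so the string splits at spaces and obdiag sees -P, -u, -p, -D as unknown options of its own. A corrected attempt might look like this (a sketch; address, port, and password are taken from this thread):

$ obdiag gather scene run --scene=observer.sql_err \
    --env "{db_connect='-h10.20.6.117 -P31992 -uroot@sys#obcluster -p2025 -Doceanbase'}"

The final [ERROR] line points at a missing ~/.obdiag/config.yml, so obdiag's configuration would also need to be initialized before gathering; that is a reading of the message, not something confirmed in the thread.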

Reproduce the connection error again, then provide the obproxy.log from the 10.20.6.117 node.

cat obproxy/log/obproxy_error.log

2025-09-19 14:34:12.727085,helloworld,obcluster:sys:,OB_MYSQL,OB_MYSQL_COM_LOGIN,failed,-4152,33472us,0us,0us,0us,Y0-00007F1C5B8143A0,0,Null value,
That is all the error log contains. Here is the obproxy.log:

[2025-09-19 14:53:55.783496] INFO [PROXY.CS] ob_mysql_client_session.cpp:92 [7][Y0-00007F1C5AA143A0] [lt=3] [dc=0] client session destroy(cs_id=2147483658, proxy_sessid=726212939115331592, client_vc=NULL)
[2025-09-19 14:53:55.783506] INFO [PROXY.SM] ob_mysql_sm.cpp:8731 [7][Y0-00007F1C5AA143A0] [lt=5] [dc=0] deallocating sm(sm_id=10)
[2025-09-19 14:53:55.783514] WARN [PROXY] get_resultset_fetcher (ob_client_utils.cpp:245) [7][Y0-00007F1C5AA143A0] [lt=4] [dc=0] fail to execute sql(ret=-1045)
[2025-09-19 14:53:55.783520] WARN [PROXY] next (ob_client_utils.cpp:224) [7][Y0-00007F1C5AA143A0] [lt=6] [dc=0] fail to get rs_fetcher(ret=-1045)
[2025-09-19 14:53:55.783524] WARN [PROXY] finish_task (ob_resource_pool_processor.cpp:702) [7][Y0-00007F1C5AA143A0] [lt=3] [dc=0] fail to get server state info(ret=-1045)
[2025-09-19 14:53:55.783528] WARN [PROXY] handle_event_complete (ob_async_common_task.cpp:150) [7][Y0-00007F1C5AA143A0] [lt=2] [dc=0] fail to do finish task(task_name="server_state_info_init_task", ret=-1045)
[2025-09-19 14:53:55.783534] WARN [PROXY] main_handler (ob_async_common_task.cpp:59) [7][Y0-00007F1C5AA143A0] [lt=6] [dc=0] fail to handle event complete(ret=-1045)
[2025-09-19 14:53:55.783538] INFO [PROXY] ob_resource_pool_processor.cpp:1123 [7][Y0-00007F1C5AA143A0] [lt=3] [dc=0] ObClusterResourceCreateCont::main_handler(event="ASYNC_TASK_DONE_EVENT", init_status=4, cluster_name=obcluster, cluster_id=0, data=0x256fe1d0)
[2025-09-19 14:53:55.783542] INFO [PROXY] ob_resource_pool_processor.cpp:1353 [7][Y0-00007F1C5AA143A0] [lt=3] [dc=0] cluster resource create complete(created_cr={this:0x7f1c5b200080, ref_count:3, is_inited:true, cluster_info_key:{cluster_name:{config_string:"obcluster"}, cluster_id:0}, cr_state:"CR_INIT_FAILED", version:3, last_access_time_ns:0, deleting_completed_thread_num:0, fetch_rslist_task_count:0, fetch_idc_list_task_count:0, last_idc_list_refresh_time_ns:0, last_rslist_refresh_time_ns:1758264835777340189, server_state_version:0}, pending_list_count=0)
[2025-09-19 14:53:55.783560] INFO [PROXY] ob_resource_pool_processor.cpp:1799 [7][Y0-00007F1C5AA143A0] [lt=17] [dc=0] ObClusterResource will destroy, and wait to be free(this={this:0x7f1c5b200080, ref_count:0, is_inited:true, cluster_info_key:{cluster_name:{config_string:"obcluster"}, cluster_id:0}, cr_state:"CR_INIT_FAILED", version:3, last_access_time_ns:0, deleting_completed_thread_num:0, fetch_rslist_task_count:0, fetch_idc_list_task_count:0, last_idc_list_refresh_time_ns:0, last_rslist_refresh_time_ns:1758264835777340189, server_state_version:0})
[2025-09-19 14:53:55.783595] INFO [PROXY] ob_mysql_proxy.h:175 [7][Y0-00007F1C5AA143A0] [lt=8] [dc=0] client pool will be destroyed(client_pool={this:0x7f1c636a7940, ref_count:4, is_inited:true, stop:false, mc_count:2, cluster_resource:0x7f1c62c00080})
[2025-09-19 14:53:55.783621] INFO [PROXY] ob_congestion_manager.cpp:103 [7][Y0-00007F1C5AA143A0] [lt=6] [dc=0] ObCongestionManager will destroy(this={this:0x7f1c5b204180, is_inited:true, is_base_servers_added:false, is_congestion_enabled:true, zone_count:0, config:{ref_count:1, this:0x7f1c637e6ac0, conn_failure_threshold:5, alive_failure_threshold:5, fail_window_sec:120, retry_interval_sec:20, min_keep_congestion_interval_sec:20}})
[2025-09-19 14:53:55.783639] INFO [PROXY] ob_resource_pool_processor.cpp:1757 [7][Y0-00007F1C5AA143A0] [lt=9] [dc=0] the cluster resource will free(this={this:0x7f1c5b200080, ref_count:0, is_inited:false, cluster_info_key:{cluster_name:{config_string:"obcluster"}, cluster_id:0}, cr_state:"CR_DEAD", version:3, last_access_time_ns:0, deleting_completed_thread_num:0, fetch_rslist_task_count:0, fetch_idc_list_task_count:0, last_idc_list_refresh_time_ns:0, last_rslist_refresh_time_ns:1758264835777340189, server_state_version:0})
[2025-09-19 14:53:55.783654] INFO [PROXY] ob_congestion_manager.cpp:103 [7][Y0-00007F1C5AA143A0] [lt=8] [dc=0] ObCongestionManager will destroy(this={this:0x7f1c5b204180, is_inited:false, is_base_servers_added:false, is_congestion_enabled:true, zone_count:0, config:NULL})
[2025-09-19 14:53:55.783781] INFO [PROXY] ob_resource_pool_processor.cpp:1123 [7][Y0-00007F1C5AA143A0] [lt=7] [dc=0] ObClusterResourceCreateCont::main_handler(event="CLUSTER_RESOURCE_INFORM_OUT_EVENT", init_status=4, cluster_name=obcluster, cluster_id=0, data=0x7f1c62fe8130)
[2025-09-19 14:53:55.783789] WARN [PROXY.SM] state_get_cluster_resource (ob_mysql_sm.cpp:1638) [7][Y0-00007F1C5AA143A0] [lt=7] [dc=0] data is NULL(sm_id=6, ret=-4152)
[2025-09-19 14:53:55.783796] WARN [PROXY.TXN] handle_error_jump (ob_mysql_transact.cpp:66) [7][Y0-00007F1C5AA143A0] [lt=6] [dc=0] [ObMysqlTransact::handle_error_jump]
[2025-09-19 14:53:55.783797] INFO [PROXY] ob_mysql_client_pool.cpp:135 [12][Y0-00007F1C637E9DA0] [lt=0] [dc=0] all mysql client has been scheduled to destroy self(deleted_count=2)
[2025-09-19 14:53:55.783801] WARN [PROXY.SM] setup_error_transfer (ob_mysql_sm.cpp:8171) [7][Y0-00007F1C5AA143A0] [lt=4] [dc=0] [setup_error_transfer] Now closing connection(sm_id=6, request_cmd="Sleep", sql_cmd="Handshake", sql=OB_MYSQL_COM_LOGIN)
[2025-09-19 14:53:55.783808] INFO [PROXY] ob_client_vc.cpp:595 [12][Y0-00007F1C637E9DE0] [lt=8] [dc=0] mysql client active timeout(active_timeout_ms=0, next_action=1, info={user_name:"proxyro@sys#obcluster:0", database_name:"oceanbase", request_param:{sql:"", is_deep_copy:false, current_idc_name:"", is_user_idc_name_set:false, need_print_trace_stat:false, target_addr:"0.0.0.0"}})
[2025-09-19 14:53:55.783820] INFO [PROXY] ob_client_vc.cpp:1111 [12][Y0-00007F1C637E9DE0] [lt=11] [dc=0] mysql client will kill self(this=0x7f1c637e3ec0)
[2025-09-19 14:53:55.783826] INFO [PROXY] ob_client_vc.cpp:595 [12][Y0-00007F1C637E9E20] [lt=3] [dc=0] mysql client active timeout(active_timeout_ms=0, next_action=1, info={user_name:"proxyro@sys#obcluster:0", database_name:"oceanbase", request_param:{sql:"", is_deep_copy:false, current_idc_name:"", is_user_idc_name_set:false, need_print_trace_stat:false, target_addr:"0.0.0.0"}})
[2025-09-19 14:53:55.783830] INFO [PROXY] ob_client_vc.cpp:1111 [12][Y0-00007F1C637E9E20] [lt=4] [dc=0] mysql client will kill self(this=0x7f1c637e41a0)
[2025-09-19 14:53:55.783832] INFO [PROXY] ob_mysql_client_pool.cpp:203 [12][Y0-00007F1C637E9E20] [lt=2] [dc=0] client pool will be free(this={this:0x7f1c636a7940, ref_count:0, is_inited:false, stop:true, mc_count:2, cluster_resource:0x7f1c62c00080})
[2025-09-19 14:53:55.783837] INFO [PROXY.SS] ob_mysql_client_session.cpp:637 [7][Y0-00007F1C5AA143A0] [lt=4] [dc=0] client session do_io_close((*this={this:0x7f1c5df41230, is_proxy_mysql_client:false, is_waiting_trans_first_request:false, need_delete_cluster:false, is_first_dml_sql_got:false, vc_ready_killed:false, active:true, magic:19132429, conn_decrease:true, current_tid:7, cs_id:6, proxy_sessid:0, session_info:{is_inited:true, priv_info:{has_all_privilege:false, cs_id:4294967295, user_priv_set:-1, cluster_name:"", tenant_name:"", user_name:""}, version:{common_hot_sys_var_version:0, common_sys_var_version:0, mysql_hot_sys_var_version:0, mysql_sys_var_version:0, hot_sys_var_version:0, sys_var_version:0, user_var_version:0, db_name_version:0, last_insert_id_version:0, sess_info_version:0}, hash_version:{common_hot_sys_var_version:0, common_sys_var_version:0, mysql_hot_sys_var_version:0, mysql_sys_var_version:0, hot_sys_var_version:0, sys_var_version:0, user_var_version:0, db_name_version:0, last_insert_id_version:0, sess_info_version:0}, val_hash:{common_hot_sys_var_hash:0, common_cold_sys_var_hash:0, mysql_hot_sys_var_hash:0, mysql_cold_sys_var_hash:0, hot_sys_var_hash:0, cold_sys_var_hash:0, user_var_hash:0}, global_vars_version:-1, is_global_vars_changed:false, is_trans_specified:false, is_user_idc_name_set:false, is_read_consistency_set:false, idc_name:"", cluster_id:0, real_meta_cluster_name:"", safe_read_snapshot:0, syncing_safe_read_snapshot:0, route_policy:1, proxy_route_policy:3, user_identity:2, global_vars_version:-1, is_read_only_user:false, is_request_follower_user:false, ob20_request:{remain_payload_len:0, ob20_request_received_done:false, ob20_header:{ob 20 protocol header:{compressed_len:0, seq:0, non_compressed_len:0}, magic_num:0, header_checksum:0, connection_id:0, request_id:0, pkt_seq:0, payload_len:0, version:0, flag_.flags:0, reserved:0}}, client_cap:0, server_cap:0}, dummy_ldc:{use_ldc:false, idc_name:"", item_count:0, site_start_index_array:[[0]0, [1]0, [2]0, [3]0], item_array:null, pl:null, ts:null, readonly_exist_status:"READONLY_ZONE_UNKNOWN"}, dummy_entry:null, server_state_version:0, cur_ss:null, bound_ss:null, lii_ss:null, cluster_resource:NULL, client_vc:0x7f1c5aa04120, using_ldg:false, trace_stats:NULL}, client_vc_=0x7f1c5aa04120, this=0x7f1c5df41230)
[2025-09-19 14:53:55.783879] INFO [PROXY.CS] ob_mysql_client_session.cpp:92 [7][Y0-00007F1C5AA143A0] [lt=41] [dc=0] client session destroy(cs_id=6, proxy_sessid=0, client_vc=NULL)
[2025-09-19 14:53:55.783910] INFO [PROXY.SM] ob_mysql_sm.cpp:8731 [7][Y0-00007F1C5AA143A0] [lt=4] [dc=0] deallocating sm(sm_id=6)
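Reading this excerpt: ret=-1045 appears to correspond to the MySQL 1045 "access denied" error, raised while obproxy's internal client logs in as proxyro@sys#obcluster to fetch server state; the cluster resource then ends in CR_INIT_FAILED, and the user connection dies with -4152 ("Null value") as fallout. If that reading is right, the proxyro credentials obproxy presents do not match the observer. A quick check (a sketch; substitute the actual password the proxyro-password secret holds):

$ # Verify the proxyro account directly against an observer.
$ mysql -h10.20.5.55 -P2881 -uproxyro@sys -p'<proxyro_password>' -e 'select 1;'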

In obproxy's etc directory, run: strings obproxy_config.bin | grep -w observer_sys_password

If you have a normal (non-sys) tenant, try connecting with it, and then provide the complete obproxy.log.

There is no obproxy_config.bin file in the etc directory.

It should normally be there; list the files in that directory.

[root@obproxy-ffd6cdb9f-tvdnr etc]# strings obproxy_config.bin | grep -w observer_sys_password
observer_sys_password=
No normal tenant has been created yet; there is only:

mysql> select user,host,password from mysql.user;
+----------+------+-------------------------------------------+
| user     | host | password                                  |
+----------+------+-------------------------------------------+
| root     | %    | *7c1fcd04aa280db2804c2b0a4b38cd0397d5a57f |
| operator | %    | *6cd5b49bf4f8fa98afd723460f7d96443a1c416b |
| monitor  | %    | *d31ff125d7f75a2268a9ad56fe9451888d5ff8e3 |
| proxyro  | %    | *7c1fcd04aa280db2804c2b0a4b38cd0397d5a57f |
+----------+------+-------------------------------------------+
4 rows in set (0.00 sec)
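An empty observer_sys_password= means obproxy presents an empty password for proxyro, while mysql.user shows proxyro has a non-empty password hash, which fits the -1045 above. One common fix is to set the password through obproxy's admin account; a sketch, assuming the default root@proxysys admin user on port 2883 (adjust to your deployment):

$ # Set the proxyro password obproxy uses for its metadata login.
$ mysql -h10.20.5.151 -P2883 -uroot@proxysys -p \
    -e "alter proxyconfig set observer_sys_password = '<proxyro_password>';"

Since the Deployment below already injects PROXYRO_PASSWORD, the open question is whether the image's entrypoint turns that variable into observer_sys_password; the empty value in obproxy_config.bin suggests it did not.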

obproxy.yaml is as follows:
apiVersion: v1
kind: Service
metadata:
  name: svc-obproxy
  namespace: oceanbase
spec:
  type: NodePort
  selector:
    app: obproxy
  ports:
    - name: "sql"
      port: 2883
      targetPort: 2883
      # nodePort: 30083
    - name: "prometheus"
      port: 2884
      targetPort: 2884
      # nodePort: 30084

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: obproxy
  namespace: oceanbase
spec:
  selector:
    matchLabels:
      app: obproxy
  replicas: 2
  template:
    metadata:
      labels:
        app: obproxy
    spec:
      containers:
        - name: obproxy
          image: oceanbase/obproxy-ce:4.0.0-5
          ports:
            - containerPort: 2883
              name: "sql"
            - containerPort: 2884
              name: "prometheus"
          env:
            - name: APP_NAME
              value: helloworld
            - name: OB_CLUSTER
              value: obcluster
            - name: RS_LIST
              value: "10.20.5.55:2881;10.20.6.226:2881;10.20.6.34:2881"
            - name: PROXYRO_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: proxyro-password
                  key: password
          resources:
            limits:
              memory: 2Gi
              cpu: "1"
            requests:
              memory: 200Mi
              cpu: 200m
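It is also worth verifying that the secret value actually reached the running pods, and whether the 4.0.0-5 obproxy-ce entrypoint consumes PROXYRO_PASSWORD at all; both commands below are a sketch (the config path inside the container is assumed from the shell prompts earlier in the thread):

$ kubectl -n oceanbase exec deploy/obproxy -- env | grep PROXYRO_PASSWORD
$ kubectl -n oceanbase exec deploy/obproxy -- \
    sh -c 'strings /home/admin/obproxy/etc/obproxy_config.bin | grep -w observer_sys_password'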

obcluster.yaml is as follows:
apiVersion: oceanbase.oceanbase.com/v1alpha1
kind: OBCluster
metadata:
  name: obcluster
  namespace: oceanbase
spec:
  clusterName: obcluster
  clusterId: 1
  userSecrets:
    root: root-password
    proxyro: proxyro-password
  topology:
    - zone: zone1
      replica: 1
    - zone: zone2
      replica: 1
    - zone: zone3
      replica: 1
  observer:
    image: oceanbase/oceanbase-cloud-native:4.2.1.1-101010012023111012
    resource:
      cpu: 2
      memory: 10Gi
    storage:
      dataStorage:
        storageClass: local-path
        size: 50Gi
      redoLogStorage:
        storageClass: local-path
        size: 50Gi
      logStorage:
        storageClass: local-path
        size: 50Gi
  monitor:
    image: oceanbase/obagent:4.2.1-100000092023101717
    resource:
      cpu: 1
      memory: 1Gi
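Since the OBCluster spec creates the proxyro user from the same proxyro-password secret the Deployment references, the two sides should agree; a quick sketch to read the secret and test it against an observer (the jsonpath assumes the secret stores the value under the key "password", matching the secretKeyRef above):

$ PW=$(kubectl -n oceanbase get secret proxyro-password \
      -o jsonpath='{.data.password}' | base64 -d)
$ mysql -h10.20.5.55 -P2881 -uproxyro@sys -p"$PW" -e 'select 1;'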

Is there a firewall in between?

How well does OceanBase run in containers?

Following along to learn.

Link OB and obproxy together from the command line ("black screen" mode, i.e., without a GUI).
You can refer to the command-line method for deploying ODP:
https://www.oceanbase.com/knowledge-base/oceanbase-database-proxy-1000000001687223?back=kb