Incremental sync issue with flink-cdc (2.4.2) + oblogproxy-2.0.0 (Community Edition)

【Environment】Production or test environment
【OB or other components】
【Versions】
flink 1.17.1
flink-cdc 2.4.2
oblogproxy-2.0.0-100000012023111521.el7.x86_64.rpm (Community Edition)
observer-ce-4.2.1
【Problem description】
My flink-cdc job fails with the same error every time. The Flink TM log is as follows:
2023-11-22 15:36:14,673 ERROR com.oceanbase.clogproxy.client.connection.ClientHandler [] - Exception occurred ClientId: t_user_logproxy_client_id_100004_20231122: rootserver_list=xxx.xx.xxx.xx:2882:2881, cluster_id=, cluster_user=root, cluster_password=, , sys_user=, sys_password=, tb_white_list=sys.., tb_black_list=|, start_timestamp=0, start_timestamp_us=0, timezone=+00:00, working_mode=memory, with LogProxy: xxx.xx.xxx.xx:2983
com.oceanbase.clogproxy.client.exception.LogProxyClientException: LogProxy refused handshake request: code: 1
message: "Failed to create oblogreader"
at com.oceanbase.clogproxy.client.connection.ClientHandler.handleErrorResponse(ClientHandler.java:228) ~[flink-sql-connector-oceanbase-cdc-2.4.2.jar:2.4.2]
at com.oceanbase.clogproxy.client.connection.ClientHandler.channelRead(ClientHandler.java:158) ~[flink-sql-connector-oceanbase-cdc-2.4.2.jar:2.4.2]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [flink-sql-connector-oceanbase-cdc-2.4.2.jar:2.4.2]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [flink-sql-connector-oceanbase-cdc-2.4.2.jar:2.4.2]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [flink-sql-connector-oceanbase-cdc-2.4.2.jar:2.4.2]
at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) [flink-sql-connector-oceanbase-cdc-2.4.2.jar:2.4.2]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [flink-sql-connector-oceanbase-cdc-2.4.2.jar:2.4.2]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [flink-sql-connector-oceanbase-cdc-2.4.2.jar:2.4.2]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [flink-sql-connector-oceanbase-cdc-2.4.2.jar:2.4.2]
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [flink-sql-connector-oceanbase-cdc-2.4.2.jar:2.4.2]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [flink-sql-connector-oceanbase-cdc-2.4.2.jar:2.4.2]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [flink-sql-connector-oceanbase-cdc-2.4.2.jar:2.4.2]
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [flink-sql-connector-oceanbase-cdc-2.4.2.jar:2.4.2]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) [flink-sql-connector-oceanbase-cdc-2.4.2.jar:2.4.2]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:722) [flink-sql-connector-oceanbase-cdc-2.4.2.jar:2.4.2]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:658) [flink-sql-connector-oceanbase-cdc-2.4.2.jar:2.4.2]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:584) [flink-sql-connector-oceanbase-cdc-2.4.2.jar:2.4.2]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496) [flink-sql-connector-oceanbase-cdc-2.4.2.jar:2.4.2]
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:995) [flink-sql-connector-oceanbase-cdc-2.4.2.jar:2.4.2]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [flink-sql-connector-oceanbase-cdc-2.4.2.jar:2.4.2]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_281]

The logproxy log is as follows:
[2023-11-22 15:36:14] [info] arranger.cpp(80): Arranger on_msg fired: id:7479373611510267912, fd:8, addr:1741427372, port:6272
[2023-11-22 15:36:14] [info] arranger.cpp(87): Handshake request from peer: id:7479373611510267912, fd:8, addr:1741427372, port:6272, msg: log_type:0, id:t_user_logproxy_client_id_100004_20231122, ip:10.253.204.103, version:1.1.0, configuration:tb_white_list=sys.. cluster_user=root timezone=+00:00 rootserver_list=xxx.xx.xxx.xx:2882:2881 cluster_password=1BC9348EA250C98CB81619A96278BB564962F323 cluster_id= tb_black_list=| working_mode=memory first_start_timestamp_us=0 first_start_timestamp=0 , enable_monitor:0,
[2023-11-22 15:36:14] [info] arranger.cpp(99): ObConfig from peer: id:7479373611510267912, fd:8, addr:1741427372, port:6272 after resolve: cluster:,cluster_id:,cluster_password:Middle@2023,cluster_url:,cluster_user:root,first_start_timestamp:0,first_start_timestamp_us:0,id:,initial_trx_gtid_seq:1,initial_trx_seeking_abort_timestamp:0,initial_trx_xid:,rootserver_list:xxx.xx.xxx.xx:2882:2881,server_uuid:,sys_password:,sys_user:,tb_white_list:sys..,tenant:,tb_black_list:|,timezone:+00:00,working_mode:memory,
[2023-11-22 15:36:14] [info] ob_access.cpp(185): About to auth sys: root for observer: xxx.xx.xxx.xx:2881
[2023-11-22 15:36:14] [info] mysql_protocol.cpp(49): Connect to server success: xxx.xx.xxx.xx:2881, user: root
[2023-11-22 15:36:14] [info] mysql_protocol.cpp(120): Auth user success of server: xxx.xx.xxx.xx:2881, user: root
[2023-11-22 15:36:14] [info] mysql_protocol.cpp(211): Query obmysql SQL:show tenant
[2023-11-22 15:36:14] [info] mysql_protocol.cpp(49): Connect to server success: xxx.xx.xxx.xx:2881, user: root
[2023-11-22 15:36:14] [info] mysql_protocol.cpp(120): Auth user success of server: xxx.xx.xxx.xx:2881, user: root
[2023-11-22 15:36:14] [info] mysql_protocol.cpp(211): Query obmysql SQL:SELECT svr_min_log_timestamp FROM oceanbase.__all_virtual_server_clog_stat WHERE zone_status='ACTIVE';
[2023-11-22 15:36:14] [error] mysql_protocol.cpp(236): Failed to query observer:Table 'oceanbase.__all_virtual_server_clog_stat' doesn't exist
[2023-11-22 15:36:14] [error] clog_meta_routine.cpp(52): Failed to check the existence of svr_min_log_timestamp column in __all_virtual_server_clog_stat, disable clog check
[2023-11-22 15:36:14] [info] arranger.cpp(215): Client connecting: type:0, id:t_user_logproxy_client_id_100004_20231122, ip:10.253.204.103, version:1.1.0, configuration:tb_white_list=sys.. cluster_user=root timezone=+00:00 rootserver_list=xxx.xx.xxx.xx:2882:2881 cluster_password=1BC9348EA250C98CB81619A96278BB564962F323 cluster_id= tb_black_list=| working_mode=memory first_start_timestamp_us=0 first_start_timestamp=0 , pid:0, peer:fd:8, register_time:1700638574, enable_monitor:0, packet_version:2,
[2023-11-22 15:36:14] [warning] arranger.cpp(220): Duplication exist clientId: t_user_logproxy_client_id_100004_20231122, close last one: 7479373611508432904 with pid: 8953
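
To double-check what the two error lines above report, the same query can be run by hand against the observer. A minimal sketch, assuming a direct MySQL-protocol connection to the observer on port 2881 as root; the LIKE pattern in the second statement is only an illustrative way to list which similarly named virtual tables this observer actually exposes:

-- The exact query oblogproxy issues (copied from the log above); on this observer it is
-- expected to fail with "Table 'oceanbase.__all_virtual_server_clog_stat' doesn't exist".
SELECT svr_min_log_timestamp FROM oceanbase.__all_virtual_server_clog_stat WHERE zone_status='ACTIVE';

-- Assumed diagnostic: list virtual tables with a similar name to see what exists instead.
SHOW TABLES FROM oceanbase LIKE '__all_virtual_server%';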

Flink SQL table definition:
CREATE TEMPORARY TABLE cdc_t_user (
user_id INT,
user_name STRING,
age INT,
PRIMARY KEY (user_id) NOT ENFORCED
) WITH (
'connector' = 'oceanbase-cdc',
'scan.startup.mode' = 'initial',
'username' = 'root',
'password' = '${ob_root_pwd}',
'tenant-name' = 'sys',
'database-name' = 'test',
'table-name' = 't_user',
'hostname' = 'host_name',
'port' = '2881',
'rootserver-list' = 'host_name:2882:2881',
'logproxy.host' = 'host_name',
'logproxy.port' = '2983',
'jdbc.driver' = 'com.mysql.jdbc.Driver',
'compatible-mode' = 'mysql',
'connect.timeout' = '60000',
'logproxy.client.id' = 't_user_logproxy_client_id_100004_2023111411', -- a guaranteed-unique id is set on every start
'working-mode' = 'memory'
);
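
For completeness, a minimal sketch of how the CDC table is consumed in the job; the print sink table below is hypothetical and is only used here to illustrate reading the changelog from cdc_t_user:

CREATE TEMPORARY TABLE print_t_user (
user_id INT,
user_name STRING,
age INT
) WITH (
'connector' = 'print' -- hypothetical sink, just prints the changelog rows (+I/-U/+U/-D)
);

INSERT INTO print_t_user SELECT user_id, user_name, age FROM cdc_t_user;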

【Reproduction path】
Reproduces every time, even after changing logproxy.client.id, deleting the corresponding data under the run directory, and killing the oblogreader process → restarting oblogproxy → starting the flink-cdc-ob job.

【Symptom and impact】
Incremental sync with flink-cdc + oblogproxy fails.
【Attachments】

The pipeline is oceanbase → oblogproxy → flink-cdc, correct?

Correct.

Yes.