Canal middleware reports an error when connecting to oblogproxy

【Environment】Production or test environment
【OB or other component】
【Version】OceanBase 4.2 Community Edition, obproxy-ce-4.1.0.0, canal-for-ob.deployer-1.6.0
【Problem description】
Canal reports the following error after connecting to oblogproxy:
2023-08-18 14:20:13.993 [main] INFO com.alibaba.otter.canal.deployer.CanalLauncher - ## set default uncaught exception handler
2023-08-18 14:20:14.022 [main] INFO com.alibaba.otter.canal.deployer.CanalLauncher - ## load canal configurations
2023-08-18 14:20:14.156 [main] WARN org.apache.kafka.clients.producer.ProducerConfig - The configuration 'kerberos.enable' was supplied but isn't a known config.
2023-08-18 14:20:14.157 [main] WARN org.apache.kafka.clients.producer.ProducerConfig - The configuration 'kerberos.krb5.file' was supplied but isn't a known config.
2023-08-18 14:20:14.157 [main] WARN org.apache.kafka.clients.producer.ProducerConfig - The configuration 'kerberos.jaas.file' was supplied but isn't a known config.
2023-08-18 14:20:14.160 [main] INFO com.alibaba.otter.canal.deployer.CanalStarter - ## start the canal server.
2023-08-18 14:20:14.807 [main] INFO com.alibaba.otter.canal.deployer.CanalController - ## start the canal server[192.168.167.130(192.168.167.130):11111]
2023-08-18 14:20:15.461 [main] INFO com.alibaba.otter.canal.deployer.CanalStarter - ## the canal server is running now …
2023-08-18 14:20:15.464 [main] INFO com.alibaba.otter.canal.server.CanalMQStarter - ## start the MQ workers.
2023-08-18 14:20:15.465 [main] INFO com.alibaba.otter.canal.server.CanalMQStarter - ## the MQ workers is running now …
2023-08-18 14:20:15.467 [main] DEBUG com.alibaba.otter.canal.deployer.CanalStarter - canal admin port:11110, canal admin user:admin, canal admin password: 4ACFE3202A5FF5CF467898FC58AAB1D615029441, canal ip:
2023-08-18 14:20:15.469 [pool-5-thread-1] INFO com.alibaba.otter.canal.server.CanalMQStarter - ## start the MQ producer: example.
2023-08-18 14:20:19.726 [Thread-10] ERROR c.a.o.c.p.inbound.oceanbase.logproxy.LogProxyConnection - OceanBase LogProxyClient listener error :
com.oceanbase.clogproxy.client.exception.LogProxyClientException: LogProxy refused handshake request: code: 1
message: "Failed to create oblogreader"

at com.oceanbase.clogproxy.client.connection.ClientHandler.handleErrorResponse(ClientHandler.java:217)
at com.oceanbase.clogproxy.client.connection.ClientHandler.channelRead(ClientHandler.java:147)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:351)
at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:351)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926)
at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:1018)
at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:402)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:307)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:873)
at java.lang.Thread.run(Thread.java:750)

【Reproduction steps】Operations performed around the time the problem appeared
【Symptoms and impact】

【Attachments】
The oblogproxy configuration (conf.json) is as follows:
{
"service_port": 2983,
"encode_threadpool_size": 8,
"encode_queue_size": 20000,
"max_packet_bytes": 67108864,
"record_queue_size": 20000,
"read_timeout_us": 2000000,
"read_fail_interval_us": 1000000,
"read_wait_num": 20000,
"send_timeout_us": 2000000,
"send_fail_interval_us": 1000000,
"check_quota_enable": false,
"command_timeout_s": 10,
"log_quota_size_mb": 5120,
"log_quota_day": 7,
"log_gc_interval_s": 43200,
"oblogreader_path_retain_hour": 168,
"oblogreader_lease_s": 300,
"oblogreader_path": "./run",
"allow_all_tenant": true,
"auth_user": true,
"auth_use_rs": false,
"auth_allow_sys_user": true,
"ob_sys_username": "8D472B19E6E96BC69ABE8070031BC534",
"ob_sys_password": "572391F17534A4AC7D46C4FF790B09B3",
"counter_interval_s": 2,
"metric_enable": true,
"metric_interval_s": 10,
"debug": false,
"verbose": false,
"verbose_packet": false,
"readonly": false,
"count_record": false,
"channel_type": "plain",
"tls_ca_cert_file": "",
"tls_cert_file": "",
"tls_key_file": "",
"tls_verify_peer": true,
"liboblog_tls": false,
"liboblog_tls_cert_path": ""
}
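As a side note, before restarting it can help to sanity-check the handshake-related fields in the conf.json above. A minimal sketch in Python (the field names come from the config above; the checks themselves are my own heuristics, not an official validation list):

```python
import json

# The oblogproxy conf.json fields most relevant to the handshake, as posted above
# (quotes normalized; in practice you would json.load the real conf.json file).
conf_text = '''{
  "service_port": 2983,
  "auth_user": true,
  "auth_allow_sys_user": true,
  "ob_sys_username": "8D472B19E6E96BC69ABE8070031BC534",
  "ob_sys_password": "572391F17534A4AC7D46C4FF790B09B3",
  "oblogreader_path": "./run",
  "channel_type": "plain"
}'''

def sanity_check(conf):
    """Return a list of suspicious settings (heuristic, not exhaustive)."""
    problems = []
    # The sys credentials must be present (in their encrypted form); without
    # them the spawned oblogreader cannot authenticate against the cluster.
    if not conf.get("ob_sys_username") or not conf.get("ob_sys_password"):
        problems.append("ob_sys_username/ob_sys_password not set")
    if conf.get("channel_type") not in ("plain", "tls"):
        problems.append("unexpected channel_type: %r" % conf.get("channel_type"))
    if not 0 < conf.get("service_port", 0) < 65536:
        problems.append("service_port out of range")
    return problems

print(sanity_check(json.loads(conf_text)))  # prints []
```

An empty list only means these basic fields look plausible; it does not rule out the version mismatch discussed below in the thread.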

The network between canal and oblogproxy is reachable.

Which version of oblogproxy are you using?
Also, please provide the oblogproxy run logs, located under run/{client-id}/logs/.

oblogproxy version:
oblogproxy-ce-for-4x-1.1.0-20221201191325.tar.gz
libobcdc 4.2.0.0 100000052023080211
REVISION: 100000052023080211-1a7ba9b87945dd21fa40313b7240ba6c0c67e477
BUILD_TIME: Aug 2 2023 11:58:58
BUILD_FLAGS: RelWithDebInfo

Copyright (c) 2022 Ant Group Co., Ltd.

The oblogproxy log shows only the following:
[2023-08-18 16:49:33.279800] INFO [TLOG] read (ob_log_meta_data_baseline_loader.cpp:170) [1460][][T0][Y531C0A8A797-0000000000600002-0-0] [lt=7] table_meta(tenant_id=1002, table_meta={tenant_id:1002, database_id:201001, table_id:308, schema_version:1691636971733968, table_name:"__all_ddl_error_message", table_type:0, tablet_count:1, column_count:0, rowkey_column_count:0, index_table_count:0, index_column_count:0, index_type:0})
[2023-08-18 16:49:33.281994] INFO [TLOG] read (ob_log_meta_data_baseline_loader.cpp:170) [1460][][T0][Y531C0A8A797-0000000000600002-0-0] [lt=8] table_meta(tenant_id=1002, table_meta={tenant_id:1002, database_id:201001, table_id:50272, schema_version:1691636971581792, table_name:"__all_tenant_error_aux_lob_meta", table_type:13, tablet_count:1, column_count:0, rowkey_column_count:0, index_table_count:0, index_column_count:0, index_type:0})
[2023-08-18 16:49:33.282464] INFO [TLOG] read (ob_log_meta_data_baseline_loader.cpp:170) [1460][][T0][Y531C0A8A797-0000000000600002-0-0] [lt=7] table_meta(tenant_id=1002, table_meta={tenant_id:1002, database_id:201001, table_id:50308, schema_version:1691636971736600, table_name:"__all_ddl_error_message_aux_lob_meta", table_type:13, tablet_count:1, column_count:0, rowkey_column_count:0, index_table_count:0, index_column_count:0, index_type:0})
[2023-08-18 16:49:33.284230] INFO [TLOG] read (ob_log_meta_data_baseline_loader.cpp:170) [1460][][T0][Y531C0A8A797-0000000000600002-0-0] [lt=7] table_meta(tenant_id=1002, table_meta={tenant_id:1002, database_id:201001, table_id:60272, schema_version:1691636971583064, table_name:"__all_tenant_error_aux_lob_piece", table_type:12, tablet_count:1, column_count:0, rowkey_column_count:0, index_table_count:0, index_column_count:0, index_type:0})
[2023-08-18 16:49:33.284359] INFO [TLOG] read (ob_log_meta_data_baseline_loader.cpp:170) [1460][][T0][Y531C0A8A797-0000000000600002-0-0] [lt=8] table_meta(tenant_id=1002, table_meta={tenant_id:1002, database_id:201001, table_id:60308, schema_version:1691636971737480, table_name:"__all_ddl_error_message_aux_lob_piece", table_type:12, tablet_count:1, column_count:0, rowkey_column_count:0, index_table_count:0, index_column_count:0, index_type:0})
[2023-08-18 16:49:33.285473] INFO [TLOG] read (ob_log_meta_data_baseline_loader.cpp:170) [1460][][T0][Y531C0A8A797-0000000000600002-0-0] [lt=7] table_meta(tenant_id=1002, table_meta={tenant_id:1002, database_id:201001, table_id:101064, schema_version:1691636971735472, table_name:"__idx_308_idx_ddl_error_object", table_type:5, tablet_count:1, column_count:0, rowkey_column_count:0, index_table_count:0, index_column_count:0, index_type:1})
[2023-08-18 16:49:34.391601] INFO [TLOG] init_tz_info_wrap (ob_log_timezone_info_getter.cpp:435) [1329][][T0][Y0-0000000000000000-0-0] [lt=18] tz_info_wrap init_time_zone succ(tenant_id=1002, timezone="+8:00", tz_info_version=-1, tz_info_wrap={cur_version:-1, class:2, tz_info:0x7f9da05d1e50, error_on_overlap_time:false, tz_info_pos:{tz_name:"", tz_id:-1, default_transition_type:{lower_time:-9223372036854775808, info:{offset_sec:0, tran_type_id:-1, is_dst:false, abbr:""}}, tz_transition_types:[], tz_revert_types:[], curr_idx:0, next_tz_transition_types:[], next_tz_revert_types:[]}, tz_info_offset:{id:-1, offset:28800, error_on_overlap_time:false}})

Most likely a version mismatch: OceanBase 4.2 requires oblogproxy 1.1.3. You can download the latest version from GitHub or the official download center and try again.

A follow-up question: after canal connects to oblogproxy, if the downstream is Kafka, is it impossible to get the CDC data?
For example, for an update operation:
{"data":[{"id":"10","name":"bb","name2":"aa"}],"database":"test","es":1692686974000,"id":8,"isDdl":false,"mysqlType":{"id":"bigint", …},"sqlType":{"id":-5,"name":12,"name2":12},"table":"t2","ts":1692686979422,"type":"UPDATE"}

I'm not sure what you mean by "impossible to get". Normally, every change that canal subscribes to is converted into an entry and written to the destination; with Kafka as the destination, all change data is stored.
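For illustration, here is a minimal sketch of how a downstream consumer could extract the before/after values of an UPDATE from a canal flat-message payload like the one pasted above. The payload below is an assumed, cleaned-up reconstruction (the "old-bb" value is invented), and the Kafka consumption itself is omitted so the snippet stays self-contained:

```python
import json

# An UPDATE record roughly as canal's flat-message JSON serializer emits it.
# "old" carries pre-update values for the changed columns only, and may be
# absent depending on the canal configuration.
payload = '''{
  "data": [{"id": "10", "name": "bb", "name2": "aa"}],
  "database": "test",
  "table": "t2",
  "type": "UPDATE",
  "es": 1692686974000,
  "ts": 1692686979422,
  "isDdl": false,
  "old": [{"name": "old-bb"}]
}'''

msg = json.loads(payload)

def changed_columns(msg):
    """Pair each row's after-image with its old values, changed columns only."""
    olds = msg.get("old") or [{}] * len(msg["data"])
    return [{col: (before[col], after[col]) for col in before}
            for after, before in zip(msg["data"], olds)]

if msg["type"] == "UPDATE":
    print(f'{msg["database"]}.{msg["table"]}:', changed_columns(msg))
```

If your messages are missing the "old" field entirely, that points at the canal instance configuration rather than at Kafka, which stores whatever canal produces.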