Flink CDC fails to capture OceanBase incremental data

[Environment] Test environment
[OB or other component] OB
[Version] OceanBase_CE 4.2.0.0
[Problem description]
I installed OB first and then, some time later, installed oblogproxy separately. I want to use Flink CDC to capture the table gc_dg_yw in OB. After starting sql-client.sh and running select * from GC_DG_YW, the full snapshot is returned, but rows inserted into OB afterwards never show up in the result view (a newly inserted row with YWZH = 9 is not displayed); the result area below never changes:
YWZH XSFS
1 1
2 2
3 3
4 1
5 5
6 6
7 7
8 7

The init script passed to sql-client.sh at startup:
SET execution.checkpointing.interval = 3s;
SET table.local-time-zone = Asia/Shanghai;
create table GC_DG_YW
(
YWZH varchar(24),
XSFS varchar(12),

XT_XGSJ DATE,
primary key (YWZH) NOT ENFORCED
) WITH (
'connector' = 'oceanbase-cdc',
'scan.startup.mode' = 'initial',
'username' = 'hnwq@rems_tenant',
'password' = '******',
'tenant-name' = 'rems_tenant',
'database-name' = 'remstestdb',
'table-name' = 'gc_dg_yw',
'hostname' = '10.1.12.47',
'port' = '2883',
'rootserver-list' = '10.1.12.47:2882:2881;10.1.12.48:2882:2881;10.1.12.50:2882:2881',
'logproxy.host' = '10.1.12.47',
'logproxy.port' = '2983',
'working-mode' = 'memory'
);
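To reproduce the missing-increment symptom, a row can be inserted on the OceanBase side while the Flink job is running. This is a minimal sketch, assuming the column values from the report (YWZH = 9) and that XT_XGSJ accepts the current date:

```sql
-- Run via obclient/mysql against the rems_tenant tenant while the Flink job is up.
-- If the CDC pipeline is healthy, this row should show up in the sql-client result
-- view within a few checkpoint intervals (3 s per the script above).
INSERT INTO remstestdb.gc_dg_yw (YWZH, XSFS, XT_XGSJ) VALUES ('9', '9', CURDATE());
```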

The oblogproxy config file:
/data/install/oblogproxy/conf/conf.json
{
"service_port": 2983,
"encode_threadpool_size": 8,
"encode_queue_size": 20000,
"max_packet_bytes": 67108864,
"record_queue_size": 20000,
"read_timeout_us": 2000000,
"read_fail_interval_us": 1000000,
"read_wait_num": 20000,
"send_timeout_us": 2000000,
"send_fail_interval_us": 1000000,
"check_quota_enable": false,
"command_timeout_s": 10,
"log_quota_size_mb": 5120,
"log_quota_day": 7,
"log_gc_interval_s": 43200,
"oblogreader_path_retain_hour": 168,
"oblogreader_lease_s": 300,
"oblogreader_path": "./run",
"allow_all_tenant": true,
"auth_user": true,
"auth_use_rs": false,
"auth_allow_sys_user": true,
"ob_sys_username": "8D472B19E6E96BC69ABE8070031BC534",
"ob_sys_password": "******",
"counter_interval_s": 2,
"metric_enable": true,
"metric_interval_s": 10,
"debug": false,
"verbose": false,
"verbose_packet": false,
"readonly": false,
"count_record": false,
"channel_type": "plain",
"tls_ca_cert_file": "",
"tls_cert_file": "",
"tls_key_file": "",
"tls_verify_peer": true,
"liboblog_tls": false,
"liboblog_tls_cert_path": ""
}

[How to reproduce] Relevant operations before/after the problem appeared
[Symptoms and impact]

[Attachments]

There is a warning log file under oblogproxy/log/: logproxy_warn.20231030-223005.29796

E20231030 22:30:05.914144 29796 mysql_protocol.cpp:239] Failed to query observer:Table 'oceanbase.__all_virtual_server_clog_stat' doesn't exist, unexpected column count: 0

E20231030 22:30:05.914330 29796 clog_meta_routine.cpp:45] Failed to check the existence of svr_min_log_timestamp column in __all_virtual_server_clog_stat, disable clog check

W20231030 22:30:32.920292 29796 arranger.cpp:327] Exited oblogreader of pid: 16776 with clientId: 10.1.12.26_8014_1698676203_31_rems_tenant of peer:id:2093049073045602312, fd:8, addr:487325962, port:57432

W20231030 22:30:32.920361 29796 arranger.cpp:342] Failed to fetch peer info of fd:8, errno:9, error:Bad file descriptor

W20231030 22:30:32.920380 29796 arranger.cpp:349] Try to shutdown fd: 8

W20231030 22:30:32.920389 29796 arranger.cpp:351] Shutdown fd: 8

I'm using a tenant I created myself, rems_tenant; under it I created the user hnwq and granted it all privileges:
CREATE USER hnwq IDENTIFIED BY '******';
GRANT ALL PRIVILEGES ON *.* TO hnwq WITH GRANT OPTION;
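As a quick sanity check (a sketch, assuming you can log in as hnwq@rems_tenant through the obproxy on port 2883), you can verify that the grant took effect and that the snapshot read path the connector relies on works:

```sql
SHOW GRANTS FOR hnwq;                      -- should list ALL PRIVILEGES ON *.*
SELECT COUNT(*) FROM remstestdb.gc_dg_yw;  -- same snapshot read the connector performs
```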

Which Flink CDC version are you on? The latest 2.4.x is recommended. Also, if the database/table names need an exact match, use the table-list parameter instead.

You can ignore those warn messages; that check does not affect the actual data pull.

Flink is 1.17.1 and the CDC connector is 2.4.1 (flink-sql-connector-oceanbase-cdc-2.4.1.jar), both the latest.

I've switched to table-list, but incremental data still isn't read. Please help take a look:
) WITH (
'connector' = 'oceanbase-cdc',
'scan.startup.mode' = 'initial',
'username' = 'hnwq@rems_tenant',
'password' = '******',
-- 'database-name' = 'remstestdb',
'table-list' = 'remstestdb.gc_dg_yw',
'hostname' = '10.1.12.47',
'port' = '2883',
'rootserver-list' = '10.1.12.47:2882:2881;10.1.12.48:2882:2881;10.1.12.50:2882:2881',
'logproxy.host' = '10.1.12.47',
'logproxy.port' = '2983',
'working-mode' = 'memory'
);
@川粉

First check whether this client can subscribe to the logs: https://github.com/oceanbase/oblogclient/releases/download/logclient-1.0.7/oblogclient-demo.zip

If that demo runs normally, run the Flink job once more and post its logs here so I can take a look.

@小白OB Did you ever solve this? I'm running into the same problem.