OceanBase Migration Service incremental sync component store: current fetch position suddenly stops advancing

【Environment】Production
【OB or other component】Incremental sync component store
【Version】
【Problem description】The current fetch position of the OceanBase Migration Service incremental sync component store suddenly stopped updating. Apart from pushing data, no other operations were performed on OB or on the sync task. Incremental sync had previously been running normally, fetching logs and synchronizing data; today the latency keeps growing, the sync traffic shows no change, and the store's fetch position is no longer advancing.
【Reproduction path】No operations were performed; the issue appeared suddenly.
【Attachments and logs】
【附件及日志】


store.log

congo.log

libobcdc.log

Server component logs:

[2024-12-11 14:37:25.944627] EDIAG [TLOG.FETCHER] handle_fetch_log_error_ (ob_log_ls_fetch_stream.cpp:1257) [5944][LogStreaWorkThr][T0][Y14A7AC1014C8-00000000B2147ED8-0-0] [lt=98][errcode=0] fetch log fail on server, need_switch_server(fetch_stream=0x7f38f7a0faa0, svr_="172.16.10.165:2882", svr_err=-4233, svr_debug_err=-4233, rcode={code:0, msg:"", warnings:[]}, resp={rpc_ver:1, err:-4233, debug_err:-4233, ls_id:{id:1001}, feedback_type:1, fetch_status:{is_reach_max_lsn:false, is_reach_upper_limit_ts:false, scan_round_count:2, l2s_net_time:607, svr_queue_time:75, log_fetch_time:0, ext_process_time:399}, next_req_lsn:{lsn:8997510678051}, log_num:0, pos:0}) BACKTRACE:0x17bbe240 0x9580485 0x96d39cd 0xbc1879f 0x9644912 0x9498e47 0xbdf6cb7 0xbdf0a1d 0xbdf0490 0xbde9b9f 0xbde92c5 0xc02f7a4 0x17ee11ee 0x17eeb2e1 0x17ee741d 0x7f3bd2f06ea5 0x7f3bd2520b0d

[2024-12-11 14:37:55.947319] ERROR issue_dba_error (ob_log.cpp:1875) [5944][LogStreaWorkThr][T0][Y14A7AC1014C8-00000000B2347CB7-0-0] [lt=416][errcode=-4388] Unexpected internal error happen, please checkout the internal errcode(errcode=0, file="ob_log_ls_fetch_stream.cpp", line_no=1257, info="fetch log fail on server, need_switch_server")

[2024-12-11 14:37:55.947422] EDIAG [TLOG.FETCHER] handle_fetch_log_error_ (ob_log_ls_fetch_stream.cpp:1257) [5944][LogStreaWorkThr][T0][Y14A7AC1014C8-00000000B2347CB7-0-0] [lt=98][errcode=0] fetch log fail on server, need_switch_server(fetch_stream=0x7f38f7a0faa0, svr_="172.16.10.165:2882", svr_err=-4233, svr_debug_err=-4233, rcode={code:0, msg:"", warnings:[]}, resp={rpc_ver:1, err:-4233, debug_err:-4233, ls_id:{id:1001}, feedback_type:1, fetch_status:{is_reach_max_lsn:false, is_reach_upper_limit_ts:false, scan_round_count:2, l2s_net_time:631, svr_queue_time:105, log_fetch_time:0, ext_process_time:588}, next_req_lsn:{lsn:8997510678051}, log_num:0, pos:0}) BACKTRACE:0x17bbe240 0x9580485 0x96d39cd 0xbc1879f 0x9644912 0x9498e47 0xbdf6cb7 0xbdf0a1d 0xbdf0490 0xbde9b9f 0xbde92c5 0xc02f7a4 0x17ee11ee 0x17eeb2e1 0x17ee741d 0x7f3bd2f06ea5 0x7f3

fetch log fail on server, need_switch_server
It looks like the clog fetch is failing. Please post the specific log lines highlighted in the red box.

libobcdc.7z (2.9 MB)

connector.log (3.2 KB)
error.log (117 bytes)

[2024-12-11 13:41:55.460933] EDIAG [TLOG.FETCHER] handle_fetch_log_error_ (ob_log_ls_fetch_stream.cpp:1257) [5950][LogStreaWorkThr][T0][Y14A7AC1014C8-00000000B2446CF2-0-0] [lt=90][errcode=0] fetch log fail on server, need_switch_server(fetch_stream=0x7f38f7a0faa0, svr_="172.16.10.165:2882", svr_err=-4233, svr_debug_err=-4233, rcode={code:0, msg:"", warnings:[]}, resp={rpc_ver:1, err:-4233, debug_err:-4233, ls_id:{id:1001}, feedback_type:1, fetch_status:{is_reach_max_lsn:false, is_reach_upper_limit_ts:false, scan_round_count:2, l2s_net_time:620, svr_queue_time:73, log_fetch_time:0, ext_process_time:435}, next_req_lsn:{lsn:8997510678051}, log_num:0, pos:0}) BACKTRACE:0x17bbe240 0x9580485 0x96d39cd 0xbc1879f 0x9644912 0x9498e47 0xbdf6cb7 0xbdf0a1d 0xbdf0490 0xbde9b9f 0xbde92c5 0xc02f7a4 0x17ee11ee 0x17eeb2e1 0x17ee741d 0x7f3bd2f06ea5 0x7f3bd2520b0d

debug_err=-4233 means the CLOG has been recycled. Check the

earliest syncable position

by executing the following SQL in the oceanbase database under the sys tenant:
WITH palf_log_stat AS (
SELECT
tenant_id,
MAX(begin_scn) AS palf_available_start_scn,
MIN(end_scn) AS palf_available_latest_scn,
SCN_TO_TIMESTAMP(MAX(begin_scn)) AS palf_available_start_scn_display,
SCN_TO_TIMESTAMP(MIN(end_scn)) AS palf_available_latest_scn_display
FROM GV$OB_LOG_STAT
WHERE tenant_id & 0x01 = 0 or tenant_id = 1
GROUP BY tenant_id
),
archivelog_stat AS (
SELECT
a.tenant_id AS tenant_id,
MIN(b.start_scn) AS archive_start_scn,
a.checkpoint_scn AS archive_latest_scn,
a.checkpoint_scn_display AS archive_available_latest_scn_display
FROM CDB_OB_ARCHIVELOG a
LEFT JOIN CDB_OB_ARCHIVELOG_PIECE_FILES b
ON a.tenant_id = b.tenant_id AND a.round_id = b.round_id
AND b.file_status != 'DELETED' AND a.STATUS = 'DOING'
GROUP BY a.tenant_id
)
SELECT
pls.tenant_id,
pls.palf_available_start_scn,
pls.palf_available_latest_scn,
pls.palf_available_start_scn_display AS palf_available_start_scn_display,
pls.palf_available_latest_scn_display AS palf_available_latest_scn_display,
als.archive_start_scn AS archive_available_start_scn,
als.archive_latest_scn AS archive_available_latest_scn,
CASE WHEN als.archive_start_scn IS NOT NULL THEN SCN_TO_TIMESTAMP(als.archive_start_scn) ELSE NULL END AS archive_available_start_scn_display,
als.archive_available_latest_scn_display
FROM palf_log_stat pls
LEFT JOIN archivelog_stat als ON pls.tenant_id = als.tenant_id
GROUP BY pls.tenant_id, pls.palf_available_start_scn;
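
To further confirm that the position the store is requesting has already fallen below the server's retention range, you can also compare the next_req_lsn from the error log against the log range currently kept on the servers. A minimal check, assuming an OceanBase 4.x cluster where GV$OB_LOG_STAT exposes BEGIN_LSN/END_LSN; the ls_id (1001) and LSN value below are taken from the error log in this thread:

-- Current log range kept on each server for log stream 1001
-- (the ls_id reported in the fetch error above).
SELECT tenant_id, ls_id, svr_ip, role,
       begin_lsn, end_lsn,
       SCN_TO_TIMESTAMP(begin_scn) AS begin_time,
       SCN_TO_TIMESTAMP(end_scn)   AS end_time
FROM GV$OB_LOG_STAT
WHERE ls_id = 1001;
-- If begin_lsn is already greater than the next_req_lsn from the error
-- (8997510678051), the logs the store needs have been recycled.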

The error debug_err=-4233 in the log means the CLOG has been recycled, so log fetching fails. The only option is to rebuild the sync link.

Does "recycled" mean that already-generated logs were recycled, or something else?
Can I just create a new incremental sync task and specify a new fetch position? Will any data be lost?

Yes, the generated clog has been recycled. And yes, there may be data loss: because the clog was recycled it can no longer be fetched, which is why OMS stopped synchronizing data.

What are the common causes of this situation?
Our incremental sync task does not have log archiving enabled; could that be the reason?

Yes, log archiving needs to be enabled; the OMS documentation explains this.
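
For reference, a minimal sketch of enabling log archiving from the sys tenant, assuming OceanBase 4.x; the archive path and tenant name below are placeholders, so follow the OMS/OceanBase documentation for the exact procedure in your environment:

-- Set the archive destination for the business tenant (placeholder path and tenant name).
ALTER SYSTEM SET LOG_ARCHIVE_DEST = 'location=file:///data/archive' TENANT = your_tenant;
-- Start archiving for that tenant.
ALTER SYSTEM ARCHIVELOG TENANT = your_tenant;
-- Verify: STATUS should become DOING once archiving is running.
SELECT tenant_id, round_id, status, checkpoint_scn_display FROM CDB_OB_ARCHIVELOG;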

OK, understood. Thanks a lot.