[2024-02-19 19:03:29.234260] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78801-0-0] [lt=11] failed to resolve(ret=-5019)
[2024-02-19 19:03:29.234273] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78801-0-0] [lt=11] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-02-19 19:03:29.234284] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78801-0-0] [lt=9] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-02-19 19:03:29.234293] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78801-0-0] [lt=8] fail to handle physical plan(ret=-5019)
[2024-02-19 19:03:29.234304] WARN [SQL] stmt_query (ob_sql.cpp:175) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78801-0-0] [lt=8] fail to handle text query(stmt=SELECT time_to_usec(gmt_modified) AS last_hb_time, id, zone, svr_ip, svr_port, inner_port, status, with_rootserver, block_migrate_in_time, build_version, stop_time, start_service_time, with_partition FROM __all_server, ret=-5019)
[2024-02-19 19:03:29.234315] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78801-0-0] [lt=8] executor execute failed(ret=-5019)
[2024-02-19 19:03:29.234326] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78801-0-0] [lt=10] execute failed(ret=-5019, executor={ObIExecutor:, sql:"SELECT time_to_usec(gmt_modified) AS last_hb_time, id, zone, svr_ip, svr_port, inner_port, status, with_rootserver, block_migrate_in_time, build_version, stop_time, start_service_time, with_partition FROM __all_server"}, retry_cnt=0)
[2024-02-19 19:03:29.234345] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78801-0-0] [lt=13] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false)
[2024-02-19 19:03:29.234363] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78801-0-0] [lt=15] result set close failed(ret=-5019)
[2024-02-19 19:03:29.234521] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78801-0-0] [lt=156] result set close failed(ret=-5019)
[2024-02-19 19:03:29.234531] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78801-0-0] [lt=9] failed to close result(close_ret=-5019, ret=-5019)
[2024-02-19 19:03:29.234556] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78801-0-0] [lt=8] failed to process record(executor={ObIExecutor:, sql:"SELECT time_to_usec(gmt_modified) AS last_hb_time, id, zone, svr_ip, svr_port, inner_port, status, with_rootserver, block_migrate_in_time, build_version, stop_time, start_service_time, with_partition FROM __all_server"}, record_ret=-5019, ret=-5019)
[2024-02-19 19:03:29.234571] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1106718][RSAsyncTask1][T0][YB42AC0103F2-000611B922A78801-0-0] [lt=12] failed to process final(executor={ObIExecutor:, sql:"SELECT time_to_usec(gmt_modified) AS last_hb_time, id, zone, svr_ip, svr_port, inner_port, status, with_rootserver, block_migrate_in_time, build_version, stop_time, start_service_time, with_partition FROM __all_server"}, aret=-5019, ret=-5019)
[2024-02-19 19:03:29.234583] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1106718][RSAsyncTask1][T0][Y0-0000000000000000-0-0] [lt=9] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT time_to_usec(gmt_modified) AS last_hb_time, id, zone, svr_ip, svr_port, inner_port, status, with_rootserver, block_migrate_in_time, build_version, stop_time, start_service_time, with_partition FROM __all_server)
[2024-02-19 19:03:29.234594] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1106718][RSAsyncTask1][T0][Y0-0000000000000000-0-0] [lt=9] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1)
[2024-02-19 19:03:29.234603] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1106718][RSAsyncTask1][T0][Y0-0000000000000000-0-0] [lt=7] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1)
[2024-02-19 19:03:29.234612] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1106718][RSAsyncTask1][T0][Y0-0000000000000000-0-0] [lt=8] query failed(ret=-5019, conn=0x7fdcdc89a050, start=1708340609233681, sql=SELECT time_to_usec(gmt_modified) AS last_hb_time, id, zone, svr_ip, svr_port, inner_port, status, with_rootserver, block_migrate_in_time, build_version, stop_time, start_service_time, with_partition FROM __all_server)
[2024-02-19 19:03:29.234624] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1106718][RSAsyncTask1][T0][Y0-0000000000000000-0-0] [lt=11] read failed(ret=-5019)
[2024-02-19 19:03:29.234788] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=33] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:29.234810] WARN [SHARE] run2 (ob_async_task_queue.cpp:148) [1106718][RSAsyncTask1][T0][Y0-0000000000000000-0-0] [lt=8] task process failed, start retry(max retry time=0, retry interval=1000000, ret=-5019)
[2024-02-19 19:03:29.234818] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=30] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:29.245021] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=42] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
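The try_recycle_blocks ERROR above is arithmetic visible in the record itself: used_size 1945 MB of total_size 2048 MB is exactly the 95% limit_percent, so PALF can neither recycle blocks nor accept new writes. A minimal remediation sketch, assuming sys-tenant access and that the 4.x parameter names shown in the log (log_disk_size, log_disk_utilization_threshold, log_disk_utilization_limit_threshold) are tunable on this build; the 10G value is an illustrative assumption, not a recommendation:

-- Inspect the current clog disk budget and thresholds (names from the log).
SHOW PARAMETERS LIKE 'log_disk%';
-- Enlarge the log disk so used_percent falls back under warn/limit percent.
ALTER SYSTEM SET log_disk_size = '10G';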
[2024-02-19 19:03:29.245087] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=71] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:29.246875] INFO [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:738) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=35] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] [SELF_CHECK] current server is WRS leader, need start CLUSTER weak read service(tenant_id=1, serve_leader_epoch=0, cur_leader_epoch=138, cluster_service_tablet_id_={id:226}, in_service=false, can_update_version=false, start_service_tstamp_=0, error_count_for_change_leader_=0, last_error_tstamp_for_change_leader_=0)
[2024-02-19 19:03:29.246917] INFO [STORAGE.TRANS] start_service (ob_tenant_weak_read_cluster_service.cpp:336) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=29] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] begin start service(tenant_id=1, is_in_service()=false, can_update_version=false)
[2024-02-19 19:03:29.246929] INFO [STORAGE.TRANS] start_service (ob_tenant_weak_read_cluster_service.cpp:338) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10] start TenantWeakReadClusterService(tenant_id=1)
[2024-02-19 19:03:29.248108] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E6-0-0] [lt=6] table not exist(tenant_id=1, database_id=201001, table_name=__all_weak_read_service, ret=-5019)
[2024-02-19 19:03:29.248132] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E6-0-0] [lt=23] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_weak_read_service, ret=-5019)
[2024-02-19 19:03:29.248177] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E6-0-0] [lt=40] fail to resolve table relation recursively(tenant_id=1, ret=-5019)
[2024-02-19 19:03:29.248190] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E6-0-0] [lt=13] resolve table relation factor failed(ret=-5019, table_name=__all_weak_read_service)
[2024-02-19 19:03:29.248265] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E6-0-0] [lt=65] fail to resolve table(ret=-5019)
[2024-02-19 19:03:29.248277] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E6-0-0] [lt=11] fail to resolve sys view(ret=-5019)
[2024-02-19 19:03:29.248309] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E6-0-0] [lt=9] Table 'oceanbase.__all_weak_read_service' doesn't exist
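Every ret=-5019 in these records is OB_TABLE_NOT_EXIST, and the resolver names the tables it cannot find. A quick way to confirm which internal tables are actually missing is to list them from the dictionary. A minimal sketch, assuming a usable sys-tenant MySQL connection; the table names are taken from this log, and information_schema is standard in OceanBase's MySQL mode:

-- List which of the internal tables from the failing queries still resolve.
SELECT table_name
  FROM information_schema.tables
 WHERE table_schema = 'oceanbase'
   AND table_name IN ('__all_server', '__all_weak_read_service',
                      '__all_freeze_info', '__all_unit', '__all_tenant');

An empty result for any of them matches the resolver records here (database_id=201001, ret=-5019).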
[2024-02-19 19:03:29.248321] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E6-0-0] [lt=10] resolve base or alias table factor failed(ret=-5019)
[2024-02-19 19:03:29.248331] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E6-0-0] [lt=9] resolve basic table failed(ret=-5019)
[2024-02-19 19:03:29.248340] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E6-0-0] [lt=8] fail to exec resolve_table(*table_node, table_item)(ret=-5019)
[2024-02-19 19:03:29.248350] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E6-0-0] [lt=8] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019)
[2024-02-19 19:03:29.248362] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E6-0-0] [lt=9] resolve normal query failed(ret=-5019)
[2024-02-19 19:03:29.248373] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E6-0-0] [lt=9] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073)
[2024-02-19 19:03:29.248398] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E6-0-0] [lt=11] failed to resolve(ret=-5019)
[2024-02-19 19:03:29.248412] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E6-0-0] [lt=13] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-02-19 19:03:29.248444] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E6-0-0] [lt=28] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-02-19 19:03:29.248455] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E6-0-0] [lt=10] fail to handle physical plan(ret=-5019)
[2024-02-19 19:03:29.248467] WARN [SQL] stmt_query (ob_sql.cpp:175) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E6-0-0] [lt=9] fail to handle text query(stmt=select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = '', ret=-5019)
[2024-02-19 19:03:29.248464] INFO [STORAGE] operator() (ob_tenant_freezer.cpp:124) [1107575][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=16] ====== tenant freeze timer task ======
[2024-02-19 19:03:29.248496] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E6-0-0] [lt=11] executor execute failed(ret=-5019)
[2024-02-19 19:03:29.248509] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E6-0-0] [lt=12] execute failed(ret=-5019, executor={ObIExecutor:, sql:"select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = ''"}, retry_cnt=0)
[2024-02-19 19:03:29.248532] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E6-0-0] [lt=14] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false)
[2024-02-19 19:03:29.248553] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E6-0-0] [lt=18] result set close failed(ret=-5019)
[2024-02-19 19:03:29.248563] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E6-0-0] [lt=9] result set close failed(ret=-5019)
[2024-02-19 19:03:29.248571] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E6-0-0] [lt=7] failed to close result(close_ret=-5019, ret=-5019)
[2024-02-19 19:03:29.248598] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E6-0-0] [lt=8] failed to process record(executor={ObIExecutor:, sql:"select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = ''"}, record_ret=-5019, ret=-5019)
[2024-02-19 19:03:29.248611] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E6-0-0] [lt=12] failed to process final(executor={ObIExecutor:, sql:"select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = ''"}, aret=-5019, ret=-5019)
[2024-02-19 19:03:29.248623] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9] execute sql failed(ret=-5019, tenant_id=1, sql=select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = '')
[2024-02-19 19:03:29.248635] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1)
[2024-02-19 19:03:29.248644] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1)
[2024-02-19 19:03:29.248655] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8] query failed(ret=-5019, conn=0x7fdcdc9be050, start=1708340609247802, sql=select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = '')
[2024-02-19 19:03:29.248666] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11] read failed(ret=-5019)
[2024-02-19 19:03:29.248678] WARN [STORAGE.TRANS] query_cluster_version_range_ (ob_tenant_weak_read_cluster_service.cpp:192) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8] execute sql read fail(ret=-5019, ret="OB_TABLE_NOT_EXIST", exec_tenant_id=1, tenant_id=1, sql=select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = '')
[2024-02-19 19:03:29.248755] WARN [STORAGE.TRANS] start_service (ob_tenant_weak_read_cluster_service.cpp:367) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13] query cluster version range from WRS table fail(ret=-5019, ret="OB_TABLE_NOT_EXIST")
[2024-02-19 19:03:29.248767] INFO [STORAGE.TRANS] start_service (ob_tenant_weak_read_cluster_service.cpp:415) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] start service done(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, in_service=false, leader_epoch=0, current_version=0, delta=1708340609248763, min_version=0, max_version=0, max_stale_time=5000000000, all_valid_server_count=0, total_time=1862, wlock_time=34, check_leader_time=2, query_version_time=0, persist_version_time=0)
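The weak-read service start fails on the same missing-table root cause; its probe statement is recorded verbatim in the stmt_query record above. A sketch for re-checking once the internal tables are back, taken directly from the log (tenant_id=1 is the sys tenant):

-- The exact statement the TenantWeakReadClusterService could not run.
select min_version, max_version
  from __all_weak_read_service
 where tenant_id = 1 and level_id = 0 and level_value = '';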
[2024-02-19 19:03:29.248786] WARN [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:781) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=18] start CLUSTER weak read service fail(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1)
[2024-02-19 19:03:29.248799] INFO [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:791) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] [SELF_CHECK] done(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, need_start_service=true, need_stop_service=false, need_change_leader=false, is_in_service()=false, can_update_version=false, cur_leader_epoch=138, start_service_tstamp_=0, error_count_for_change_leader_=0, last_error_tstamp_for_change_leader_=0)
[2024-02-19 19:03:29.248864] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14] get wrs ts(ls_id={id:1}, delta_ns=-1706042771806048699, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807)
[2024-02-19 19:03:29.248881] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1)
[2024-02-19 19:03:29.249805] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D3-0-0] [lt=17] table not exist(tenant_id=1, database_id=201001, table_name=__all_freeze_info, ret=-5019)
[2024-02-19 19:03:29.249823] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D3-0-0] [lt=17] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_freeze_info, ret=-5019)
[2024-02-19 19:03:29.249833] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D3-0-0] [lt=8] fail to resolve table relation recursively(tenant_id=1, ret=-5019)
[2024-02-19 19:03:29.249841] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D3-0-0] [lt=8] resolve table relation factor failed(ret=-5019, table_name=__all_freeze_info)
[2024-02-19 19:03:29.249851] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D3-0-0] [lt=7] fail to resolve table(ret=-5019)
[2024-02-19 19:03:29.249858] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D3-0-0] [lt=7] fail to resolve sys view(ret=-5019)
[2024-02-19 19:03:29.249869] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D3-0-0] [lt=5] Table 'oceanbase.__all_freeze_info' doesn't exist
[2024-02-19 19:03:29.249876] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D3-0-0] [lt=7] resolve base or alias table factor failed(ret=-5019)
[2024-02-19 19:03:29.249883] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D3-0-0] [lt=7] resolve basic table failed(ret=-5019)
[2024-02-19 19:03:29.249890] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D3-0-0] [lt=6] fail to exec resolve_table(*table_node, table_item)(ret=-5019)
[2024-02-19 19:03:29.249896] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D3-0-0] [lt=5] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019)
[2024-02-19 19:03:29.249904] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D3-0-0] [lt=6] resolve normal query failed(ret=-5019)
[2024-02-19 19:03:29.249912] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D3-0-0] [lt=7] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073)
[2024-02-19 19:03:29.249929] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D3-0-0] [lt=9] failed to resolve(ret=-5019)
[2024-02-19 19:03:29.249962] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D3-0-0] [lt=31] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-02-19 19:03:29.249982] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D3-0-0] [lt=17] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-02-19 19:03:29.249991] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D3-0-0] [lt=9] fail to handle physical plan(ret=-5019)
[2024-02-19 19:03:29.250002] WARN [SQL] stmt_query (ob_sql.cpp:175) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D3-0-0] [lt=8] fail to handle text query(stmt=SELECT * FROM __all_freeze_info ORDER BY frozen_scn DESC LIMIT 1, ret=-5019)
[2024-02-19 19:03:29.250013] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D3-0-0] [lt=10] executor execute failed(ret=-5019)
[2024-02-19 19:03:29.250024] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D3-0-0] [lt=10] execute failed(ret=-5019, executor={ObIExecutor:, sql:"SELECT * FROM __all_freeze_info ORDER BY frozen_scn DESC LIMIT 1"}, retry_cnt=0)
[2024-02-19 19:03:29.250042] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D3-0-0] [lt=13] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false)
[2024-02-19 19:03:29.250059] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D3-0-0] [lt=14] result set close failed(ret=-5019)
[2024-02-19 19:03:29.250069] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D3-0-0] [lt=8] result set close failed(ret=-5019)
[2024-02-19 19:03:29.250077] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D3-0-0] [lt=8] failed to close result(close_ret=-5019, ret=-5019)
[2024-02-19 19:03:29.250137] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D3-0-0] [lt=8] failed to process record(executor={ObIExecutor:, sql:"SELECT * FROM __all_freeze_info ORDER BY frozen_scn DESC LIMIT 1"}, record_ret=-5019, ret=-5019)
[2024-02-19 19:03:29.250152] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D3-0-0] [lt=14] failed to process final(executor={ObIExecutor:, sql:"SELECT * FROM __all_freeze_info ORDER BY frozen_scn DESC LIMIT 1"}, aret=-5019, ret=-5019)
[2024-02-19 19:03:29.250165] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1107575][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=10] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT * FROM __all_freeze_info ORDER BY frozen_scn DESC LIMIT 1)
[2024-02-19 19:03:29.250177] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1107575][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=10] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1)
[2024-02-19 19:03:29.250187] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1107575][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=9] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1)
[2024-02-19 19:03:29.250198] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1107575][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=9] query failed(ret=-5019, conn=0x7fdcdc924050, start=1708340609249651, sql=SELECT * FROM __all_freeze_info ORDER BY frozen_scn DESC LIMIT 1)
[2024-02-19 19:03:29.250211] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1107575][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=12] read failed(ret=-5019)
[2024-02-19 19:03:29.250221] WARN [SHARE] get_freeze_info (ob_freeze_info_proxy.cpp:68) [1107575][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=9] fail to execute sql(ret=-5019, ret="OB_TABLE_NOT_EXIST", sql=SELECT * FROM __all_freeze_info ORDER BY frozen_scn DESC LIMIT 1, tenant_id=1)
[2024-02-19 19:03:29.250333] WARN [STORAGE] get_global_frozen_scn_ (ob_tenant_freezer.cpp:1086) [1107575][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=7] get_frozen_scn failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1)
[2024-02-19 19:03:29.250343] WARN [STORAGE] do_major_if_need_ (ob_tenant_freezer.cpp:1188) [1107575][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=9] fail to get global frozen version(ret=-5019)
[2024-02-19 19:03:29.250377] WARN [STORAGE] check_and_freeze_normal_data_ (ob_tenant_freezer.cpp:379) [1107575][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=9] [TenantFreezer] fail to do major freeze(tmp_ret=-5019)
[2024-02-19 19:03:29.250409] INFO [STORAGE] check_and_freeze_tx_data_ (ob_tenant_freezer.cpp:419) [1107575][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=35] [TenantFreezer] Trigger Tx Data Table Self Freeze. (tenant_info_.tenant_id_=1, tenant_tx_data_mem_used=430988896, self_freeze_max_limit_=214748364, hold_memory=1718894592, self_freeze_tenant_hold_limit_=429496729, self_freeze_min_limit_=21474836)
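The tenant freezer's major-freeze path fails the same way: get_freeze_info cannot read __all_freeze_info, so do_major_if_need_ aborts. The statement, verbatim from the log, should return the most recent major-freeze row once the table resolves again:

-- Latest major-freeze metadata; expected to fail with -5019 until the
-- internal tables are restored.
SELECT * FROM __all_freeze_info ORDER BY frozen_scn DESC LIMIT 1;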
[2024-02-19 19:03:29.251012] INFO [STORAGE] do_tx_data_table_freeze_ (ob_tenant_freezer_rpc.cpp:73) [1108357][T1_TNT_L0_G0][T1][YB42AC0103F2-000611B9223790D4-0-0] [lt=10] start tx data table self freeze task in rpc handle thread(arg_=freeze_type:3)
[2024-02-19 19:03:29.251030] INFO [STORAGE] self_freeze_task (ob_tx_data_table.cpp:794) [1108357][T1_TNT_L0_G0][T1][YB42AC0103F2-000611B9223790D4-0-0] [lt=14] start tx data table self freeze task(get_ls_id()={id:1})
[2024-02-19 19:03:29.251042] INFO [STORAGE] freeze (ob_tx_data_memtable_mgr.cpp:192) [1108357][T1_TNT_L0_G0][T1][YB42AC0103F2-000611B9223790D4-0-0] [lt=9] start freeze tx data memtable(ls_id_={id:1})
[2024-02-19 19:03:29.251051] INFO [STORAGE] freeze_ (ob_tx_data_memtable_mgr.cpp:228) [1108357][T1_TNT_L0_G0][T1][YB42AC0103F2-000611B9223790D4-0-0] [lt=7] There is a freezed memetable existed. Try freeze after flushing it.(ret=-4023, ret="OB_EAGAIN", get_memtable_count_()=2)
[2024-02-19 19:03:29.251060] WARN [STORAGE] freeze (ob_tx_data_memtable_mgr.cpp:206) [1108357][T1_TNT_L0_G0][T1][YB42AC0103F2-000611B9223790D4-0-0] [lt=8] freeze tx data memtable fail.(ret=-4023, ret="OB_EAGAIN")
[2024-02-19 19:03:29.251067] WARN [STORAGE] flush (ob_tx_data_memtable_mgr.cpp:430) [1108357][T1_TNT_L0_G0][T1][YB42AC0103F2-000611B9223790D4-0-0] [lt=6] freeze failed(ret=-4023, ret="OB_EAGAIN", this=0x7fdce89de180)
[2024-02-19 19:03:29.251075] WARN [STORAGE] self_freeze_task (ob_tx_data_table.cpp:798) [1108357][T1_TNT_L0_G0][T1][YB42AC0103F2-000611B9223790D4-0-0] [lt=7] self freeze of tx data memtable failed.(ret=-4023, ret="OB_EAGAIN", ls_id={id:1}, memtable_mgr_={ObIMemtableMgr:{Memtables:this:0x7fdce89de180, ref_cnt:1, is_inited:true, tablet_id:{id:49402}, freezer:0x7fdce89e30d0, table_type:1, memtable_head:0, memtable_tail:2, t3m:0x7fdd18bce030, tables:[0x7fdce5eea080, 0x7fdce5eea360, null, null, null, null, null, null, null, null, null, null, null, null, null, null]}, is_freezing:false, ls_id:{id:1}, tx_data_table:0x7fdce89e4550, ls_tablet_svr:0x7fdce89de160, slice_allocator:0x7fdce89e4590})
[2024-02-19 19:03:29.251095] INFO [STORAGE] self_freeze_task (ob_tx_data_table.cpp:801) [1108357][T1_TNT_L0_G0][T1][YB42AC0103F2-000611B9223790D4-0-0] [lt=20] finish tx data table self freeze task(ret=-4023, ret="OB_EAGAIN", get_ls_id()={id:1})
[2024-02-19 19:03:29.251117] WARN [STORAGE] do_tx_data_table_freeze_ (ob_tenant_freezer_rpc.cpp:102) [1108357][T1_TNT_L0_G0][T1][YB42AC0103F2-000611B9223790D4-0-0] [lt=20] freeze tx data table failed.(ret=-4023, ret="OB_EAGAIN", arg_=freeze_type:3)
[2024-02-19 19:03:29.251127] INFO [STORAGE] do_tx_data_table_freeze_ (ob_tenant_freezer_rpc.cpp:115) [1108357][T1_TNT_L0_G0][T1][YB42AC0103F2-000611B9223790D4-0-0] [lt=9] finish self freeze task in rpc handle thread(ret=-4023, ret="OB_EAGAIN", arg_=freeze_type:3)
[2024-02-19 19:03:29.251138] WARN [STORAGE] process (ob_tenant_freezer_rpc.cpp:56) [1108357][T1_TNT_L0_G0][T1][YB42AC0103F2-000611B9223790D4-0-0] [lt=9] do tx data table freeze failed.(ret=-4023, ret="OB_EAGAIN", arg_=freeze_type:3)
[2024-02-19 19:03:29.251503] INFO [STORAGE] rpc_callback (ob_tenant_freezer.cpp:990) [1106789][RpcIO][T0][Y0-0000000000000000-0-0] [lt=23] [TenantFreezer] call back of tenant freezer request
[2024-02-19 19:03:29.255262] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=41] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:29.255290] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=29] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:29.257642] INFO [SHARE] run_loop_ (ob_bg_thread_monitor.cpp:331) [1109111][BGThreadMonitor][T0][Y0-0000000000000000-0-0] [lt=29] current monitor number(seq_=-1)
[2024-02-19 19:03:29.261616] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=35] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340609261594})
[2024-02-19 19:03:29.261659] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=45] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340609202899}})
[2024-02-19 19:03:29.265439] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=30] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:29.265468] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=31] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:29.275588] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=26] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:29.275634] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=49] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:29.285763] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:29.285800] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=36] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:29.295940] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=28] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:29.295993] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=55] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:29.303557] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=20] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0)
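The ret=-4023 (OB_EAGAIN) records earlier in this burst are secondary: a previously frozen tx-data memtable (get_memtable_count_()=2) has not yet flushed, so a new freeze is refused, and with the clog disk pinned at its 95% limit it is plausible that flush and checkpoint progress are stalled as well. A hedged check, assuming the 4.x GV$OB_MEMSTORE view is available on this build:

-- Per-tenant memstore usage; persistently high usage while the clog disk
-- is full is consistent with the blocked self-freeze above.
SELECT * FROM oceanbase.GV$OB_MEMSTORE WHERE tenant_id = 1;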
[2024-02-19 19:03:29.303594] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=39] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340609303543}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:29.303615] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=19] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340609303543}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:29.306143] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=51] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:29.306174] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=33] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:29.316311] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=30] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:29.316349] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=41] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:29.322733] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A020-0-0] [lt=6] table not exist(tenant_id=1, database_id=201001, table_name=__all_server, ret=-5019)
[2024-02-19 19:03:29.322764] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A020-0-0] [lt=32] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_server, ret=-5019)
[2024-02-19 19:03:29.322777] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A020-0-0] [lt=11] fail to resolve table relation recursively(tenant_id=1, ret=-5019)
[2024-02-19 19:03:29.322787] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A020-0-0] [lt=9] resolve table relation factor failed(ret=-5019, table_name=__all_server)
[2024-02-19 19:03:29.322801] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A020-0-0] [lt=9] fail to resolve table(ret=-5019)
[2024-02-19 19:03:29.322811] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A020-0-0] [lt=9] fail to resolve sys view(ret=-5019)
[2024-02-19 19:03:29.322823] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A020-0-0] [lt=7] Table 'oceanbase.__all_server' doesn't exist
[2024-02-19 19:03:29.322832] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A020-0-0] [lt=7] resolve base or alias table factor failed(ret=-5019)
[2024-02-19 19:03:29.322845] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A020-0-0] [lt=12] resolve basic table failed(ret=-5019)
[2024-02-19 19:03:29.322858] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A020-0-0] [lt=13] fail to exec resolve_table(*table_node, table_item)(ret=-5019)
[2024-02-19 19:03:29.322868] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A020-0-0] [lt=9] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019)
[2024-02-19 19:03:29.322886] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A020-0-0] [lt=16] resolve normal query failed(ret=-5019)
[2024-02-19 19:03:29.322897] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A020-0-0] [lt=9] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073)
[2024-02-19 19:03:29.322923] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A020-0-0] [lt=17] failed to resolve(ret=-5019)
[2024-02-19 19:03:29.322938] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A020-0-0] [lt=14] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-02-19 19:03:29.322950] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A020-0-0] [lt=9] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-02-19 19:03:29.322963] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A020-0-0] [lt=12] fail to handle physical plan(ret=-5019)
[2024-02-19 19:03:29.322975] WARN [SQL] stmt_query (ob_sql.cpp:175) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A020-0-0] [lt=9] fail to handle text query(stmt=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882, ret=-5019)
[2024-02-19 19:03:29.322990] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A020-0-0] [lt=13] executor execute failed(ret=-5019)
[2024-02-19 19:03:29.323001] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A020-0-0] [lt=9] execute failed(ret=-5019, executor={ObIExecutor:, sql:"SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882"}, retry_cnt=0)
[2024-02-19 19:03:29.323024] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A020-0-0] [lt=16] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false)
[2024-02-19 19:03:29.323046] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A020-0-0] [lt=19] result set close failed(ret=-5019)
[2024-02-19 19:03:29.323055] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A020-0-0] [lt=8] result set close failed(ret=-5019)
[2024-02-19 19:03:29.323064] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A020-0-0] [lt=8] failed to close result(close_ret=-5019, ret=-5019)
[2024-02-19 19:03:29.323094] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A020-0-0] [lt=14] failed to process record(executor={ObIExecutor:, sql:"SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882"}, record_ret=-5019, ret=-5019)
[2024-02-19 19:03:29.323109] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A020-0-0] [lt=14] failed to process final(executor={ObIExecutor:, sql:"SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882"}, aret=-5019, ret=-5019)
[2024-02-19 19:03:29.323120] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=9] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882)
[2024-02-19 19:03:29.323136] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=13] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1)
[2024-02-19 19:03:29.323145] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=9] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1)
[2024-02-19 19:03:29.323160] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=13] query failed(ret=-5019, conn=0x7fdd189bc050, start=1708340609322486, sql=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882)
[2024-02-19 19:03:29.323174] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=13] read failed(ret=-5019)
[2024-02-19 19:03:29.323191] WARN get_my_sql_result_ (ob_table_access_helper.h:329) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=13] GCTX.sql_proxy_ read failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", MTL_ID()=1, tenant_id=1, columns=0x7fdcfffccd78, table=__all_server, condition=where svr_ip='172.1.3.242' and svr_port=2882, sql=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882, columns_str="zone")
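The coordinator's zone lookup is the same -5019 again, surfaced one layer up as OB_ERR_UNEXPECTED by read_single_row. The underlying statement, verbatim from the log, is a one-row self-lookup and is cheap to re-run after recovery:

-- Self zone lookup used by get_self_zone_name; a failing or empty result
-- explains the 'zone name is empty' records that follow.
SELECT zone FROM __all_server
 where svr_ip='172.1.3.242' and svr_port=2882;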
[2024-02-19 19:03:29.323216] WARN read_single_row (ob_table_access_helper.h:178) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=20] get mysql result failed(ret=-4016, ret="OB_ERR_UNEXPECTED", MTL_ID()=1, columns=0x7fdcfffccd78, table=__all_server, where_condition=where svr_ip='172.1.3.242' and svr_port=2882)
[2024-02-19 19:03:29.323293] WARN [COORDINATOR] get_self_zone_name (table_accessor.cpp:517) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=17] get zone from __all_server failed(ret=-4016, ret="OB_ERR_UNEXPECTED", columns=0x7fdcfffccd78, where_condition="where svr_ip='172.1.3.242' and svr_port=2882", zone_name_holder=)
[2024-02-19 19:03:29.323310] WARN [COORDINATOR] get_all_ls_election_reference_info (table_accessor.cpp:450) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=16] get self zone name failed(ret=-4016, ret="OB_ERR_UNEXPECTED", all_ls_election_reference_info=[])
[2024-02-19 19:03:29.323321] WARN [COORDINATOR] get_all_ls_election_reference_info (table_accessor.cpp:459) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=10] zone name is empty(ret=-4016, ret="OB_ERR_UNEXPECTED", all_ls_election_reference_info=[])
[2024-02-19 19:03:29.323334] WARN [COORDINATOR] refresh (ob_leader_coordinator.cpp:107) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=10] get all ls election reference info failed(ret=-4016, ret="OB_ERR_UNEXPECTED", MTL_ID()=1)
[2024-02-19 19:03:29.326469] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=28] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:29.326510] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=36] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:29.328326] WARN [COORDINATOR] get_ls_election_reference_info (ob_leader_coordinator.cpp:138) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC6F-0-0] [lt=103] can not find this ls_id in all_ls_election_reference_info_(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1}, all_ls_election_reference_info=[])
[2024-02-19 19:03:29.328354] WARN [COORDINATOR] refresh_ (election_priority_v1.cpp:144) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC6F-0-0] [lt=28] fail to get ls election reference info(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, *this={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807})
[2024-02-19 19:03:29.328377] WARN [COORDINATOR] operator() (election_priority_impl.cpp:246) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC6F-0-0] [lt=22] refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id_={id:1}, element={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807})
[2024-02-19 19:03:29.328397] WARN iterate (ob_tuple.h:272) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC6F-0-0] [lt=18] assign element failed(ret=-4018, std::get(tuple)={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807})
[2024-02-19 19:03:29.328407] WARN [COORDINATOR] refresh (election_priority_impl.cpp:261) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC6F-0-0] [lt=10] refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1}, *this={priority:{is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}})
[2024-02-19 19:03:29.329950] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1009) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=72] Wash compute wash size(is_wash_valid=true, sys_total_wash_size=1386106880, global_cache_size=4161536, tenant_max_wash_size=4161536, tenant_min_wash_size=4161536, tenant_ids_=[512, 500, 999, 506, 507, 508, 509, 510, 1])
[2024-02-19 19:03:29.330084] INFO [COMMON] wash (ob_kvcache_store.cpp:343) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=81] Wash time detail, (compute_wash_size_time=184, refresh_score_time=47, wash_time=5)
[2024-02-19 19:03:29.336621] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=27] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:29.336644] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=24] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:29.345369] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A5-0-0] [lt=9] table not exist(tenant_id=1, database_id=201001, table_name=__all_unit, ret=-5019)
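With the zone name unavailable, the coordinator publishes no election reference info, so refresh_ reports OB_ENTRY_NOT_EXIST for ls_id {id:1} even though earlier records show a live leader epoch (cur_leader_epoch=138). A hedged sketch for checking log-stream roles directly, assuming the 4.x GV$OB_LOG_STAT view exists on this build:

-- Which server currently leads log stream 1 of tenant 1.
SELECT tenant_id, ls_id, svr_ip, svr_port, role
  FROM oceanbase.GV$OB_LOG_STAT
 WHERE tenant_id = 1 AND ls_id = 1;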
[2024-02-19 19:03:29.345404] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A5-0-0] [lt=35] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_unit, ret=-5019)
[2024-02-19 19:03:29.345419] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A5-0-0] [lt=12] fail to resolve table relation recursively(tenant_id=1, ret=-5019)
[2024-02-19 19:03:29.345430] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A5-0-0] [lt=11] resolve table relation factor failed(ret=-5019, table_name=__all_unit)
[2024-02-19 19:03:29.345447] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A5-0-0] [lt=13] fail to resolve table(ret=-5019)
[2024-02-19 19:03:29.345460] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A5-0-0] [lt=12] fail to resolve sys view(ret=-5019)
[2024-02-19 19:03:29.345478] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A5-0-0] [lt=11] Table 'oceanbase.__all_unit' doesn't exist
[2024-02-19 19:03:29.345491] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A5-0-0] [lt=12] resolve base or alias table factor failed(ret=-5019)
[2024-02-19 19:03:29.345501] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A5-0-0] [lt=9] resolve basic table failed(ret=-5019)
[2024-02-19 19:03:29.345511] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A5-0-0] [lt=9] fail to exec resolve_table(*table_node, table_item)(ret=-5019)
[2024-02-19 19:03:29.345521] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A5-0-0] [lt=9] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019)
[2024-02-19 19:03:29.345532] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A5-0-0] [lt=9] resolve normal query failed(ret=-5019)
[2024-02-19 19:03:29.345542] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A5-0-0] [lt=9] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073)
[2024-02-19 19:03:29.345562] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A5-0-0] [lt=11] failed to resolve(ret=-5019)
[2024-02-19 19:03:29.345572] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A5-0-0] [lt=9] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-02-19 19:03:29.345588] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A5-0-0] [lt=13] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-02-19 19:03:29.345600] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A5-0-0] [lt=12] fail to handle physical plan(ret=-5019)
[2024-02-19 19:03:29.345615] WARN [SQL] stmt_query (ob_sql.cpp:175) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A5-0-0] [lt=11] fail to handle text query(stmt=SELECT * from __all_unit where resource_pool_id in (select resource_pool_id from __all_resource_pool where tenant_id = 1), ret=-5019)
[2024-02-19 19:03:29.345629] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A5-0-0] [lt=13] executor execute failed(ret=-5019)
[2024-02-19 19:03:29.345640] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A5-0-0] [lt=10] execute failed(ret=-5019, executor={ObIExecutor:, sql:"SELECT * from __all_unit where resource_pool_id in (select resource_pool_id from __all_resource_pool where tenant_id = 1)"}, retry_cnt=0)
[2024-02-19 19:03:29.345661] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A5-0-0] [lt=14] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false)
[2024-02-19 19:03:29.345684] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A5-0-0] [lt=19] result set close failed(ret=-5019)
[2024-02-19 19:03:29.345697] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A5-0-0] [lt=12] result set close failed(ret=-5019)
[2024-02-19 19:03:29.345707] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A5-0-0] [lt=8] failed to close result(close_ret=-5019, ret=-5019)
[2024-02-19 19:03:29.345733] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A5-0-0] [lt=9] failed to process record(executor={ObIExecutor:, sql:"SELECT * from __all_unit where resource_pool_id in (select resource_pool_id from __all_resource_pool where tenant_id = 1)"}, record_ret=-5019, ret=-5019)
[2024-02-19 19:03:29.345753] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1106807][OmtNodeBalancer][T0][YB42AC0103F2-000611B9211784A5-0-0] [lt=17] failed to process final(executor={ObIExecutor:, sql:"SELECT * from __all_unit where resource_pool_id in (select resource_pool_id from __all_resource_pool where tenant_id = 1)"}, aret=-5019, ret=-5019)
[2024-02-19 19:03:29.345770] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1106807][OmtNodeBalancer][T0][YB42AC0103F2-000611B9211784A5-0-0] [lt=14] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT * from __all_unit where resource_pool_id in (select resource_pool_id from __all_resource_pool where tenant_id = 1))
[2024-02-19 19:03:29.345786] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1106807][OmtNodeBalancer][T0][YB42AC0103F2-000611B9211784A5-0-0] [lt=14] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1)
[2024-02-19 19:03:29.345799] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1106807][OmtNodeBalancer][T0][YB42AC0103F2-000611B9211784A5-0-0] [lt=12] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1)
[2024-02-19 19:03:29.345812] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1106807][OmtNodeBalancer][T0][YB42AC0103F2-000611B9211784A5-0-0] [lt=12] query failed(ret=-5019, conn=0x7fdcd7d06050, start=1708340609345149, sql=SELECT * from __all_unit where resource_pool_id in (select resource_pool_id from __all_resource_pool where tenant_id = 1))
[2024-02-19 19:03:29.345828] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1106807][OmtNodeBalancer][T0][YB42AC0103F2-000611B9211784A5-0-0] [lt=14] read failed(ret=-5019)
[2024-02-19 19:03:29.345843] WARN [SHARE] read_units (ob_unit_table_operator.cpp:958) [1106807][OmtNodeBalancer][T0][YB42AC0103F2-000611B9211784A5-0-0] [lt=12] execute sql failed(sql=SELECT * from __all_unit where resource_pool_id in (select resource_pool_id from __all_resource_pool where tenant_id = 1), ret=-5019)
[2024-02-19 19:03:29.345917] WARN [SHARE] get_units_by_tenant (ob_unit_table_operator.cpp:715) [1106807][OmtNodeBalancer][T0][YB42AC0103F2-000611B9211784A5-0-0] [lt=12] read_units failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", sql=SELECT * from __all_unit where resource_pool_id in (select resource_pool_id from __all_resource_pool where tenant_id = 1))
[2024-02-19 19:03:29.345935] WARN [SHARE] get_sys_unit_count (ob_unit_table_operator.cpp:68) [1106807][OmtNodeBalancer][T0][YB42AC0103F2-000611B9211784A5-0-0] [lt=17] failed to get units by tenant(ret=-5019, ret="OB_TABLE_NOT_EXIST")
[2024-02-19 19:03:29.345947] WARN [SHARE] get_sys_unit_count (ob_unit_getter.cpp:436) [1106807][OmtNodeBalancer][T0][YB42AC0103F2-000611B9211784A5-0-0] [lt=10] ut_operator get sys unit count failed(ret=-5019, ret="OB_TABLE_NOT_EXIST")
[2024-02-19 19:03:29.345961] WARN [SERVER.OMT] run1 (ob_tenant_node_balancer.cpp:88) [1106807][OmtNodeBalancer][T0][YB42AC0103F2-000611B9211784A5-0-0] [lt=12] get sys unit count fail(ret=-5019, ret="OB_TABLE_NOT_EXIST")
[2024-02-19 19:03:29.345998] INFO [SERVER.OMT] run1 (ob_tenant_node_balancer.cpp:102) [1106807][OmtNodeBalancer][T0][YB42AC0103F2-000611B9211784A5-0-0] [lt=35] refresh tenant units(sys_unit_cnt=0, units=[], ret=-5019, ret="OB_TABLE_NOT_EXIST")
[2024-02-19 19:03:29.346745] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=24] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:29.346767] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=22] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:29.346889] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A5-0-0] [lt=14] table not exist(tenant_id=1, database_id=201001, table_name=__all_tenant, ret=-5019)
[2024-02-19 19:03:29.346913] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A5-0-0] [lt=22] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_tenant, ret=-5019) [2024-02-19 19:03:29.346928] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A5-0-0] [lt=14] fail to resolve table relation recursively(tenant_id=1, ret=-5019) [2024-02-19 19:03:29.346942] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A5-0-0] [lt=12] resolve table relation factor failed(ret=-5019, table_name=__all_tenant) [2024-02-19 19:03:29.346949] WARN [STORAGE.TRANS] post_cluster_heartbeat_rpc_ (ob_tenant_weak_read_service.cpp:797) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=19] get cluster service master fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, cluster_service_tablet_id={id:226}) [2024-02-19 19:03:29.346957] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A5-0-0] [lt=12] fail to resolve table(ret=-5019) [2024-02-19 19:03:29.346967] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A5-0-0] [lt=9] fail to resolve sys view(ret=-5019) [2024-02-19 19:03:29.346964] WARN [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:438) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15] tenant weak read service cluster heartbeat RPC fail(rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.1.3.242:2882", cluster_service_tablet_id={id:226}) [2024-02-19 19:03:29.346979] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:756) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version=1708336686671726824, valid_part_count=1, total_part_count=1, generate_timestamp=1708340609346933) [2024-02-19 19:03:29.346983] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A5-0-0] [lt=12] Table 'oceanbase.__all_tenant' doesn't exist [2024-02-19 19:03:29.346990] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:766) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1708340609146861, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-02-19 19:03:29.346993] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A5-0-0] [lt=9] resolve base or alias table factor failed(ret=-5019) [2024-02-19 19:03:29.347003] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A5-0-0] [lt=10] resolve basic table failed(ret=-5019) [2024-02-19 19:03:29.347013] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A5-0-0] [lt=8] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-02-19 
19:03:29.347023] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A5-0-0] [lt=9] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-02-19 19:03:29.347032] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A5-0-0] [lt=9] resolve normal query failed(ret=-5019) [2024-02-19 19:03:29.347043] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A5-0-0] [lt=9] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073) [2024-02-19 19:03:29.347058] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A5-0-0] [lt=10] failed to resolve(ret=-5019) [2024-02-19 19:03:29.347057] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9] get wrs ts(ls_id={id:1}, delta_ns=-1706042771805949021, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807) [2024-02-19 19:03:29.347068] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A5-0-0] [lt=9] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:29.347070] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1) [2024-02-19 19:03:29.347079] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A5-0-0] [lt=9] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:29.347088] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A5-0-0] [lt=9] fail to handle physical plan(ret=-5019) [2024-02-19 19:03:29.347103] WARN [SQL] stmt_query (ob_sql.cpp:175) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A5-0-0] [lt=12] fail to handle text query(stmt=SELECT tenant_id FROM __all_tenant, ret=-5019) [2024-02-19 19:03:29.347116] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A5-0-0] [lt=13] executor execute failed(ret=-5019) [2024-02-19 19:03:29.347126] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A5-0-0] [lt=9] execute failed(ret=-5019, executor={ObIExecutor:, sql:"SELECT tenant_id FROM __all_tenant"}, retry_cnt=0) [2024-02-19 19:03:29.347145] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A5-0-0] [lt=14] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-02-19 19:03:29.347164] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A5-0-0] [lt=17] result set close failed(ret=-5019) [2024-02-19 19:03:29.347177] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) 
[1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A5-0-0] [lt=11] result set close failed(ret=-5019) [2024-02-19 19:03:29.347189] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A5-0-0] [lt=10] failed to close result(close_ret=-5019, ret=-5019) [2024-02-19 19:03:29.347213] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A5-0-0] [lt=12] failed to process record(executor={ObIExecutor:, sql:"SELECT tenant_id FROM __all_tenant"}, record_ret=-5019, ret=-5019) [2024-02-19 19:03:29.347231] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1106807][OmtNodeBalancer][T0][YB42AC0103F2-000611B9211784A5-0-0] [lt=15] failed to process final(executor={ObIExecutor:, sql:"SELECT tenant_id FROM __all_tenant"}, aret=-5019, ret=-5019) [2024-02-19 19:03:29.347247] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1106807][OmtNodeBalancer][T0][YB42AC0103F2-000611B9211784A5-0-0] [lt=14] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT tenant_id FROM __all_tenant) [2024-02-19 19:03:29.347261] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1106807][OmtNodeBalancer][T0][YB42AC0103F2-000611B9211784A5-0-0] [lt=12] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-02-19 19:03:29.347274] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1106807][OmtNodeBalancer][T0][YB42AC0103F2-000611B9211784A5-0-0] [lt=12] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1) [2024-02-19 19:03:29.347288] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1106807][OmtNodeBalancer][T0][YB42AC0103F2-000611B9211784A5-0-0] [lt=12] query failed(ret=-5019, conn=0x7fdcdc89a050, start=1708340609346795, sql=SELECT tenant_id FROM __all_tenant) [2024-02-19 19:03:29.347303] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1106807][OmtNodeBalancer][T0][YB42AC0103F2-000611B9211784A5-0-0] [lt=14] read failed(ret=-5019) [2024-02-19 19:03:29.347316] WARN [SHARE] read_tenants (ob_unit_table_operator.cpp:990) [1106807][OmtNodeBalancer][T0][YB42AC0103F2-000611B9211784A5-0-0] [lt=11] execute sql failed(sql=SELECT tenant_id FROM __all_tenant, ret=-5019) [2024-02-19 19:03:29.347368] WARN [SHARE] get_tenants (ob_unit_table_operator.cpp:109) [1106807][OmtNodeBalancer][T0][YB42AC0103F2-000611B9211784A5-0-0] [lt=13] fail read tenants(sql=SELECT tenant_id FROM __all_tenant, ret=-5019) [2024-02-19 19:03:29.347385] WARN [SHARE] get_tenants (ob_unit_getter.cpp:198) [1106807][OmtNodeBalancer][T0][YB42AC0103F2-000611B9211784A5-0-0] [lt=16] ut_operator get_resource_pools failed(ret=-5019) [2024-02-19 19:03:29.347399] WARN [SERVER.OMT] run1 (ob_tenant_node_balancer.cpp:114) [1106807][OmtNodeBalancer][T0][YB42AC0103F2-000611B9211784A5-0-0] [lt=12] get cluster tenants fail(ret=-5019) [2024-02-19 19:03:29.347412] INFO [SERVER.OMT] run1 (ob_tenant_node_balancer.cpp:119) [1106807][OmtNodeBalancer][T0][YB42AC0103F2-000611B9211784A5-0-0] [lt=12] refresh tenant config(tenants=[], ret=-5019) [2024-02-19 19:03:29.348837] INFO [COMMON] replace_fragment_node (ob_kvcache_map.cpp:695) [1106689][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=29] Cache replace map node details(ret=0, replace_node_count=0, replace_time=18408, replace_start_pos=880768, replace_num=15728) [2024-02-19 19:03:29.356891] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=22] there is not any block can be recycled, need 
verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:29.356942] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=53] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:29.361677] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78582-0-0] [lt=14] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340609361659}) [2024-02-19 19:03:29.361713] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78582-0-0] [lt=37] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340609303543}}) [2024-02-19 19:03:29.366831] INFO [COMMON] clean_garbage_node (ob_kvcache_map.cpp:645) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=36] Cache wash clean map node details(ret=0, clean_node_count=0, clean_time=36704, clean_start_pos=346027, clean_num=31457) [2024-02-19 19:03:29.367101] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:29.367131] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:29.377303] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=52] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", 
disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:29.377349] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=49] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:29.387618] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:29.387689] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=73] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:29.394104] INFO [PALF] submit_broadcast_leader_info_ (log_config_mgr.cpp:468) [1107532][T1_LogLoop][T1][Y0-0000000000000000-0-0] [lt=18] submit_prepare_meta_req success(ret=0, palf_id=1, self="172.1.3.242:2882", proposal_id=138) [2024-02-19 19:03:29.397835] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=40] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:29.397882] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=48] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 
0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:29.404208] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=10] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:29.404251] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=43] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340609404195}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:29.404279] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=25] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340609404195}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:29.404295] INFO [STORAGE.TRANS] refresh_gts (ob_gts_source.cpp:520) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=13] refresh gts(ret=-4038, ret="OB_NOT_MASTER", tenant_id=1, need_refresh=false, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340609404195}}) [2024-02-19 19:03:29.408116] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=135] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:29.408156] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=41] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:29.418472] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=36] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:29.418519] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=49] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) 
BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:29.428660] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:29.428702] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=45] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:29.438859] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=31] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:29.438906] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=49] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:29.442101] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78582-0-0] [lt=18] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:29.442133] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78582-0-0] [lt=29] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:29.442157] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78582-0-0] [lt=22] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:29.442182] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) 
[1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=19] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:29.442191] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=15] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:29.442204] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=13] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:29.442202] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=19] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:29.442215] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=10] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:29.442235] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=14] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:29.442755] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=10] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:29.442784] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=27] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:29.442797] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=12] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:29.442829] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78582-0-0] [lt=14] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:29.442836] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=14] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:29.442845] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78582-0-0] [lt=15] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:29.442848] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=12] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:29.442858] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=9] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", 
server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:29.442863] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78582-0-0] [lt=16] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:29.443457] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=7] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:29.443481] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=18] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:29.443481] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=23] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:29.443497] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=14] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:29.443497] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=15] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:29.443510] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=12] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:29.443551] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78582-0-0] [lt=52] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:29.443561] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78582-0-0] [lt=10] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:29.443571] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78582-0-0] [lt=9] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:29.444138] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=21] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:29.444162] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=24] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:29.444174] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=11] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:29.444219] INFO [STORAGE.TRANS] get_number 
(ob_timestamp_access.cpp:49) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=11] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:29.444237] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=17] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:29.444251] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78582-0-0] [lt=9] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:29.444252] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=13] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:29.444265] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78582-0-0] [lt=13] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:29.444278] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78582-0-0] [lt=11] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:29.444853] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=12] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:29.444880] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=27] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:29.444894] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=13] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:29.445498] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=9] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:29.445525] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=26] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:29.445537] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=11] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:29.446278] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=9] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:29.446296] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=18] global_timestamp_service get gts 
fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:29.446305] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=9] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:29.446904] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=7] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:29.446924] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=19] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:29.446936] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=10] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:29.447531] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=9] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:29.448131] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:527) [1106789][RpcIO][T0][Y0-0000000000000000-0-0] [lt=19] [RPC EASY STAT](log_str=conn count=1/1, request done=19498/19498, request doing=0/0) [2024-02-19 19:03:29.448166] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=20] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:29.448746] WARN [STORAGE.TRANS] get_cluster_service_master_ (ob_tenant_weak_read_service.cpp:287) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16] tenant schema is not ready, need wait(ret=-4076, ret="OB_NEED_WAIT", superior_tenant_id=1) [2024-02-19 19:03:29.448753] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=11] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:29.448767] WARN [STORAGE.TRANS] post_cluster_heartbeat_rpc_ (ob_tenant_weak_read_service.cpp:797) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=21] get cluster service master fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, cluster_service_tablet_id={id:226}) [2024-02-19 19:03:29.448781] WARN [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:438) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12] tenant weak read service cluster heartbeat RPC fail(rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.1.3.242:2882", cluster_service_tablet_id={id:226}) [2024-02-19 19:03:29.448808] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340609448798}) [2024-02-19 19:03:29.448823] WARN [STORAGE.TRANS] 
do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:756) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version=1708336686671726824, valid_part_count=1, total_part_count=1, generate_timestamp=1708340609448730) [2024-02-19 19:03:29.448833] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:766) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1708340609346998, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-02-19 19:03:29.448893] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8] get wrs ts(ls_id={id:1}, delta_ns=-1706042771805848908, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807) [2024-02-19 19:03:29.448909] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1) [2024-02-19 19:03:29.448988] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:527) [1106788][RpcIO][T0][Y0-0000000000000000-0-0] [lt=24] [RPC EASY STAT](log_str=conn count=1/1, request done=19498/19498, request doing=0/0) [2024-02-19 19:03:29.449075] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:29.449102] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=27] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:29.449350] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=11] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:29.450231] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=18] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:29.450849] WARN 
[STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=20] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:29.450988] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78582-0-0] [lt=10] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:29.451031] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=61] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:29.451476] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=29] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:29.452491] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=29] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:29.453921] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=26] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:29.454246] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=20] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:29.454287] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78582-0-0] [lt=27] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:29.454538] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=24] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:29.454819] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=28] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:29.454881] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78582-0-0] [lt=20] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:29.455153] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=22] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:29.455402] WARN [STORAGE.TRANS] 
get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=14] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:29.455468] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78582-0-0] [lt=10] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:29.455790] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=23] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:29.455999] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=21] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:29.456057] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78582-0-0] [lt=11] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:29.456416] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=27] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:29.456561] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=21] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:29.456653] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78582-0-0] [lt=17] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:29.457024] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=25] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:29.457142] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=22] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:29.457258] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78582-0-0] [lt=18] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:29.457639] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=24] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:29.457718] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ 
(ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=26] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:29.457859] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78582-0-0] [lt=23] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:29.458293] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=21] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:29.458456] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78582-0-0] [lt=18] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:29.458862] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=17] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:29.459042] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78582-0-0] [lt=13] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:29.459242] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:29.459276] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=39] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:29.459311] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=25] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:29.459438] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=19] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:29.459636] WARN [STORAGE.TRANS] 
[2024-02-19 19:03:29.461867] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78582-0-0] [lt=25] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340609461854})
[2024-02-19 19:03:29.461891] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78582-0-0] [lt=23] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340609404195}})
[2024-02-19 19:03:29.504943] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=15] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0)
[2024-02-19 19:03:29.504976] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=53] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340609504911}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:29.504998] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=19] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340609504911}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:29.516429] INFO [SQL.PC] update_memory_conf (ob_plan_cache.cpp:1499) [1106739][PlanCacheEvict][T1][Y0-0000000000000000-0-0] [lt=24] update plan cache memory config(ob_plan_cache_percentage=5, ob_plan_cache_evict_high_percentage=90, ob_plan_cache_evict_low_percentage=50, tenant_id=1)
[2024-02-19 19:03:29.516471] INFO [SQL.PC] cache_evict (ob_plan_cache.cpp:1130) [1106739][PlanCacheEvict][T1][Y0-0000000000000000-0-0] [lt=39] start lib cache evict(tenant_id=1, mem_hold=2097152, mem_limit=107374180, cache_obj_num=2, cache_node_num=2)
[2024-02-19 19:03:29.516487] INFO [SQL.PC] cache_evict (ob_plan_cache.cpp:1147) [1106739][PlanCacheEvict][T1][Y0-0000000000000000-0-0] [lt=14] end lib cache evict(tenant_id=1, cache_evict_num=0, mem_hold=2097152, mem_limit=107374180, cache_obj_num=2, cache_node_num=2)
[2024-02-19 19:03:29.518885] INFO [SQL.PC] runTimerTask (ob_plan_cache_manager.cpp:291) [1106739][PlanCacheEvict][T0][Y0-0000000000000000-0-0] [lt=10] schedule next cache evict task(evict_interval=1000000)
[2024-02-19 19:03:29.521504] INFO [SQL.PC] runTimerTask (ob_plan_cache_manager.cpp:299) [1106739][PlanCacheEvict][T0][Y0-0000000000000000-0-0] [lt=31] schedule next cache evict task(evict_interval=1000000)
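For reference, the eviction numbers above hang together: mem_limit=107374180 is roughly ob_plan_cache_percentage (5%) of a ~2 GiB tenant memory budget, and mem_hold (2 MB) is far below the 90% eviction watermark, which is why cache_evict_num=0. A quick check, assuming the ~2 GiB figure (it is not itself in the log):

    tenant_mem = 2 * 1024**3          # assumed ~2 GiB tenant memory (not logged)
    pc_limit = tenant_mem * 5 // 100  # ob_plan_cache_percentage=5
    print(pc_limit)                   # 107374182, ~= logged mem_limit=107374180

    mem_hold = 2097152                # logged: 2 MB currently held
    high = pc_limit * 90 // 100       # ob_plan_cache_evict_high_percentage=90
    print(mem_hold < high)            # True -> nothing to evict (cache_evict_num=0)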
[2024-02-19 19:03:29.542449] INFO [ARCHIVE] stop (ob_archive_scheduler_service.cpp:137) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=9] stop archive scheduler service
[2024-02-19 19:03:29.543647] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EA8-0-0] [lt=6] table not exist(tenant_id=1, database_id=201001, table_name=__all_backup_info, ret=-5019)
[2024-02-19 19:03:29.543681] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EA8-0-0] [lt=30] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_backup_info, ret=-5019)
[2024-02-19 19:03:29.543695] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EA8-0-0] [lt=12] fail to resolve table relation recursively(tenant_id=1, ret=-5019)
[2024-02-19 19:03:29.543706] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EA8-0-0] [lt=10] resolve table relation factor failed(ret=-5019, table_name=__all_backup_info)
[2024-02-19 19:03:29.543726] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EA8-0-0] [lt=16] fail to resolve table(ret=-5019)
[2024-02-19 19:03:29.543740] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EA8-0-0] [lt=12] fail to resolve sys view(ret=-5019)
[2024-02-19 19:03:29.543755] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EA8-0-0] [lt=8] Table 'oceanbase.__all_backup_info' doesn't exist
[2024-02-19 19:03:29.543768] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EA8-0-0] [lt=12] resolve basic table failed(ret=-5019)
[2024-02-19 19:03:29.543780] WARN [SQL.RESV] resolve_table_list (ob_update_resolver.cpp:423) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EA8-0-0] [lt=11] failed to resolve table(ret=-5019)
[2024-02-19 19:03:29.543798] WARN [SQL.RESV] resolve (ob_update_resolver.cpp:76) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EA8-0-0] [lt=18] resolve table failed(ret=-5019)
[2024-02-19 19:03:29.543827] WARN [SQL.RESV] stmt_resolver_func (ob_resolver.cpp:155) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EA8-0-0] [lt=26] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3074)
[2024-02-19 19:03:29.543847] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EA8-0-0] [lt=11] failed to resolve(ret=-5019)
[2024-02-19 19:03:29.543859] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EA8-0-0] [lt=11] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-02-19 19:03:29.543879] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EA8-0-0] [lt=17] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-02-19 19:03:29.543889] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EA8-0-0] [lt=10] fail to handle physical plan(ret=-5019)
[2024-02-19 19:03:29.543907] WARN [SQL] stmt_query (ob_sql.cpp:175) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EA8-0-0] [lt=14] fail to handle text query(stmt=update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882', ret=-5019)
[2024-02-19 19:03:29.543937] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EA8-0-0] [lt=29] executor execute failed(ret=-5019)
[2024-02-19 19:03:29.543948] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EA8-0-0] [lt=10] execute failed(ret=-5019, executor={ObIExecutor:, sql:"update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882'"}, retry_cnt=0)
[2024-02-19 19:03:29.543991] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EA8-0-0] [lt=35] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false)
[2024-02-19 19:03:29.544013] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EA8-0-0] [lt=19] result set close failed(ret=-5019)
[2024-02-19 19:03:29.544024] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EA8-0-0] [lt=9] result set close failed(ret=-5019)
[2024-02-19 19:03:29.544034] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EA8-0-0] [lt=9] failed to close result(close_ret=-5019, ret=-5019)
[2024-02-19 19:03:29.544061] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EA8-0-0] [lt=9] failed to process record(executor={ObIExecutor:, sql:"update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882'"}, record_ret=-5019, ret=-5019)
[2024-02-19 19:03:29.544089] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1106733][BackupLease][T0][YB42AC0103F2-000611B923978EA8-0-0] [lt=13] failed to process final(executor={ObIExecutor:, sql:"update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882'"}, aret=-5019, ret=-5019)
[2024-02-19 19:03:29.544099] WARN [SERVER] execute_write_inner (ob_inner_sql_connection.cpp:1818) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=9] execute sql failed(ret=-5019, tenant_id=1, sql=update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882')
[2024-02-19 19:03:29.544108] INFO [SERVER] execute_write_inner (ob_inner_sql_connection.cpp:1900) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=8] execute write sql(ret=-5019, tenant_id=1, affected_rows=0, sql=update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882')
[2024-02-19 19:03:29.544137] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=7] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1)
[2024-02-19 19:03:29.544144] WARN [SERVER] execute_write (ob_inner_sql_connection.cpp:1786) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=6] execute_write failed(ret=-5019, tenant_id=1, sql=update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882', is_user_sql=false)
[2024-02-19 19:03:29.544151] WARN [SERVER] execute_write (ob_inner_sql_connection.cpp:1775) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=7] execute_write failed(ret=-5019, tenant_id=1, sql="update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882'")
[2024-02-19 19:03:29.544159] WARN [COMMON.MYSQLP] write (ob_mysql_proxy.cpp:133) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=7] execute sql failed(ret=-5019, conn=0x7fdcdc9be050, start=1708340609542535, sql=update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882')
[2024-02-19 19:03:29.544196] WARN [SERVER] clean_backup_scheduler_leader (ob_backup_operator.cpp:348) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=8] execute sql failed(ret=-5019, sql=update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882')
[2024-02-19 19:03:29.544205] WARN [SERVER] clean_backup_scheduler_leader (ob_backup_manager.cpp:517) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=9] failed to clean backup scheduler leader(ret=-5019)
[2024-02-19 19:03:29.548883] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=41] get wrs ts(ls_id={id:1}, delta_ns=-1706042771805746617, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807)
[2024-02-19 19:03:29.548915] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=32] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1)
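Each numeric ret code in this excerpt appears at least once next to its symbolic name, so a small lookup table can be assembled straight from the log for quicker scanning (only codes actually seen here are included):

    # ret codes paired with the symbolic names logged alongside them in this excerpt
    RET_NAMES = {
        -4018: "OB_ENTRY_NOT_EXIST",   # election reference info lookups
        -4038: "OB_NOT_MASTER",        # GTS / timestamp-service requests
        -4076: "OB_NEED_WAIT",         # weak-read cluster heartbeat
        -5019: "OB_TABLE_NOT_EXIST",   # __all_backup_info / __all_server resolution
    }

    def describe(ret: int) -> str:
        return f"ret={ret} ({RET_NAMES.get(ret, 'unknown in this excerpt')})"

    print(describe(-5019))  # ret=-5019 (OB_TABLE_NOT_EXIST)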
[2024-02-19 19:03:29.561963] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78582-0-0] [lt=247] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340609561949})
[2024-02-19 19:03:29.562124] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78582-0-0] [lt=160] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340609504911}})
[2024-02-19 19:03:29.566517] INFO [COMMON] replace_fragment_node (ob_kvcache_map.cpp:695) [1106689][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=27] Cache replace map node details(ret=0, replace_node_count=0, replace_time=17510, replace_start_pos=896496, replace_num=15728)
[2024-02-19 19:03:29.567461] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1009) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=72] Wash compute wash size(is_wash_valid=true, sys_total_wash_size=1386106880, global_cache_size=4161536, tenant_max_wash_size=4161536, tenant_min_wash_size=4161536, tenant_ids_=[512, 500, 999, 506, 507, 508, 509, 510, 1])
[2024-02-19 19:03:29.567659] INFO [COMMON] wash (ob_kvcache_store.cpp:343) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=31] Wash time detail, (compute_wash_size_time=126, refresh_score_time=161, wash_time=5)
[2024-02-19 19:03:29.579878] WARN [COORDINATOR] get_ls_election_reference_info (ob_leader_coordinator.cpp:138) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC70-0-0] [lt=88] can not find this ls_id in all_ls_election_reference_info_(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1}, all_ls_election_reference_info=[])
[2024-02-19 19:03:29.579919] WARN [COORDINATOR] refresh_ (election_priority_v1.cpp:144) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC70-0-0] [lt=42] fail to get ls election reference info(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, *this={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807})
[2024-02-19 19:03:29.579989] WARN [COORDINATOR] operator() (election_priority_impl.cpp:246) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC70-0-0] [lt=68] refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id_={id:1}, element={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807})
[2024-02-19 19:03:29.580010] WARN iterate (ob_tuple.h:272) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC70-0-0] [lt=18] assign element failed(ret=-4018, std::get(tuple)={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807})
[2024-02-19 19:03:29.580026] WARN [COORDINATOR] refresh (election_priority_impl.cpp:261) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC70-0-0] [lt=15] refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1}, *this={priority:{is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}})
[2024-02-19 19:03:29.605806] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=13] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0)
[2024-02-19 19:03:29.605831] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=28] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340609605784}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:29.605862] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=29] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340609605784}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:29.606842] INFO [COMMON] clean_garbage_node (ob_kvcache_map.cpp:645) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=17] Cache wash clean map node details(ret=0, clean_node_count=0, clean_time=39161, clean_start_pos=377484, clean_num=31457)
[2024-02-19 19:03:29.618255] INFO [SHARE] blacklist_loop_ (ob_server_blacklist.cpp:313) [1106781][Blacklist][T0][Y0-0000000000000000-0-0] [lt=30] blacklist_loop exec finished(cost_time=31, is_enabled=true, send_cnt=0)
[2024-02-19 19:03:29.633737] INFO [STORAGE] scheduler_ls_ha_handler_ (ob_storage_ha_service.cpp:186) [1108342][T1_HAService][T1][Y0-0000000000000000-0-0] [lt=43] start do ls ha handler(ls_id_array_=[{id:1}])
[2024-02-19 19:03:29.649310] WARN [STORAGE.TRANS] post_cluster_heartbeat_rpc_ (ob_tenant_weak_read_service.cpp:797) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=38] get cluster service master fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, cluster_service_tablet_id={id:226})
[2024-02-19 19:03:29.649339] WARN [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:438) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=31] tenant weak read service cluster heartbeat RPC fail(rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.1.3.242:2882", cluster_service_tablet_id={id:226})
[2024-02-19 19:03:29.649362] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:756) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=19] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version=1708336686671726824, valid_part_count=1, total_part_count=1, generate_timestamp=1708340609649294)
[2024-02-19 19:03:29.649381] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:766) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=18] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1708340609448841, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-02-19 19:03:29.649442] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8] get wrs ts(ls_id={id:1}, delta_ns=-1706042771805646334, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807)
[2024-02-19 19:03:29.649454] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1)
[2024-02-19 19:03:29.662180] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=29] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340609662162})
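The weak-read entries carry nanosecond Unix epochs; converting the two recurring ones shows how far behind this replica is (plain arithmetic on values copied from the entries above):

    from datetime import datetime, timezone

    def ns_to_utc(ns: int) -> str:
        return datetime.fromtimestamp(ns / 1e9, tz=timezone.utc).isoformat()

    print(ns_to_utc(1707751112415295196))  # weak-read timestamp -> 2024-02-12T...
    print(ns_to_utc(1707200283752293320))  # oldest clog timestamp -> 2024-02-06T...
    # Log wall time is 2024-02-19 19:03:29, so the weak-read timestamp lags by
    # roughly a week and the oldest unrecycled clog block is nearly two weeks old.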
[2024-02-19 19:03:29.662212] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=35] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340609605784}})
[2024-02-19 19:03:29.666069] INFO do_work (ob_rl_mgr.cpp:704) [1106705][rl_mgr0][T0][Y0-0000000000000000-0-0] [lt=28] swc wakeup.(stat_period_=1000000, ready=false)
[2024-02-19 19:03:29.668960] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:589) [1106798][MysqlUnix][T0][Y0-0000000000000000-0-0] [lt=27] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-02-19 19:03:29.668969] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:589) [1106796][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=19] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/1, request doing=0/0)
[2024-02-19 19:03:29.669050] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:589) [1106795][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=31] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/6, request doing=0/0)
[2024-02-19 19:03:29.669817] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:565) [1106791][BatchIO][T0][Y0-0000000000000000-0-0] [lt=17] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-02-19 19:03:29.669847] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:565) [1106793][BatchIO][T0][Y0-0000000000000000-0-0] [lt=11] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-02-19 19:03:29.670281] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:565) [1106792][BatchIO][T0][Y0-0000000000000000-0-0] [lt=19] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-02-19 19:03:29.670305] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:527) [1106800][RpcUnix][T0][Y0-0000000000000000-0-0] [lt=11] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:29.707492] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=11] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:29.707526] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=34] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340609707479}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:29.707551] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=22] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340609707479}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:29.707908] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=28] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:29.707959] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=51] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:29.712511] INFO [SERVER] decide_disk_size (ob_server_utils.cpp:202) [1106802][LogLoop][T0][Y0-0000000000000000-0-0] [lt=12] decide disk size finished(dir="/backup/oceanbase/data/sstable", suggested_disk_size=8589934592, suggested_disk_percentage=0, default_disk_percentage=60, total_space=246944890880, free_space=220974178304, disk_size=8589934592) [2024-02-19 19:03:29.712544] INFO [SERVER] decide_disk_size (ob_server_utils.cpp:202) [1106802][LogLoop][T0][Y0-0000000000000000-0-0] [lt=36] decide disk size finished(dir="/backup/oceanbase/data/clog", suggested_disk_size=8589934592, suggested_disk_percentage=0, default_disk_percentage=30, total_space=246944890880, free_space=220974178304, disk_size=8589934592) [2024-02-19 19:03:29.712556] INFO [SERVER] cal_all_part_disk_size (ob_server_utils.cpp:164) [1106802][LogLoop][T0][Y0-0000000000000000-0-0] [lt=10] decide_all_disk_size succ(data_dir="/backup/oceanbase/data/sstable", clog_dir="/backup/oceanbase/data/clog", suggested_data_disk_size=8589934592, suggested_data_disk_percentage=0, data_default_disk_percentage=60, clog_default_disk_percentage=30, shared_mode=true, data_disk_size=8589934592, log_disk_size=8589934592) [2024-02-19 19:03:29.718545] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=41] there is 
not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:29.718600] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=57] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:29.728732] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=38] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:29.728796] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=65] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:29.739842] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=38] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:29.739901] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=61] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 
0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:29.740787] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DC-0-0] [lt=9] table not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, ret=-5019) [2024-02-19 19:03:29.740806] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DC-0-0] [lt=18] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, ret=-5019) [2024-02-19 19:03:29.740816] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DC-0-0] [lt=9] fail to resolve table relation recursively(tenant_id=1, ret=-5019) [2024-02-19 19:03:29.740824] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DC-0-0] [lt=9] resolve table relation factor failed(ret=-5019, table_name=__all_ls_meta_table) [2024-02-19 19:03:29.740835] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DC-0-0] [lt=8] fail to resolve table(ret=-5019) [2024-02-19 19:03:29.740842] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DC-0-0] [lt=7] fail to resolve sys view(ret=-5019) [2024-02-19 19:03:29.740853] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DC-0-0] [lt=6] Table 'oceanbase.__all_ls_meta_table' doesn't exist [2024-02-19 19:03:29.740860] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DC-0-0] [lt=7] resolve base or alias table factor failed(ret=-5019) [2024-02-19 19:03:29.740867] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DC-0-0] [lt=6] resolve basic table failed(ret=-5019) [2024-02-19 19:03:29.740874] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DC-0-0] [lt=6] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-02-19 19:03:29.740881] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DC-0-0] [lt=6] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-02-19 19:03:29.740888] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DC-0-0] [lt=6] resolve normal query failed(ret=-5019) [2024-02-19 19:03:29.740895] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DC-0-0] [lt=6] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073) [2024-02-19 19:03:29.740909] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DC-0-0] [lt=7] failed to resolve(ret=-5019) [2024-02-19 19:03:29.740917] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DC-0-0] [lt=8] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:29.740947] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DC-0-0] [lt=28] Failed to generate 
plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:29.740954] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DC-0-0] [lt=7] fail to handle physical plan(ret=-5019) [2024-02-19 19:03:29.740963] WARN [SQL] stmt_query (ob_sql.cpp:175) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DC-0-0] [lt=7] fail to handle text query(stmt=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port, ret=-5019) [2024-02-19 19:03:29.740971] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DC-0-0] [lt=7] executor execute failed(ret=-5019) [2024-02-19 19:03:29.740979] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DC-0-0] [lt=6] execute failed(ret=-5019, executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, retry_cnt=0) [2024-02-19 19:03:29.740993] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DC-0-0] [lt=9] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-02-19 19:03:29.741007] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DC-0-0] [lt=12] result set close failed(ret=-5019) [2024-02-19 19:03:29.741014] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DC-0-0] [lt=6] result set close failed(ret=-5019) [2024-02-19 19:03:29.741020] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DC-0-0] [lt=6] failed to close result(close_ret=-5019, ret=-5019) [2024-02-19 19:03:29.741040] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DC-0-0] [lt=6] failed to process record(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, record_ret=-5019, ret=-5019) [2024-02-19 19:03:29.741052] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DC-0-0] [lt=11] failed to process final(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, aret=-5019, ret=-5019) [2024-02-19 19:03:29.741065] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DC-0-0] [lt=10] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port) [2024-02-19 19:03:29.741075] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DC-0-0] [lt=9] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-02-19 19:03:29.741082] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DC-0-0] [lt=6] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1) [2024-02-19 19:03:29.741090] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DC-0-0] [lt=6] query 
failed(ret=-5019, conn=0x7fdcdc924050, start=1708340609740613, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port) [2024-02-19 19:03:29.741102] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DC-0-0] [lt=12] read failed(ret=-5019) [2024-02-19 19:03:29.741113] WARN [SHARE.PT] get_by_tenant (ob_persistent_ls_table.cpp:612) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DC-0-0] [lt=8] execute sql failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port) [2024-02-19 19:03:29.741173] WARN [SHARE.PT] get_by_tenant (ob_ls_table_operator.cpp:252) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DC-0-0] [lt=11] get all ls info by persistent_ls_ failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1) [2024-02-19 19:03:29.741181] WARN [SHARE] inner_open_ (ob_ls_table_iterator.cpp:104) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DC-0-0] [lt=8] fail to get ls infos by tenant(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, inner_table_only=true) [2024-02-19 19:03:29.741189] WARN [SHARE] next (ob_ls_table_iterator.cpp:71) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DC-0-0] [lt=7] fail to open iterator(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-02-19 19:03:29.741196] WARN [SERVER] build_replica_map_ (ob_tenant_meta_checker.cpp:331) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DC-0-0] [lt=6] ls table iterator next failed(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-02-19 19:03:29.741204] WARN [SERVER] check_ls_table_ (ob_tenant_meta_checker.cpp:213) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DC-0-0] [lt=6] build replica map from ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1) [2024-02-19 19:03:29.741213] WARN [SERVER] check_ls_table (ob_tenant_meta_checker.cpp:193) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DC-0-0] [lt=7] check ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1) [2024-02-19 19:03:29.741219] WARN [SERVER] runTimerTask (ob_tenant_meta_checker.cpp:43) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DC-0-0] [lt=6] fail to check ls meta table(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-02-19 19:03:29.749412] INFO [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:738) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] [SELF_CHECK] current server is WRS leader, need start CLUSTER weak read service(tenant_id=1, serve_leader_epoch=0, cur_leader_epoch=138, cluster_service_tablet_id_={id:226}, in_service=false, can_update_version=false, start_service_tstamp_=0, error_count_for_change_leader_=0, last_error_tstamp_for_change_leader_=0) [2024-02-19 19:03:29.749487] INFO [STORAGE.TRANS] start_service (ob_tenant_weak_read_cluster_service.cpp:336) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=64] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] begin start service(tenant_id=1, is_in_service()=false, can_update_version=false) [2024-02-19 19:03:29.749502] INFO [STORAGE.TRANS] start_service (ob_tenant_weak_read_cluster_service.cpp:338) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13] start TenantWeakReadClusterService(tenant_id=1) [2024-02-19 19:03:29.750463] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=44] there is not any block can be 
recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:29.750492] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=29] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:29.751434] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E7-0-0] [lt=10] table not exist(tenant_id=1, database_id=201001, table_name=__all_weak_read_service, ret=-5019) [2024-02-19 19:03:29.751460] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E7-0-0] [lt=25] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_weak_read_service, ret=-5019) [2024-02-19 19:03:29.751470] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E7-0-0] [lt=8] fail to resolve table relation recursively(tenant_id=1, ret=-5019) [2024-02-19 19:03:29.751477] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E7-0-0] [lt=7] resolve table relation factor failed(ret=-5019, table_name=__all_weak_read_service) [2024-02-19 19:03:29.751486] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E7-0-0] [lt=5] fail to resolve table(ret=-5019) [2024-02-19 19:03:29.751495] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E7-0-0] [lt=8] fail to resolve sys view(ret=-5019) [2024-02-19 19:03:29.751505] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E7-0-0] [lt=6] Table 'oceanbase.__all_weak_read_service' doesn't exist [2024-02-19 19:03:29.751517] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E7-0-0] [lt=11] resolve base or alias table factor failed(ret=-5019) [2024-02-19 19:03:29.751530] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E7-0-0] [lt=11] resolve basic table failed(ret=-5019) [2024-02-19 19:03:29.751544] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E7-0-0] [lt=14] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-02-19 19:03:29.751554] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) 
[1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E7-0-0] [lt=9] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-02-19 19:03:29.751569] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E7-0-0] [lt=13] resolve normal query failed(ret=-5019) [2024-02-19 19:03:29.751586] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E7-0-0] [lt=15] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073) [2024-02-19 19:03:29.751606] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E7-0-0] [lt=10] failed to resolve(ret=-5019) [2024-02-19 19:03:29.751615] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E7-0-0] [lt=9] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:29.751628] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E7-0-0] [lt=9] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:29.751638] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E7-0-0] [lt=8] fail to handle physical plan(ret=-5019) [2024-02-19 19:03:29.751650] WARN [SQL] stmt_query (ob_sql.cpp:175) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E7-0-0] [lt=9] fail to handle text query(stmt=select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = '', ret=-5019) [2024-02-19 19:03:29.751664] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E7-0-0] [lt=12] executor execute failed(ret=-5019) [2024-02-19 19:03:29.751677] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E7-0-0] [lt=12] execute failed(ret=-5019, executor={ObIExecutor:, sql:"select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = ''"}, retry_cnt=0) [2024-02-19 19:03:29.751703] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E7-0-0] [lt=10] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-02-19 19:03:29.751729] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E7-0-0] [lt=22] result set close failed(ret=-5019) [2024-02-19 19:03:29.751740] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E7-0-0] [lt=9] result set close failed(ret=-5019) [2024-02-19 19:03:29.751748] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E7-0-0] [lt=7] failed to close result(close_ret=-5019, ret=-5019) [2024-02-19 19:03:29.751777] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E7-0-0] [lt=9] failed to process record(executor={ObIExecutor:, sql:"select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = ''"}, 
record_ret=-5019, ret=-5019) [2024-02-19 19:03:29.751798] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E7-0-0] [lt=18] failed to process final(executor={ObIExecutor:, sql:"select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = ''"}, aret=-5019, ret=-5019) [2024-02-19 19:03:29.751817] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=18] execute sql failed(ret=-5019, tenant_id=1, sql=select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = '') [2024-02-19 19:03:29.751832] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-02-19 19:03:29.751844] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1) [2024-02-19 19:03:29.751861] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15] query failed(ret=-5019, conn=0x7fdd189bc050, start=1708340609751240, sql=select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = '') [2024-02-19 19:03:29.751878] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17] read failed(ret=-5019) [2024-02-19 19:03:29.751889] WARN [STORAGE.TRANS] query_cluster_version_range_ (ob_tenant_weak_read_cluster_service.cpp:192) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8] execute sql read fail(ret=-5019, ret="OB_TABLE_NOT_EXIST", exec_tenant_id=1, tenant_id=1, sql=select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = '') [2024-02-19 19:03:29.751957] WARN [STORAGE.TRANS] start_service (ob_tenant_weak_read_cluster_service.cpp:367) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14] query cluster version range from WRS table fail(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-02-19 19:03:29.751981] INFO [STORAGE.TRANS] start_service (ob_tenant_weak_read_cluster_service.cpp:415) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=21] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] start service done(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, in_service=false, leader_epoch=0, current_version=0, delta=1708340609751978, min_version=0, max_version=0, max_stale_time=5000000000, all_valid_server_count=0, total_time=2509, wlock_time=42, check_leader_time=10, query_version_time=0, persist_version_time=0) [2024-02-19 19:03:29.751999] WARN [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:781) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16] start CLUSTER weak read service fail(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1) [2024-02-19 19:03:29.752007] INFO [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:791) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] [SELF_CHECK] done(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, need_start_service=true, need_stop_service=false, need_change_leader=false, is_in_service()=false, can_update_version=false, 
cur_leader_epoch=138, start_service_tstamp_=0, error_count_for_change_leader_=0, last_error_tstamp_for_change_leader_=0) [2024-02-19 19:03:29.752062] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13] get wrs ts(ls_id={id:1}, delta_ns=-1706042771805545922, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807) [2024-02-19 19:03:29.752077] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1) [2024-02-19 19:03:29.760616] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=23] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:29.760656] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=43] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:29.762741] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=14] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340609762696}) [2024-02-19 19:03:29.762781] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=34] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340609707479}}) [2024-02-19 19:03:29.767558] WARN [SERVER] get_network_speed_from_sysfs (ob_server.cpp:2113) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=12] get invalid Ethernet speed, use default(devname="ens18") [2024-02-19 19:03:29.767583] WARN [SERVER] runTimerTask (ob_server.cpp:2632) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=27] ObRefreshNetworkSpeedTask reload bandwidth throttle limit failed(ret=-4002, ret="OB_INVALID_ARGUMENT") [2024-02-19 19:03:29.770782] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=28] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", 
disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:29.770826] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=46] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:29.780981] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=32] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:29.781032] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=54] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:29.782587] INFO [COMMON] replace_fragment_node (ob_kvcache_map.cpp:695) [1106689][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=79] Cache replace map node details(ret=0, replace_node_count=0, replace_time=15922, replace_start_pos=912224, replace_num=15728) [2024-02-19 19:03:29.791221] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=39] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:29.791267] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=48] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 
0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:29.794775] INFO [STORAGE] runTimerTask (ob_checkpoint_service.cpp:326) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=10] ====== check clog disk timer task ====== [2024-02-19 19:03:29.794847] INFO [PALF] get_disk_usage (palf_env_impl.cpp:820) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=66] get_disk_usage(ret=0, capacity(MB):=2048, used(MB):=1945) [2024-02-19 19:03:29.796848] INFO [STORAGE.TRANS] get_rec_log_ts (ob_trans_ctx_mgr_v4.cpp:1283) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=18] succ to get rec log ts(*this={this:0x7fdce3204030, ls_id:{id:1}, tenant_id:1, state:"F_WORKING", total_tx_ctx_count:0, leader_takeover_ts:{mts:0}, is_leader_serving:false, max_replay_commit_version:1707751112415295196, ls_retain_ctx_mgr:{retain_ctx_list_.size():0}, aggre_rec_log_ts:-1, prev_aggre_rec_log_ts:-1, online_ts:0, uref:1073741825}, aggre_rec_log_ts=9223372036854775807) [2024-02-19 19:03:29.796909] INFO [STORAGE.TRANS] get_rec_log_ts (ob_tx_ctx_memtable.cpp:231) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=62] tx ctx memtable get rec log ts(this={ObITable:{this:0x7fdce5f6e080, key:{tablet_id:{id:49401}, column_group_idx:0, table_type:"TX_CTX_MEMTABLE", log_ts_range:{start_log_ts:1, end_log_ts:1708337131277985}}, ref_cnt:2, upper_trans_version:-4007, timestamp:0}, this:0x7fdce5f6e080, snapshot_version:1708337131277985, ls_id:{id:1}, is_frozen:false}, rec_log_ts=9223372036854775807) [2024-02-19 19:03:29.796968] INFO [STORAGE.TABLELOCK] get_rec_log_ts (ob_lock_memtable.cpp:739) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=49] rec_log_ts of ObLockMemtable is (rec_log_ts_=9223372036854775807, flushed_log_ts_=1707033175148098668, freeze_log_ts_=0, max_committed_log_ts_=-1, is_frozen_=false, ls_id_={id:1}) [2024-02-19 19:03:29.796997] INFO [STORAGE.TRANS] get_rec_log_ts (ob_ls_tx_service.cpp:437) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=26] [CHECKPOINT] ObLSTxService::get_rec_log_ts(common_checkpoint_type="TX_DATA_MEMTABLE_TYPE", common_checkpoints_[min_rec_log_ts_common_checkpoint_type_index]={ObIMemtableMgr:{Memtables:this:0x7fdce89de180, ref_cnt:1, is_inited:true, tablet_id:{id:49402}, freezer:0x7fdce89e30d0, table_type:1, memtable_head:0, memtable_tail:2, t3m:0x7fdd18bce030, tables:[0x7fdce5eea080, 0x7fdce5eea360, null, null, null, null, null, null, null, null, null, null, null, null, null, null]}, is_freezing:false, ls_id:{id:1}, tx_data_table:0x7fdce89e4550, ls_tablet_svr:0x7fdce89de160, slice_allocator:0x7fdce89e4590}, min_rec_log_ts=1707209832548318068, ls_id_={id:1}) [2024-02-19 19:03:29.800185] INFO [STORAGE.TRANS] get_rec_log_ts (ob_id_service.cpp:300) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=29] get rec log ts(service_type_=0, rec_log_ts=9223372036854775807) [2024-02-19 19:03:29.800239] INFO [STORAGE.TRANS] get_rec_log_ts (ob_id_service.cpp:300) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=60] get rec log ts(service_type_=1, rec_log_ts=9223372036854775807) [2024-02-19 19:03:29.800252] INFO [STORAGE.TRANS] get_rec_log_ts (ob_id_service.cpp:300) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=11] get rec log ts(service_type_=2, rec_log_ts=9223372036854775807) [2024-02-19 19:03:29.800271] INFO [STORAGE] update_clog_checkpoint (ob_checkpoint_executor.cpp:158) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=9] [CHECKPOINT] clog 
checkpoint no change(checkpoint_ts=1707209832548318068, checkpoint_ts_in_ls_meta=1707209832548318068, ls_id={id:1}, service_type="TRANS_SERVICE") [2024-02-19 19:03:29.800308] INFO [STORAGE] cannot_recycle_log_over_threshold_ (ob_checkpoint_service.cpp:239) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=25] cannot_recycle_log_size statistics(cannot_recycle_log_size=1905773194, threshold=644245094) [2024-02-19 19:03:29.803594] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=55] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:29.803644] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=56] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:29.803723] INFO [PALF] locate_by_lsn_coarsely (palf_handle_impl.cpp:1605) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=14] locate_by_lsn_coarsely(ret=0, ret="OB_SUCCESS", this={palf_id:1, self:"172.1.3.242:2882", has_set_deleted:false}, lsn={lsn:24563027948}, committed_lsn={lsn:25325337226}, result_ts_ns=1707530339417374084) [2024-02-19 19:03:29.803751] INFO [STORAGE] advance_checkpoint_by_flush (ob_checkpoint_executor.cpp:226) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=26] advance checkpoint by flush to avoid clog disk full(recycle_ts=1707530339417374084, end_lsn={lsn:25325337226}, clog_checkpoint_lsn={lsn:23419564032}, calcu_recycle_lsn={lsn:24563027948}, ls_->get_ls_id()={id:1}) [2024-02-19 19:03:29.803785] INFO [STORAGE] advance_checkpoint_by_flush (ob_checkpoint_executor.cpp:244) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=22] start flush(recycle_ts=1707530339417374084, ls_->get_clog_checkpoint_ts()=1707209832548318068, ls_->get_ls_id()={id:1}) [2024-02-19 19:03:29.804866] INFO [STORAGE.TRANS] get_rec_log_ts (ob_trans_ctx_mgr_v4.cpp:1283) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=13] succ to get rec log ts(*this={this:0x7fdce3204030, ls_id:{id:1}, tenant_id:1, state:"F_WORKING", total_tx_ctx_count:0, leader_takeover_ts:{mts:0}, is_leader_serving:false, max_replay_commit_version:1707751112415295196, ls_retain_ctx_mgr:{retain_ctx_list_.size():0}, aggre_rec_log_ts:-1, prev_aggre_rec_log_ts:-1, online_ts:0, uref:1073741825}, aggre_rec_log_ts=9223372036854775807) [2024-02-19 19:03:29.804891] INFO [STORAGE.TRANS] get_rec_log_ts (ob_tx_ctx_memtable.cpp:231) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=25] tx ctx memtable get rec log ts(this={ObITable:{this:0x7fdce5f6e080, key:{tablet_id:{id:49401}, column_group_idx:0, table_type:"TX_CTX_MEMTABLE", log_ts_range:{start_log_ts:1, 
end_log_ts:1708337131277985}}, ref_cnt:2, upper_trans_version:-4007, timestamp:0}, this:0x7fdce5f6e080, snapshot_version:1708337131277985, ls_id:{id:1}, is_frozen:false}, rec_log_ts=9223372036854775807) [2024-02-19 19:03:29.804914] INFO [STORAGE] freeze (ob_tx_data_memtable_mgr.cpp:192) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=17] start freeze tx data memtable(ls_id_={id:1}) [2024-02-19 19:03:29.804936] INFO [STORAGE] freeze_ (ob_tx_data_memtable_mgr.cpp:228) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=7] There is a freezed memetable existed. Try freeze after flushing it.(ret=-4023, ret="OB_EAGAIN", get_memtable_count_()=2) [2024-02-19 19:03:29.804944] WARN [STORAGE] freeze (ob_tx_data_memtable_mgr.cpp:206) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=7] freeze tx data memtable fail.(ret=-4023, ret="OB_EAGAIN") [2024-02-19 19:03:29.804950] WARN [STORAGE] flush (ob_tx_data_memtable_mgr.cpp:430) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=6] freeze failed(ret=-4023, ret="OB_EAGAIN", this=0x7fdce89de180) [2024-02-19 19:03:29.804958] WARN [STORAGE.TRANS] flush (ob_ls_tx_service.cpp:451) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=7] obCommonCheckpoint flush failed(tmp_ret=-4023, common_checkpoints_[i]=0x7fdce89de250) [2024-02-19 19:03:29.804968] INFO [STORAGE.TABLELOCK] get_rec_log_ts (ob_lock_memtable.cpp:739) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=7] rec_log_ts of ObLockMemtable is (rec_log_ts_=9223372036854775807, flushed_log_ts_=1707033175148098668, freeze_log_ts_=0, max_committed_log_ts_=-1, is_frozen_=false, ls_id_={id:1}) [2024-02-19 19:03:29.807414] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1009) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=32] Wash compute wash size(is_wash_valid=true, sys_total_wash_size=1386106880, global_cache_size=4161536, tenant_max_wash_size=4161536, tenant_min_wash_size=4161536, tenant_ids_=[512, 500, 999, 506, 507, 508, 509, 510, 1]) [2024-02-19 19:03:29.807497] INFO [COMMON] wash (ob_kvcache_store.cpp:343) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=34] Wash time detail, (compute_wash_size_time=120, refresh_score_time=48, wash_time=4) [2024-02-19 19:03:29.808111] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=13] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:29.808138] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=25] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340609808101}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:29.808162] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=21] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340609808101}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:29.813771] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, 
log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:29.813817] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=49] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:29.823008] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A021-0-0] [lt=14] table not exist(tenant_id=1, database_id=201001, table_name=__all_server, ret=-5019) [2024-02-19 19:03:29.823034] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A021-0-0] [lt=25] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_server, ret=-5019) [2024-02-19 19:03:29.823044] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A021-0-0] [lt=8] fail to resolve table relation recursively(tenant_id=1, ret=-5019) [2024-02-19 19:03:29.823052] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A021-0-0] [lt=7] resolve table relation factor failed(ret=-5019, table_name=__all_server) [2024-02-19 19:03:29.823062] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A021-0-0] [lt=7] fail to resolve table(ret=-5019) [2024-02-19 19:03:29.823068] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A021-0-0] [lt=7] fail to resolve sys view(ret=-5019) [2024-02-19 19:03:29.823078] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A021-0-0] [lt=6] Table 'oceanbase.__all_server' doesn't exist [2024-02-19 19:03:29.823086] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A021-0-0] [lt=6] resolve base or alias table factor failed(ret=-5019) [2024-02-19 19:03:29.823105] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A021-0-0] [lt=18] resolve basic table failed(ret=-5019) [2024-02-19 19:03:29.823112] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A021-0-0] [lt=6] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-02-19 19:03:29.823118] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A021-0-0] [lt=6] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-02-19 19:03:29.823125] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A021-0-0] [lt=6] resolve normal query failed(ret=-5019) [2024-02-19 19:03:29.823133] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) 
[1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A021-0-0] [lt=7] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073) [2024-02-19 19:03:29.823147] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A021-0-0] [lt=7] failed to resolve(ret=-5019) [2024-02-19 19:03:29.823155] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A021-0-0] [lt=7] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:29.823165] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A021-0-0] [lt=8] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:29.823172] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A021-0-0] [lt=6] fail to handle physical plan(ret=-5019) [2024-02-19 19:03:29.823180] WARN [SQL] stmt_query (ob_sql.cpp:175) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A021-0-0] [lt=6] fail to handle text query(stmt=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882, ret=-5019) [2024-02-19 19:03:29.823188] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A021-0-0] [lt=7] executor execute failed(ret=-5019) [2024-02-19 19:03:29.823197] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A021-0-0] [lt=7] execute failed(ret=-5019, executor={ObIExecutor:, sql:"SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882"}, retry_cnt=0) [2024-02-19 19:03:29.823212] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A021-0-0] [lt=9] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-02-19 19:03:29.823226] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A021-0-0] [lt=11] result set close failed(ret=-5019) [2024-02-19 19:03:29.823233] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A021-0-0] [lt=6] result set close failed(ret=-5019) [2024-02-19 19:03:29.823239] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A021-0-0] [lt=6] failed to close result(close_ret=-5019, ret=-5019) [2024-02-19 19:03:29.823260] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A021-0-0] [lt=7] failed to process record(executor={ObIExecutor:, sql:"SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882"}, record_ret=-5019, ret=-5019) [2024-02-19 19:03:29.823269] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A021-0-0] [lt=7] failed to process final(executor={ObIExecutor:, sql:"SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882"}, aret=-5019, ret=-5019) [2024-02-19 19:03:29.823278] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=7] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882) [2024-02-19 19:03:29.823286] WARN [SERVER] retry_while_no_tenant_resource 
(ob_inner_sql_connection.cpp:1164) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=6] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-02-19 19:03:29.823292] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=6] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1) [2024-02-19 19:03:29.823301] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=7] query failed(ret=-5019, conn=0x7fdcd7d06050, start=1708340609822841, sql=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882) [2024-02-19 19:03:29.823310] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=9] read failed(ret=-5019) [2024-02-19 19:03:29.823317] WARN get_my_sql_result_ (ob_table_access_helper.h:329) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=5] GCTX.sql_proxy_ read failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", MTL_ID()=1, tenant_id=1, columns=0x7fdcfffccd78, table=__all_server, condition=where svr_ip='172.1.3.242' and svr_port=2882, sql=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882, columns_str="zone") [2024-02-19 19:03:29.823333] WARN read_single_row (ob_table_access_helper.h:178) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=12] get mysql result failed(ret=-4016, ret="OB_ERR_UNEXPECTED", MTL_ID()=1, columns=0x7fdcfffccd78, table=__all_server, where_condition=where svr_ip='172.1.3.242' and svr_port=2882) [2024-02-19 19:03:29.823400] WARN [COORDINATOR] get_self_zone_name (table_accessor.cpp:517) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=8] get zone from __all_server failed(ret=-4016, ret="OB_ERR_UNEXPECTED", columns=0x7fdcfffccd78, where_condition="where svr_ip='172.1.3.242' and svr_port=2882", zone_name_holder=) [2024-02-19 19:03:29.823414] WARN [COORDINATOR] get_all_ls_election_reference_info (table_accessor.cpp:450) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=14] get self zone name failed(ret=-4016, ret="OB_ERR_UNEXPECTED", all_ls_election_reference_info=[]) [2024-02-19 19:03:29.823423] WARN [COORDINATOR] get_all_ls_election_reference_info (table_accessor.cpp:459) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=8] zone name is empty(ret=-4016, ret="OB_ERR_UNEXPECTED", all_ls_election_reference_info=[]) [2024-02-19 19:03:29.823431] WARN [COORDINATOR] refresh (ob_leader_coordinator.cpp:107) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=6] get all ls election reference info failed(ret=-4016, ret="OB_ERR_UNEXPECTED", MTL_ID()=1) [2024-02-19 19:03:29.823934] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=36] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:29.823963] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=29] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, 
limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:29.828568] WARN [COORDINATOR] get_ls_election_reference_info (ob_leader_coordinator.cpp:138) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC71-0-0] [lt=122] can not find this ls_id in all_ls_election_reference_info_(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1}, all_ls_election_reference_info=[]) [2024-02-19 19:03:29.828598] WARN [COORDINATOR] refresh_ (election_priority_v1.cpp:144) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC71-0-0] [lt=29] fail to get ls election reference info(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, *this={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}) [2024-02-19 19:03:29.828639] WARN [COORDINATOR] operator() (election_priority_impl.cpp:246) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC71-0-0] [lt=19] refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id_={id:1}, element={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}) [2024-02-19 19:03:29.828657] WARN iterate (ob_tuple.h:272) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC71-0-0] [lt=16] assign element failed(ret=-4018, std::get(tuple)={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}) [2024-02-19 19:03:29.828673] WARN [COORDINATOR] refresh (election_priority_impl.cpp:261) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC71-0-0] [lt=15] refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1}, *this={priority:{is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}}) [2024-02-19 19:03:29.834126] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=25] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:29.834186] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=62] clog disk space is almost full(total_size(MB)=2048, 
used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:29.844340] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=38] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:29.844390] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=54] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:29.849477] WARN [STORAGE.TRANS] post_cluster_heartbeat_rpc_ (ob_tenant_weak_read_service.cpp:797) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17] get cluster service master fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, cluster_service_tablet_id={id:226}) [2024-02-19 19:03:29.849513] WARN [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:438) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=38] tenant weak read service cluster heartbeat RPC fail(rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.1.3.242:2882", cluster_service_tablet_id={id:226}) [2024-02-19 19:03:29.849537] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:756) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=21] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version=1708336686671726824, valid_part_count=1, total_part_count=1, generate_timestamp=1708340609849456) [2024-02-19 19:03:29.849556] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:766) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=18] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1708340609649389, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-02-19 19:03:29.849643] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16] get wrs ts(ls_id={id:1}, delta_ns=-1706042771805445861, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807) [2024-02-19 19:03:29.849665] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) 
[1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=19] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=1, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1) [2024-02-19 19:03:29.850245] INFO [COMMON] clean_garbage_node (ob_kvcache_map.cpp:645) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=13] Cache wash clean map node details(ret=0, clean_node_count=0, clean_time=42731, clean_start_pos=408941, clean_num=31457) [2024-02-19 19:03:29.854529] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:29.854571] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=44] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:29.863069] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=25] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340609863048}) [2024-02-19 19:03:29.863099] WARN [SHARE.LOCATION] batch_process_tasks (ob_ls_location_service.cpp:485) [1106741][SysLocAsyncUp0][T0][YB42AC0103F2-000611B9212AA0AD-0-0] [lt=52] tenant schema is not ready, need wait(ret=0, ret="OB_SUCCESS", superior_tenant_id=1, tasks=[{cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340609863048}]) [2024-02-19 19:03:29.863112] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=46] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340609808101}}) [2024-02-19 19:03:29.864694] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=28] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:29.864724] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) 
[1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=30] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:29.866324] INFO [COMMON] print_io_status (ob_io_struct.cpp:619) [1106661][IO_TUNING0][T0][Y0-0000000000000000-0-0] [lt=33] [IO STATUS](tenant_ids=[1, 500], send_thread_count=2, send_queues=[0, 0]) [2024-02-19 19:03:29.878271] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=33] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:29.878322] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=54] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:29.888490] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:29.888529] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=42] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:29.898882] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=41] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", 
disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:29.898921] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=41] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:29.908715] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=12] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:29.908744] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=30] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340609908703}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:29.908764] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=18] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340609908703}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:29.908777] WARN [STORAGE.TRANS] operator() (ob_ts_mgr.h:225) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=10] refresh gts failed(ret=-4038, ret="OB_NOT_MASTER", gts_tenant_info={v:1}) [2024-02-19 19:03:29.908787] INFO [STORAGE.TRANS] operator() (ob_ts_mgr.h:229) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=9] refresh gts functor(ret=-4038, ret="OB_NOT_MASTER", gts_tenant_info={v:1}) [2024-02-19 19:03:29.909072] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=54] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:29.909097] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=26] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:29.915883] WARN [SERVER] batch_process_tasks (ob_ls_table_updater.cpp:333) [1106712][LSSysTblUp0][T0][YB42AC0103F2-000611B9216D2C42-0-0] 
[lt=46] tenant schema is not ready, need wait(ret=-4076, ret="OB_NEED_WAIT", superior_tenant_id=1, task={tenant_id:1, ls_id:{id:1}, add_timestamp:1708337390831403}) [2024-02-19 19:03:29.919233] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=49] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:29.919268] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:29.929434] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=27] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:29.929491] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=58] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:29.939597] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=41] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:29.939641] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=47] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, 
warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:29.949764] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=32] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:29.949821] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=59] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:29.951087] WARN [STORAGE.TRANS] post_cluster_heartbeat_rpc_ (ob_tenant_weak_read_service.cpp:797) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=25] get cluster service master fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, cluster_service_tablet_id={id:226}) [2024-02-19 19:03:29.951111] WARN [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:438) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=25] tenant weak read service cluster heartbeat RPC fail(rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.1.3.242:2882", cluster_service_tablet_id={id:226}) [2024-02-19 19:03:29.951138] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340609951124}) [2024-02-19 19:03:29.951156] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:756) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version=1708336686671726824, valid_part_count=1, total_part_count=1, generate_timestamp=1708340609951071) [2024-02-19 19:03:29.951167] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:766) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1708340609849571, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-02-19 19:03:29.951232] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] 
[lt=9] get wrs ts(ls_id={id:1}, delta_ns=-1706042771805346961, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807) [2024-02-19 19:03:29.951247] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1) [2024-02-19 19:03:29.960027] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=29] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:29.960078] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=54] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:29.963099] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=32] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340609963063}) [2024-02-19 19:03:29.963130] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=32] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340609908703}}) [2024-02-19 19:03:29.970218] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:29.970281] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=66] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 
0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:29.980419] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=39] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:29.980460] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=43] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:29.990636] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:29.990742] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=94] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:29.992280] INFO [STORAGE] runTimerTask (ob_checkpoint_service.cpp:129) [1108331][T1_TxCkpt][T1][Y0-0000000000000000-0-0] [lt=17] ====== checkpoint timer task ====== [2024-02-19 19:03:29.992354] INFO [CLOG] get_min_unapplied_log_ts_ns (ob_log_apply_service.cpp:729) [1108331][T1_TxCkpt][T1][Y0-0000000000000000-0-0] [lt=21] get_min_unapplied_log_ts_ns(log_ts=1707751112415295197, this={ls_id_:{id:1}, role_:1, proposal_id_:138, palf_committed_end_lsn_:{lsn:0}, last_check_log_ts_ns_:1707751112415295196, max_applied_cb_ts_ns_:1707751112415295196}) [2024-02-19 19:03:29.992388] INFO [CLOG] get_min_unreplayed_log_info (ob_replay_status.cpp:971) [1108331][T1_TxCkpt][T1][Y0-0000000000000000-0-0] [lt=20] get_min_unreplayed_log_info(lsn={lsn:25325337226}, log_ts=1707751112415295197, this={ls_id_:{id:1}, is_enabled_:true, is_submit_blocked_:false, role_:1, err_info_:{lsn_:{lsn:18446744073709551615}, scn_:0, log_type_:0, is_submit_err_:false, 
err_ts_:0, err_ret_:0}, ref_cnt_:2, post_barrier_lsn_:{lsn:18446744073709551615}, pending_task_count_:0, submit_log_task_:{ObReplayServiceSubmitTask:{type_:1, enqueue_ts_:1708337375831694, err_info_:{has_fatal_error_:false, fail_ts_:0, fail_cost_:503671052, ret_code_:0}}, next_to_submit_lsn_:{lsn:25325337226}, committed_end_lsn_:{lsn:25325337226}, next_to_submit_log_ts_:1707751112415295197, base_lsn_:{lsn:23419564032}, base_log_ts_:1707209832548318068}}) [2024-02-19 19:03:29.993680] INFO [STORAGE.TRANS] get_rec_log_ts (ob_trans_ctx_mgr_v4.cpp:1283) [1108331][T1_TxCkpt][T1][Y0-0000000000000000-0-0] [lt=35] succ to get rec log ts(*this={this:0x7fdce3204030, ls_id:{id:1}, tenant_id:1, state:"F_WORKING", total_tx_ctx_count:0, leader_takeover_ts:{mts:0}, is_leader_serving:false, max_replay_commit_version:1707751112415295196, ls_retain_ctx_mgr:{retain_ctx_list_.size():0}, aggre_rec_log_ts:-1, prev_aggre_rec_log_ts:-1, online_ts:0, uref:1073741825}, aggre_rec_log_ts=9223372036854775807) [2024-02-19 19:03:29.993708] INFO [STORAGE.TRANS] get_rec_log_ts (ob_tx_ctx_memtable.cpp:231) [1108331][T1_TxCkpt][T1][Y0-0000000000000000-0-0] [lt=28] tx ctx memtable get rec log ts(this={ObITable:{this:0x7fdce5f6e080, key:{tablet_id:{id:49401}, column_group_idx:0, table_type:"TX_CTX_MEMTABLE", log_ts_range:{start_log_ts:1, end_log_ts:1708337131277985}}, ref_cnt:2, upper_trans_version:-4007, timestamp:0}, this:0x7fdce5f6e080, snapshot_version:1708337131277985, ls_id:{id:1}, is_frozen:false}, rec_log_ts=9223372036854775807) [2024-02-19 19:03:29.993735] INFO [STORAGE.TABLELOCK] get_rec_log_ts (ob_lock_memtable.cpp:739) [1108331][T1_TxCkpt][T1][Y0-0000000000000000-0-0] [lt=21] rec_log_ts of ObLockMemtable is (rec_log_ts_=9223372036854775807, flushed_log_ts_=1707033175148098668, freeze_log_ts_=0, max_committed_log_ts_=-1, is_frozen_=false, ls_id_={id:1}) [2024-02-19 19:03:29.993750] INFO [STORAGE.TRANS] get_rec_log_ts (ob_ls_tx_service.cpp:437) [1108331][T1_TxCkpt][T1][Y0-0000000000000000-0-0] [lt=12] [CHECKPOINT] ObLSTxService::get_rec_log_ts(common_checkpoint_type="TX_DATA_MEMTABLE_TYPE", common_checkpoints_[min_rec_log_ts_common_checkpoint_type_index]={ObIMemtableMgr:{Memtables:this:0x7fdce89de180, ref_cnt:1, is_inited:true, tablet_id:{id:49402}, freezer:0x7fdce89e30d0, table_type:1, memtable_head:0, memtable_tail:2, t3m:0x7fdd18bce030, tables:[0x7fdce5eea080, 0x7fdce5eea360, null, null, null, null, null, null, null, null, null, null, null, null, null, null]}, is_freezing:false, ls_id:{id:1}, tx_data_table:0x7fdce89e4550, ls_tablet_svr:0x7fdce89de160, slice_allocator:0x7fdce89e4590}, min_rec_log_ts=1707209832548318068, ls_id_={id:1}) [2024-02-19 19:03:29.995636] INFO [STORAGE.TRANS] get_rec_log_ts (ob_id_service.cpp:300) [1108331][T1_TxCkpt][T1][Y0-0000000000000000-0-0] [lt=27] get rec log ts(service_type_=0, rec_log_ts=9223372036854775807) [2024-02-19 19:03:29.995649] INFO [STORAGE.TRANS] get_rec_log_ts (ob_id_service.cpp:300) [1108331][T1_TxCkpt][T1][Y0-0000000000000000-0-0] [lt=15] get rec log ts(service_type_=1, rec_log_ts=9223372036854775807) [2024-02-19 19:03:29.995658] INFO [STORAGE.TRANS] get_rec_log_ts (ob_id_service.cpp:300) [1108331][T1_TxCkpt][T1][Y0-0000000000000000-0-0] [lt=6] get rec log ts(service_type_=2, rec_log_ts=9223372036854775807) [2024-02-19 19:03:29.995668] INFO [STORAGE] update_clog_checkpoint (ob_checkpoint_executor.cpp:158) [1108331][T1_TxCkpt][T1][Y0-0000000000000000-0-0] [lt=5] [CHECKPOINT] clog checkpoint no change(checkpoint_ts=1707209832548318068, 
checkpoint_ts_in_ls_meta=1707209832548318068, ls_id={id:1}, service_type="TRANS_SERVICE") [2024-02-19 19:03:29.995682] INFO [STORAGE] runTimerTask (ob_checkpoint_service.cpp:166) [1108331][T1_TxCkpt][T1][Y0-0000000000000000-0-0] [lt=10] succeed to update_clog_checkpoint(ret=0, ls_cnt=1) [2024-02-19 19:03:30.000971] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=59] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.001010] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=41] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.001893] INFO [COMMON] replace_fragment_node (ob_kvcache_map.cpp:695) [1106689][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=26] Cache replace map node details(ret=0, replace_node_count=0, replace_time=19181, replace_start_pos=927952, replace_num=15728) [2024-02-19 19:03:30.009362] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=6] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:30.009392] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=30] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340610009351}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:30.009412] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=18] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340610009351}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:30.011354] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=26] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.011397] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=48] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, 
warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.021789] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=32] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.021829] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=43] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.031955] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=26] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.032010] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=58] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.042598] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=40] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.042649] ERROR [PALF] try_recycle_blocks 
(palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=53] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.051261] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15] get wrs ts(ls_id={id:1}, delta_ns=-1706042771805244462, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807) [2024-02-19 19:03:30.051299] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=39] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1) [2024-02-19 19:03:30.051397] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1009) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=20] Wash compute wash size(is_wash_valid=true, sys_total_wash_size=1386106880, global_cache_size=4161536, tenant_max_wash_size=4161536, tenant_min_wash_size=4161536, tenant_ids_=[512, 500, 999, 506, 507, 508, 509, 510, 1]) [2024-02-19 19:03:30.051477] INFO [COMMON] wash (ob_kvcache_store.cpp:343) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=20] Wash time detail, (compute_wash_size_time=133, refresh_score_time=53, wash_time=7) [2024-02-19 19:03:30.052725] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.052751] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=27] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.062852] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=31] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, 
log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.062885] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.063082] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=14] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340610063067}) [2024-02-19 19:03:30.063105] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=23] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340610009351}}) [2024-02-19 19:03:30.072968] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=28] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.073003] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=38] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.078395] WARN [COORDINATOR] get_ls_election_reference_info (ob_leader_coordinator.cpp:138) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC72-0-0] [lt=129] can not find this ls_id in all_ls_election_reference_info_(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1}, all_ls_election_reference_info=[]) [2024-02-19 19:03:30.078433] WARN [COORDINATOR] refresh_ (election_priority_v1.cpp:144) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC72-0-0] [lt=40] fail to get ls election reference info(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, *this={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}) [2024-02-19 19:03:30.078453] 
WARN [COORDINATOR] operator() (election_priority_impl.cpp:246) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC72-0-0] [lt=19] refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id_={id:1}, element={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}) [2024-02-19 19:03:30.078466] WARN iterate (ob_tuple.h:272) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC72-0-0] [lt=11] assign element failed(ret=-4018, std::get(tuple)={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}) [2024-02-19 19:03:30.078476] WARN [COORDINATOR] refresh (election_priority_impl.cpp:261) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC72-0-0] [lt=10] refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1}, *this={priority:{is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}}) [2024-02-19 19:03:30.083101] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=26] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.083133] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=32] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.088370] INFO [COMMON] clean_garbage_node (ob_kvcache_map.cpp:645) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=22] Cache wash clean map node details(ret=0, clean_node_count=0, clean_time=36864, clean_start_pos=440398, clean_num=31457) [2024-02-19 19:03:30.093258] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=32] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, 
log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.093303] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=45] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.103405] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.103490] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=45] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.109960] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=10] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:30.110025] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=64] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340610109946}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:30.110049] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=22] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340610109946}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:30.113627] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=36] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.113667] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] 
[lt=40] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.117952] INFO [SERVER] runTimerTask (ob_eliminate_task.cpp:199) [1107573][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=51] sql audit evict task end(evict_high_mem_level=32212254, evict_high_size_level=90000, evict_batch_count=0, elapse_time=1, size_used=14873, mem_used=31196160) [2024-02-19 19:03:30.123904] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.123939] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=39] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.134077] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=29] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.134114] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=39] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.144257] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=28] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", 
log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.144308] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=52] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.151358] WARN [STORAGE.TRANS] post_cluster_heartbeat_rpc_ (ob_tenant_weak_read_service.cpp:797) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=23] get cluster service master fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, cluster_service_tablet_id={id:226}) [2024-02-19 19:03:30.151394] WARN [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:438) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=49] tenant weak read service cluster heartbeat RPC fail(rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.1.3.242:2882", cluster_service_tablet_id={id:226}) [2024-02-19 19:03:30.151415] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:756) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=19] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version=1708336686671726824, valid_part_count=1, total_part_count=1, generate_timestamp=1708340610151326) [2024-02-19 19:03:30.151426] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:766) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1708340609951176, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-02-19 19:03:30.151501] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9] get wrs ts(ls_id={id:1}, delta_ns=-1706042771805144995, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807) [2024-02-19 19:03:30.151525] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=22] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1) [2024-02-19 19:03:30.154452] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=41] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", 
disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.154489] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.163612] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=15] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340610163587}) [2024-02-19 19:03:30.163668] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=62] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340610109946}}) [2024-02-19 19:03:30.164612] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=30] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.164654] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=41] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.175296] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.175547] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) 
[1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=252] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.185707] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=39] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.185748] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=41] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.195854] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=33] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.195904] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=52] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.203693] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D4-0-0] [lt=9] table not exist(tenant_id=1, database_id=201001, table_name=__all_sys_parameter, ret=-5019) [2024-02-19 19:03:30.203717] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D4-0-0] [lt=24] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_sys_parameter, ret=-5019) [2024-02-19 19:03:30.203726] WARN 
[SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D4-0-0] [lt=8] fail to resolve table relation recursively(tenant_id=1, ret=-5019) [2024-02-19 19:03:30.203734] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D4-0-0] [lt=8] resolve table relation factor failed(ret=-5019, table_name=__all_sys_parameter) [2024-02-19 19:03:30.203744] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D4-0-0] [lt=7] fail to resolve table(ret=-5019) [2024-02-19 19:03:30.203750] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D4-0-0] [lt=6] fail to resolve sys view(ret=-5019) [2024-02-19 19:03:30.203760] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D4-0-0] [lt=6] Table 'oceanbase.__all_sys_parameter' doesn't exist [2024-02-19 19:03:30.203767] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D4-0-0] [lt=6] resolve base or alias table factor failed(ret=-5019) [2024-02-19 19:03:30.203774] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D4-0-0] [lt=6] resolve basic table failed(ret=-5019) [2024-02-19 19:03:30.203780] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D4-0-0] [lt=6] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-02-19 19:03:30.203787] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D4-0-0] [lt=6] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-02-19 19:03:30.203794] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D4-0-0] [lt=6] resolve normal query failed(ret=-5019) [2024-02-19 19:03:30.203801] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D4-0-0] [lt=6] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073) [2024-02-19 19:03:30.203819] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D4-0-0] [lt=9] failed to resolve(ret=-5019) [2024-02-19 19:03:30.203830] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D4-0-0] [lt=11] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:30.203843] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D4-0-0] [lt=9] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:30.203852] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D4-0-0] [lt=9] fail to handle physical plan(ret=-5019) [2024-02-19 19:03:30.203865] WARN [SQL] stmt_query (ob_sql.cpp:175) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D4-0-0] [lt=9] fail to handle text query(stmt=select config_version, zone, svr_type, svr_ip, svr_port, name, data_type, value, info, section, scope, source, edit_level from __all_sys_parameter, ret=-5019) [2024-02-19 19:03:30.203876] WARN [SERVER] do_query 
(ob_inner_sql_connection.cpp:595) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D4-0-0] [lt=10] executor execute failed(ret=-5019) [2024-02-19 19:03:30.203887] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D4-0-0] [lt=10] execute failed(ret=-5019, executor={ObIExecutor:, sql:"select config_version, zone, svr_type, svr_ip, svr_port, name, data_type, value, info, section, scope, source, edit_level from __all_sys_parameter"}, retry_cnt=0) [2024-02-19 19:03:30.203908] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D4-0-0] [lt=13] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-02-19 19:03:30.203921] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D4-0-0] [lt=12] result set close failed(ret=-5019) [2024-02-19 19:03:30.203928] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D4-0-0] [lt=5] result set close failed(ret=-5019) [2024-02-19 19:03:30.203934] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D4-0-0] [lt=5] failed to close result(close_ret=-5019, ret=-5019) [2024-02-19 19:03:30.203952] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D4-0-0] [lt=6] failed to process record(executor={ObIExecutor:, sql:"select config_version, zone, svr_type, svr_ip, svr_port, name, data_type, value, info, section, scope, source, edit_level from __all_sys_parameter"}, record_ret=-5019, ret=-5019) [2024-02-19 19:03:30.203962] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1106658][ConfigMgr][T0][YB42AC0103F2-000611B922D790D4-0-0] [lt=8] failed to process final(executor={ObIExecutor:, sql:"select config_version, zone, svr_type, svr_ip, svr_port, name, data_type, value, info, section, scope, source, edit_level from __all_sys_parameter"}, aret=-5019, ret=-5019) [2024-02-19 19:03:30.203971] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1106658][ConfigMgr][T0][YB42AC0103F2-000611B922D790D4-0-0] [lt=7] execute sql failed(ret=-5019, tenant_id=1, sql=select config_version, zone, svr_type, svr_ip, svr_port, name, data_type, value, info, section, scope, source, edit_level from __all_sys_parameter) [2024-02-19 19:03:30.203979] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1106658][ConfigMgr][T0][YB42AC0103F2-000611B922D790D4-0-0] [lt=7] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-02-19 19:03:30.203985] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1106658][ConfigMgr][T0][YB42AC0103F2-000611B922D790D4-0-0] [lt=6] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1) [2024-02-19 19:03:30.203993] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1106658][ConfigMgr][T0][YB42AC0103F2-000611B922D790D4-0-0] [lt=6] query failed(ret=-5019, conn=0x7fdcdc89a050, start=1708340610203513, sql=select config_version, zone, svr_type, svr_ip, svr_port, name, data_type, value, info, section, scope, source, edit_level from __all_sys_parameter) [2024-02-19 19:03:30.204002] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1106658][ConfigMgr][T0][YB42AC0103F2-000611B922D790D4-0-0] [lt=9] read failed(ret=-5019) [2024-02-19 
19:03:30.204010] WARN [SHARE] update_local (ob_config_manager.cpp:322) [1106658][ConfigMgr][T0][YB42AC0103F2-000611B922D790D4-0-0] [lt=6] read config from __all_sys_parameter failed(sqlstr="select config_version, zone, svr_type, svr_ip, svr_port, name, data_type, value, info, section, scope, source, edit_level from __all_sys_parameter", ret=-5019) [2024-02-19 19:03:30.204064] WARN [SHARE] update_local (ob_config_manager.cpp:356) [1106658][ConfigMgr][T0][YB42AC0103F2-000611B922D790D4-0-0] [lt=6] Read system config from inner table error(ret=-5019) [2024-02-19 19:03:30.204072] WARN [SHARE] runTimerTask (ob_config_manager.cpp:455) [1106658][ConfigMgr][T0][YB42AC0103F2-000611B922D790D4-0-0] [lt=8] Update local config failed(ret=-5019) [2024-02-19 19:03:30.206026] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=43] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.206063] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.210645] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=13] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:30.210673] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=26] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340610210634}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:30.210695] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=20] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340610210634}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:30.216236] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=60] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.216280] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) 
[1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=46] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.220626] INFO [COMMON] replace_fragment_node (ob_kvcache_map.cpp:695) [1106689][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=31] Cache replace map node details(ret=0, replace_node_count=0, replace_time=18606, replace_start_pos=943680, replace_num=15728) [2024-02-19 19:03:30.226716] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.226770] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=57] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.234886] INFO [LIB] runTimerTask (ob_work_queue.cpp:24) [1106715][ObTimer][T0][Y0-0000000000000000-0-0] [lt=54] add async task(this=tasktype:N9oceanbase10rootserver13ObRootService19ObRefreshServerTaskE) [2024-02-19 19:03:30.235801] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2A-0-0] [lt=10] table not exist(tenant_id=1, database_id=201001, table_name=__all_server, ret=-5019) [2024-02-19 19:03:30.235827] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2A-0-0] [lt=24] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_server, ret=-5019) [2024-02-19 19:03:30.235837] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2A-0-0] [lt=8] fail to resolve table relation recursively(tenant_id=1, ret=-5019) [2024-02-19 19:03:30.235844] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2A-0-0] [lt=7] resolve table relation factor failed(ret=-5019, table_name=__all_server) [2024-02-19 19:03:30.235853] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2A-0-0] [lt=6] fail to resolve table(ret=-5019) [2024-02-19 19:03:30.235859] WARN [SQL.RESV] resolve_table_relation_factor_wrapper 
(ob_dml_resolver.cpp:1435) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2A-0-0] [lt=6] fail to resolve sys view(ret=-5019) [2024-02-19 19:03:30.235869] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2A-0-0] [lt=5] Table 'oceanbase.__all_server' doesn't exist [2024-02-19 19:03:30.235877] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2A-0-0] [lt=7] resolve base or alias table factor failed(ret=-5019) [2024-02-19 19:03:30.235889] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2A-0-0] [lt=11] resolve basic table failed(ret=-5019) [2024-02-19 19:03:30.235895] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2A-0-0] [lt=5] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-02-19 19:03:30.235902] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2A-0-0] [lt=7] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-02-19 19:03:30.235910] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2A-0-0] [lt=6] resolve normal query failed(ret=-5019) [2024-02-19 19:03:30.235919] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2A-0-0] [lt=8] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073) [2024-02-19 19:03:30.235933] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2A-0-0] [lt=7] failed to resolve(ret=-5019) [2024-02-19 19:03:30.235945] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2A-0-0] [lt=12] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:30.235954] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2A-0-0] [lt=7] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:30.235963] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2A-0-0] [lt=7] fail to handle physical plan(ret=-5019) [2024-02-19 19:03:30.235971] WARN [SQL] stmt_query (ob_sql.cpp:175) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2A-0-0] [lt=6] fail to handle text query(stmt=SELECT time_to_usec(gmt_modified) AS last_hb_time, id, zone, svr_ip, svr_port, inner_port, status, with_rootserver, block_migrate_in_time, build_version, stop_time, start_service_time, with_partition FROM __all_server, ret=-5019) [2024-02-19 19:03:30.235981] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2A-0-0] [lt=9] executor execute failed(ret=-5019) [2024-02-19 19:03:30.235988] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2A-0-0] [lt=6] execute failed(ret=-5019, executor={ObIExecutor:, sql:"SELECT time_to_usec(gmt_modified) AS last_hb_time, id, zone, svr_ip, svr_port, inner_port, status, with_rootserver, block_migrate_in_time, build_version, stop_time, start_service_time, with_partition FROM __all_server"}, retry_cnt=0) [2024-02-19 19:03:30.236007] WARN [SERVER] after_func 
(ob_query_retry_ctrl.cpp:830) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2A-0-0] [lt=13] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-02-19 19:03:30.236020] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2A-0-0] [lt=11] result set close failed(ret=-5019) [2024-02-19 19:03:30.236028] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2A-0-0] [lt=7] result set close failed(ret=-5019) [2024-02-19 19:03:30.236034] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2A-0-0] [lt=5] failed to close result(close_ret=-5019, ret=-5019) [2024-02-19 19:03:30.236064] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2A-0-0] [lt=13] failed to process record(executor={ObIExecutor:, sql:"SELECT time_to_usec(gmt_modified) AS last_hb_time, id, zone, svr_ip, svr_port, inner_port, status, with_rootserver, block_migrate_in_time, build_version, stop_time, start_service_time, with_partition FROM __all_server"}, record_ret=-5019, ret=-5019) [2024-02-19 19:03:30.236082] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1106717][RSAsyncTask0][T0][YB42AC0103F2-000611B922978A2A-0-0] [lt=15] failed to process final(executor={ObIExecutor:, sql:"SELECT time_to_usec(gmt_modified) AS last_hb_time, id, zone, svr_ip, svr_port, inner_port, status, with_rootserver, block_migrate_in_time, build_version, stop_time, start_service_time, with_partition FROM __all_server"}, aret=-5019, ret=-5019) [2024-02-19 19:03:30.236096] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1106717][RSAsyncTask0][T0][Y0-0000000000000000-0-0] [lt=11] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT time_to_usec(gmt_modified) AS last_hb_time, id, zone, svr_ip, svr_port, inner_port, status, with_rootserver, block_migrate_in_time, build_version, stop_time, start_service_time, with_partition FROM __all_server) [2024-02-19 19:03:30.236114] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1106717][RSAsyncTask0][T0][Y0-0000000000000000-0-0] [lt=17] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-02-19 19:03:30.236127] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1106717][RSAsyncTask0][T0][Y0-0000000000000000-0-0] [lt=12] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1) [2024-02-19 19:03:30.236144] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1106717][RSAsyncTask0][T0][Y0-0000000000000000-0-0] [lt=15] query failed(ret=-5019, conn=0x7fdcdc9be050, start=1708340610235607, sql=SELECT time_to_usec(gmt_modified) AS last_hb_time, id, zone, svr_ip, svr_port, inner_port, status, with_rootserver, block_migrate_in_time, build_version, stop_time, start_service_time, with_partition FROM __all_server) [2024-02-19 19:03:30.236162] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1106717][RSAsyncTask0][T0][Y0-0000000000000000-0-0] [lt=17] read failed(ret=-5019) [2024-02-19 19:03:30.236333] WARN [SHARE] run2 (ob_async_task_queue.cpp:148) [1106717][RSAsyncTask0][T0][Y0-0000000000000000-0-0] [lt=8] task process failed, start retry(max retry time=0, retry interval=1000000, ret=-5019) [2024-02-19 19:03:30.236896] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) 
[1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=43] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.236920] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=24] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.247076] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=41] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.247155] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=84] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.251599] WARN [STORAGE.TRANS] post_cluster_heartbeat_rpc_ (ob_tenant_weak_read_service.cpp:797) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=24] get cluster service master fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, cluster_service_tablet_id={id:226}) [2024-02-19 19:03:30.251632] WARN [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:438) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=32] tenant weak read service cluster heartbeat RPC fail(rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.1.3.242:2882", cluster_service_tablet_id={id:226}) [2024-02-19 19:03:30.251654] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:756) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=18] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version=1708336686671726824, valid_part_count=1, total_part_count=1, generate_timestamp=1708340610251580) [2024-02-19 19:03:30.251669] WARN [STORAGE.TRANS] do_cluster_heartbeat_ 
(ob_tenant_weak_read_service.cpp:766) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1708340610151435, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-02-19 19:03:30.251742] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12] get wrs ts(ls_id={id:1}, delta_ns=-1706042771805044392, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807) [2024-02-19 19:03:30.251763] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=18] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1) [2024-02-19 19:03:30.259388] INFO [SHARE] run_loop_ (ob_bg_thread_monitor.cpp:331) [1109111][BGThreadMonitor][T0][Y0-0000000000000000-0-0] [lt=27] current monitor number(seq_=-1) [2024-02-19 19:03:30.262619] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.262670] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=52] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.263645] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=18] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340610263627}) [2024-02-19 19:03:30.263675] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=29] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340610210634}}) [2024-02-19 19:03:30.272902] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, 
log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.272967] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=69] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.283081] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=27] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.283120] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=41] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.289095] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1009) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=38] Wash compute wash size(is_wash_valid=true, sys_total_wash_size=1386106880, global_cache_size=4161536, tenant_max_wash_size=4161536, tenant_min_wash_size=4161536, tenant_ids_=[512, 500, 999, 506, 507, 508, 509, 510, 1]) [2024-02-19 19:03:30.289205] INFO [COMMON] wash (ob_kvcache_store.cpp:343) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=45] Wash time detail, (compute_wash_size_time=220, refresh_score_time=62, wash_time=5) [2024-02-19 19:03:30.293241] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.294108] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=867] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, 
warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.304248] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=54] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.304292] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=47] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.311449] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=12] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:30.311491] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=42] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340610311435}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:30.311533] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=23] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340610311435}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:30.314430] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=52] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.314465] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93
0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.322920] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A022-0-0] [lt=6] table not exist(tenant_id=1, database_id=201001, table_name=__all_server, ret=-5019) [2024-02-19 19:03:30.322952] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A022-0-0] [lt=32] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_server, ret=-5019) [2024-02-19 19:03:30.322967] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A022-0-0] [lt=13] fail to resolve table relation recursively(tenant_id=1, ret=-5019) [2024-02-19 19:03:30.322979] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A022-0-0] [lt=11] resolve table relation factor failed(ret=-5019, table_name=__all_server) [2024-02-19 19:03:30.322994] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A022-0-0] [lt=10] fail to resolve table(ret=-5019) [2024-02-19 19:03:30.323003] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A022-0-0] [lt=9] fail to resolve sys view(ret=-5019) [2024-02-19 19:03:30.323019] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A022-0-0] [lt=9] Table 'oceanbase.__all_server' doesn't exist [2024-02-19 19:03:30.323029] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A022-0-0] [lt=10] resolve base or alias table factor failed(ret=-5019) [2024-02-19 19:03:30.323039] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A022-0-0] [lt=9] resolve basic table failed(ret=-5019) [2024-02-19 19:03:30.323048] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A022-0-0] [lt=9] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-02-19 19:03:30.323058] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A022-0-0] [lt=8] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-02-19 19:03:30.323266] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A022-0-0] [lt=204] resolve normal query failed(ret=-5019) [2024-02-19 19:03:30.323278] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A022-0-0] [lt=11] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073) [2024-02-19 19:03:30.323300] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A022-0-0] [lt=12] failed to resolve(ret=-5019) [2024-02-19 19:03:30.323313] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A022-0-0] [lt=12] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:30.323326] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A022-0-0] 
[lt=10] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:30.323336] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A022-0-0] [lt=9] fail to handle physical plan(ret=-5019) [2024-02-19 19:03:30.323348] WARN [SQL] stmt_query (ob_sql.cpp:175) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A022-0-0] [lt=10] fail to handle text query(stmt=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882, ret=-5019) [2024-02-19 19:03:30.323360] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A022-0-0] [lt=11] executor execute failed(ret=-5019) [2024-02-19 19:03:30.323373] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A022-0-0] [lt=10] execute failed(ret=-5019, executor={ObIExecutor:, sql:"SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882"}, retry_cnt=0) [2024-02-19 19:03:30.323393] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A022-0-0] [lt=13] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-02-19 19:03:30.323412] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A022-0-0] [lt=16] result set close failed(ret=-5019) [2024-02-19 19:03:30.323422] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A022-0-0] [lt=9] result set close failed(ret=-5019) [2024-02-19 19:03:30.323431] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A022-0-0] [lt=8] failed to close result(close_ret=-5019, ret=-5019) [2024-02-19 19:03:30.323459] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A022-0-0] [lt=9] failed to process record(executor={ObIExecutor:, sql:"SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882"}, record_ret=-5019, ret=-5019) [2024-02-19 19:03:30.323473] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A022-0-0] [lt=11] failed to process final(executor={ObIExecutor:, sql:"SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882"}, aret=-5019, ret=-5019) [2024-02-19 19:03:30.323486] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=10] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882) [2024-02-19 19:03:30.323498] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=11] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-02-19 19:03:30.323508] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=8] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1) [2024-02-19 19:03:30.323519] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=9] query failed(ret=-5019, conn=0x7fdcdc924050, start=1708340610322701, sql=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882) [2024-02-19 
19:03:30.323532] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=12] read failed(ret=-5019) [2024-02-19 19:03:30.323544] WARN get_my_sql_result_ (ob_table_access_helper.h:329) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=8] GCTX.sql_proxy_ read failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", MTL_ID()=1, tenant_id=1, columns=0x7fdcfffccd78, table=__all_server, condition=where svr_ip='172.1.3.242' and svr_port=2882, sql=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882, columns_str="zone") [2024-02-19 19:03:30.323581] WARN read_single_row (ob_table_access_helper.h:178) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=31] get mysql result failed(ret=-4016, ret="OB_ERR_UNEXPECTED", MTL_ID()=1, columns=0x7fdcfffccd78, table=__all_server, where_condition=where svr_ip='172.1.3.242' and svr_port=2882) [2024-02-19 19:03:30.323666] WARN [COORDINATOR] get_self_zone_name (table_accessor.cpp:517) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=13] get zone from __all_server failed(ret=-4016, ret="OB_ERR_UNEXPECTED", columns=0x7fdcfffccd78, where_condition="where svr_ip='172.1.3.242' and svr_port=2882", zone_name_holder=) [2024-02-19 19:03:30.323681] WARN [COORDINATOR] get_all_ls_election_reference_info (table_accessor.cpp:450) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=14] get self zone name failed(ret=-4016, ret="OB_ERR_UNEXPECTED", all_ls_election_reference_info=[]) [2024-02-19 19:03:30.323695] WARN [COORDINATOR] get_all_ls_election_reference_info (table_accessor.cpp:459) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=14] zone name is empty(ret=-4016, ret="OB_ERR_UNEXPECTED", all_ls_election_reference_info=[]) [2024-02-19 19:03:30.323707] WARN [COORDINATOR] refresh (ob_leader_coordinator.cpp:107) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=9] get all ls election reference info failed(ret=-4016, ret="OB_ERR_UNEXPECTED", MTL_ID()=1) [2024-02-19 19:03:30.324569] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=27] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.324598] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=29] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.328679] WARN [COORDINATOR] get_ls_election_reference_info (ob_leader_coordinator.cpp:138) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC73-0-0] [lt=98] can not find this ls_id in all_ls_election_reference_info_(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1}, all_ls_election_reference_info=[]) 
[2024-02-19 19:03:30.328715] WARN [COORDINATOR] refresh_ (election_priority_v1.cpp:144) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC73-0-0] [lt=36] fail to get ls election reference info(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, *this={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}) [2024-02-19 19:03:30.328738] WARN [COORDINATOR] operator() (election_priority_impl.cpp:246) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC73-0-0] [lt=21] refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id_={id:1}, element={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}) [2024-02-19 19:03:30.328756] WARN iterate (ob_tuple.h:272) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC73-0-0] [lt=16] assign element failed(ret=-4018, std::get(tuple)={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}) [2024-02-19 19:03:30.328771] WARN [COORDINATOR] refresh (election_priority_impl.cpp:261) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC73-0-0] [lt=14] refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1}, *this={priority:{is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}}) [2024-02-19 19:03:30.329523] INFO [CLOG] do_fetch_log_ (ob_remote_fetch_log.cpp:154) [1107644][T1_LogRessvr][T1][YB42AC0103F2-000611B921578198-0-0] [lt=47] print do_fetch_log_(lsn={lsn:18446744073709551615}, max_fetch_lsn={lsn:18446744073709551615}, need_schedule=false, proposal_id=-1, last_fetch_ts=-1, size=0, ls={ls_meta:{tenant_id:1, ls_id:{id:1}, replica_type:0, ls_create_status:1, clog_checkpoint_ts:1707209832548318068, clog_base_lsn:{lsn:23419564032}, rebuild_seq:0, migration_status:0, gc_state_:1, offline_ts_ns_:-1, restore_status:{status:0}, replayable_point:-1, tablet_change_checkpoint_ts:1707751112415295196, all_id_meta:{id_meta:[{limited_id:1707751122157059767, latest_log_ts:1707751105505586716}, {limited_id:46000001, latest_log_ts:1707741702196260609}, {limited_id:290000001, latest_log_ts:1707637636773992411}]}}, log_handler:{role:1, proposal_id:138, palf_env_:0x7fdd02a44030, is_in_stop_state_:false, is_inited_:true}, restore_handler:{is_inited:true, is_in_stop_state:false, id:1, proposal_id:9223372036854775807, role:2, parent:null, context:{issued:false, last_fetch_ts:-1, max_submit_lsn:{lsn:18446744073709551615}, max_fetch_lsn:{lsn:18446744073709551615}, error_context:{ret_code:0, trace_id:Y0-0000000000000000-0-0}}}, is_inited:true, tablet_gc_handler:{tablet_persist_trigger:0, is_inited:true}}) [2024-02-19 19:03:30.334712] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=25] there is not any block can be recycled, 
need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.334760] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=49] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.338139] INFO [COMMON] clean_garbage_node (ob_kvcache_map.cpp:645) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=11] Cache wash clean map node details(ret=0, clean_node_count=0, clean_time=48917, clean_start_pos=471855, clean_num=31457) [2024-02-19 19:03:30.344904] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=42] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.344971] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=69] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.352002] WARN [STORAGE.TRANS] post_cluster_heartbeat_rpc_ (ob_tenant_weak_read_service.cpp:797) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=19] get cluster service master fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, cluster_service_tablet_id={id:226}) [2024-02-19 19:03:30.352053] WARN [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:438) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=52] tenant weak read service cluster heartbeat RPC fail(rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.1.3.242:2882", cluster_service_tablet_id={id:226}) [2024-02-19 19:03:30.352075] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:756) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=19] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version=1708336686671726824, 
valid_part_count=1, total_part_count=1, generate_timestamp=1708340610351979) [2024-02-19 19:03:30.352090] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:766) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1708340610251680, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-02-19 19:03:30.352114] INFO [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:738) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] [SELF_CHECK] current server is WRS leader, need start CLUSTER weak read service(tenant_id=1, serve_leader_epoch=0, cur_leader_epoch=138, cluster_service_tablet_id_={id:226}, in_service=false, can_update_version=false, start_service_tstamp_=0, error_count_for_change_leader_=0, last_error_tstamp_for_change_leader_=0) [2024-02-19 19:03:30.352147] INFO [STORAGE.TRANS] start_service (ob_tenant_weak_read_cluster_service.cpp:336) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=20] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] begin start service(tenant_id=1, is_in_service()=false, can_update_version=false) [2024-02-19 19:03:30.352156] INFO [STORAGE.TRANS] start_service (ob_tenant_weak_read_cluster_service.cpp:338) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8] start TenantWeakReadClusterService(tenant_id=1) [2024-02-19 19:03:30.353182] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E8-0-0] [lt=6] table not exist(tenant_id=1, database_id=201001, table_name=__all_weak_read_service, ret=-5019) [2024-02-19 19:03:30.353220] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E8-0-0] [lt=77] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_weak_read_service, ret=-5019) [2024-02-19 19:03:30.353270] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E8-0-0] [lt=37] fail to resolve table relation recursively(tenant_id=1, ret=-5019) [2024-02-19 19:03:30.353288] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E8-0-0] [lt=17] resolve table relation factor failed(ret=-5019, table_name=__all_weak_read_service) [2024-02-19 19:03:30.353331] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E8-0-0] [lt=38] fail to resolve table(ret=-5019) [2024-02-19 19:03:30.353342] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E8-0-0] [lt=10] fail to resolve sys view(ret=-5019) [2024-02-19 19:03:30.353380] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E8-0-0] [lt=31] Table 'oceanbase.__all_weak_read_service' doesn't exist [2024-02-19 19:03:30.353397] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E8-0-0] [lt=15] resolve base or alias table factor failed(ret=-5019) [2024-02-19 19:03:30.353406] WARN [SQL.RESV] resolve_table 
(ob_dml_resolver.cpp:1932) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E8-0-0] [lt=8] resolve basic table failed(ret=-5019) [2024-02-19 19:03:30.353437] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E8-0-0] [lt=29] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-02-19 19:03:30.353453] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E8-0-0] [lt=15] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-02-19 19:03:30.353470] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E8-0-0] [lt=11] resolve normal query failed(ret=-5019) [2024-02-19 19:03:30.353500] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E8-0-0] [lt=28] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073) [2024-02-19 19:03:30.353530] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E8-0-0] [lt=19] failed to resolve(ret=-5019) [2024-02-19 19:03:30.353560] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E8-0-0] [lt=29] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:30.353569] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E8-0-0] [lt=7] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:30.353576] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E8-0-0] [lt=6] fail to handle physical plan(ret=-5019) [2024-02-19 19:03:30.353589] WARN [SQL] stmt_query (ob_sql.cpp:175) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E8-0-0] [lt=11] fail to handle text query(stmt=select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = '', ret=-5019) [2024-02-19 19:03:30.353610] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E8-0-0] [lt=19] executor execute failed(ret=-5019) [2024-02-19 19:03:30.353618] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E8-0-0] [lt=7] execute failed(ret=-5019, executor={ObIExecutor:, sql:"select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = ''"}, retry_cnt=0) [2024-02-19 19:03:30.353637] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E8-0-0] [lt=15] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-02-19 19:03:30.353656] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E8-0-0] [lt=15] result set close failed(ret=-5019) [2024-02-19 19:03:30.353662] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E8-0-0] [lt=6] result set close failed(ret=-5019) [2024-02-19 19:03:30.353676] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) 
[1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E8-0-0] [lt=13] failed to close result(close_ret=-5019, ret=-5019) [2024-02-19 19:03:30.353717] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E8-0-0] [lt=16] failed to process record(executor={ObIExecutor:, sql:"select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = ''"}, record_ret=-5019, ret=-5019) [2024-02-19 19:03:30.353736] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E8-0-0] [lt=18] failed to process final(executor={ObIExecutor:, sql:"select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = ''"}, aret=-5019, ret=-5019) [2024-02-19 19:03:30.353749] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10] execute sql failed(ret=-5019, tenant_id=1, sql=select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = '') [2024-02-19 19:03:30.353759] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-02-19 19:03:30.353772] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1) [2024-02-19 19:03:30.353783] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10] query failed(ret=-5019, conn=0x7fdd189bc050, start=1708340610352893, sql=select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = '') [2024-02-19 19:03:30.353795] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12] read failed(ret=-5019) [2024-02-19 19:03:30.353804] WARN [STORAGE.TRANS] query_cluster_version_range_ (ob_tenant_weak_read_cluster_service.cpp:192) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7] execute sql read fail(ret=-5019, ret="OB_TABLE_NOT_EXIST", exec_tenant_id=1, tenant_id=1, sql=select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = '') [2024-02-19 19:03:30.353901] WARN [STORAGE.TRANS] start_service (ob_tenant_weak_read_cluster_service.cpp:367) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11] query cluster version range from WRS table fail(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-02-19 19:03:30.353922] INFO [STORAGE.TRANS] start_service (ob_tenant_weak_read_cluster_service.cpp:415) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=20] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] start service done(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, in_service=false, leader_epoch=0, current_version=0, delta=1708340610353919, min_version=0, max_version=0, max_stale_time=5000000000, all_valid_server_count=0, total_time=1787, wlock_time=29, check_leader_time=3, query_version_time=0, persist_version_time=0) [2024-02-19 19:03:30.353944] WARN [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:781) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=21] start CLUSTER weak read service fail(ret=-5019, 
ret="OB_TABLE_NOT_EXIST", tenant_id=1) [2024-02-19 19:03:30.353953] INFO [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:791) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] [SELF_CHECK] done(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, need_start_service=true, need_stop_service=false, need_change_leader=false, is_in_service()=false, can_update_version=false, cur_leader_epoch=138, start_service_tstamp_=0, error_count_for_change_leader_=0, last_error_tstamp_for_change_leader_=0) [2024-02-19 19:03:30.354072] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12] get wrs ts(ls_id={id:1}, delta_ns=-1706042771804943237, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807) [2024-02-19 19:03:30.354105] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=30] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1) [2024-02-19 19:03:30.355098] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=30] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.355132] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.356727] INFO [COMMON] handle (memory_dump.cpp:402) [1106682][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=19] handle dump task(task={type_:2, dump_all_:false, p_context_:null, slot_idx_:0, dump_tenant_ctx_:false, tenant_id_:0, ctx_id_:0, p_chunk_:null}) [2024-02-19 19:03:30.363778] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=19] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340610363761}) [2024-02-19 19:03:30.363819] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=42] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340610311435}}) [2024-02-19 19:03:30.365262] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] 
[lt=27] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.365304] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=41] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.375420] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.375475] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=57] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.376536] WARN [STORAGE.TRANS] acquire_global_snapshot__ (ob_trans_service_v4.cpp:1472) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78582-0-0] [lt=24] acquire global snapshot fail(ret=-4012, gts_ahead=0, expire_ts=1708340610375274, now={mts:1708340608446381}, now0={mts:1708340608446381}, snapshot=-1, uncertain_bound=0) [2024-02-19 19:03:30.376633] WARN [STORAGE.TRANS] get_read_snapshot (ob_tx_api.cpp:552) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78582-0-0] [lt=95] acquire global snapshot fail(ret=-4012, tx={this:0x7fdcd5abd000, tx_id:{txid:0}, state:1, addr:"172.1.3.242:2882", tenant_id:1, session_id:1, xid:NULL, access_mode:0, tx_consistency_type:0, isolation:1, snapshot_version:-1, snapshot_scn:0, active_scn:-1, op_sn:1, alloc_ts:1708340608444948, active_ts:-1, commit_ts:-1, finish_ts:-1, timeout_us:-1, lock_timeout_us:-1, expire_ts:9223372036854775807, coord_id:{id:-1}, parts:[], exec_info_reap_ts:0, commit_version:-1, commit_cb:null, cluster_id:-1, cluster_version:0, flags_.SHADOW:false, flags_.INTERRUPTED:false, flags_.BLOCK:false, flags_.REPLICA:false, can_elr:false, cflict_txs:[], abort_cause:0, commit_expire_ts:0, 
commit_task_.is_registered():false, ref:1}) [2024-02-19 19:03:30.376690] WARN [SQL.EXE] stmt_setup_snapshot_ (ob_sql_trans_control.cpp:614) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78582-0-0] [lt=46] fail to get snapshot(ret=-4012, local_ls_id={id:1}, session={this:0x7fdcf4ef40c0, id:1, tenant:"sys", tenant_id:1, effective_tenant:"sys", effective_tenant_id:1, database:"oceanbase", user:"root@%", consistency_level:3, session_state:0, tx:0x7fdcd5abd000}) [2024-02-19 19:03:30.376722] WARN [SQL.EXE] start_stmt (ob_sql_trans_control.cpp:481) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78582-0-0] [lt=32] fail to exec stmt_setup_snapshot_(session, das_ctx, plan, plan_ctx, txs)(ret=-4012, session_id=1, *tx_desc={this:0x7fdcd5abd000, tx_id:{txid:0}, state:1, addr:"172.1.3.242:2882", tenant_id:1, session_id:1, xid:NULL, access_mode:0, tx_consistency_type:0, isolation:1, snapshot_version:-1, snapshot_scn:0, active_scn:-1, op_sn:1, alloc_ts:1708340608444948, active_ts:-1, commit_ts:-1, finish_ts:-1, timeout_us:-1, lock_timeout_us:-1, expire_ts:9223372036854775807, coord_id:{id:-1}, parts:[], exec_info_reap_ts:0, commit_version:-1, commit_cb:null, cluster_id:-1, cluster_version:0, flags_.SHADOW:false, flags_.INTERRUPTED:false, flags_.BLOCK:false, flags_.REPLICA:false, can_elr:false, cflict_txs:[], abort_cause:0, commit_expire_ts:0, commit_task_.is_registered():false, ref:1}) [2024-02-19 19:03:30.376754] INFO [SQL.EXE] start_stmt (ob_sql_trans_control.cpp:530) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78582-0-0] [lt=29] start stmt(ret=-4012, auto_commit=true, session_id=1, snapshot={this:0x7fdd2afcbab0, valid:false, source:0, core:{version:-1, tx_id:{txid:0}, scn:-1}, uncertain_bound:0, snapshot_lsid:{id:-1}, parts:[]}, savepoint=0, tx_desc={this:0x7fdcd5abd000, tx_id:{txid:0}, state:1, addr:"172.1.3.242:2882", tenant_id:1, session_id:1, xid:NULL, access_mode:0, tx_consistency_type:0, isolation:1, snapshot_version:-1, snapshot_scn:0, active_scn:-1, op_sn:1, alloc_ts:1708340608444948, active_ts:-1, commit_ts:-1, finish_ts:-1, timeout_us:-1, lock_timeout_us:-1, expire_ts:9223372036854775807, coord_id:{id:-1}, parts:[], exec_info_reap_ts:0, commit_version:-1, commit_cb:null, cluster_id:-1, cluster_version:0, flags_.SHADOW:false, flags_.INTERRUPTED:false, flags_.BLOCK:false, flags_.REPLICA:false, can_elr:false, cflict_txs:[], abort_cause:0, commit_expire_ts:0, commit_task_.is_registered():false, ref:1}, plan_type=1, stmt_type=1, has_for_update=false, query_start_time=1708340608446160, use_das=false, session={this:0x7fdcf4ef40c0, id:1, tenant:"sys", tenant_id:1, effective_tenant:"sys", effective_tenant_id:1, database:"oceanbase", user:"root@%", consistency_level:3, session_state:0, tx:0x7fdcd5abd000}, plan=0x7fdcda010050, consistency_level_in_plan_ctx=3, trans_result={incomplete:false, parts:[], touched_ls_list:[], cflict_txs:[]}) [2024-02-19 19:03:30.376813] WARN [SQL] start_stmt (ob_result_set.cpp:282) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78582-0-0] [lt=55] fail to start stmt(ret=-4012, phy_plan->get_dependency_table()=[{table_id:1, schema_version:0, object_type:1, is_db_explicit:false, is_existed:true}]) [2024-02-19 19:03:30.376839] WARN [SQL] do_open_plan (ob_result_set.cpp:451) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78582-0-0] [lt=23] fail start stmt(ret=-4012) [2024-02-19 19:03:30.376852] WARN [SQL] open (ob_result_set.cpp:150) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78582-0-0] [lt=11] execute plan failed(ret=-4012) [2024-02-19 
19:03:30.376862] WARN [SERVER] open (ob_inner_sql_result.cpp:146) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78582-0-0] [lt=7] open result set failed(ret=-4012) [2024-02-19 19:03:30.376874] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:607) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78582-0-0] [lt=9] result set open failed(ret=-4012, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}) [2024-02-19 19:03:30.376892] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78582-0-0] [lt=17] execute failed(ret=-4012, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=0) [2024-02-19 19:03:30.376907] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78582-0-0] [lt=11] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-4012, err_:"OB_TIMEOUT", retry_type:0, client_ret:-4012}, need_retry=false) [2024-02-19 19:03:30.377109] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78582-0-0] [lt=40] result set close failed(ret=-4012) [2024-02-19 19:03:30.377270] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78582-0-0] [lt=121] result set close failed(ret=-4012) [2024-02-19 19:03:30.377283] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78582-0-0] [lt=52] failed to close result(close_ret=-4012, ret=-4012) [2024-02-19 19:03:30.377388] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78582-0-0] [lt=7] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4012, ret=-4012) [2024-02-19 19:03:30.377407] INFO [SERVER] process_final (ob_inner_sql_connection.cpp:574) [1106708][SerScheQueue1][T0][YB42AC0103F2-000611B922B78582-0-0] [lt=14] slow inner sql(last_ret=-4012, sql={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, process_time=1931244) [2024-02-19 19:03:30.377460] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1106708][SerScheQueue1][T0][YB42AC0103F2-000611B922B78582-0-0] [lt=53] failed to process final(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, aret=-4012, ret=-4012) [2024-02-19 19:03:30.377526] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1106708][SerScheQueue1][T0][YB42AC0103F2-000611B922B78582-0-0] [lt=62] execute sql failed(ret=-4012, tenant_id=1, sql=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-02-19 19:03:30.377570] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1106708][SerScheQueue1][T0][YB42AC0103F2-000611B922B78582-0-0] [lt=42] retry_while_no_tenant_resource failed(ret=-4012, tenant_id=1) [2024-02-19 19:03:30.377578] WARN [SERVER] 
execute_read (ob_inner_sql_connection.cpp:1943) [1106708][SerScheQueue1][T0][YB42AC0103F2-000611B922B78582-0-0] [lt=8] execute_read failed(ret=-4012, cluster_id=1, tenant_id=1) [2024-02-19 19:03:30.377587] WARN [COMMON.MYSQLP] read_without_check_sys_variable (ob_sql_client_decorator.cpp:119) [1106708][SerScheQueue1][T0][YB42AC0103F2-000611B922B78582-0-0] [lt=6] failed to read without check sys variable(ret=-4012, sql="SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name", tenant_id=1, check_sys_variable=false, snapshot_timestamp=-1) [2024-02-19 19:03:30.377599] WARN [SHARE] load (ob_core_table_proxy.cpp:436) [1106708][SerScheQueue1][T0][YB42AC0103F2-000611B922B78582-0-0] [lt=9] execute sql failed(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, sql=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-02-19 19:03:30.377754] WARN [SHARE] load (ob_core_table_proxy.cpp:368) [1106708][SerScheQueue1][T0][YB42AC0103F2-000611B922B78582-0-0] [lt=9] load failed(ret=-4012, for_update=false) [2024-02-19 19:03:30.377765] WARN [SHARE] get (ob_global_stat_proxy.cpp:321) [1106708][SerScheQueue1][T0][YB42AC0103F2-000611B922B78582-0-0] [lt=10] core_table load failed(ret=-4012) [2024-02-19 19:03:30.377772] WARN [SHARE] get_baseline_schema_version (ob_global_stat_proxy.cpp:287) [1106708][SerScheQueue1][T0][YB42AC0103F2-000611B922B78582-0-0] [lt=7] get failed(ret=-4012) [2024-02-19 19:03:30.377780] WARN [SHARE.SCHEMA] get_baseline_schema_version (ob_schema_service_sql_impl.cpp:795) [1106708][SerScheQueue1][T0][YB42AC0103F2-000611B922B78582-0-0] [lt=6] get_baseline_schema_version failed(ret=-4012, ret="OB_TIMEOUT", schema_status={tenant_id:1, snapshot_timestamp:-1, readable_schema_version:-1}) [2024-02-19 19:03:30.377790] WARN [SHARE.SCHEMA] get_baseline_schema_version (ob_multi_version_schema_service.cpp:4009) [1106708][SerScheQueue1][T0][YB42AC0103F2-000611B922B78582-0-0] [lt=9] get baseline schema version failed(ret=-4012, ret="OB_TIMEOUT", schema_status={tenant_id:1, snapshot_timestamp:-1, readable_schema_version:-1}) [2024-02-19 19:03:30.377798] WARN [SERVER] try_load_baseline_schema_version_ (ob_server_schema_updater.cpp:512) [1106708][SerScheQueue1][T0][YB42AC0103F2-000611B922B78582-0-0] [lt=7] fail to update baseline schema version(tmp_ret=-4012, tmp_ret="OB_TIMEOUT", *tenant_id=1) [2024-02-19 19:03:30.377810] WARN [SERVER] batch_process_tasks (ob_server_schema_updater.cpp:229) [1106708][SerScheQueue1][T0][YB42AC0103F2-000611B922B78582-0-0] [lt=7] fail to process refresh task(ret=-4023, ret="OB_EAGAIN", tasks.at(0)={type:1, did_retry:true, schema_info:{schema_version:-1, tenant_id:0, sequence_id:18446744073709551615}}) [2024-02-19 19:03:30.377822] WARN [SERVER] batch_process_tasks (ob_uniq_task_queue.h:498) [1106708][SerScheQueue1][T0][Y0-0000000000000000-0-0] [lt=10] fail to batch process task(ret=-4023) [2024-02-19 19:03:30.377829] WARN [SERVER] run1 (ob_uniq_task_queue.h:449) [1106708][SerScheQueue1][T0][Y0-0000000000000000-0-0] [lt=6] fail to batch execute task(ret=-4023, tasks.count()=1) [2024-02-19 19:03:30.385614] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", 
disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.385670] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=58] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.396153] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=36] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.396204] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=52] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.406359] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=38] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.406411] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=56] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.412198] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=16] 
ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:30.412234] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=37] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340610412182}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:30.412260] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=22] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340610412182}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:30.416551] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=44] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.416601] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=52] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.426821] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.426877] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=58] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.428546] INFO [STORAGE.TRANS] dump_mapper_info (ob_lock_wait_mgr.h:63) [1108319][T1_LockWaitMgr][T1][Y0-0000000000000000-0-0] [lt=37] report RowHolderMapper summary info(count=0, bkt_cnt=252) [2024-02-19 19:03:30.434011] INFO handle (memory_dump.cpp:512) [1106682][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=30] statistics: tenant_cnt: 9, max_chunk_cnt: 524288 
tenant_id  ctx_id  chunk_cnt  label_cnt  segv_cnt
        1       0        714         77         0
        1       7          1          1         0
        1      23          1          2         0
        1      28         34          2         0
        1      32          5          4         0
        1      33          1          1         0
        1      34          1          1         0
      500       0        182        251         0
      500      26         43          1         0
      500      28         33          2         0
      500      29          4          1         0
      500      30          6          2         0
      506       0          1          1         0
      506      28         12          2         0
      507       0          1          1         0
      507      28          5          2         0
      508       0          1          1         0
      508      28         15          2         0
      509       0          1          1         0
      509      28          5          2         0
      510       0          1          1         0
      510      28          5          2         0
      512       0          1          1         0
      512      28          5          2         0
      999       0          1          1         0
      999      28          5          2         0
cost_time: 77200
[2024-02-19 19:03:30.436962] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.436994] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=33] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.440014] INFO [COMMON] replace_fragment_node (ob_kvcache_map.cpp:695) [1106689][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=81] Cache replace map node details(ret=0, replace_node_count=0, replace_time=19222, replace_start_pos=959408, replace_num=15728) [2024-02-19 19:03:30.442277] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=18] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:30.442302] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=22] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:30.442320] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=17] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:30.442569] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=19] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:30.442594] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=24] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:30.442607] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=13] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:30.442924] INFO [STORAGE.TRANS] get_number
(ob_timestamp_access.cpp:49) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=10] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:30.442944] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=20] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:30.442958] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=12] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:30.443223] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=10] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:30.443242] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=19] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:30.443253] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=9] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:30.444041] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=18] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:30.444070] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=27] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:30.444080] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=10] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:30.444223] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=7] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:30.444236] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=11] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:30.444245] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=7] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:30.444853] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=7] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:30.444877] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=23] global_timestamp_service get gts 
fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:30.444890] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=12] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:30.444952] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=18] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:30.444970] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=16] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:30.444981] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=9] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:30.445501] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=18] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:30.445523] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=21] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:30.445536] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=12] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:30.445738] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=8] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:30.445757] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=17] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:30.445768] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=10] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:30.445843] INFO [SERVER] try_reload_schema (ob_server_schema_updater.cpp:435) [1108363][LeaseHB][T0][Y0-0000000000000000-0-0] [lt=8] schedule fetch new schema task(ret=0, ret="OB_SUCCESS", schema_info={schema_version:-1, tenant_id:0, sequence_id:18446744073709551615}) [2024-02-19 19:03:30.445866] INFO [SERVER] do_heartbeat_event (ob_heartbeat.cpp:188) [1108363][LeaseHB][T0][Y0-0000000000000000-0-0] [lt=23] try reload schema success(schema_version=1, refresh_schema_info={schema_version:-1, tenant_id:0, sequence_id:18446744073709551615}, schema_ret=0) [2024-02-19 19:03:30.445961] INFO [SERVER] process_refresh_task (ob_server_schema_updater.cpp:254) [1106708][SerScheQueue1][T0][YB42AC0103F2-000611B922B78583-0-0] [lt=6] [REFRESH_SCHEMA] start to process schema refresh task(ret=0, ret="OB_SUCCESS", 
schema_info={schema_version:-1, tenant_id:0, sequence_id:18446744073709551615}) [2024-02-19 19:03:30.446046] WARN [SERVER] process_refresh_task (ob_server_schema_updater.cpp:267) [1106708][SerScheQueue1][T0][YB42AC0103F2-000611B922B78583-0-0] [lt=83] rootservice is not in full service, try again(ret=-4023, ret="OB_EAGAIN", GCTX.root_service_->in_service()=true, GCTX.root_service_->is_full_service()=false) [2024-02-19 19:03:30.446566] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=18] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:30.446592] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=24] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:30.446605] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=13] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:30.446650] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=16] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:30.446661] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=10] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:30.446669] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=7] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:30.447122] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=25] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.447156] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.447226] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=17] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:30.447252] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ 
(ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=25] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.447265] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=11] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038)
[2024-02-19 19:03:30.447310] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=12] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0)
[2024-02-19 19:03:30.447327] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=15] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.447338] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=10] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038)
[2024-02-19 19:03:30.447866] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=16] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0)
[2024-02-19 19:03:30.447885] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=18] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.447887] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:527) [1106788][RpcIO][T0][Y0-0000000000000000-0-0] [lt=19] [RPC EASY STAT](log_str=conn count=1/1, request done=19503/19503, request doing=0/0)
[2024-02-19 19:03:30.447897] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=10] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038)
[2024-02-19 19:03:30.447932] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=15] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0)
[2024-02-19 19:03:30.447947] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=13] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.447958] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=9] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038)
[2024-02-19 19:03:30.447828] INFO [STORAGE.TRANS] in_leader_serving_state (ob_trans_ctx_mgr_v4.cpp:881) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=13] ObLSTxCtxMgr not master(this={this:0x7fdce3204030, ls_id:{id:1}, tenant_id:1, state:"F_WORKING", total_tx_ctx_count:0, leader_takeover_ts:{mts:0}, is_leader_serving:false, max_replay_commit_version:1707751112415295196, ls_retain_ctx_mgr:{retain_ctx_list_.size():0}, aggre_rec_log_ts:-1, prev_aggre_rec_log_ts:-1, online_ts:0, uref:1073741826})
[2024-02-19 19:03:30.448194] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=357] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.448494] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:527) [1106789][RpcIO][T0][Y0-0000000000000000-0-0] [lt=19] [RPC EASY STAT](log_str=conn count=1/1, request done=19502/19502, request doing=0/0)
[2024-02-19 19:03:30.448507] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=17] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.448529] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=8] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.448898] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=74] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.449197] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=12] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.449230] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=17] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.449522] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=22] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.449810] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=19] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.449846] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=9] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.450127] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=18] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.450598] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=22] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.450629] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=213] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.450728] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=19] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.451207] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=18] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.451244] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=11] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.451325] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=22] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.451831] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=23] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.451869] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=11] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.451963] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=14] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.452458] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=24] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.452503] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=12] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.452584] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=14] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.453182] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=16] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.453204] WARN [STORAGE.TRANS] get_cluster_service_master_ (ob_tenant_weak_read_service.cpp:287) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=21] tenant schema is not ready, need wait(ret=-4076, ret="OB_NEED_WAIT", superior_tenant_id=1)
[2024-02-19 19:03:30.453221] WARN [STORAGE.TRANS] post_cluster_heartbeat_rpc_ (ob_tenant_weak_read_service.cpp:797) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16] get cluster service master fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, cluster_service_tablet_id={id:226})
[2024-02-19 19:03:30.453234] WARN [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:438) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13] tenant weak read service cluster heartbeat RPC fail(rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.1.3.242:2882", cluster_service_tablet_id={id:226})
[2024-02-19 19:03:30.453260] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340610453247})
[2024-02-19 19:03:30.453275] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:756) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version=1708336686671726824, valid_part_count=1, total_part_count=1, generate_timestamp=1708340610453190)
[2024-02-19 19:03:30.453286] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:766) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1708340610352102, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-02-19 19:03:30.453349] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11] get wrs ts(ls_id={id:1}, delta_ns=-1706042771804844610, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807)
[2024-02-19 19:03:30.453366] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1)
[2024-02-19 19:03:30.453809] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=28] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.454425] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=28] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.455040] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=21] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.455040] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=30] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.455080] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=13] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.455623] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=28] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.455640] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=19] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.455668] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=12] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.456198] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=25] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.456254] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=11] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.456250] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=20] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.456878] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=19] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.457489] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=24] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.458239] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=47] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:30.458273] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:30.458335] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=23] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.458358] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=12] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.458472] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=25] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.459068] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=18] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.459834] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=19] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.459966] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=13] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.460000] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=9] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.460548] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=22] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.460582] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=11] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.460625] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=31] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.461174] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=19] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.461208] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=11] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.461312] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=35] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.461844] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=22] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.461874] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=11] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.461924] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=18] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.462447] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=19] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.462477] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=11] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.462585] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=18] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.463115] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=20] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.463542] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=7] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.464083] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=22] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.464135] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=39] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340610464122})
[2024-02-19 19:03:30.464178] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=41] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340610412182}})
[2024-02-19 19:03:30.464350] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=15] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.464970] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=28] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.466291] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=24] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.466334] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=25] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.466391] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=23] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.466870] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=28] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.466926] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=15] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.467008] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=28] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.467457] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=21] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.467545] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=17] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.468299] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=17] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.468338] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=14] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.468419] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=32] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:30.468451] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=30] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:30.468965] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=25] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.468980] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=17] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.468997] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=11] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.469540] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=23] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.469579] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=8] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.469579] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=16] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.470124] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=26] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.470162] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=12] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.470178] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=14] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.470694] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=17] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.470746] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=11] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.471287] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=17] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.471329] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=10] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.471438] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=19] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:30.478661] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:30.478712] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=54] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:30.491825] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:30.491873] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=51] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:30.502240] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:30.502290] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=54] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:30.512607] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=47] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:30.512644] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=39] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:30.512895] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=13] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0)
[2024-02-19 19:03:30.512918] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=22] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340610512882}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:30.512959] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=38] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340610512882}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:30.521876] INFO [SQL.PC] update_memory_conf (ob_plan_cache.cpp:1499) [1106739][PlanCacheEvict][T1][Y0-0000000000000000-0-0] [lt=18] update plan cache memory config(ob_plan_cache_percentage=5, ob_plan_cache_evict_high_percentage=90, ob_plan_cache_evict_low_percentage=50, tenant_id=1)
[2024-02-19 19:03:30.521934] INFO [SQL.PC] cache_evict (ob_plan_cache.cpp:1130) [1106739][PlanCacheEvict][T1][Y0-0000000000000000-0-0] [lt=56] start lib cache evict(tenant_id=1, mem_hold=2097152, mem_limit=107374180, cache_obj_num=2, cache_node_num=2)
[2024-02-19 19:03:30.521951] INFO [SQL.PC] cache_evict (ob_plan_cache.cpp:1147) [1106739][PlanCacheEvict][T1][Y0-0000000000000000-0-0] [lt=15] end lib cache evict(tenant_id=1, cache_evict_num=0, mem_hold=2097152, mem_limit=107374180, cache_obj_num=2, cache_node_num=2)
[2024-02-19 19:03:30.522723] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=27] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:30.522751] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=29] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:30.525407] INFO [SQL.PC] runTimerTask (ob_plan_cache_manager.cpp:291) [1106739][PlanCacheEvict][T0][Y0-0000000000000000-0-0] [lt=12] schedule next cache evict task(evict_interval=1000000)
[2024-02-19 19:03:30.528767] INFO [SQL.PC] runTimerTask (ob_plan_cache_manager.cpp:299) [1106739][PlanCacheEvict][T0][Y0-0000000000000000-0-0] [lt=25] schedule next cache evict task(evict_interval=1000000)
[2024-02-19 19:03:30.532867] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=25] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:30.532906] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=42] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:30.538850] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1009) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=37] Wash compute wash size(is_wash_valid=true, sys_total_wash_size=1386106880, global_cache_size=4161536, tenant_max_wash_size=4161536, tenant_min_wash_size=4161536, tenant_ids_=[512, 500, 999, 506, 507, 508, 509, 510, 1])
[2024-02-19 19:03:30.538947] INFO [COMMON] wash (ob_kvcache_store.cpp:343) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=38] Wash time detail, (compute_wash_size_time=140, refresh_score_time=56, wash_time=5)
[2024-02-19 19:03:30.543038] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=50] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:30.543078] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=42] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:30.544414] INFO [ARCHIVE] stop (ob_archive_scheduler_service.cpp:137) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=12] stop archive scheduler service
[2024-02-19 19:03:30.545797] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EA9-0-0] [lt=8] table not exist(tenant_id=1, database_id=201001, table_name=__all_backup_info, ret=-5019)
[2024-02-19 19:03:30.545831] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EA9-0-0] [lt=34] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_backup_info, ret=-5019)
[2024-02-19 19:03:30.545841] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EA9-0-0] [lt=9] fail to resolve table relation recursively(tenant_id=1, ret=-5019)
[2024-02-19 19:03:30.545850] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EA9-0-0] [lt=7] resolve table relation factor failed(ret=-5019, table_name=__all_backup_info)
[2024-02-19 19:03:30.545859] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EA9-0-0] [lt=7] fail to resolve table(ret=-5019)
[2024-02-19 19:03:30.545866] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EA9-0-0] [lt=6] fail to resolve sys view(ret=-5019)
[2024-02-19 19:03:30.545876] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EA9-0-0] [lt=5] Table 'oceanbase.__all_backup_info' doesn't exist
[2024-02-19 19:03:30.545883] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EA9-0-0] [lt=6] resolve basic table failed(ret=-5019)
[2024-02-19 19:03:30.545890] WARN [SQL.RESV] resolve_table_list (ob_update_resolver.cpp:423) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EA9-0-0] [lt=6] failed to resolve table(ret=-5019)
[2024-02-19 19:03:30.545908] WARN [SQL.RESV] resolve (ob_update_resolver.cpp:76) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EA9-0-0] [lt=18] resolve table failed(ret=-5019)
[2024-02-19 19:03:30.545916] WARN [SQL.RESV] stmt_resolver_func (ob_resolver.cpp:155) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EA9-0-0] [lt=7] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3074)
[2024-02-19 19:03:30.545929] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EA9-0-0] [lt=8] failed to resolve(ret=-5019)
[2024-02-19 19:03:30.545937] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EA9-0-0] [lt=7] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-02-19 19:03:30.545946] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EA9-0-0] [lt=7] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-02-19 19:03:30.545963] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EA9-0-0] [lt=17] fail to handle physical plan(ret=-5019)
[2024-02-19 19:03:30.545983] WARN [SQL] stmt_query (ob_sql.cpp:175) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EA9-0-0] [lt=17] fail to handle text query(stmt=update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882', ret=-5019)
[2024-02-19 19:03:30.545994] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EA9-0-0] [lt=9] executor execute failed(ret=-5019)
[2024-02-19 19:03:30.546005] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EA9-0-0] [lt=9] execute failed(ret=-5019, executor={ObIExecutor:, sql:"update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882'"}, retry_cnt=0)
[2024-02-19 19:03:30.546039] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EA9-0-0] [lt=28] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false)
[2024-02-19 19:03:30.546059] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EA9-0-0] [lt=16] result set close failed(ret=-5019)
[2024-02-19 19:03:30.546078] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EA9-0-0] [lt=18] result set close failed(ret=-5019)
[2024-02-19 19:03:30.546087] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EA9-0-0] [lt=9] failed to close result(close_ret=-5019, ret=-5019)
[2024-02-19 19:03:30.546118] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EA9-0-0] [lt=9] failed to process record(executor={ObIExecutor:, sql:"update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882'"}, record_ret=-5019, ret=-5019)
[2024-02-19 19:03:30.546150] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1106733][BackupLease][T0][YB42AC0103F2-000611B923978EA9-0-0] [lt=12] failed to process final(executor={ObIExecutor:, sql:"update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882'"}, aret=-5019, ret=-5019)
[2024-02-19 19:03:30.546160] WARN [SERVER] execute_write_inner (ob_inner_sql_connection.cpp:1818) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=8] execute sql failed(ret=-5019, tenant_id=1, sql=update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882')
[2024-02-19 19:03:30.546169] INFO [SERVER] execute_write_inner (ob_inner_sql_connection.cpp:1900) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=8] execute write sql(ret=-5019, tenant_id=1, affected_rows=0, sql=update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882')
[2024-02-19 19:03:30.546217] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=7] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1)
[2024-02-19 19:03:30.546225] WARN [SERVER] execute_write (ob_inner_sql_connection.cpp:1786) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=8] execute_write failed(ret=-5019, tenant_id=1, sql=update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882', is_user_sql=false)
[2024-02-19 19:03:30.546233] WARN [SERVER] execute_write (ob_inner_sql_connection.cpp:1775) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=8] execute_write failed(ret=-5019, tenant_id=1, sql="update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882'")
[2024-02-19 19:03:30.546242] WARN [COMMON.MYSQLP] write (ob_mysql_proxy.cpp:133) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=7] execute sql failed(ret=-5019, conn=0x7fdcdc89a050, start=1708340610544551, sql=update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882')
[2024-02-19 19:03:30.546293] WARN [SERVER] clean_backup_scheduler_leader (ob_backup_operator.cpp:348) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=9] execute sql failed(ret=-5019, sql=update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882')
[2024-02-19 19:03:30.546302] WARN [SERVER] clean_backup_scheduler_leader (ob_backup_manager.cpp:517) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=9] failed to clean backup scheduler leader(ret=-5019)
[2024-02-19 19:03:30.553205] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=40] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:30.553250] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=49] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:30.553350] WARN [STORAGE.TRANS] post_cluster_heartbeat_rpc_ (ob_tenant_weak_read_service.cpp:797) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15] get cluster service master fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, cluster_service_tablet_id={id:226})
[2024-02-19 19:03:30.553382] WARN [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:438) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=32] tenant weak read service cluster heartbeat RPC fail(rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.1.3.242:2882", cluster_service_tablet_id={id:226})
[2024-02-19 19:03:30.553403] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:756) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version=1708336686671726824, valid_part_count=1, total_part_count=1, generate_timestamp=1708340610553329)
[2024-02-19 19:03:30.553418] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:766) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1708340610453297, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-02-19 19:03:30.553497] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14] get wrs ts(ls_id={id:1}, delta_ns=-1706042771804742388, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807)
[2024-02-19 19:03:30.553520] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=20] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1)
[2024-02-19 19:03:30.554862] INFO [STORAGE] gc_tables_in_queue (ob_tenant_meta_mem_mgr.cpp:351) [1107037][T1_T3mGC][T1][Y0-0000000000000000-0-0] [lt=52] Recycle 0 table(ret=0, allocator_={used:2532285, total:3058518}, tablet_pool_={typeid(T).name():"N9oceanbase7storage8ObTabletE", sizeof(T):2432, used_obj_cnt:980, free_obj_hold_cnt:1, allocator used:2448576, allocator total:2485504}, sstable_pool_={typeid(T).name():"N9oceanbase12blocksstable9ObSSTableE", sizeof(T):1024, used_obj_cnt:2027, free_obj_hold_cnt:2, allocator used:2207552, allocator total:2289280}, memtable_pool_={typeid(T).name():"N9oceanbase8memtable10ObMemtableE", sizeof(T):1856, used_obj_cnt:0, free_obj_hold_cnt:0, allocator used:0, allocator total:0}, tablet count=980, min_minor_cnt=0, pinned_tablet_cnt=0)
[2024-02-19 19:03:30.563368] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=32] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:30.563412] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=47] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:30.564419] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=21] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340610564408})
[2024-02-19 19:03:30.564437] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=18] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340610512882}})
[2024-02-19 19:03:30.573543] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=27] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:30.573600] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=59] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:30.579140] WARN [COORDINATOR] get_ls_election_reference_info (ob_leader_coordinator.cpp:138) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC74-0-0] [lt=118] can not find this ls_id in all_ls_election_reference_info_(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1}, all_ls_election_reference_info=[])
[2024-02-19 19:03:30.579170] WARN [COORDINATOR] refresh_ (election_priority_v1.cpp:144) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC74-0-0] [lt=31] fail to get ls election reference info(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, *this={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807})
[2024-02-19 19:03:30.579188] WARN [COORDINATOR] operator() (election_priority_impl.cpp:246) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC74-0-0] [lt=18] refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id_={id:1}, element={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807})
[2024-02-19 19:03:30.579201] WARN iterate (ob_tuple.h:272) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC74-0-0] [lt=11] assign element failed(ret=-4018, std::get(tuple)={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807})
[2024-02-19 19:03:30.579212] WARN [COORDINATOR] refresh (election_priority_impl.cpp:261) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC74-0-0] [lt=10] refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1}, *this={priority:{is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}})
[2024-02-19 19:03:30.583831] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=36] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:30.583872] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=44] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:30.584437] INFO [COMMON] clean_garbage_node (ob_kvcache_map.cpp:645) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=10] Cache wash clean map node details(ret=0, clean_node_count=0, clean_time=45475, clean_start_pos=503312, clean_num=31457)
[2024-02-19 19:03:30.594085] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:30.594130] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=48] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:30.604286] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=42] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:30.604333] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=50] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:30.613595] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=14] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0)
[2024-02-19 19:03:30.613634] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=38] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340610613573}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:30.613659] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=24] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340610613573}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:30.614476] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=33] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:30.614509] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:30.624605] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=33] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:30.624655] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=52] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a
0x7fdd4c1eddc3 [2024-02-19 19:03:30.634995] INFO [STORAGE] scheduler_ls_ha_handler_ (ob_storage_ha_service.cpp:186) [1108342][T1_HAService][T1][Y0-0000000000000000-0-0] [lt=22] start do ls ha handler(ls_id_array_=[{id:1}]) [2024-02-19 19:03:30.644402] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.644439] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=40] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.653502] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=20] get wrs ts(ls_id={id:1}, delta_ns=-1706042771804642753, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807) [2024-02-19 19:03:30.653540] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=40] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1) [2024-02-19 19:03:30.655651] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=26] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.655692] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=42] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.659231] INFO [PALF] log_loop_ (log_loop_thread.cpp:106) [1107532][T1_LogLoop][T1][Y0-0000000000000000-0-0] [lt=48] 
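The T1_PalfGC ERROR above is the proximate fault. Tenant 1's clog quota is 2048 MB; block recycling starts at the 80% threshold (2048 × 0.80 ≈ 1638 MB, the logged warn_size) and writes stop at the 95% limit (2048 × 0.95 ≈ 1945 MB, the logged limit_size). used_size has reached exactly 1945 MB, so PALF refuses further writes, and with only one log stream whose base LSN never advances (oldest_timestamp 1707200283752293320, roughly 2024-02-06), the GC thread finds nothing to recycle. A minimal triage sketch in SQL, assuming an OceanBase 4.x sys-tenant session; the unit-config name sys_unit_config is an assumption, so list the configs first:

    SHOW PARAMETERS LIKE 'log_disk%';              -- server-level log-disk size and thresholds
    SELECT * FROM oceanbase.DBA_OB_UNIT_CONFIGS;   -- per-unit log-disk quotas (4.x view)
    -- If the 2 GB quota is confirmed, enlarging it lifts the 95% stop-writing ceiling:
    ALTER RESOURCE UNIT sys_unit_config LOG_DISK_SIZE '4G';  -- hypothetical unit-config name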
[2024-02-19 19:03:30.664282] INFO [COMMON] replace_fragment_node (ob_kvcache_map.cpp:695) [1106689][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=50] Cache replace map node details(ret=0, replace_node_count=0, replace_time=24128, replace_start_pos=975136, replace_num=15728)
[2024-02-19 19:03:30.665792] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=12] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340610665770})
[2024-02-19 19:03:30.665822] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=30] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340610613573}})
[T1_PalfGC WARN/ERROR pair repeats: 19:03:30.665875/.665906]
[2024-02-19 19:03:30.666174] INFO do_work (ob_rl_mgr.cpp:704) [1106705][rl_mgr0][T0][Y0-0000000000000000-0-0] [lt=30] swc wakeup.(stat_period_=1000000, ready=false)
[2024-02-19 19:03:30.668972] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:589) [1106795][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=22] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/6, request doing=0/0)
[2024-02-19 19:03:30.668972] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:589) [1106796][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=18] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/1, request doing=0/0)
[2024-02-19 19:03:30.669611] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:565) [1106791][BatchIO][T0][Y0-0000000000000000-0-0] [lt=20] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-02-19 19:03:30.669908] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:589) [1106798][MysqlUnix][T0][Y0-0000000000000000-0-0] [lt=21] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-02-19 19:03:30.670303] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:565) [1106792][BatchIO][T0][Y0-0000000000000000-0-0] [lt=14] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-02-19 19:03:30.670329] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:527) [1106800][RpcUnix][T0][Y0-0000000000000000-0-0] [lt=9] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-02-19 19:03:30.671665] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:565) [1106793][BatchIO][T0][Y0-0000000000000000-0-0] [lt=8] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[T1_PalfGC WARN/ERROR pair repeats: 19:03:30.685832/.686142, 19:03:30.696296/.696355, 19:03:30.706510/.706559]
[2024-02-19 19:03:30.712683] INFO [SERVER] decide_disk_size (ob_server_utils.cpp:202) [1106802][LogLoop][T0][Y0-0000000000000000-0-0] [lt=13] decide disk size finished(dir="/backup/oceanbase/data/sstable", suggested_disk_size=8589934592, suggested_disk_percentage=0, default_disk_percentage=60, total_space=246944890880, free_space=220974178304, disk_size=8589934592)
[2024-02-19 19:03:30.712758] INFO [SERVER] decide_disk_size (ob_server_utils.cpp:202) [1106802][LogLoop][T0][Y0-0000000000000000-0-0] [lt=72] decide disk size finished(dir="/backup/oceanbase/data/clog", suggested_disk_size=8589934592, suggested_disk_percentage=0, default_disk_percentage=30, total_space=246944890880, free_space=220974178304, disk_size=8589934592)
[2024-02-19 19:03:30.712772] INFO [SERVER] cal_all_part_disk_size (ob_server_utils.cpp:164) [1106802][LogLoop][T0][Y0-0000000000000000-0-0] [lt=21] decide_all_disk_size succ(data_dir="/backup/oceanbase/data/sstable", clog_dir="/backup/oceanbase/data/clog", suggested_data_disk_size=8589934592, suggested_data_disk_percentage=0, data_default_disk_percentage=60, clog_default_disk_percentage=30, shared_mode=true, data_disk_size=8589934592, log_disk_size=8589934592)
[2024-02-19 19:03:30.715841] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=12] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0)
[2024-02-19 19:03:30.715871] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=31] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340610715827}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:30.716601] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=724] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340610715827}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:30.716620] INFO [STORAGE.TRANS] statistics (ob_gts_source.cpp:70) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=17] gts statistics(tenant_id=1, gts_rpc_cnt=0, get_gts_cache_cnt=7291, get_gts_with_stc_cnt=19361, try_get_gts_cache_cnt=0, try_get_gts_with_stc_cnt=0, wait_gts_elapse_cnt=0, try_wait_gts_elapse_cnt=0)
[T1_PalfGC WARN/ERROR pair repeats: 19:03:30.717286/.717319, 19:03:30.727511/.727556]
[2024-02-19 19:03:30.730808] WARN [STORAGE.TRANS] acquire_global_snapshot__ (ob_trans_service_v4.cpp:1472) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=13] acquire global snapshot fail(ret=-4012, gts_ahead=0, expire_ts=1708340610729656, now={mts:1708340608800455}, now0={mts:1708340608800455}, snapshot=-1, uncertain_bound=0)
[2024-02-19 19:03:30.730856] WARN [STORAGE.TRANS] get_read_snapshot (ob_tx_api.cpp:552) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=48] acquire global snapshot fail(ret=-4012, tx={this:0x7fdcd591b5c0, tx_id:{txid:0}, state:1, addr:"172.1.3.242:2882", tenant_id:1, session_id:1, xid:NULL, access_mode:0, tx_consistency_type:0, isolation:1, snapshot_version:-1, snapshot_scn:0, active_scn:-1, op_sn:1, alloc_ts:1708340608799340, active_ts:-1, commit_ts:-1, finish_ts:-1, timeout_us:-1, lock_timeout_us:-1, expire_ts:9223372036854775807, coord_id:{id:-1}, parts:[], exec_info_reap_ts:0, commit_version:-1, commit_cb:null, cluster_id:-1, cluster_version:0, flags_.SHADOW:false, flags_.INTERRUPTED:false, flags_.BLOCK:false, flags_.REPLICA:false, can_elr:false, cflict_txs:[], abort_cause:0, commit_expire_ts:0, commit_task_.is_registered():false, ref:1})
[2024-02-19 19:03:30.730908] WARN [SQL.EXE] stmt_setup_snapshot_ (ob_sql_trans_control.cpp:614) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=43] fail to get snapshot(ret=-4012, local_ls_id={id:1}, session={this:0x7fdcf4e200c0, id:1, tenant:"sys", tenant_id:1, effective_tenant:"sys", effective_tenant_id:1, database:"oceanbase", user:"root@%", consistency_level:3, session_state:0, tx:0x7fdcd591b5c0})
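The -4038 (OB_NOT_MASTER) entries from TsMgr show that tenant 1's timestamp service has no usable leader: this observer reports itself a FOLLOWER, yet the only GTS candidate it knows is itself (leader="172.1.3.242:2882"). Global-snapshot acquisition therefore waits out the statement deadline: the request was created at mts 1708340608800455 and expired at 1708340610729656, about 1.93 s later, which surfaces as ret=-4012 (OB_TIMEOUT) in the chain that follows. With clog writes blocked by the disk-full condition, LS 1 cannot sustain leadership, so these timeouts and the PALF errors are one incident. A sketch for checking log-stream roles once inner SQL is serviceable, assuming the 4.x GV$OB_LOG_STAT view is available:

    SELECT * FROM oceanbase.GV$OB_LOG_STAT
     WHERE tenant_id = 1;   -- role/leader per log stream (assumed 4.x view)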
[2024-02-19 19:03:30.731011] WARN [SQL.EXE] start_stmt (ob_sql_trans_control.cpp:481) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=98] fail to exec stmt_setup_snapshot_(session, das_ctx, plan, plan_ctx, txs)(ret=-4012, session_id=1, *tx_desc={this:0x7fdcd591b5c0, tx_id:{txid:0}, state:1, addr:"172.1.3.242:2882", tenant_id:1, session_id:1, xid:NULL, access_mode:0, tx_consistency_type:0, isolation:1, snapshot_version:-1, snapshot_scn:0, active_scn:-1, op_sn:1, alloc_ts:1708340608799340, active_ts:-1, commit_ts:-1, finish_ts:-1, timeout_us:-1, lock_timeout_us:-1, expire_ts:9223372036854775807, coord_id:{id:-1}, parts:[], exec_info_reap_ts:0, commit_version:-1, commit_cb:null, cluster_id:-1, cluster_version:0, flags_.SHADOW:false, flags_.INTERRUPTED:false, flags_.BLOCK:false, flags_.REPLICA:false, can_elr:false, cflict_txs:[], abort_cause:0, commit_expire_ts:0, commit_task_.is_registered():false, ref:1})
[2024-02-19 19:03:30.731041] INFO [SQL.EXE] start_stmt (ob_sql_trans_control.cpp:530) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=31] start stmt(ret=-4012, auto_commit=true, session_id=1, snapshot={this:0x7fdce42d3e80, valid:false, source:0, core:{version:-1, tx_id:{txid:0}, scn:-1}, uncertain_bound:0, snapshot_lsid:{id:-1}, parts:[]}, savepoint=0, tx_desc={this:0x7fdcd591b5c0, tx_id:{txid:0}, state:1, addr:"172.1.3.242:2882", tenant_id:1, session_id:1, xid:NULL, access_mode:0, tx_consistency_type:0, isolation:1, snapshot_version:-1, snapshot_scn:0, active_scn:-1, op_sn:1, alloc_ts:1708340608799340, active_ts:-1, commit_ts:-1, finish_ts:-1, timeout_us:-1, lock_timeout_us:-1, expire_ts:9223372036854775807, coord_id:{id:-1}, parts:[], exec_info_reap_ts:0, commit_version:-1, commit_cb:null, cluster_id:-1, cluster_version:0, flags_.SHADOW:false, flags_.INTERRUPTED:false, flags_.BLOCK:false, flags_.REPLICA:false, can_elr:false, cflict_txs:[], abort_cause:0, commit_expire_ts:0, commit_task_.is_registered():false, ref:1}, plan_type=1, stmt_type=1, has_for_update=false, query_start_time=1708340608800317, use_das=false, session={this:0x7fdcf4e200c0, id:1, tenant:"sys", tenant_id:1, effective_tenant:"sys", effective_tenant_id:1, database:"oceanbase", user:"root@%", consistency_level:3, session_state:0, tx:0x7fdcd591b5c0}, plan=0x7fdcda010050, consistency_level_in_plan_ctx=3, trans_result={incomplete:false, parts:[], touched_ls_list:[], cflict_txs:[]})
[2024-02-19 19:03:30.731094] WARN [SQL] start_stmt (ob_result_set.cpp:282) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=51] fail to start stmt(ret=-4012, phy_plan->get_dependency_table()=[{table_id:1, schema_version:0, object_type:1, is_db_explicit:false, is_existed:true}])
[2024-02-19 19:03:30.731111] WARN [SQL] do_open_plan (ob_result_set.cpp:451) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=14] fail start stmt(ret=-4012)
[2024-02-19 19:03:30.731124] WARN [SQL] open (ob_result_set.cpp:150) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=12] execute plan failed(ret=-4012)
[2024-02-19 19:03:30.731136] WARN [SERVER] open (ob_inner_sql_result.cpp:146) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=9] open result set failed(ret=-4012)
[2024-02-19 19:03:30.731148] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:607) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=9] result set open failed(ret=-4012, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"})
[2024-02-19 19:03:30.731162] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=13] execute failed(ret=-4012, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=0)
[2024-02-19 19:03:30.731176] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=10] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-4012, err_:"OB_TIMEOUT", retry_type:0, client_ret:-4012}, need_retry=false)
[2024-02-19 19:03:30.731216] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=20] result set close failed(ret=-4012)
[2024-02-19 19:03:30.731226] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=10] result set close failed(ret=-4012)
[2024-02-19 19:03:30.731235] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=8] failed to close result(close_ret=-4012, ret=-4012)
[2024-02-19 19:03:30.731261] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=9] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4012, ret=-4012)
[2024-02-19 19:03:30.731271] INFO [SERVER] process_final (ob_inner_sql_connection.cpp:574) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=8] slow inner sql(last_ret=-4012, sql={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, process_time=1930952)
[2024-02-19 19:03:30.731279] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C03-0-0] [lt=8] failed to process final(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, aret=-4012, ret=-4012)
[2024-02-19 19:03:30.731290] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1107631][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=7] execute sql failed(ret=-4012, tenant_id=1, sql=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name)
[2024-02-19 19:03:30.731300] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1107631][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=9] retry_while_no_tenant_resource failed(ret=-4012, tenant_id=1)
[2024-02-19 19:03:30.731307] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1107631][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=6] execute_read failed(ret=-4012, cluster_id=1, tenant_id=1)
[2024-02-19 19:03:30.731316] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1107631][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=6] query failed(ret=-4012, conn=0x7fdcf4e20050, start=1708340608800302, sql=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name)
[2024-02-19 19:03:30.731326] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1107631][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=10] read failed(ret=-4012)
[2024-02-19 19:03:30.731335] WARN [SHARE] load (ob_core_table_proxy.cpp:436) [1107631][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=6] execute sql failed(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, sql=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name)
[2024-02-19 19:03:30.731441] WARN [SHARE] load (ob_core_table_proxy.cpp:368) [1107631][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=8] load failed(ret=-4012, for_update=false)
[2024-02-19 19:03:30.731456] WARN [SHARE] get (ob_global_stat_proxy.cpp:321) [1107631][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=14] core_table load failed(ret=-4012)
[2024-02-19 19:03:30.731467] WARN [SHARE] get_snapshot_gc_scn (ob_global_stat_proxy.cpp:165) [1107631][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=10] get failed(ret=-4012)
[2024-02-19 19:03:30.731478] WARN [STORAGE] get_global_info (ob_tenant_freeze_info_mgr.cpp:721) [1107631][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=9] fail to get global info(ret=-4012, tenant_id=1)
[2024-02-19 19:03:30.731489] WARN [STORAGE] try_update_info (ob_tenant_freeze_info_mgr.cpp:838) [1107631][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=10] failed to get global info(ret=-4012)
[2024-02-19 19:03:30.731499] WARN [STORAGE] runTimerTask (ob_tenant_freeze_info_mgr.cpp:889) [1107631][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=8] fail to try update info(tmp_ret=-4012, tmp_ret="OB_TIMEOUT")
[2024-02-19 19:03:30.731515] WARN run1 (ob_timer.cpp:396) [1107631][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=9] timer task cost too much time(task="tasktype:N9oceanbase7storage21ObTenantFreezeInfoMgr10ReloadTaskE", start_time=1708340608797933, end_time=1708340610731508, elapsed_time=1933575, this=0x7fdd191ad4f0, thread_id=1107631)
[T1_PalfGC WARN/ERROR pair repeats: 19:03:30.738959/.739007]
[2024-02-19 19:03:30.742323] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DD-0-0] [lt=6] table not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, ret=-5019)
[2024-02-19 19:03:30.742350] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DD-0-0] [lt=28] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, ret=-5019)
[2024-02-19 19:03:30.742365] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DD-0-0] [lt=12] fail to resolve table relation recursively(tenant_id=1, ret=-5019)
[2024-02-19 19:03:30.742376] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DD-0-0] [lt=10] resolve table relation factor failed(ret=-5019, table_name=__all_ls_meta_table)
[2024-02-19 19:03:30.742391] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DD-0-0] [lt=10] fail to resolve table(ret=-5019)
[2024-02-19 19:03:30.742401] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DD-0-0] [lt=10] fail to resolve sys view(ret=-5019)
[2024-02-19 19:03:30.742417] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DD-0-0] [lt=9] Table 'oceanbase.__all_ls_meta_table' doesn't exist
[2024-02-19 19:03:30.742427] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DD-0-0] [lt=9] resolve base or alias table factor failed(ret=-5019)
[2024-02-19 19:03:30.742437] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DD-0-0] [lt=8] resolve basic table failed(ret=-5019)
[2024-02-19 19:03:30.742446] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DD-0-0] [lt=9] fail to exec resolve_table(*table_node, table_item)(ret=-5019)
[2024-02-19 19:03:30.742456] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DD-0-0] [lt=8] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019)
[2024-02-19 19:03:30.742466] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DD-0-0] [lt=9] resolve normal query failed(ret=-5019)
[2024-02-19 19:03:30.742475] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DD-0-0] [lt=9] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073)
[2024-02-19 19:03:30.742494] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DD-0-0] [lt=10] failed to resolve(ret=-5019)
[2024-02-19 19:03:30.742506] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DD-0-0] [lt=11] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-02-19 19:03:30.742519] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DD-0-0] [lt=9] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-02-19 19:03:30.742529] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DD-0-0] [lt=9] fail to handle physical plan(ret=-5019)
[2024-02-19 19:03:30.742541] WARN [SQL] stmt_query (ob_sql.cpp:175) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DD-0-0] [lt=9] fail to handle text query(stmt=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port, ret=-5019)
[2024-02-19 19:03:30.742553] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DD-0-0] [lt=10] executor execute failed(ret=-5019)
[2024-02-19 19:03:30.742564] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DD-0-0] [lt=10] execute failed(ret=-5019, executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, retry_cnt=0)
[2024-02-19 19:03:30.742585] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DD-0-0] [lt=14] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false)
[2024-02-19 19:03:30.742604] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DD-0-0] [lt=16] result set close failed(ret=-5019)
[2024-02-19 19:03:30.742614] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DD-0-0] [lt=9] result set close failed(ret=-5019)
[2024-02-19 19:03:30.742623] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DD-0-0] [lt=8] failed to close result(close_ret=-5019, ret=-5019)
[2024-02-19 19:03:30.742650] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DD-0-0] [lt=8] failed to process record(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, record_ret=-5019, ret=-5019)
[2024-02-19 19:03:30.742664] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DD-0-0] [lt=11] failed to process final(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, aret=-5019, ret=-5019)
[2024-02-19 19:03:30.742676] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DD-0-0] [lt=10] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port)
[2024-02-19 19:03:30.742688] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DD-0-0] [lt=10] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1)
[2024-02-19 19:03:30.742698] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DD-0-0] [lt=9] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1)
[2024-02-19 19:03:30.742709] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DD-0-0] [lt=9] query failed(ret=-5019, conn=0x7fdcdc9be050, start=1708340610742090, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port)
[2024-02-19 19:03:30.742722] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DD-0-0] [lt=12] read failed(ret=-5019)
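ret=-5019 (OB_TABLE_NOT_EXIST) here does not mean the inner table was dropped: __all_ls_meta_table is resolved against this node's local schema cache, and schema refresh cannot complete while inner SQL is timing out, so the resolver cascade above fails at every level, the statement is not retried (need_retry=false), and the LS-meta checker gives up in the records that follow. Once the cluster is writable again, the table's visibility can be confirmed directly; a hedged check, assuming the sys tenant's __all_table catalog is readable at that point:

    SELECT table_id, table_name
      FROM oceanbase.__all_table
     WHERE table_name = '__all_ls_meta_table';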
[2024-02-19 19:03:30.742733] WARN [SHARE.PT] get_by_tenant (ob_persistent_ls_table.cpp:612) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DD-0-0] [lt=9] execute sql failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port)
[2024-02-19 19:03:30.742813] WARN [SHARE.PT] get_by_tenant (ob_ls_table_operator.cpp:252) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DD-0-0] [lt=12] get all ls info by persistent_ls_ failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1)
[2024-02-19 19:03:30.742826] WARN [SHARE] inner_open_ (ob_ls_table_iterator.cpp:104) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DD-0-0] [lt=13] fail to get ls infos by tenant(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, inner_table_only=true)
[2024-02-19 19:03:30.742837] WARN [SHARE] next (ob_ls_table_iterator.cpp:71) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DD-0-0] [lt=11] fail to open iterator(ret=-5019, ret="OB_TABLE_NOT_EXIST")
[2024-02-19 19:03:30.742847] WARN [SERVER] build_replica_map_ (ob_tenant_meta_checker.cpp:331) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DD-0-0] [lt=9] ls table iterator next failed(ret=-5019, ret="OB_TABLE_NOT_EXIST")
[2024-02-19 19:03:30.742858] WARN [SERVER] check_ls_table_ (ob_tenant_meta_checker.cpp:213) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DD-0-0] [lt=9] build replica map from ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1)
[2024-02-19 19:03:30.742871] WARN [SERVER] check_ls_table (ob_tenant_meta_checker.cpp:193) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DD-0-0] [lt=10] check ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1)
[2024-02-19 19:03:30.742880] WARN [SERVER] runTimerTask (ob_tenant_meta_checker.cpp:43) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DD-0-0] [lt=9] fail to check ls meta table(ret=-5019, ret="OB_TABLE_NOT_EXIST")
[T1_PalfGC WARN/ERROR pair repeats: 19:03:30.749242/.749280]
[2024-02-19 19:03:30.753543] WARN [STORAGE.TRANS] post_cluster_heartbeat_rpc_ (ob_tenant_weak_read_service.cpp:797) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=24] get cluster service master fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, cluster_service_tablet_id={id:226})
[2024-02-19 19:03:30.753575] WARN [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:438) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=33] tenant weak read service cluster heartbeat RPC fail(rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.1.3.242:2882", cluster_service_tablet_id={id:226})
[2024-02-19 19:03:30.753596] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:756) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=18] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version=1708336686671726824, valid_part_count=1, total_part_count=1, generate_timestamp=1708340610753522)
[2024-02-19 19:03:30.753610] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:766) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1708340610553431, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-02-19 19:03:30.753689] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12] get wrs ts(ls_id={id:1}, delta_ns=-1706042771804542566, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807)
[2024-02-19 19:03:30.753706] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=21] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1)
[T1_PalfGC WARN/ERROR pair repeats: 19:03:30.759635/.759673]
[2024-02-19 19:03:30.765847] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=28] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340610765826})
[2024-02-19 19:03:30.765897] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=52] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340610715827}})
[2024-02-19 19:03:30.767814] WARN [SERVER] get_network_speed_from_sysfs (ob_server.cpp:2113) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=10] get invalid Ethernet speed, use default(devname="ens18")
[2024-02-19 19:03:30.767841] WARN [SERVER] runTimerTask (ob_server.cpp:2632) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=27] ObRefreshNetworkSpeedTask reload bandwidth throttle limit failed(ret=-4002, ret="OB_INVALID_ARGUMENT")
[T1_PalfGC WARN/ERROR pair repeats: 19:03:30.769801/.769836, 19:03:30.781221/.781261]
[2024-02-19 19:03:30.785473] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1009) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=24] Wash compute wash size(is_wash_valid=true, sys_total_wash_size=1386106880, global_cache_size=4161536, tenant_max_wash_size=4161536, tenant_min_wash_size=4161536, tenant_ids_=[512, 500, 999, 506, 507, 508, 509, 510, 1])
[2024-02-19 19:03:30.785582] INFO [COMMON] wash (ob_kvcache_store.cpp:343) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=39] Wash time detail, (compute_wash_size_time=136, refresh_score_time=63, wash_time=7)
[T1_PalfGC WARN/ERROR pair repeats: 19:03:30.791770/.791813, 19:03:30.801939/.801999, 19:03:30.812144/.812197]
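The recurring "get wrs ts" entries above pin the weak-read timestamp at 1707751112415295196 ns, roughly a week behind the log's own 2024-02-19 wall clock, and min_tx_service_ts is 9223372036854775807 (INT64_MAX, i.e. unset): LS 1 has applied nothing new since about 2024-02-12, consistent with the oldest clog timestamp (1707200283752293320 ns, about 2024-02-06) that GC cannot reclaim. The nanosecond values can be decoded with usec_to_time, the inverse of the time_to_usec seen in this log's own SQL; it takes microseconds, hence the DIV 1000:

    SELECT usec_to_time(1707751112415295196 DIV 1000);  -- stuck weak-read timestamp, ~2024-02-12
    SELECT usec_to_time(1707200283752293320 DIV 1000);  -- oldest clog still on disk, ~2024-02-06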
[2024-02-19 19:03:30.817231] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=13] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0)
[2024-02-19 19:03:30.817269] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=38] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340610817217}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:30.817296] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=23] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340610817217}, range_size:1, sender:"172.1.3.242:2882"})
[T1_PalfGC WARN/ERROR pair repeats: 19:03:30.822330/.822367]
[2024-02-19 19:03:30.822451] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A023-0-0] [lt=9] table not exist(tenant_id=1, database_id=201001, table_name=__all_server, ret=-5019)
[2024-02-19 19:03:30.822472] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A023-0-0] [lt=20] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_server, ret=-5019)
[2024-02-19 19:03:30.822484] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A023-0-0] [lt=10] fail to resolve table relation recursively(tenant_id=1, ret=-5019)
[2024-02-19 19:03:30.822494] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A023-0-0] [lt=9] resolve table relation factor failed(ret=-5019, table_name=__all_server)
[2024-02-19 19:03:30.822507] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A023-0-0] [lt=9] fail to resolve table(ret=-5019)
[2024-02-19 19:03:30.822517] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A023-0-0] [lt=9] fail to resolve sys view(ret=-5019)
[2024-02-19 19:03:30.822530] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A023-0-0] [lt=9] Table 'oceanbase.__all_server' doesn't exist
[2024-02-19 19:03:30.822539] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A023-0-0] [lt=8] resolve base or alias table factor failed(ret=-5019)
[2024-02-19 19:03:30.822548] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A023-0-0] [lt=8] resolve basic table failed(ret=-5019)
[2024-02-19 19:03:30.822557] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A023-0-0] [lt=8] fail to exec resolve_table(*table_node, table_item)(ret=-5019)
[2024-02-19 19:03:30.822565] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A023-0-0] [lt=7] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019)
[2024-02-19 19:03:30.822575] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A023-0-0] [lt=8] resolve normal query failed(ret=-5019)
[2024-02-19 19:03:30.822584] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A023-0-0] [lt=8] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073)
[2024-02-19 19:03:30.822601] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A023-0-0] [lt=10] failed to resolve(ret=-5019)
[2024-02-19 19:03:30.822612] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A023-0-0] [lt=10] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-02-19 19:03:30.822624] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A023-0-0] [lt=9] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-02-19 19:03:30.822633] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A023-0-0] [lt=8] fail to handle physical plan(ret=-5019)
[2024-02-19 19:03:30.822643] WARN [SQL] stmt_query (ob_sql.cpp:175) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A023-0-0] [lt=7] fail to handle text query(stmt=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882, ret=-5019)
[2024-02-19 19:03:30.822654] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A023-0-0] [lt=9] executor execute failed(ret=-5019)
[2024-02-19 19:03:30.822665] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A023-0-0] [lt=8] execute failed(ret=-5019, executor={ObIExecutor:, sql:"SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882"}, retry_cnt=0)
[2024-02-19 19:03:30.822683] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A023-0-0] [lt=12] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false)
[2024-02-19 19:03:30.822702] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A023-0-0] [lt=16] result set close failed(ret=-5019)
[2024-02-19 19:03:30.822711] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A023-0-0] [lt=8] result set close failed(ret=-5019)
[2024-02-19 19:03:30.822719] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A023-0-0] [lt=7] failed to close result(close_ret=-5019, ret=-5019)
[2024-02-19 19:03:30.822744] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A023-0-0] [lt=8] failed to process record(executor={ObIExecutor:, sql:"SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882"}, record_ret=-5019, ret=-5019)
[2024-02-19 19:03:30.822756] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A023-0-0] [lt=11] failed to process final(executor={ObIExecutor:, sql:"SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882"}, aret=-5019, ret=-5019)
[2024-02-19 19:03:30.822768] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=9] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882)
[2024-02-19 19:03:30.822778] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=10] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1)
[2024-02-19 19:03:30.822787] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=8] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1)
[2024-02-19 19:03:30.822797] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=9] query failed(ret=-5019, conn=0x7fdcdc924050, start=1708340610822225, sql=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882)
[2024-02-19 19:03:30.822809] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=11] read failed(ret=-5019)
[2024-02-19 19:03:30.822819] WARN get_my_sql_result_ (ob_table_access_helper.h:329) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=8] GCTX.sql_proxy_ read failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", MTL_ID()=1, tenant_id=1, columns=0x7fdcfffccd78, table=__all_server, condition=where svr_ip='172.1.3.242' and svr_port=2882, sql=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882, columns_str="zone")
[2024-02-19 19:03:30.822839] WARN read_single_row (ob_table_access_helper.h:178) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=14] get mysql result failed(ret=-4016, ret="OB_ERR_UNEXPECTED", MTL_ID()=1, columns=0x7fdcfffccd78, table=__all_server, where_condition=where svr_ip='172.1.3.242' and svr_port=2882)
[2024-02-19 19:03:30.822915] WARN [COORDINATOR] get_self_zone_name (table_accessor.cpp:517) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=11] get zone from __all_server failed(ret=-4016, ret="OB_ERR_UNEXPECTED", columns=0x7fdcfffccd78, where_condition="where svr_ip='172.1.3.242' and svr_port=2882", zone_name_holder=)
svr_port=2882", zone_name_holder=) [2024-02-19 19:03:30.822929] WARN [COORDINATOR] get_all_ls_election_reference_info (table_accessor.cpp:450) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=13] get self zone name failed(ret=-4016, ret="OB_ERR_UNEXPECTED", all_ls_election_reference_info=[]) [2024-02-19 19:03:30.822940] WARN [COORDINATOR] get_all_ls_election_reference_info (table_accessor.cpp:459) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=10] zone name is empty(ret=-4016, ret="OB_ERR_UNEXPECTED", all_ls_election_reference_info=[]) [2024-02-19 19:03:30.822950] WARN [COORDINATOR] refresh (ob_leader_coordinator.cpp:107) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=9] get all ls election reference info failed(ret=-4016, ret="OB_ERR_UNEXPECTED", MTL_ID()=1) [2024-02-19 19:03:30.828712] WARN [COORDINATOR] get_ls_election_reference_info (ob_leader_coordinator.cpp:138) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC75-0-0] [lt=112] can not find this ls_id in all_ls_election_reference_info_(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1}, all_ls_election_reference_info=[]) [2024-02-19 19:03:30.828764] WARN [COORDINATOR] refresh_ (election_priority_v1.cpp:144) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC75-0-0] [lt=53] fail to get ls election reference info(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, *this={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}) [2024-02-19 19:03:30.828797] WARN [COORDINATOR] operator() (election_priority_impl.cpp:246) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC75-0-0] [lt=31] refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id_={id:1}, element={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}) [2024-02-19 19:03:30.828827] WARN iterate (ob_tuple.h:272) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC75-0-0] [lt=26] assign element failed(ret=-4018, std::get(tuple)={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}) [2024-02-19 19:03:30.828843] WARN [COORDINATOR] refresh (election_priority_impl.cpp:261) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC75-0-0] [lt=16] refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1}, *this={priority:{is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}}) [2024-02-19 19:03:30.831221] INFO [COMMON] clean_garbage_node (ob_kvcache_map.cpp:645) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=17] Cache wash clean map node details(ret=0, clean_node_count=0, clean_time=45616, clean_start_pos=534769, clean_num=31457) [2024-02-19 19:03:30.832486] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) 
[1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=30] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.832539] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=54] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.843273] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=29] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.843315] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=45] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.853617] INFO [STORAGE.TRANS] print_stat_ (ob_tenant_weak_read_service.cpp:524) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=21] [WRS] [TENANT_WEAK_READ_SERVICE] [STAT](tenant_id=1, server_version={version:1708336686671726824, total_part_count:1, valid_inner_part_count:1, valid_user_part_count:0}, server_version_delta=3924181879226, in_cluster_service=false, cluster_version=0, min_cluster_version=0, max_cluster_version=0, get_cluster_version_err=0, cluster_version_delta=1708340610853606050, cluster_service_master="0.0.0.0:0", cluster_service_tablet_id={id:226}, post_cluster_heartbeat_count=0, succ_cluster_heartbeat_count=0, cluster_heartbeat_interval=1000000, local_cluster_version=0, local_cluster_delta=1708340610853606050, force_self_check=false, weak_read_refresh_interval=100000) [2024-02-19 19:03:30.853728] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=51] get wrs ts(ls_id={id:1}, delta_ns=-1706042771804441910, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807) [2024-02-19 
19:03:30.853746] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=1, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1) [2024-02-19 19:03:30.853790] INFO [STORAGE.TRANS] generate_new_version (ob_tenant_weak_read_server_version_mgr.cpp:120) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=18] [WRS] update tenant weak read server version(tenant_id=1, server_version={version:1708336686671726824, total_part_count:1, valid_inner_part_count:1, valid_user_part_count:0, epoch_tstamp:1708340610853664}, version_delta=-1706628346060873037) [2024-02-19 19:03:30.860220] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=28] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.860265] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=48] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.866073] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=9] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340610866049}) [2024-02-19 19:03:30.866105] WARN [SHARE.LOCATION] batch_process_tasks (ob_ls_location_service.cpp:485) [1106741][SysLocAsyncUp0][T0][YB42AC0103F2-000611B9212AA0B9-0-0] [lt=22] tenant schema is not ready, need wait(ret=0, ret="OB_SUCCESS", superior_tenant_id=1, tasks=[{cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340610866049}]) [2024-02-19 19:03:30.866179] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=110] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340610817217}}) [2024-02-19 19:03:30.869791] INFO [COMMON] print_io_status (ob_io_struct.cpp:619) [1106661][IO_TUNING0][T0][Y0-0000000000000000-0-0] [lt=25] [IO STATUS](tenant_ids=[1, 500], send_thread_count=2, send_queues=[0, 0]) [2024-02-19 19:03:30.870643] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=27] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", 
log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.870684] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=40] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.882028] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.882080] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=55] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.892686] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.892734] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=50] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.899730] INFO [COMMON] replace_fragment_node (ob_kvcache_map.cpp:695) 
[1106689][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=91] Cache replace map node details(ret=0, replace_node_count=0, replace_time=30951, replace_start_pos=990864, replace_num=15728) [2024-02-19 19:03:30.903038] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.903088] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=51] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.913227] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.913277] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=53] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.917996] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=28] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:30.918038] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=67] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340610917958}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:30.918064] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=23] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340610917958}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:30.918088] WARN [STORAGE.TRANS] 
operator() (ob_ts_mgr.h:225) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=20] refresh gts failed(ret=-4038, ret="OB_NOT_MASTER", gts_tenant_info={v:1}) [2024-02-19 19:03:30.918102] INFO [STORAGE.TRANS] operator() (ob_ts_mgr.h:229) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=13] refresh gts functor(ret=-4038, ret="OB_NOT_MASTER", gts_tenant_info={v:1}) [2024-02-19 19:03:30.923558] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=36] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.923612] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=56] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.925704] WARN [SERVER] batch_process_tasks (ob_ls_table_updater.cpp:333) [1106712][LSSysTblUp0][T0][YB42AC0103F2-000611B9216D2CA2-0-0] [lt=53] tenant schema is not ready, need wait(ret=-4076, ret="OB_NEED_WAIT", superior_tenant_id=1, task={tenant_id:1, ls_id:{id:1}, add_timestamp:1708337390831403}) [2024-02-19 19:03:30.933915] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.934138] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=56] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.944742] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", 
disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.944793] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=53] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.954928] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.954973] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=47] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.955703] WARN [STORAGE.TRANS] post_cluster_heartbeat_rpc_ (ob_tenant_weak_read_service.cpp:797) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13] get cluster service master fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, cluster_service_tablet_id={id:226}) [2024-02-19 19:03:30.955728] WARN [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:438) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=25] tenant weak read service cluster heartbeat RPC fail(rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.1.3.242:2882", cluster_service_tablet_id={id:226}) [2024-02-19 19:03:30.955765] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340610955745}) [2024-02-19 19:03:30.955785] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:756) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version=1708336686671726824, valid_part_count=1, total_part_count=1, generate_timestamp=1708340610955685) [2024-02-19 19:03:30.955800] 
WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:766) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1708340610753621, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-02-19 19:03:30.955825] INFO [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:738) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] [SELF_CHECK] current server is WRS leader, need start CLUSTER weak read service(tenant_id=1, serve_leader_epoch=0, cur_leader_epoch=138, cluster_service_tablet_id_={id:226}, in_service=false, can_update_version=false, start_service_tstamp_=0, error_count_for_change_leader_=0, last_error_tstamp_for_change_leader_=0) [2024-02-19 19:03:30.955856] INFO [STORAGE.TRANS] start_service (ob_tenant_weak_read_cluster_service.cpp:336) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] begin start service(tenant_id=1, is_in_service()=false, can_update_version=false) [2024-02-19 19:03:30.955868] INFO [STORAGE.TRANS] start_service (ob_tenant_weak_read_cluster_service.cpp:338) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10] start TenantWeakReadClusterService(tenant_id=1) [2024-02-19 19:03:30.956689] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E9-0-0] [lt=9] table not exist(tenant_id=1, database_id=201001, table_name=__all_weak_read_service, ret=-5019) [2024-02-19 19:03:30.956706] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E9-0-0] [lt=15] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_weak_read_service, ret=-5019) [2024-02-19 19:03:30.956716] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E9-0-0] [lt=8] fail to resolve table relation recursively(tenant_id=1, ret=-5019) [2024-02-19 19:03:30.956724] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E9-0-0] [lt=7] resolve table relation factor failed(ret=-5019, table_name=__all_weak_read_service) [2024-02-19 19:03:30.956734] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E9-0-0] [lt=7] fail to resolve table(ret=-5019) [2024-02-19 19:03:30.956741] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E9-0-0] [lt=7] fail to resolve sys view(ret=-5019) [2024-02-19 19:03:30.956752] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E9-0-0] [lt=6] Table 'oceanbase.__all_weak_read_service' doesn't exist [2024-02-19 19:03:30.956758] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E9-0-0] [lt=6] resolve base or alias table factor failed(ret=-5019) [2024-02-19 19:03:30.956765] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E9-0-0] [lt=6] resolve basic table 
failed(ret=-5019) [2024-02-19 19:03:30.956772] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E9-0-0] [lt=6] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-02-19 19:03:30.956778] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E9-0-0] [lt=5] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-02-19 19:03:30.956785] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E9-0-0] [lt=6] resolve normal query failed(ret=-5019) [2024-02-19 19:03:30.956792] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E9-0-0] [lt=7] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073) [2024-02-19 19:03:30.956806] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E9-0-0] [lt=7] failed to resolve(ret=-5019) [2024-02-19 19:03:30.956814] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E9-0-0] [lt=7] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:30.956823] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E9-0-0] [lt=7] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:30.956830] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E9-0-0] [lt=6] fail to handle physical plan(ret=-5019) [2024-02-19 19:03:30.956838] WARN [SQL] stmt_query (ob_sql.cpp:175) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E9-0-0] [lt=6] fail to handle text query(stmt=select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = '', ret=-5019) [2024-02-19 19:03:30.956846] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E9-0-0] [lt=6] executor execute failed(ret=-5019) [2024-02-19 19:03:30.956853] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E9-0-0] [lt=7] execute failed(ret=-5019, executor={ObIExecutor:, sql:"select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = ''"}, retry_cnt=0) [2024-02-19 19:03:30.956868] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E9-0-0] [lt=9] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-02-19 19:03:30.956881] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E9-0-0] [lt=11] result set close failed(ret=-5019) [2024-02-19 19:03:30.956888] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E9-0-0] [lt=6] result set close failed(ret=-5019) [2024-02-19 19:03:30.956894] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E9-0-0] [lt=5] failed to close result(close_ret=-5019, ret=-5019) 
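Every inner-SQL failure above follows the same shape: the resolver cannot find a tenant-1 core table (`__all_server`, `__all_weak_read_service`), the statement dies with ret=-5019 ("OB_TABLE_NOT_EXIST"), and the retry controller declines to retry (need_retry=false) because a missing table is not a retryable condition. With a dump this dense it is easier to tally return codes per module first and read individual stacks second. A minimal triage sketch, assuming one record per line in the real observer.log and the record layout shown above; the file path is a placeholder:

```python
import re
from collections import Counter

# Record head, e.g.:
# [2024-02-19 19:03:30.956913] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) ...
REC = re.compile(
    r"\[(\d{4}-\d{2}-\d{2} [\d:.]+)\]\s+(WARN|ERROR|INFO)\s+(?:\[([\w.]+)\]\s+)?(\w+)"
)
# Numeric return codes such as ret=-5019; \b keeps client_ret=/record_ret= out.
RET = re.compile(r"\bret=(-?\d+)")

def tally(path="observer.log"):  # placeholder path
    """Count (level, module, ret) triples so the dominant failure stands out."""
    counts = Counter()
    with open(path, encoding="utf-8", errors="replace") as f:
        for line in f:
            head = REC.search(line)
            if head is None:
                continue
            _ts, level, module, _func = head.groups()
            for ret in RET.findall(line):
                counts[(level, module or "-", ret)] += 1
    return counts

if __name__ == "__main__":
    for (level, module, ret), n in tally().most_common(10):
        print(f"{n:6d}  {level:<5} {module:<12} ret={ret}")
```

On this log the dominant buckets would be the -5019 resolver/executor chain plus the codes that cascade from it (-4016 "OB_ERR_UNEXPECTED", -4018 "OB_ENTRY_NOT_EXIST", -4038 "OB_NOT_MASTER", -4076 "OB_NEED_WAIT"), which points at "tenant-1 schema not ready" before any single stack needs reading.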
[2024-02-19 19:03:30.956913] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E9-0-0] [lt=5] failed to process record(executor={ObIExecutor:, sql:"select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = ''"}, record_ret=-5019, ret=-5019) [2024-02-19 19:03:30.956942] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797E9-0-0] [lt=8] failed to process final(executor={ObIExecutor:, sql:"select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = ''"}, aret=-5019, ret=-5019) [2024-02-19 19:03:30.956955] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=31] execute sql failed(ret=-5019, tenant_id=1, sql=select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = '') [2024-02-19 19:03:30.956966] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-02-19 19:03:30.956973] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1) [2024-02-19 19:03:30.956981] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7] query failed(ret=-5019, conn=0x7fdd189bc050, start=1708340610956515, sql=select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = '') [2024-02-19 19:03:30.956990] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8] read failed(ret=-5019) [2024-02-19 19:03:30.956998] WARN [STORAGE.TRANS] query_cluster_version_range_ (ob_tenant_weak_read_cluster_service.cpp:192) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6] execute sql read fail(ret=-5019, ret="OB_TABLE_NOT_EXIST", exec_tenant_id=1, tenant_id=1, sql=select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = '') [2024-02-19 19:03:30.957060] WARN [STORAGE.TRANS] start_service (ob_tenant_weak_read_cluster_service.cpp:367) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9] query cluster version range from WRS table fail(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-02-19 19:03:30.957070] INFO [STORAGE.TRANS] start_service (ob_tenant_weak_read_cluster_service.cpp:415) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] start service done(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, in_service=false, leader_epoch=0, current_version=0, delta=1708340610957067, min_version=0, max_version=0, max_stale_time=5000000000, all_valid_server_count=0, total_time=1227, wlock_time=37, check_leader_time=2, query_version_time=0, persist_version_time=0) [2024-02-19 19:03:30.957083] WARN [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:781) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12] start CLUSTER weak read service fail(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1) [2024-02-19 19:03:30.957092] INFO [STORAGE.TRANS] self_check 
(ob_tenant_weak_read_cluster_service.cpp:791) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] [SELF_CHECK] done(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, need_start_service=true, need_stop_service=false, need_change_leader=false, is_in_service()=false, can_update_version=false, cur_leader_epoch=138, start_service_tstamp_=0, error_count_for_change_leader_=0, last_error_tstamp_for_change_leader_=0) [2024-02-19 19:03:30.957143] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12] get wrs ts(ls_id={id:1}, delta_ns=-1706042771804338473, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807) [2024-02-19 19:03:30.957153] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1) [2024-02-19 19:03:30.966159] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=18] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340610966140}) [2024-02-19 19:03:30.966215] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=58] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340610917958}}) [2024-02-19 19:03:30.967048] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=29] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.967093] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=45] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.977245] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=38] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, 
disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.977307] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=63] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:30.987692] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=27] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:30.987741] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=51] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.001161] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.001216] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=57] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.012363] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=36] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", 
log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.012420] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=58] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.019629] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=10] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:31.019672] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=44] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340611019617}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:31.019696] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=22] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340611019617}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:31.022580] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=38] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.022630] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=51] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.032747] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=29] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, 
disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.032798] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=53] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.033054] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1009) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=30] Wash compute wash size(is_wash_valid=true, sys_total_wash_size=1386106880, global_cache_size=4161536, tenant_max_wash_size=4161536, tenant_min_wash_size=4161536, tenant_ids_=[512, 500, 999, 506, 507, 508, 509, 510, 1]) [2024-02-19 19:03:31.033157] INFO [COMMON] wash (ob_kvcache_store.cpp:343) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=24] Wash time detail, (compute_wash_size_time=169, refresh_score_time=71, wash_time=8) [2024-02-19 19:03:31.042947] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.042993] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=48] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.054182] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=54] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.054229] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=51] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, 
maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.055812] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13] get wrs ts(ls_id={id:1}, delta_ns=-1706042771804240105, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807) [2024-02-19 19:03:31.055836] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=24] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1) [2024-02-19 19:03:31.064418] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.064483] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=68] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.066384] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=16] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340611066364}) [2024-02-19 19:03:31.066420] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=36] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340611019617}}) [2024-02-19 19:03:31.073291] INFO [COMMON] clean_garbage_node (ob_kvcache_map.cpp:645) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=15] Cache wash clean map node details(ret=0, clean_node_count=0, clean_time=40111, clean_start_pos=566226, clean_num=31457) [2024-02-19 19:03:31.074620] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=40] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, 
log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.074653] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=33] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.078757] WARN [COORDINATOR] get_ls_election_reference_info (ob_leader_coordinator.cpp:138) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC76-0-0] [lt=161] can not find this ls_id in all_ls_election_reference_info_(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1}, all_ls_election_reference_info=[]) [2024-02-19 19:03:31.078803] WARN [COORDINATOR] refresh_ (election_priority_v1.cpp:144) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC76-0-0] [lt=46] fail to get ls election reference info(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, *this={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}) [2024-02-19 19:03:31.078827] WARN [COORDINATOR] operator() (election_priority_impl.cpp:246) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC76-0-0] [lt=23] refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id_={id:1}, element={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}) [2024-02-19 19:03:31.078845] WARN iterate (ob_tuple.h:272) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC76-0-0] [lt=15] assign element failed(ret=-4018, std::get(tuple)={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}) [2024-02-19 19:03:31.078860] WARN [COORDINATOR] refresh (election_priority_impl.cpp:261) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC76-0-0] [lt=14] refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1}, *this={priority:{is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}}) [2024-02-19 19:03:31.084797] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=28] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", 
disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.084838] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=43] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.095377] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.095428] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=52] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.105578] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=46] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.105618] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=43] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.117799] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=26] there is not any block 
can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.117837] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=40] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.118096] INFO [SERVER] runTimerTask (ob_eliminate_task.cpp:199) [1107573][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=52] sql audit evict task end(evict_high_mem_level=32212254, evict_high_size_level=90000, evict_batch_count=0, elapse_time=1, size_used=14883, mem_used=31196160) [2024-02-19 19:03:31.120306] INFO [COMMON] replace_fragment_node (ob_kvcache_map.cpp:695) [1106689][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=47] Cache replace map node details(ret=0, replace_node_count=0, replace_time=20099, replace_start_pos=1006592, replace_num=15728) [2024-02-19 19:03:31.120327] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=13] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:31.120349] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=28] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340611120311}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:31.120378] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=26] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340611120311}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:31.127910] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=26] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.127947] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=40] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, 
oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.138070] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=28] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.138128] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=51] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.149015] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.149059] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=46] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.156171] WARN [STORAGE.TRANS] post_cluster_heartbeat_rpc_ (ob_tenant_weak_read_service.cpp:797) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=20] get cluster service master fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, cluster_service_tablet_id={id:226}) [2024-02-19 19:03:31.156198] WARN [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:438) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=30] tenant weak read service cluster heartbeat RPC fail(rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.1.3.242:2882", cluster_service_tablet_id={id:226}) [2024-02-19 19:03:31.156214] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:756) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] 
[lt=14] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version=1708336686671726824, valid_part_count=1, total_part_count=1, generate_timestamp=1708340611156151) [2024-02-19 19:03:31.156225] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:766) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1708340610955812, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-02-19 19:03:31.156309] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9] get wrs ts(ls_id={id:1}, delta_ns=-1706042771804139312, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807) [2024-02-19 19:03:31.156328] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1) [2024-02-19 19:03:31.160234] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=46] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.160274] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=42] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.166409] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=18] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340611166384}) [2024-02-19 19:03:31.166446] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=39] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340611120311}}) [2024-02-19 19:03:31.170502] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=27] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", 
disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.170548] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=49] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.180762] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.180807] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=46] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.190894] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=33] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.190947] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=56] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.201904] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block 
can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.201966] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=65] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.205064] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D5-0-0] [lt=6] table not exist(tenant_id=1, database_id=201001, table_name=__all_sys_parameter, ret=-5019) [2024-02-19 19:03:31.205116] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D5-0-0] [lt=51] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_sys_parameter, ret=-5019) [2024-02-19 19:03:31.205130] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D5-0-0] [lt=12] fail to resolve table relation recursively(tenant_id=1, ret=-5019) [2024-02-19 19:03:31.205142] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D5-0-0] [lt=12] resolve table relation factor failed(ret=-5019, table_name=__all_sys_parameter) [2024-02-19 19:03:31.205155] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D5-0-0] [lt=9] fail to resolve table(ret=-5019) [2024-02-19 19:03:31.205164] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D5-0-0] [lt=8] fail to resolve sys view(ret=-5019) [2024-02-19 19:03:31.205178] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D5-0-0] [lt=7] Table 'oceanbase.__all_sys_parameter' doesn't exist [2024-02-19 19:03:31.205197] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D5-0-0] [lt=17] resolve base or alias table factor failed(ret=-5019) [2024-02-19 19:03:31.205207] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D5-0-0] [lt=9] resolve basic table failed(ret=-5019) [2024-02-19 19:03:31.205222] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D5-0-0] [lt=14] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-02-19 19:03:31.205232] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D5-0-0] [lt=8] fail to exec 
resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-02-19 19:03:31.205247] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D5-0-0] [lt=14] resolve normal query failed(ret=-5019) [2024-02-19 19:03:31.205257] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D5-0-0] [lt=9] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073) [2024-02-19 19:03:31.205282] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D5-0-0] [lt=14] failed to resolve(ret=-5019) [2024-02-19 19:03:31.205297] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D5-0-0] [lt=15] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:31.205309] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D5-0-0] [lt=8] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:31.205322] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D5-0-0] [lt=12] fail to handle physical plan(ret=-5019) [2024-02-19 19:03:31.205334] WARN [SQL] stmt_query (ob_sql.cpp:175) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D5-0-0] [lt=9] fail to handle text query(stmt=select config_version, zone, svr_type, svr_ip, svr_port, name, data_type, value, info, section, scope, source, edit_level from __all_sys_parameter, ret=-5019) [2024-02-19 19:03:31.205348] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D5-0-0] [lt=13] executor execute failed(ret=-5019) [2024-02-19 19:03:31.205378] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D5-0-0] [lt=28] execute failed(ret=-5019, executor={ObIExecutor:, sql:"select config_version, zone, svr_type, svr_ip, svr_port, name, data_type, value, info, section, scope, source, edit_level from __all_sys_parameter"}, retry_cnt=0) [2024-02-19 19:03:31.205404] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D5-0-0] [lt=19] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-02-19 19:03:31.205427] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D5-0-0] [lt=20] result set close failed(ret=-5019) [2024-02-19 19:03:31.205439] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D5-0-0] [lt=10] result set close failed(ret=-5019) [2024-02-19 19:03:31.205450] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D5-0-0] [lt=11] failed to close result(close_ret=-5019, ret=-5019) [2024-02-19 19:03:31.205475] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D5-0-0] [lt=8] failed to process record(executor={ObIExecutor:, sql:"select config_version, zone, svr_type, svr_ip, svr_port, name, data_type, value, info, section, scope, source, edit_level from __all_sys_parameter"}, record_ret=-5019, ret=-5019) [2024-02-19 19:03:31.205491] WARN [SERVER] query 
(ob_inner_sql_connection.cpp:780) [1106658][ConfigMgr][T0][YB42AC0103F2-000611B922D790D5-0-0] [lt=12] failed to process final(executor={ObIExecutor:, sql:"select config_version, zone, svr_type, svr_ip, svr_port, name, data_type, value, info, section, scope, source, edit_level from __all_sys_parameter"}, aret=-5019, ret=-5019) [2024-02-19 19:03:31.205504] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1106658][ConfigMgr][T0][YB42AC0103F2-000611B922D790D5-0-0] [lt=11] execute sql failed(ret=-5019, tenant_id=1, sql=select config_version, zone, svr_type, svr_ip, svr_port, name, data_type, value, info, section, scope, source, edit_level from __all_sys_parameter) [2024-02-19 19:03:31.205520] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1106658][ConfigMgr][T0][YB42AC0103F2-000611B922D790D5-0-0] [lt=13] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-02-19 19:03:31.205529] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1106658][ConfigMgr][T0][YB42AC0103F2-000611B922D790D5-0-0] [lt=9] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1) [2024-02-19 19:03:31.205543] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1106658][ConfigMgr][T0][YB42AC0103F2-000611B922D790D5-0-0] [lt=13] query failed(ret=-5019, conn=0x7fdcf4ef4050, start=1708340611204829, sql=select config_version, zone, svr_type, svr_ip, svr_port, name, data_type, value, info, section, scope, source, edit_level from __all_sys_parameter) [2024-02-19 19:03:31.205557] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1106658][ConfigMgr][T0][YB42AC0103F2-000611B922D790D5-0-0] [lt=12] read failed(ret=-5019) [2024-02-19 19:03:31.205571] WARN [SHARE] update_local (ob_config_manager.cpp:322) [1106658][ConfigMgr][T0][YB42AC0103F2-000611B922D790D5-0-0] [lt=13] read config from __all_sys_parameter failed(sqlstr="select config_version, zone, svr_type, svr_ip, svr_port, name, data_type, value, info, section, scope, source, edit_level from __all_sys_parameter", ret=-5019) [2024-02-19 19:03:31.205635] WARN [SHARE] update_local (ob_config_manager.cpp:356) [1106658][ConfigMgr][T0][YB42AC0103F2-000611B922D790D5-0-0] [lt=11] Read system config from inner table error(ret=-5019) [2024-02-19 19:03:31.205690] WARN [SHARE] runTimerTask (ob_config_manager.cpp:455) [1106658][ConfigMgr][T0][YB42AC0103F2-000611B922D790D5-0-0] [lt=54] Update local config failed(ret=-5019) [2024-02-19 19:03:31.212878] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=63] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.212916] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=41] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 
0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.220921] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=25] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:31.220986] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=66] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340611220907}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:31.221011] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=23] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340611220907}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:31.223869] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=39] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.223920] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=53] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.234052] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=36] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.234106] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=56] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.236397] INFO [LIB] runTimerTask (ob_work_queue.cpp:24) 
[1106715][ObTimer][T0][Y0-0000000000000000-0-0] [lt=34] add async task(this=tasktype:N9oceanbase10rootserver13ObRootService19ObRefreshServerTaskE) [2024-02-19 19:03:31.237599] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78803-0-0] [lt=13] table not exist(tenant_id=1, database_id=201001, table_name=__all_server, ret=-5019) [2024-02-19 19:03:31.237619] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78803-0-0] [lt=19] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_server, ret=-5019) [2024-02-19 19:03:31.237629] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78803-0-0] [lt=8] fail to resolve table relation recursively(tenant_id=1, ret=-5019) [2024-02-19 19:03:31.237638] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78803-0-0] [lt=8] resolve table relation factor failed(ret=-5019, table_name=__all_server) [2024-02-19 19:03:31.237647] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78803-0-0] [lt=7] fail to resolve table(ret=-5019) [2024-02-19 19:03:31.237654] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78803-0-0] [lt=6] fail to resolve sys view(ret=-5019) [2024-02-19 19:03:31.237664] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78803-0-0] [lt=5] Table 'oceanbase.__all_server' doesn't exist [2024-02-19 19:03:31.237671] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78803-0-0] [lt=6] resolve base or alias table factor failed(ret=-5019) [2024-02-19 19:03:31.237678] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78803-0-0] [lt=6] resolve basic table failed(ret=-5019) [2024-02-19 19:03:31.237685] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78803-0-0] [lt=6] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-02-19 19:03:31.237691] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78803-0-0] [lt=5] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-02-19 19:03:31.237698] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78803-0-0] [lt=6] resolve normal query failed(ret=-5019) [2024-02-19 19:03:31.237705] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78803-0-0] [lt=6] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073) [2024-02-19 19:03:31.237728] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78803-0-0] [lt=7] failed to resolve(ret=-5019) [2024-02-19 19:03:31.237737] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78803-0-0] [lt=8] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:31.237746] WARN [SQL] handle_physical_plan 
(ob_sql.cpp:3779) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78803-0-0] [lt=6] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:31.237753] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78803-0-0] [lt=6] fail to handle physical plan(ret=-5019) [2024-02-19 19:03:31.237764] WARN [SQL] stmt_query (ob_sql.cpp:175) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78803-0-0] [lt=8] fail to handle text query(stmt=SELECT time_to_usec(gmt_modified) AS last_hb_time, id, zone, svr_ip, svr_port, inner_port, status, with_rootserver, block_migrate_in_time, build_version, stop_time, start_service_time, with_partition FROM __all_server, ret=-5019) [2024-02-19 19:03:31.237776] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78803-0-0] [lt=10] executor execute failed(ret=-5019) [2024-02-19 19:03:31.237786] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78803-0-0] [lt=9] execute failed(ret=-5019, executor={ObIExecutor:, sql:"SELECT time_to_usec(gmt_modified) AS last_hb_time, id, zone, svr_ip, svr_port, inner_port, status, with_rootserver, block_migrate_in_time, build_version, stop_time, start_service_time, with_partition FROM __all_server"}, retry_cnt=0) [2024-02-19 19:03:31.237807] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78803-0-0] [lt=14] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-02-19 19:03:31.237826] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78803-0-0] [lt=17] result set close failed(ret=-5019) [2024-02-19 19:03:31.237835] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78803-0-0] [lt=8] result set close failed(ret=-5019) [2024-02-19 19:03:31.237844] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78803-0-0] [lt=8] failed to close result(close_ret=-5019, ret=-5019) [2024-02-19 19:03:31.237867] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78803-0-0] [lt=9] failed to process record(executor={ObIExecutor:, sql:"SELECT time_to_usec(gmt_modified) AS last_hb_time, id, zone, svr_ip, svr_port, inner_port, status, with_rootserver, block_migrate_in_time, build_version, stop_time, start_service_time, with_partition FROM __all_server"}, record_ret=-5019, ret=-5019) [2024-02-19 19:03:31.237901] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1106718][RSAsyncTask1][T0][YB42AC0103F2-000611B922A78803-0-0] [lt=30] failed to process final(executor={ObIExecutor:, sql:"SELECT time_to_usec(gmt_modified) AS last_hb_time, id, zone, svr_ip, svr_port, inner_port, status, with_rootserver, block_migrate_in_time, build_version, stop_time, start_service_time, with_partition FROM __all_server"}, aret=-5019, ret=-5019) [2024-02-19 19:03:31.237915] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1106718][RSAsyncTask1][T0][Y0-0000000000000000-0-0] [lt=11] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT time_to_usec(gmt_modified) AS last_hb_time, id, zone, svr_ip, svr_port, inner_port, status, with_rootserver, 
block_migrate_in_time, build_version, stop_time, start_service_time, with_partition FROM __all_server) [2024-02-19 19:03:31.237926] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1106718][RSAsyncTask1][T0][Y0-0000000000000000-0-0] [lt=10] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-02-19 19:03:31.237955] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1106718][RSAsyncTask1][T0][Y0-0000000000000000-0-0] [lt=9] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1) [2024-02-19 19:03:31.237967] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1106718][RSAsyncTask1][T0][Y0-0000000000000000-0-0] [lt=10] query failed(ret=-5019, conn=0x7fdcdc89a050, start=1708340611237354, sql=SELECT time_to_usec(gmt_modified) AS last_hb_time, id, zone, svr_ip, svr_port, inner_port, status, with_rootserver, block_migrate_in_time, build_version, stop_time, start_service_time, with_partition FROM __all_server) [2024-02-19 19:03:31.238038] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1106718][RSAsyncTask1][T0][Y0-0000000000000000-0-0] [lt=70] read failed(ret=-5019) [2024-02-19 19:03:31.238208] WARN [SHARE] run2 (ob_async_task_queue.cpp:148) [1106718][RSAsyncTask1][T0][Y0-0000000000000000-0-0] [lt=8] task process failed, start retry(max retry time=0, retry interval=1000000, ret=-5019) [2024-02-19 19:03:31.244255] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.244293] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=41] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.254398] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=27] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.254437] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=41] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, 
limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.259299] INFO [STORAGE] operator() (ob_tenant_freezer.cpp:124) [1107575][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=16] ====== tenant freeze timer task ====== [2024-02-19 19:03:31.260471] INFO [SHARE] run_loop_ (ob_bg_thread_monitor.cpp:331) [1109111][BGThreadMonitor][T0][Y0-0000000000000000-0-0] [lt=39] current monitor number(seq_=-1) [2024-02-19 19:03:31.260910] WARN [STORAGE.TRANS] post_cluster_heartbeat_rpc_ (ob_tenant_weak_read_service.cpp:797) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=21] get cluster service master fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, cluster_service_tablet_id={id:226}) [2024-02-19 19:03:31.260945] WARN [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:438) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=34] tenant weak read service cluster heartbeat RPC fail(rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.1.3.242:2882", cluster_service_tablet_id={id:226}) [2024-02-19 19:03:31.260964] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:756) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version=1708336686671726824, valid_part_count=1, total_part_count=1, generate_timestamp=1708340611260893) [2024-02-19 19:03:31.260977] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:766) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1708340611156233, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-02-19 19:03:31.261058] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12] get wrs ts(ls_id={id:1}, delta_ns=-1706042771804036552, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807) [2024-02-19 19:03:31.261088] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=26] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1) [2024-02-19 19:03:31.261589] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D5-0-0] [lt=28] table not exist(tenant_id=1, database_id=201001, table_name=__all_freeze_info, ret=-5019) [2024-02-19 19:03:31.261619] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D5-0-0] [lt=1235] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_freeze_info, ret=-5019) [2024-02-19 19:03:31.261633] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D5-0-0] 
[lt=12] fail to resolve table relation recursively(tenant_id=1, ret=-5019) [2024-02-19 19:03:31.261643] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D5-0-0] [lt=9] resolve table relation factor failed(ret=-5019, table_name=__all_freeze_info) [2024-02-19 19:03:31.261660] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D5-0-0] [lt=10] fail to resolve table(ret=-5019) [2024-02-19 19:03:31.261677] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D5-0-0] [lt=17] fail to resolve sys view(ret=-5019) [2024-02-19 19:03:31.261692] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D5-0-0] [lt=9] Table 'oceanbase.__all_freeze_info' doesn't exist [2024-02-19 19:03:31.261703] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D5-0-0] [lt=10] resolve base or alias table factor failed(ret=-5019) [2024-02-19 19:03:31.261713] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D5-0-0] [lt=9] resolve basic table failed(ret=-5019) [2024-02-19 19:03:31.261723] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D5-0-0] [lt=9] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-02-19 19:03:31.261733] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D5-0-0] [lt=9] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-02-19 19:03:31.261744] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D5-0-0] [lt=10] resolve normal query failed(ret=-5019) [2024-02-19 19:03:31.261755] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D5-0-0] [lt=9] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073) [2024-02-19 19:03:31.261776] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D5-0-0] [lt=11] failed to resolve(ret=-5019) [2024-02-19 19:03:31.261787] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D5-0-0] [lt=10] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:31.261800] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D5-0-0] [lt=10] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:31.261810] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D5-0-0] [lt=9] fail to handle physical plan(ret=-5019) [2024-02-19 19:03:31.261821] WARN [SQL] stmt_query (ob_sql.cpp:175) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D5-0-0] [lt=9] fail to handle text query(stmt=SELECT * FROM __all_freeze_info ORDER BY frozen_scn DESC LIMIT 1, ret=-5019) [2024-02-19 19:03:31.261834] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D5-0-0] [lt=10] executor execute failed(ret=-5019) [2024-02-19 19:03:31.261847] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) 
[1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D5-0-0] [lt=11] execute failed(ret=-5019, executor={ObIExecutor:, sql:"SELECT * FROM __all_freeze_info ORDER BY frozen_scn DESC LIMIT 1"}, retry_cnt=0) [2024-02-19 19:03:31.261871] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D5-0-0] [lt=16] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-02-19 19:03:31.261891] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D5-0-0] [lt=17] result set close failed(ret=-5019) [2024-02-19 19:03:31.261901] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D5-0-0] [lt=8] result set close failed(ret=-5019) [2024-02-19 19:03:31.261909] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D5-0-0] [lt=8] failed to close result(close_ret=-5019, ret=-5019) [2024-02-19 19:03:31.261935] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D5-0-0] [lt=8] failed to process record(executor={ObIExecutor:, sql:"SELECT * FROM __all_freeze_info ORDER BY frozen_scn DESC LIMIT 1"}, record_ret=-5019, ret=-5019) [2024-02-19 19:03:31.261950] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D5-0-0] [lt=13] failed to process final(executor={ObIExecutor:, sql:"SELECT * FROM __all_freeze_info ORDER BY frozen_scn DESC LIMIT 1"}, aret=-5019, ret=-5019) [2024-02-19 19:03:31.261964] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1107575][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=9] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT * FROM __all_freeze_info ORDER BY frozen_scn DESC LIMIT 1) [2024-02-19 19:03:31.262003] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1107575][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=37] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-02-19 19:03:31.262015] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1107575][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=10] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1) [2024-02-19 19:03:31.262026] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1107575][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=9] query failed(ret=-5019, conn=0x7fdcf4e20050, start=1708340611260172, sql=SELECT * FROM __all_freeze_info ORDER BY frozen_scn DESC LIMIT 1) [2024-02-19 19:03:31.262038] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1107575][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=12] read failed(ret=-5019) [2024-02-19 19:03:31.262049] WARN [SHARE] get_freeze_info (ob_freeze_info_proxy.cpp:68) [1107575][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=8] fail to execute sql(ret=-5019, ret="OB_TABLE_NOT_EXIST", sql=SELECT * FROM __all_freeze_info ORDER BY frozen_scn DESC LIMIT 1, tenant_id=1) [2024-02-19 19:03:31.262157] WARN [STORAGE] get_global_frozen_scn_ (ob_tenant_freezer.cpp:1086) [1107575][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=7] get_frozen_scn failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1) [2024-02-19 19:03:31.262167] WARN [STORAGE] do_major_if_need_ (ob_tenant_freezer.cpp:1188) [1107575][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=9] fail to get global frozen version(ret=-5019) [2024-02-19 
19:03:31.262176] WARN [STORAGE] check_and_freeze_normal_data_ (ob_tenant_freezer.cpp:379) [1107575][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=8] [TenantFreezer] fail to do major freeze(tmp_ret=-5019) [2024-02-19 19:03:31.262201] INFO [STORAGE] check_and_freeze_tx_data_ (ob_tenant_freezer.cpp:419) [1107575][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=8] [TenantFreezer] Trigger Tx Data Table Self Freeze. (tenant_info_.tenant_id_=1, tenant_tx_data_mem_used=430988896, self_freeze_max_limit_=214748364, hold_memory=1718894592, self_freeze_tenant_hold_limit_=429496729, self_freeze_min_limit_=21474836) [2024-02-19 19:03:31.263675] INFO [STORAGE] do_tx_data_table_freeze_ (ob_tenant_freezer_rpc.cpp:73) [1108354][T1_TNT_L0_G0][T1][YB42AC0103F2-000611B9223790D6-0-0] [lt=9] start tx data table self freeze task in rpc handle thread(arg_=freeze_type:3) [2024-02-19 19:03:31.263704] INFO [STORAGE] self_freeze_task (ob_tx_data_table.cpp:794) [1108354][T1_TNT_L0_G0][T1][YB42AC0103F2-000611B9223790D6-0-0] [lt=24] start tx data table self freeze task(get_ls_id()={id:1}) [2024-02-19 19:03:31.263720] INFO [STORAGE] freeze (ob_tx_data_memtable_mgr.cpp:192) [1108354][T1_TNT_L0_G0][T1][YB42AC0103F2-000611B9223790D6-0-0] [lt=12] start freeze tx data memtable(ls_id_={id:1}) [2024-02-19 19:03:31.263743] INFO [STORAGE] freeze_ (ob_tx_data_memtable_mgr.cpp:228) [1108354][T1_TNT_L0_G0][T1][YB42AC0103F2-000611B9223790D6-0-0] [lt=21] There is a freezed memetable existed. Try freeze after flushing it.(ret=-4023, ret="OB_EAGAIN", get_memtable_count_()=2) [2024-02-19 19:03:31.263753] WARN [STORAGE] freeze (ob_tx_data_memtable_mgr.cpp:206) [1108354][T1_TNT_L0_G0][T1][YB42AC0103F2-000611B9223790D6-0-0] [lt=10] freeze tx data memtable fail.(ret=-4023, ret="OB_EAGAIN") [2024-02-19 19:03:31.263761] WARN [STORAGE] flush (ob_tx_data_memtable_mgr.cpp:430) [1108354][T1_TNT_L0_G0][T1][YB42AC0103F2-000611B9223790D6-0-0] [lt=7] freeze failed(ret=-4023, ret="OB_EAGAIN", this=0x7fdce89de180) [2024-02-19 19:03:31.263769] WARN [STORAGE] self_freeze_task (ob_tx_data_table.cpp:798) [1108354][T1_TNT_L0_G0][T1][YB42AC0103F2-000611B9223790D6-0-0] [lt=7] self freeze of tx data memtable failed.(ret=-4023, ret="OB_EAGAIN", ls_id={id:1}, memtable_mgr_={ObIMemtableMgr:{Memtables:this:0x7fdce89de180, ref_cnt:1, is_inited:true, tablet_id:{id:49402}, freezer:0x7fdce89e30d0, table_type:1, memtable_head:0, memtable_tail:2, t3m:0x7fdd18bce030, tables:[0x7fdce5eea080, 0x7fdce5eea360, null, null, null, null, null, null, null, null, null, null, null, null, null, null]}, is_freezing:false, ls_id:{id:1}, tx_data_table:0x7fdce89e4550, ls_tablet_svr:0x7fdce89de160, slice_allocator:0x7fdce89e4590}) [2024-02-19 19:03:31.263790] INFO [STORAGE] self_freeze_task (ob_tx_data_table.cpp:801) [1108354][T1_TNT_L0_G0][T1][YB42AC0103F2-000611B9223790D6-0-0] [lt=21] finish tx data table self freeze task(ret=-4023, ret="OB_EAGAIN", get_ls_id()={id:1}) [2024-02-19 19:03:31.263798] WARN [STORAGE] do_tx_data_table_freeze_ (ob_tenant_freezer_rpc.cpp:102) [1108354][T1_TNT_L0_G0][T1][YB42AC0103F2-000611B9223790D6-0-0] [lt=7] freeze tx data table failed.(ret=-4023, ret="OB_EAGAIN", arg_=freeze_type:3) [2024-02-19 19:03:31.263805] INFO [STORAGE] do_tx_data_table_freeze_ (ob_tenant_freezer_rpc.cpp:115) [1108354][T1_TNT_L0_G0][T1][YB42AC0103F2-000611B9223790D6-0-0] [lt=6] finish self freeze task in rpc handle thread(ret=-4023, ret="OB_EAGAIN", arg_=freeze_type:3) [2024-02-19 19:03:31.263813] WARN [STORAGE] process (ob_tenant_freezer_rpc.cpp:56) 
[1108354][T1_TNT_L0_G0][T1][YB42AC0103F2-000611B9223790D6-0-0] [lt=5] do tx data table freeze failed.(ret=-4023, ret="OB_EAGAIN", arg_=freeze_type:3) [2024-02-19 19:03:31.263940] INFO [STORAGE] rpc_callback (ob_tenant_freezer.cpp:990) [1106789][RpcIO][T0][Y0-0000000000000000-0-0] [lt=17] [TenantFreezer] call back of tenant freezer request [2024-02-19 19:03:31.264629] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=27] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.264658] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=28] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.266976] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=18] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340611266959}) [2024-02-19 19:03:31.267019] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=47] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340611220907}}) [2024-02-19 19:03:31.274853] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.274901] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=52] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.278831] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1009) 
[1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=77] Wash compute wash size(is_wash_valid=true, sys_total_wash_size=1386106880, global_cache_size=4161536, tenant_max_wash_size=4161536, tenant_min_wash_size=4161536, tenant_ids_=[512, 500, 999, 506, 507, 508, 509, 510, 1]) [2024-02-19 19:03:31.279437] INFO [COMMON] wash (ob_kvcache_store.cpp:343) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=80] Wash time detail, (compute_wash_size_time=3610, refresh_score_time=72, wash_time=455) [2024-02-19 19:03:31.285268] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=33] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.285308] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=44] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.295620] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=27] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.295663] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=52] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.305782] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=36] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, 
log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.305818] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=38] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.316249] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=27] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.316299] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=53] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.321953] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A024-0-0] [lt=9] table not exist(tenant_id=1, database_id=201001, table_name=__all_server, ret=-5019) [2024-02-19 19:03:31.322012] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A024-0-0] [lt=60] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_server, ret=-5019) [2024-02-19 19:03:31.322028] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A024-0-0] [lt=13] fail to resolve table relation recursively(tenant_id=1, ret=-5019) [2024-02-19 19:03:31.322040] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A024-0-0] [lt=11] resolve table relation factor failed(ret=-5019, table_name=__all_server) [2024-02-19 19:03:31.322053] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A024-0-0] [lt=9] fail to resolve table(ret=-5019) [2024-02-19 19:03:31.322064] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A024-0-0] [lt=11] fail to resolve sys view(ret=-5019) [2024-02-19 19:03:31.322095] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A024-0-0] [lt=24] Table 'oceanbase.__all_server' doesn't exist [2024-02-19 
19:03:31.322106] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A024-0-0] [lt=9] resolve base or alias table factor failed(ret=-5019) [2024-02-19 19:03:31.322117] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A024-0-0] [lt=9] resolve basic table failed(ret=-5019) [2024-02-19 19:03:31.322129] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A024-0-0] [lt=11] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-02-19 19:03:31.322141] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A024-0-0] [lt=10] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-02-19 19:03:31.322152] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A024-0-0] [lt=11] resolve normal query failed(ret=-5019) [2024-02-19 19:03:31.322163] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A024-0-0] [lt=9] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073) [2024-02-19 19:03:31.322185] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A024-0-0] [lt=11] failed to resolve(ret=-5019) [2024-02-19 19:03:31.322200] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A024-0-0] [lt=13] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:31.322212] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A024-0-0] [lt=10] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:31.322223] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A024-0-0] [lt=10] fail to handle physical plan(ret=-5019) [2024-02-19 19:03:31.322235] WARN [SQL] stmt_query (ob_sql.cpp:175) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A024-0-0] [lt=8] fail to handle text query(stmt=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882, ret=-5019) [2024-02-19 19:03:31.322249] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A024-0-0] [lt=12] executor execute failed(ret=-5019) [2024-02-19 19:03:31.322263] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A024-0-0] [lt=11] execute failed(ret=-5019, executor={ObIExecutor:, sql:"SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882"}, retry_cnt=0) [2024-02-19 19:03:31.322285] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A024-0-0] [lt=15] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-02-19 19:03:31.322304] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A024-0-0] [lt=16] result set close failed(ret=-5019) [2024-02-19 19:03:31.322312] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A024-0-0] [lt=7] result set close failed(ret=-5019) [2024-02-19 
19:03:31.322318] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A024-0-0] [lt=6] failed to close result(close_ret=-5019, ret=-5019) [2024-02-19 19:03:31.322340] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A024-0-0] [lt=7] failed to process record(executor={ObIExecutor:, sql:"SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882"}, record_ret=-5019, ret=-5019) [2024-02-19 19:03:31.322349] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A024-0-0] [lt=8] failed to process final(executor={ObIExecutor:, sql:"SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882"}, aret=-5019, ret=-5019) [2024-02-19 19:03:31.322358] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=7] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882) [2024-02-19 19:03:31.322367] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=7] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-02-19 19:03:31.322374] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=7] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1) [2024-02-19 19:03:31.322382] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=6] query failed(ret=-5019, conn=0x7fdcdc9be050, start=1708340611321770, sql=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882) [2024-02-19 19:03:31.322395] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=12] read failed(ret=-5019) [2024-02-19 19:03:31.322405] WARN get_my_sql_result_ (ob_table_access_helper.h:329) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=8] GCTX.sql_proxy_ read failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", MTL_ID()=1, tenant_id=1, columns=0x7fdcfffccd78, table=__all_server, condition=where svr_ip='172.1.3.242' and svr_port=2882, sql=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882, columns_str="zone") [2024-02-19 19:03:31.322427] WARN read_single_row (ob_table_access_helper.h:178) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=16] get mysql result failed(ret=-4016, ret="OB_ERR_UNEXPECTED", MTL_ID()=1, columns=0x7fdcfffccd78, table=__all_server, where_condition=where svr_ip='172.1.3.242' and svr_port=2882) [2024-02-19 19:03:31.322494] WARN [COORDINATOR] get_self_zone_name (table_accessor.cpp:517) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=12] get zone from __all_server failed(ret=-4016, ret="OB_ERR_UNEXPECTED", columns=0x7fdcfffccd78, where_condition="where svr_ip='172.1.3.242' and svr_port=2882", zone_name_holder=) [2024-02-19 19:03:31.322504] WARN [COORDINATOR] get_all_ls_election_reference_info (table_accessor.cpp:450) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=9] get self zone name failed(ret=-4016, ret="OB_ERR_UNEXPECTED", all_ls_election_reference_info=[]) [2024-02-19 19:03:31.322512] WARN [COORDINATOR] get_all_ls_election_reference_info (table_accessor.cpp:459) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=8] zone name is empty(ret=-4016, ret="OB_ERR_UNEXPECTED", all_ls_election_reference_info=[]) [2024-02-19 
19:03:31.322521] WARN [COORDINATOR] refresh (ob_leader_coordinator.cpp:107) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=7] get all ls election reference info failed(ret=-4016, ret="OB_ERR_UNEXPECTED", MTL_ID()=1) [2024-02-19 19:03:31.322744] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=13] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:31.322757] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=12] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340611322738}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:31.322771] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=13] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340611322738}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:31.327034] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=36] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.327081] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=50] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.328879] WARN [COORDINATOR] get_ls_election_reference_info (ob_leader_coordinator.cpp:138) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC77-0-0] [lt=127] can not find this ls_id in all_ls_election_reference_info_(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1}, all_ls_election_reference_info=[]) [2024-02-19 19:03:31.328913] WARN [COORDINATOR] refresh_ (election_priority_v1.cpp:144) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC77-0-0] [lt=32] fail to get ls election reference info(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, *this={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}) [2024-02-19 19:03:31.328958] WARN [COORDINATOR] operator() (election_priority_impl.cpp:246) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC77-0-0] [lt=43] refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id_={id:1}, element={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], 
is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}) [2024-02-19 19:03:31.328977] WARN iterate (ob_tuple.h:272) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC77-0-0] [lt=17] assign element failed(ret=-4018, std::get(tuple)={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}) [2024-02-19 19:03:31.328994] WARN [COORDINATOR] refresh (election_priority_impl.cpp:261) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC77-0-0] [lt=16] refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1}, *this={priority:{is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}}) [2024-02-19 19:03:31.337254] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=38] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.337300] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=48] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.338875] INFO [COMMON] replace_fragment_node (ob_kvcache_map.cpp:695) [1106689][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=30] Cache replace map node details(ret=0, replace_node_count=0, replace_time=18456, replace_start_pos=1022320, replace_num=15728) [2024-02-19 19:03:31.340893] INFO [COMMON] clean_garbage_node (ob_kvcache_map.cpp:645) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=35] Cache wash clean map node details(ret=0, clean_node_count=0, clean_time=61415, clean_start_pos=597683, clean_num=31457) [2024-02-19 19:03:31.347438] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, 
log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.347500] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=63] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.357709] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=51] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.357762] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=57] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.361049] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=24] get wrs ts(ls_id={id:1}, delta_ns=-1706042771803934800, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807) [2024-02-19 19:03:31.361078] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=30] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1) [2024-02-19 19:03:31.367613] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=18] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340611367596}) [2024-02-19 19:03:31.367649] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=37] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340611322738}}) [2024-02-19 19:03:31.371974] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=41] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", 
log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.372026] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=53] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.382839] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=32] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.382890] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=54] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.388580] INFO [CLOG] run1 (ob_garbage_collector.cpp:957) [1108320][T1_GCCollector][T1][Y0-0000000000000000-0-0] [lt=8] Garbage Collector is running(seq_=393, gc_interval=10000000) [2024-02-19 19:03:31.388645] INFO [CLOG] construct_server_ls_map_for_member_list_ (ob_garbage_collector.cpp:1054) [1108320][T1_GCCollector][T1][Y0-0000000000000000-0-0] [lt=38] self is leader, skip it(ls->get_ls_id()={id:1}) [2024-02-19 19:03:31.388676] INFO [CLOG] gc_check_member_list_ (ob_garbage_collector.cpp:1014) [1108320][T1_GCCollector][T1][Y0-0000000000000000-0-0] [lt=17] gc_check_member_list_ cost time(ret=0, time_ns=59329) [2024-02-19 19:03:31.388693] INFO [CLOG] execute_gc_ (ob_garbage_collector.cpp:1255) [1108320][T1_GCCollector][T1][Y0-0000000000000000-0-0] [lt=11] execute_gc cost time(ret=0, time_ns=1219) [2024-02-19 19:03:31.388706] INFO [CLOG] gc_check_ls_status_ (ob_garbage_collector.cpp:1200) [1108320][T1_GCCollector][T1][Y0-0000000000000000-0-0] [lt=9] gc_candidates push_back success(ret=0, candidate={ls_id_:{id:1}, ls_status_:1, gc_reason_:0}) [2024-02-19 19:03:31.388727] INFO [CLOG] execute_gc_ (ob_garbage_collector.cpp:1222) [1108320][T1_GCCollector][T1][Y0-0000000000000000-0-0] [lt=13] ls status is normal, skip(id={id:1}, gc_candidates=[{ls_id_:{id:1}, ls_status_:1, gc_reason_:0}]) [2024-02-19 19:03:31.388741] INFO 
[CLOG] execute_gc_ (ob_garbage_collector.cpp:1255) [1108320][T1_GCCollector][T1][Y0-0000000000000000-0-0] [lt=12] execute_gc cost time(ret=0, time_ns=18680) [2024-02-19 19:03:31.393042] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=38] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.393116] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=50] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.405418] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=36] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.405470] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=54] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.415604] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=36] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.415640] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=39] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, 
warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.423303] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=8] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:31.423340] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=37] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340611423291}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:31.423359] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=18] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340611423291}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:31.425802] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=27] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.425849] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=46] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.435988] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=38] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.436039] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=53] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 
0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.442707] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=17] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:31.442742] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=36] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.442761] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=18] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:31.443191] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=27] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:31.443216] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=19] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.443229] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=12] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:31.444014] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=11] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:31.444033] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=18] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.444046] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=12] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:31.444389] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=12] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:31.444406] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=16] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.444418] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=11] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:31.444973] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=10] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:31.444991] WARN [STORAGE.TRANS] 
get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=16] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.445003] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=11] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:31.445309] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=10] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:31.445341] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=31] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.445354] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=12] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:31.445594] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=10] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:31.445605] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=10] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.445613] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=7] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:31.446010] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=10] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:31.446028] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=17] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.446038] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=9] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:31.446169] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=36] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.446202] ERROR [PALF] 
try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=32] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.446345] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=6] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:31.446359] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=13] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.446371] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=10] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:31.446620] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=7] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:31.446638] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=17] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.446649] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=10] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:31.446961] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=9] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:31.446976] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=14] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.446987] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=10] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:31.447276] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=9] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:31.447297] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=18] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.447310] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) 
[1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=11] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:31.447774] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=9] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:31.447787] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=12] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.447795] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=8] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:31.447895] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=9] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:31.447910] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=13] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.447918] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=9] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:31.448395] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=6] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:31.448420] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=24] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.448435] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=14] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:31.448515] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=6] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:31.448532] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=16] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.448540] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=8] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:31.448588] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:527) [1106788][RpcIO][T0][Y0-0000000000000000-0-0] [lt=22] [RPC EASY STAT](log_str=conn count=1/1, request done=19507/19507, request doing=0/0) 
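The PALF pair above (recycle_blocks_ WARN followed by try_recycle_blocks ERROR) repeats roughly every 10 ms through this window, and each occurrence prints its own threshold arithmetic: with a 2048 MB clog disk, the 80% warn size is 1638 MB and the 95% limit size is 1945 MB (integer-truncated), and used_size has reached the limit, the point at which the disk_opts_for_stopping_writing options block further clog writes until blocks can be recycled. A minimal Python sketch, not OceanBase tooling, that counts these entries in a saved copy of this dump and re-derives the sizes from each one (the file name and regex are assumptions about how the dump was saved):

    import re

    # Assumption: this dump is saved as observer.log; entries wrap across
    # physical lines, hence re.DOTALL on the non-greedy match.
    PALF_FULL = re.compile(
        r'ERROR \[PALF\] try_recycle_blocks .*?'
        r'total_size\(MB\)=(\d+), used_size\(MB\)=(\d+)',
        re.DOTALL,
    )

    with open('observer.log') as f:   # assumed name for the saved dump
        text = f.read()

    hits = [(int(t), int(u)) for t, u in PALF_FULL.findall(text)]
    for i, (total, used) in enumerate(hits, 1):
        warn = total * 80 // 100      # 2048 MB * 80% -> 1638 MB, as logged
        limit = total * 95 // 100     # 2048 MB * 95% -> 1945 MB, as logged
        state = 'at limit, writes blocked' if used >= limit else 'below limit'
        print(f'#{i}: used={used} MB, warn={warn} MB, limit={limit} MB ({state})')
    print(f'disk-full errors in window: {len(hits)}')

On this section the counter fires once per ERROR entry and every entry reports used=1945 MB against limit=1945 MB, i.e. already at the stop-writing threshold.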
[2024-02-19 19:03:31.448615] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:527) [1106789][RpcIO][T0][Y0-0000000000000000-0-0] [lt=15] [RPC EASY STAT](log_str=conn count=1/1, request done=19507/19507, request doing=0/0) [2024-02-19 19:03:31.449028] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=11] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.449136] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=7] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.449630] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=19] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.449738] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=18] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.450621] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=15] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.450609] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=24] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.451245] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=29] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.451249] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=42] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.451903] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=18] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.452107] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=18] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.452750] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=26] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.453136] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) 
[1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=20] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.453475] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=18] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.454335] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=22] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.454704] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=16] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.455309] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=15] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.455924] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=20] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.456535] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=22] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.457188] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=20] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.457804] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=21] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.458463] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=18] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.458474] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=38] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.458521] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) 
[1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=48] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.458571] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=19] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.459075] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=18] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.459230] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=14] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.459687] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=24] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.459849] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=25] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.460032] INFO dump (ob_timer_monitor.cpp:200) [1106770][ObTimer][T0][Y0-0000000000000000-0-0] [lt=7] TIMER THREAD STAT: (thread_id=1106660, task_cnt=132, avg_time=4) [2024-02-19 19:03:31.460051] INFO dump (ob_timer_monitor.cpp:200) [1106770][ObTimer][T0][Y0-0000000000000000-0-0] [lt=21] TIMER THREAD STAT: (thread_id=1106689, task_cnt=270, avg_time=21745) [2024-02-19 19:03:31.460060] INFO dump (ob_timer_monitor.cpp:200) [1106770][ObTimer][T0][Y0-0000000000000000-0-0] [lt=8] TIMER THREAD STAT: (thread_id=1106688, task_cnt=247, avg_time=42519) [2024-02-19 19:03:31.460069] INFO dump (ob_timer_monitor.cpp:200) [1106770][ObTimer][T0][Y0-0000000000000000-0-0] [lt=8] TIMER THREAD STAT: (thread_id=1106739, task_cnt=59, avg_time=6236) [2024-02-19 19:03:31.460077] INFO dump (ob_timer_monitor.cpp:200) [1106770][ObTimer][T0][Y0-0000000000000000-0-0] [lt=7] TIMER THREAD STAT: (thread_id=1106653, task_cnt=182, avg_time=3355) [2024-02-19 19:03:31.460086] INFO dump (ob_timer_monitor.cpp:200) [1106770][ObTimer][T0][Y0-0000000000000000-0-0] [lt=8] TIMER THREAD STAT: (thread_id=1106769, task_cnt=60, avg_time=2) [2024-02-19 19:03:31.460094] INFO dump (ob_timer_monitor.cpp:200) [1106770][ObTimer][T0][Y0-0000000000000000-0-0] [lt=8] TIMER THREAD STAT: (thread_id=1106757, task_cnt=62, avg_time=1705) [2024-02-19 19:03:31.460102] INFO dump (ob_timer_monitor.cpp:200) [1106770][ObTimer][T0][Y0-0000000000000000-0-0] [lt=7] TIMER THREAD STAT: (thread_id=1107529, task_cnt=5712, avg_time=103) [2024-02-19 19:03:31.460107] INFO dump (ob_timer_monitor.cpp:200) 
[1106770][ObTimer][T0][Y0-0000000000000000-0-0] [lt=5] TIMER THREAD STAT: (thread_id=1106767, task_cnt=30, avg_time=29) [2024-02-19 19:03:31.460114] INFO dump (ob_timer_monitor.cpp:200) [1106770][ObTimer][T0][Y0-0000000000000000-0-0] [lt=6] TIMER THREAD STAT: (thread_id=1106655, task_cnt=20, avg_time=14) [2024-02-19 19:03:31.460122] INFO dump (ob_timer_monitor.cpp:200) [1106770][ObTimer][T0][Y0-0000000000000000-0-0] [lt=7] TIMER THREAD STAT: (thread_id=1107574, task_cnt=60, avg_time=2) [2024-02-19 19:03:31.460128] INFO dump (ob_timer_monitor.cpp:200) [1106770][ObTimer][T0][Y0-0000000000000000-0-0] [lt=5] TIMER THREAD STAT: (thread_id=1107037, task_cnt=66, avg_time=118) [2024-02-19 19:03:31.460133] INFO dump (ob_timer_monitor.cpp:200) [1106770][ObTimer][T0][Y0-0000000000000000-0-0] [lt=5] TIMER THREAD STAT: (thread_id=1106656, task_cnt=12, avg_time=3708) [2024-02-19 19:03:31.460139] INFO dump (ob_timer_monitor.cpp:200) [1106770][ObTimer][T0][Y0-0000000000000000-0-0] [lt=5] TIMER THREAD STAT: (thread_id=1108321, task_cnt=60, avg_time=1712) [2024-02-19 19:03:31.460145] INFO dump (ob_timer_monitor.cpp:200) [1106770][ObTimer][T0][Y0-0000000000000000-0-0] [lt=5] TIMER THREAD STAT: (thread_id=1107631, task_cnt=21, avg_time=1933574) [2024-02-19 19:03:31.460150] INFO dump (ob_timer_monitor.cpp:200) [1106770][ObTimer][T0][Y0-0000000000000000-0-0] [lt=5] TIMER THREAD STAT: (thread_id=1108333, task_cnt=30, avg_time=11405) [2024-02-19 19:03:31.460156] INFO dump (ob_timer_monitor.cpp:200) [1106770][ObTimer][T0][Y0-0000000000000000-0-0] [lt=5] TIMER THREAD STAT: (thread_id=1107640, task_cnt=6, avg_time=132) [2024-02-19 19:03:31.460161] INFO dump (ob_timer_monitor.cpp:200) [1106770][ObTimer][T0][Y0-0000000000000000-0-0] [lt=5] TIMER THREAD STAT: (thread_id=1108331, task_cnt=12, avg_time=4231) [2024-02-19 19:03:31.460167] INFO dump (ob_timer_monitor.cpp:200) [1106770][ObTimer][T0][Y0-0000000000000000-0-0] [lt=5] TIMER THREAD STAT: (thread_id=1108334, task_cnt=12, avg_time=80) [2024-02-19 19:03:31.460172] INFO dump (ob_timer_monitor.cpp:200) [1106770][ObTimer][T0][Y0-0000000000000000-0-0] [lt=5] TIMER THREAD STAT: (thread_id=1108332, task_cnt=12, avg_time=206) [2024-02-19 19:03:31.460178] INFO dump (ob_timer_monitor.cpp:200) [1106770][ObTimer][T0][Y0-0000000000000000-0-0] [lt=5] TIMER THREAD STAT: (thread_id=1106715, task_cnt=62, avg_time=45) [2024-02-19 19:03:31.460183] INFO dump (ob_timer_monitor.cpp:200) [1106770][ObTimer][T0][Y0-0000000000000000-0-0] [lt=5] TIMER THREAD STAT: (thread_id=1106706, task_cnt=30, avg_time=1) [2024-02-19 19:03:31.460189] INFO dump (ob_timer_monitor.cpp:200) [1106770][ObTimer][T0][Y0-0000000000000000-0-0] [lt=5] TIMER THREAD STAT: (thread_id=1106658, task_cnt=60, avg_time=1578) [2024-02-19 19:03:31.460194] INFO dump (ob_timer_monitor.cpp:200) [1106770][ObTimer][T0][Y0-0000000000000000-0-0] [lt=5] TIMER THREAD STAT: (thread_id=1106747, task_cnt=597, avg_time=46) [2024-02-19 19:03:31.460200] INFO dump (ob_timer_monitor.cpp:200) [1106770][ObTimer][T0][Y0-0000000000000000-0-0] [lt=5] TIMER THREAD STAT: (thread_id=1109105, task_cnt=59, avg_time=22) [2024-02-19 19:03:31.460205] INFO dump (ob_timer_monitor.cpp:200) [1106770][ObTimer][T0][Y0-0000000000000000-0-0] [lt=5] TIMER THREAD STAT: (thread_id=1109116, task_cnt=60, avg_time=3) [2024-02-19 19:03:31.460211] INFO dump (ob_timer_monitor.cpp:200) [1106770][ObTimer][T0][Y0-0000000000000000-0-0] [lt=5] TIMER THREAD STAT: (thread_id=1108363, task_cnt=30, avg_time=813) [2024-02-19 19:03:31.460219] INFO dump (ob_timer_monitor.cpp:200) 
[1106770][ObTimer][T0][Y0-0000000000000000-0-0] [lt=8] TIMER THREAD STAT: (thread_id=1106760, task_cnt=6, avg_time=37) [2024-02-19 19:03:31.460227] INFO dump (ob_timer_monitor.cpp:200) [1106770][ObTimer][T0][Y0-0000000000000000-0-0] [lt=8] TIMER THREAD STAT: (thread_id=1106770, task_cnt=6, avg_time=66) [2024-02-19 19:03:31.460237] INFO dump (ob_timer_monitor.cpp:200) [1106770][ObTimer][T0][Y0-0000000000000000-0-0] [lt=9] TIMER THREAD STAT: (thread_id=1106748, task_cnt=6, avg_time=114) [2024-02-19 19:03:31.460242] INFO dump (ob_timer_monitor.cpp:200) [1106770][ObTimer][T0][Y0-0000000000000000-0-0] [lt=5] TIMER THREAD STAT: (thread_id=1106758, task_cnt=6, avg_time=9087) [2024-02-19 19:03:31.460248] INFO dump (ob_timer_monitor.cpp:200) [1106770][ObTimer][T0][Y0-0000000000000000-0-0] [lt=5] TIMER THREAD STAT: (thread_id=1108341, task_cnt=2, avg_time=29841) [2024-02-19 19:03:31.460253] INFO dump (ob_timer_monitor.cpp:200) [1106770][ObTimer][T0][Y0-0000000000000000-0-0] [lt=5] TIMER THREAD STAT: (thread_id=1106763, task_cnt=1, avg_time=4) [2024-02-19 19:03:31.460268] INFO dump (ob_timer_monitor.cpp:200) [1106770][ObTimer][T0][Y0-0000000000000000-0-0] [lt=15] TIMER THREAD STAT: (thread_id=1108318, task_cnt=1, avg_time=17) [2024-02-19 19:03:31.460274] INFO dump (ob_timer_monitor.cpp:200) [1106770][ObTimer][T0][Y0-0000000000000000-0-0] [lt=5] TIMER THREAD STAT: (thread_id=1108340, task_cnt=0, avg_time=0) [2024-02-19 19:03:31.460280] INFO dump (ob_timer_monitor.cpp:200) [1106770][ObTimer][T0][Y0-0000000000000000-0-0] [lt=5] TIMER THREAD STAT: (thread_id=1108322, task_cnt=0, avg_time=0) [2024-02-19 19:03:31.460285] INFO dump (ob_timer_monitor.cpp:200) [1106770][ObTimer][T0][Y0-0000000000000000-0-0] [lt=5] TIMER THREAD STAT: (thread_id=1109106, task_cnt=0, avg_time=0) [2024-02-19 19:03:31.460373] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=15] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.460495] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=20] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.460970] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=16] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.461073] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=24] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.461641] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=18] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.461683] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=22] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 
19:03:31.462303] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=22] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.462418] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=23] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.462917] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=23] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.463026] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=26] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.463926] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=25] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.464540] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=18] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.465041] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=17] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.465467] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=23] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.465532] WARN [STORAGE.TRANS] get_cluster_service_master_ (ob_tenant_weak_read_service.cpp:287) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=21] tenant schema is not ready, need wait(ret=-4076, ret="OB_NEED_WAIT", superior_tenant_id=1) [2024-02-19 19:03:31.465567] WARN [STORAGE.TRANS] post_cluster_heartbeat_rpc_ (ob_tenant_weak_read_service.cpp:797) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=33] get cluster service master fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, cluster_service_tablet_id={id:226}) [2024-02-19 19:03:31.465583] WARN [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:438) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15] tenant weak read service cluster heartbeat RPC fail(rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.1.3.242:2882", cluster_service_tablet_id={id:226}) [2024-02-19 19:03:31.465630] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=28] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, 
ls_id:{id:1}, add_timestamp:1708340611465612}) [2024-02-19 19:03:31.465654] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:756) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=20] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version=1708336686671726824, valid_part_count=1, total_part_count=1, generate_timestamp=1708340611465516) [2024-02-19 19:03:31.465670] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:766) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1708340611260989, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-02-19 19:03:31.465694] INFO [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:738) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] [SELF_CHECK] current server is WRS leader, need start CLUSTER weak read service(tenant_id=1, serve_leader_epoch=0, cur_leader_epoch=138, cluster_service_tablet_id_={id:226}, in_service=false, can_update_version=false, start_service_tstamp_=0, error_count_for_change_leader_=0, last_error_tstamp_for_change_leader_=0) [2024-02-19 19:03:31.465727] INFO [STORAGE.TRANS] start_service (ob_tenant_weak_read_cluster_service.cpp:336) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] begin start service(tenant_id=1, is_in_service()=false, can_update_version=false) [2024-02-19 19:03:31.465739] INFO [STORAGE.TRANS] start_service (ob_tenant_weak_read_cluster_service.cpp:338) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10] start TenantWeakReadClusterService(tenant_id=1) [2024-02-19 19:03:31.466141] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=26] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.466262] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=18] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.466752] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=18] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.466881] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=15] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.467493] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=17] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.467542] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] 
[lt=25] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.468474] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=19] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.468670] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=30] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.468690] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=20] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.468727] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=17] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340611468491}) [2024-02-19 19:03:31.468743] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=12] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340611423291}}) [2024-02-19 19:03:31.468782] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EA-0-0] [lt=9] table not exist(tenant_id=1, database_id=201001, table_name=__all_weak_read_service, ret=-5019) [2024-02-19 19:03:31.468803] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EA-0-0] [lt=20] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_weak_read_service, ret=-5019) [2024-02-19 19:03:31.468814] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EA-0-0] [lt=11] fail to resolve table relation recursively(tenant_id=1, ret=-5019) [2024-02-19 19:03:31.468824] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EA-0-0] [lt=9] resolve table relation factor failed(ret=-5019, table_name=__all_weak_read_service) [2024-02-19 19:03:31.468837] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EA-0-0] [lt=8] fail to resolve table(ret=-5019) [2024-02-19 19:03:31.468846] 
WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EA-0-0] [lt=8] fail to resolve sys view(ret=-5019) [2024-02-19 19:03:31.468859] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EA-0-0] [lt=7] Table 'oceanbase.__all_weak_read_service' doesn't exist [2024-02-19 19:03:31.468869] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EA-0-0] [lt=8] resolve base or alias table factor failed(ret=-5019) [2024-02-19 19:03:31.468878] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EA-0-0] [lt=9] resolve basic table failed(ret=-5019) [2024-02-19 19:03:31.468887] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EA-0-0] [lt=8] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-02-19 19:03:31.468896] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EA-0-0] [lt=8] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-02-19 19:03:31.468905] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EA-0-0] [lt=8] resolve normal query failed(ret=-5019) [2024-02-19 19:03:31.468916] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EA-0-0] [lt=9] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073) [2024-02-19 19:03:31.469002] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EA-0-0] [lt=77] failed to resolve(ret=-5019) [2024-02-19 19:03:31.469015] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EA-0-0] [lt=11] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:31.469026] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EA-0-0] [lt=9] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:31.469036] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EA-0-0] [lt=9] fail to handle physical plan(ret=-5019) [2024-02-19 19:03:31.469047] WARN [SQL] stmt_query (ob_sql.cpp:175) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EA-0-0] [lt=8] fail to handle text query(stmt=select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = '', ret=-5019) [2024-02-19 19:03:31.469059] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EA-0-0] [lt=10] executor execute failed(ret=-5019) [2024-02-19 19:03:31.469069] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EA-0-0] [lt=10] execute failed(ret=-5019, executor={ObIExecutor:, sql:"select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = ''"}, retry_cnt=0) [2024-02-19 19:03:31.469088] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) 
[1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EA-0-0] [lt=12] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-02-19 19:03:31.469106] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EA-0-0] [lt=15] result set close failed(ret=-5019) [2024-02-19 19:03:31.469115] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EA-0-0] [lt=8] result set close failed(ret=-5019) [2024-02-19 19:03:31.469124] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EA-0-0] [lt=8] failed to close result(close_ret=-5019, ret=-5019) [2024-02-19 19:03:31.469152] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EA-0-0] [lt=8] failed to process record(executor={ObIExecutor:, sql:"select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = ''"}, record_ret=-5019, ret=-5019) [2024-02-19 19:03:31.469165] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EA-0-0] [lt=11] failed to process final(executor={ObIExecutor:, sql:"select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = ''"}, aret=-5019, ret=-5019) [2024-02-19 19:03:31.469176] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10] execute sql failed(ret=-5019, tenant_id=1, sql=select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = '') [2024-02-19 19:03:31.469188] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-02-19 19:03:31.469198] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1) [2024-02-19 19:03:31.469208] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8] query failed(ret=-5019, conn=0x7fdcdc924050, start=1708340611468565, sql=select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = '') [2024-02-19 19:03:31.469221] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13] read failed(ret=-5019) [2024-02-19 19:03:31.469232] WARN [STORAGE.TRANS] query_cluster_version_range_ (ob_tenant_weak_read_cluster_service.cpp:192) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8] execute sql read fail(ret=-5019, ret="OB_TABLE_NOT_EXIST", exec_tenant_id=1, tenant_id=1, sql=select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = '') [2024-02-19 19:03:31.469318] WARN [STORAGE.TRANS] start_service (ob_tenant_weak_read_cluster_service.cpp:367) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12] query cluster version range from WRS table fail(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-02-19 19:03:31.469335] INFO 
[STORAGE.TRANS] start_service (ob_tenant_weak_read_cluster_service.cpp:415) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] start service done(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, in_service=false, leader_epoch=0, current_version=0, delta=1708340611469330, min_version=0, max_version=0, max_stale_time=5000000000, all_valid_server_count=0, total_time=3620, wlock_time=37, check_leader_time=2, query_version_time=0, persist_version_time=0)
[2024-02-19 19:03:31.469355] WARN [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:781) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=18] start CLUSTER weak read service fail(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1)
[2024-02-19 19:03:31.469356] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=15] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:31.469367] INFO [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:791) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] [SELF_CHECK] done(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, need_start_service=true, need_stop_service=false, need_change_leader=false, is_in_service()=false, can_update_version=false, cur_leader_epoch=138, start_service_tstamp_=0, error_count_for_change_leader_=0, last_error_tstamp_for_change_leader_=0)
[2024-02-19 19:03:31.469429] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13] get wrs ts(ls_id={id:1}, delta_ns=-1706042771803826563, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807)
[2024-02-19 19:03:31.469444] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1)
[2024-02-19 19:03:31.470023] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=28] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:31.470288] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=18] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:31.470632] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=21] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:31.470897] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=21] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
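
The two INFO lines from ob_ls_wrs_handler.cpp and ob_keep_alive_ls_handler.cpp above quantify how stale this tenant is: timestamp=1707751112415295196 is a nanosecond weak-read timestamp from roughly 2024-02-12, while the surrounding entries are from 2024-02-19, so the weak-read snapshot is about 589,000 seconds (~6.8 days) behind; min_tx_service_ts=9223372036854775807 is simply INT64_MAX, meaning no live transaction is bounding it. A quick way to decode such values, as a sketch assuming OceanBase's usec_to_time() extension in MySQL mode (the DIV 1000 converts the nanosecond value to microseconds; the second literal is the add_timestamp from this capture):

  SELECT usec_to_time(1707751112415295196 DIV 1000) AS weak_read_ts,
         usec_to_time(1708340611465612)             AS capture_ts,
         (1708340611465612 - 1707751112415295196 DIV 1000) / 1000000 / 86400 AS lag_days;
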
[2024-02-19 19:03:31.471246] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=20] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:31.471508] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=24] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:31.471847] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=17] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:31.472141] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=20] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:31.472450] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=17] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:31.472819] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=79] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:31.473068] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=19] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:31.473492] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=21] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:31.473886] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=23] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:31.474133] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=22] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:31.475417] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=26] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:31.476028] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=25] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:31.476640] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=20] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:31.477318] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=26] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:31.477934] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=21] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:31.479289] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=25] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:31.479319] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=30] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:31.479388] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=26] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:31.480348] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=14] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:31.481085] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=29] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:31.481514] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=23] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:31.481729] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=25] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
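
The ERROR entry above is the root of everything else in this capture: tenant 1's clog disk is at its hard limit (used_size 1945 MB of total_size 2048 MB, exactly the 95% limit_percent), and PALF finds no blocks to recycle because the base LSN of log stream 1 has not advanced since oldest_timestamp (~2024-02-06). With the clog disk full, PALF stops accepting writes, the log stream cannot sustain a leader, and every timestamp request then fails with OB_NOT_MASTER (-4038), which is the storm of get_gts_from_local_timestamp_service_ warnings. The first remediation step is to give the log disk room. A minimal recovery sketch, assuming OceanBase 4.x parameter and unit-config names (the unit name sys_unit_config and the GV$OB_LOG_STAT view are assumptions; adjust sizes to the space actually available under /backup/oceanbase/data/clog):

  SHOW PARAMETERS LIKE 'log_disk%';
  -- enlarge the server-level clog budget:
  ALTER SYSTEM SET log_disk_size = '8G';
  -- or, if the tenant unit's quota is the binding limit:
  ALTER RESOURCE UNIT sys_unit_config LOG_DISK_SIZE = '8G';
  -- afterwards, verify that LS 1 regains a leader:
  SELECT tenant_id, ls_id, svr_ip, role FROM oceanbase.GV$OB_LOG_STAT WHERE tenant_id = 1;
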
[2024-02-19 19:03:31.482191] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=21] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:31.482450] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=38] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:31.483154] INFO [STORAGE.TRANS] run1 (ob_xa_trans_heartbeat_worker.cpp:82) [1108327][T1_ObXAHbWorker][T1][Y0-0000000000000000-0-0] [lt=36] XA scheduler heartbeat task statistics(avg_time=1)
[2024-02-19 19:03:31.483225] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=31] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:31.483556] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=27] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:31.484416] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=15] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:31.485802] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=21] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:31.486382] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=24] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:31.487070] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=25] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:31.487683] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=25] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:31.490450] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=21] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:31.491096] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=31] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:31.491851] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=21]
global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:31.495565] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.495611] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=48] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.506163] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.506208] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=47] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.516335] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=42] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.516371] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, 
maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.524129] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=9] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:31.524173] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=34] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340611524115}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:31.524196] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=21] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340611524115}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:31.526579] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=29] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.526627] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=50] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.534645] INFO [SQL.PC] update_memory_conf (ob_plan_cache.cpp:1499) [1106739][PlanCacheEvict][T1][Y0-0000000000000000-0-0] [lt=1013] update plan cache memory config(ob_plan_cache_percentage=5, ob_plan_cache_evict_high_percentage=90, ob_plan_cache_evict_low_percentage=50, tenant_id=1) [2024-02-19 19:03:31.534708] INFO [SQL.PC] cache_evict (ob_plan_cache.cpp:1130) [1106739][PlanCacheEvict][T1][Y0-0000000000000000-0-0] [lt=62] start lib cache evict(tenant_id=1, mem_hold=2097152, mem_limit=107374180, cache_obj_num=2, cache_node_num=2) [2024-02-19 19:03:31.534728] INFO [SQL.PC] cache_evict (ob_plan_cache.cpp:1147) [1106739][PlanCacheEvict][T1][Y0-0000000000000000-0-0] [lt=16] end lib cache evict(tenant_id=1, cache_evict_num=0, mem_hold=2097152, mem_limit=107374180, cache_obj_num=2, cache_node_num=2) [2024-02-19 19:03:31.536758] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=36] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", 
disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.536802] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=45] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.538872] INFO [SQL.PC] runTimerTask (ob_plan_cache_manager.cpp:291) [1106739][PlanCacheEvict][T0][Y0-0000000000000000-0-0] [lt=12] schedule next cache evict task(evict_interval=1000000) [2024-02-19 19:03:31.542175] INFO [SQL.PC] runTimerTask (ob_plan_cache_manager.cpp:299) [1106739][PlanCacheEvict][T0][Y0-0000000000000000-0-0] [lt=28] schedule next cache evict task(evict_interval=1000000) [2024-02-19 19:03:31.542361] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1009) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=47] Wash compute wash size(is_wash_valid=true, sys_total_wash_size=1386106880, global_cache_size=4161536, tenant_max_wash_size=4161536, tenant_min_wash_size=4161536, tenant_ids_=[512, 500, 999, 506, 507, 508, 509, 510, 1]) [2024-02-19 19:03:31.542498] INFO [COMMON] wash (ob_kvcache_store.cpp:343) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=52] Wash time detail, (compute_wash_size_time=508, refresh_score_time=79, wash_time=7) [2024-02-19 19:03:31.543979] INFO [SHARE.LOCATION] dump_cache (ob_ls_location_service.cpp:1011) [1106748][DumpLSLoc][T0][Y0-0000000000000000-0-0] [lt=53] [LOCATION_CACHE]dump tenant ls location caches(tenant_id=1, tenant_ls_locations=[{cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1}, renew_time:1708340611468580, replica_locations:[{server:"172.1.3.242:2882", role:1, sql_port:2881, replica_type:0, property:{memstore_percent_:100}, restore_status:{status:0}}]}]) [2024-02-19 19:03:31.547143] INFO [ARCHIVE] stop (ob_archive_scheduler_service.cpp:137) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=16] stop archive scheduler service [2024-02-19 19:03:31.549251] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAA-0-0] [lt=9] table not exist(tenant_id=1, database_id=201001, table_name=__all_backup_info, ret=-5019) [2024-02-19 19:03:31.549274] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAA-0-0] [lt=23] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_backup_info, ret=-5019) [2024-02-19 19:03:31.549284] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAA-0-0] [lt=8] fail to resolve table relation recursively(tenant_id=1, ret=-5019) [2024-02-19 19:03:31.549292] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAA-0-0] [lt=7] resolve table relation factor failed(ret=-5019, 
table_name=__all_backup_info) [2024-02-19 19:03:31.549306] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAA-0-0] [lt=9] fail to resolve table(ret=-5019) [2024-02-19 19:03:31.549316] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAA-0-0] [lt=10] fail to resolve sys view(ret=-5019) [2024-02-19 19:03:31.549329] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAA-0-0] [lt=9] Table 'oceanbase.__all_backup_info' doesn't exist [2024-02-19 19:03:31.549336] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAA-0-0] [lt=6] resolve basic table failed(ret=-5019) [2024-02-19 19:03:31.549347] WARN [SQL.RESV] resolve_table_list (ob_update_resolver.cpp:423) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAA-0-0] [lt=9] failed to resolve table(ret=-5019) [2024-02-19 19:03:31.549354] WARN [SQL.RESV] resolve (ob_update_resolver.cpp:76) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAA-0-0] [lt=6] resolve table failed(ret=-5019) [2024-02-19 19:03:31.549364] WARN [SQL.RESV] stmt_resolver_func (ob_resolver.cpp:155) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAA-0-0] [lt=8] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3074) [2024-02-19 19:03:31.549386] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAA-0-0] [lt=10] failed to resolve(ret=-5019) [2024-02-19 19:03:31.549401] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAA-0-0] [lt=15] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:31.549417] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAA-0-0] [lt=12] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:31.549431] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAA-0-0] [lt=13] fail to handle physical plan(ret=-5019) [2024-02-19 19:03:31.549445] WARN [SQL] stmt_query (ob_sql.cpp:175) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAA-0-0] [lt=11] fail to handle text query(stmt=update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882', ret=-5019) [2024-02-19 19:03:31.549485] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAA-0-0] [lt=13] executor execute failed(ret=-5019) [2024-02-19 19:03:31.549501] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAA-0-0] [lt=40] execute failed(ret=-5019, executor={ObIExecutor:, sql:"update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882'"}, retry_cnt=0) [2024-02-19 19:03:31.549524] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAA-0-0] [lt=17] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-02-19 19:03:31.549547] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) 
[1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAA-0-0] [lt=19] result set close failed(ret=-5019) [2024-02-19 19:03:31.549557] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAA-0-0] [lt=9] result set close failed(ret=-5019) [2024-02-19 19:03:31.549566] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAA-0-0] [lt=8] failed to close result(close_ret=-5019, ret=-5019) [2024-02-19 19:03:31.549595] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAA-0-0] [lt=10] failed to process record(executor={ObIExecutor:, sql:"update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882'"}, record_ret=-5019, ret=-5019) [2024-02-19 19:03:31.549609] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1106733][BackupLease][T0][YB42AC0103F2-000611B923978EAA-0-0] [lt=12] failed to process final(executor={ObIExecutor:, sql:"update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882'"}, aret=-5019, ret=-5019) [2024-02-19 19:03:31.549621] WARN [SERVER] execute_write_inner (ob_inner_sql_connection.cpp:1818) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=10] execute sql failed(ret=-5019, tenant_id=1, sql=update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882') [2024-02-19 19:03:31.549634] INFO [SERVER] execute_write_inner (ob_inner_sql_connection.cpp:1900) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=11] execute write sql(ret=-5019, tenant_id=1, affected_rows=0, sql=update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882') [2024-02-19 19:03:31.549672] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=10] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-02-19 19:03:31.549687] WARN [SERVER] execute_write (ob_inner_sql_connection.cpp:1786) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=13] execute_write failed(ret=-5019, tenant_id=1, sql=update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882', is_user_sql=false) [2024-02-19 19:03:31.549701] WARN [SERVER] execute_write (ob_inner_sql_connection.cpp:1775) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=12] execute_write failed(ret=-5019, tenant_id=1, sql="update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882'") [2024-02-19 19:03:31.549714] WARN [COMMON.MYSQLP] write (ob_mysql_proxy.cpp:133) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=11] execute sql failed(ret=-5019, conn=0x7fdd189bc050, start=1708340611547266, sql=update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882') [2024-02-19 19:03:31.549768] WARN [SERVER] clean_backup_scheduler_leader (ob_backup_operator.cpp:348) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=15] execute sql failed(ret=-5019, sql=update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882') [2024-02-19 19:03:31.549786] WARN [SERVER] clean_backup_scheduler_leader (ob_backup_manager.cpp:517) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=15] failed to clean backup scheduler leader(ret=-5019) 
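
Note the failure mode in this stretch: every inner SQL statement returns ret=-5019 (OB_TABLE_NOT_EXIST) for core tables such as __all_weak_read_service and __all_backup_info. These tables have not been dropped; with log stream 1 unable to elect a leader, tenant 1's schema cannot be refreshed, so the resolver simply cannot see them, and every background service that depends on them (weak-read service, backup lease) fails the same way. Once the clog disk is freed and LS 1 has a leader again, the very statements quoted in the log should succeed, which makes them a convenient recheck (the oceanbase. schema prefix is added here for running from a normal sys-tenant client session):

  select name, value from oceanbase.__all_backup_info where name = 'backup_scheduler_leader';
  select min_version, max_version from oceanbase.__all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = '';
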
[2024-02-19 19:03:31.550255] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=33] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.550283] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=28] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.560405] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=23] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.560456] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=53] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.565653] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=18] get wrs ts(ls_id={id:1}, delta_ns=-1706042771803730023, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807) [2024-02-19 19:03:31.565689] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=37] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1) [2024-02-19 19:03:31.568569] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=16] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340611568545}) [2024-02-19 19:03:31.568607] INFO 
[STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=39] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340611524115}}) [2024-02-19 19:03:31.570278] INFO [COMMON] replace_fragment_node (ob_kvcache_map.cpp:695) [1106689][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=31] Cache replace map node details(ret=0, replace_node_count=0, replace_time=31280, replace_start_pos=1038048, replace_num=15728) [2024-02-19 19:03:31.570864] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=50] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.570914] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=50] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.580828] WARN [COORDINATOR] get_ls_election_reference_info (ob_leader_coordinator.cpp:138) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC78-0-0] [lt=127] can not find this ls_id in all_ls_election_reference_info_(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1}, all_ls_election_reference_info=[]) [2024-02-19 19:03:31.580884] WARN [COORDINATOR] refresh_ (election_priority_v1.cpp:144) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC78-0-0] [lt=48] fail to get ls election reference info(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, *this={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}) [2024-02-19 19:03:31.580907] WARN [COORDINATOR] operator() (election_priority_impl.cpp:246) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC78-0-0] [lt=30] refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id_={id:1}, element={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}) [2024-02-19 19:03:31.580940] WARN iterate (ob_tuple.h:272) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC78-0-0] [lt=15] assign element failed(ret=-4018, std::get(tuple)={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, 
in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}) [2024-02-19 19:03:31.580955] WARN [COORDINATOR] refresh (election_priority_impl.cpp:261) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC78-0-0] [lt=14] refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1}, *this={priority:{is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}}) [2024-02-19 19:03:31.581493] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=62] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.581522] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=29] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.591081] INFO [COMMON] clean_garbage_node (ob_kvcache_map.cpp:645) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=27] Cache wash clean map node details(ret=0, clean_node_count=0, clean_time=48547, clean_start_pos=629140, clean_num=31457) [2024-02-19 19:03:31.591656] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=32] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.591696] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=39] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.603298] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=58] there is not any block can be 
recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.603344] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=48] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.613485] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.613856] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=372] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.614518] WARN [PALF] runTimerTask (block_gc_timer_task.cpp:98) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=657] try_recycle_blocks cost too much time(ret=0, cost_ts_ns=1045277, palf_env_impl_={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.625164] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=10] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:31.625206] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=44] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340611625148}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:31.625235] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) 
[1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=26] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340611625148}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:31.625387] INFO [STORAGE] runTimerTask (ob_checkpoint_service.cpp:284) [1108332][T1_Flush][T1][Y0-0000000000000000-0-0] [lt=17] ====== traversal_flush timer task ====== [2024-02-19 19:03:31.625456] INFO [COMMON] inner_add_dag (ob_dag_scheduler.cpp:3156) [1108332][T1_Flush][T1][Y0-0000000000000000-0-0] [lt=15] add dag success(dag=0x7fdcbf492150, start_time=0, id=Y0-0000000000000000-0-0, dag->hash()=-6387791410797819470, dag_cnt=1, dag_type_cnts=1) [2024-02-19 19:03:31.625474] INFO [STORAGE] flush (ob_tx_data_memtable.cpp:460) [1108332][T1_Flush][T1][Y0-0000000000000000-0-0] [lt=17] schedule flush tx data memtable task done(ret=0, ret="OB_SUCCESS", param={merge_type:"MINI_MERGE", merge_version:0, ls_id:{id:1}, tablet_id:{id:49402}, report_:null, for_diagnose:false}, this={ObITable:{this:0x7fdce5eea080, key:{tablet_id:{id:49402}, column_group_idx:0, table_type:"TX_DATA_MEMTABLE", log_ts_range:{start_log_ts:1707209832347541976, end_log_ts:1707211523951845333}}, ref_cnt:4, upper_trans_version:-4007, timestamp:0}, this:0x7fdce5eea080, is_inited:true, is_iterating:false, has_constructed_list:true, min_tx_log_ts:1707209832548318068, max_tx_log_ts:1707211523951845333, min_start_log_ts:1707209832548318068, snapshot_version:1707209832548318068, inserted_cnt:9892, write_ref:0, occupied_size:1345312, state:2, tx_data_map:0x7fdcf4082eb0, memtable_mgr:0x7fdce89de180}) [2024-02-19 19:03:31.625507] INFO [STORAGE.TABLELOCK] get_rec_log_ts (ob_lock_memtable.cpp:739) [1108332][T1_Flush][T1][Y0-0000000000000000-0-0] [lt=30] rec_log_ts of ObLockMemtable is (rec_log_ts_=9223372036854775807, flushed_log_ts_=1707033175148098668, freeze_log_ts_=0, max_committed_log_ts_=-1, is_frozen_=false, ls_id_={id:1}) [2024-02-19 19:03:31.625518] INFO [STORAGE.TABLELOCK] flush (ob_lock_memtable.cpp:803) [1108332][T1_Flush][T1][Y0-0000000000000000-0-0] [lt=11] lock memtable no need to flush(rec_log_ts=9223372036854775807, recycle_log_ts=9223372036854775807, is_frozen_=false, ls_id_={id:1}) [2024-02-19 19:03:31.625529] INFO [STORAGE] runTimerTask (ob_checkpoint_service.cpp:316) [1108332][T1_Flush][T1][Y0-0000000000000000-0-0] [lt=7] succeed to traversal_flush(ret=0, ls_cnt=1) [2024-02-19 19:03:31.625595] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=22] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.625627] INFO [SERVER] add_task (ob_sys_task_stat.cpp:140) [1107630][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=23] succeed to add sys task(task={start_time:1708340611625616, task_id:YB42AC0103F2-000611B9225784E6-0-0, task_type:4, svr_ip:"172.1.3.242:2882", tenant_id:1, is_cancel:false, comment:"MINI_MERGE dag: ls_id=1 tablet_id=49402"}) [2024-02-19 19:03:31.625641] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) 
[1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=45] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.625672] INFO [COMMON] schedule_one (ob_dag_scheduler.cpp:2777) [1107630][T1_DagScheduler][T1][YB42AC0103F2-000611B9225784E6-0-0] [lt=42] schedule one task(task={this:0x7fdcbf494150, type:15, status:2, dag:{this:0x7fdcbf492150, type:3, name:"TX_TABLE_MERGE", id:YB42AC0103F2-000611B9225784E6-0-0, dag_ret:0, dag_status:2, start_time:1708340611625667, running_task_cnt:1, indegree:0, hash:-6387791410797819470}}, priority="PRIO_COMPACTION_HIGH", total_running_task_cnt=1, running_task_cnts_[priority]=1, low_limits_[priority]=6, up_limits_[priority]=6, task->get_dag()->get_dag_net()=NULL) [2024-02-19 19:03:31.625911] WARN [STORAGE] inner_get_neighbour_major_freeze (ob_tenant_freeze_info_mgr.cpp:328) [1107605][T1_TX_TABLE_MER][T1][YB42AC0103F2-000611B9225784E6-0-0] [lt=17] no freeze info in curr info_list(ret=-4018, cur_idx_=0, info_list_[0]=[], info_list_[1]=[]) [2024-02-19 19:03:31.625930] WARN [STORAGE] get_neighbour_freeze_info (ob_partition_merge_policy.cpp:69) [1107605][T1_TX_TABLE_MER][T1][YB42AC0103F2-000611B9225784E6-0-0] [lt=20] Failed to get freeze info, use snapshot_gc_ts instead(ret=-4018, snapshot_version=1707182812529881377) [2024-02-19 19:03:31.625944] INFO [STORAGE] ready_for_flush (ob_tx_data_memtable.cpp:431) [1107605][T1_TX_TABLE_MER][T1][YB42AC0103F2-000611B9225784E6-0-0] [lt=10] memtable is frozen yet.(this=0x7fdce5eea080) [2024-02-19 19:03:31.625955] INFO [STORAGE] find_mini_merge_tables (ob_partition_merge_policy.cpp:151) [1107605][T1_TX_TABLE_MER][T1][YB42AC0103F2-000611B9225784E6-0-0] [lt=10] skip active memtable(i=1, memtable={ObITable:{this:0x7fdce5eea360, key:{tablet_id:{id:49402}, column_group_idx:0, table_type:"TX_DATA_MEMTABLE", log_ts_range:{start_log_ts:1707211523951845333, end_log_ts:9223372036854775807}}, ref_cnt:3, upper_trans_version:-4007, timestamp:0}, this:0x7fdce5eea360, is_inited:true, is_iterating:false, has_constructed_list:false, min_tx_log_ts:1707211524163571591, max_tx_log_ts:1707751112415295196, min_start_log_ts:1707211524163571591, snapshot_version:9223372036854775807, inserted_cnt:3168808, write_ref:0, occupied_size:430988896, state:0, tx_data_map:0x7fdcdeaf81b0, memtable_mgr:0x7fdce89de180}, memtable_handles=[{table:0x7fdce5eea080, t3m_:0x7fdd18bce030, allocator_:null, table_type_:1}, {table:0x7fdce5eea360, t3m_:0x7fdd18bce030, allocator_:null, table_type_:1}]) [2024-02-19 19:03:31.626018] INFO [STORAGE.COMPACTION] get_storage_schema_to_merge (ob_tablet_merge_ctx.cpp:1044) [1107605][T1_TX_TABLE_MER][T1][YB42AC0103F2-000611B9225784E6-0-0] [lt=55] get storage schema to merge(ls_id={id:1}, tablet_id={id:49402}, schema_ctx={base_schema_version:1, schema_version:1, merge_schema:{ObIMultiSourceDataUnit:{is_tx_end:false, unsynced_cnt_for_multi_data:0, sync_finish:true}, this:0x7fdce2c57e10, version:0, is_use_bloomfilter:0, compat_mode:0, table_type:3, index_type:0, index_status:1, row_store_type:1, schema_version:1, column_cnt:5, tablet_size:134217728, pctfree:10, block_size:16384, progressive_merge_round:0, 
master_key_id:18446744073709551615, compressor_type:1, encryption:"", encrypt_key:"", rowkey_array:[{column_idx:16, meta_type:{type:"BIGINT", collation:"binary", coercibility:"NUMERIC"}, order:0}, {column_idx:17, meta_type:{type:"BIGINT", collation:"binary", coercibility:"NUMERIC"}, order:0}], column_array:[{meta_type:{type:"BIGINT", collation:"binary", coercibility:"NUMERIC"}, is_column_stored_in_sstable:1, is_rowkey_column:1, is_generated_column:0, orig_default_value:{"NULL":"NULL"}}, {meta_type:{type:"BIGINT", collation:"binary", coercibility:"NUMERIC"}, is_column_stored_in_sstable:1, is_rowkey_column:1, is_generated_column:0, orig_default_value:{"NULL":"NULL"}}, {meta_type:{type:"BIGINT", collation:"binary", coercibility:"NUMERIC"}, is_column_stored_in_sstable:1, is_rowkey_column:0, is_generated_column:0, orig_default_value:{"NULL":"NULL"}}, {meta_type:{type:"BIGINT", collation:"binary", coercibility:"NUMERIC"}, is_column_stored_in_sstable:1, is_rowkey_column:0, is_generated_column:0, orig_default_value:{"NULL":"NULL"}}, {meta_type:{type:"CHAR", collation:"binary", coercibility:"INVALID"}, is_column_stored_in_sstable:1, is_rowkey_column:0, is_generated_column:0, orig_default_value:{"NULL":"NULL"}}]}}, get_storage_schema_flag=true, get_schema_on_memtable=false) [2024-02-19 19:03:31.626090] INFO [STORAGE] init (ob_partition_parallel_merge_ctx.cpp:104) [1107605][T1_TX_TABLE_MER][T1][YB42AC0103F2-000611B9225784E6-0-0] [lt=66] Succ to init parallel merge ctx(enable_parallel_minor_merge=true, tablet_size=134217728, merge_ctx.param_={merge_type:"MINI_MERGE", merge_version:0, ls_id:{id:1}, tablet_id:{id:49402}, report_:null, for_diagnose:false}) [2024-02-19 19:03:31.626111] INFO [STORAGE.COMPACTION] build_merge_ctx (ob_tx_table_merge_task.cpp:224) [1107605][T1_TX_TABLE_MER][T1][YB42AC0103F2-000611B9225784E6-0-0] [lt=14] succeed to build merge ctx(tablet_id={id:49402}, ctx={param:{merge_type:"MINI_MERGE", merge_version:0, ls_id:{id:1}, tablet_id:{id:49402}, report_:null, for_diagnose:false}, sstable_version_range:{multi_version_start:1707209832548318068, base_version:-1, snapshot_version:1707209832548318068}, create_snapshot_version:0, is_full_merge:true, merge_level:0, progressive_merge_num:0, parallel_merge_ctx:{parallel_type:3, range_array:[{start_key:{datum_cnt:1, group_idx:0, hash:0, [idx=0:MIN_OBJ,]store_rowkey:MIN}, end_key:{datum_cnt:1, group_idx:0, hash:0, [idx=0:MAX_OBJ,]store_rowkey:MAX}, group_idx:0, border_flag:{flag:0}}], concurrent_cnt:1, is_inited:true}, schema_ctx:{base_schema_version:1, schema_version:1, merge_schema:{ObIMultiSourceDataUnit:{is_tx_end:false, unsynced_cnt_for_multi_data:0, sync_finish:true}, this:0x7fdce2c57e10, version:0, is_use_bloomfilter:0, compat_mode:0, table_type:3, index_type:0, index_status:1, row_store_type:1, schema_version:1, column_cnt:5, tablet_size:134217728, pctfree:10, block_size:16384, progressive_merge_round:0, master_key_id:18446744073709551615, compressor_type:1, encryption:"", encrypt_key:"", rowkey_array:[{column_idx:16, meta_type:{type:"BIGINT", collation:"binary", coercibility:"NUMERIC"}, order:0}, {column_idx:17, meta_type:{type:"BIGINT", collation:"binary", coercibility:"NUMERIC"}, order:0}], column_array:[{meta_type:{type:"BIGINT", collation:"binary", coercibility:"NUMERIC"}, is_column_stored_in_sstable:1, is_rowkey_column:1, is_generated_column:0, orig_default_value:{"NULL":"NULL"}}, {meta_type:{type:"BIGINT", collation:"binary", coercibility:"NUMERIC"}, is_column_stored_in_sstable:1, is_rowkey_column:1, is_generated_column:0, 
orig_default_value:{"NULL":"NULL"}}, {meta_type:{type:"BIGINT", collation:"binary", coercibility:"NUMERIC"}, is_column_stored_in_sstable:1, is_rowkey_column:0, is_generated_column:0, orig_default_value:{"NULL":"NULL"}}, {meta_type:{type:"BIGINT", collation:"binary", coercibility:"NUMERIC"}, is_column_stored_in_sstable:1, is_rowkey_column:0, is_generated_column:0, orig_default_value:{"NULL":"NULL"}}, {meta_type:{type:"CHAR", collation:"binary", coercibility:"INVALID"}, is_column_stored_in_sstable:1, is_rowkey_column:0, is_generated_column:0, orig_default_value:{"NULL":"NULL"}}]}}, tables_handle count:1, progressive_merge_round:0, progressive_merge_step:0, tables_handle:{meta_mem_mgr_:0x7fdd18bce030, allocator_:null, tablet_id:{id:49402}, table_count:1, [{i:0, table_key:{tablet_id:{id:49402}, column_group_idx:0, table_type:"TX_DATA_MEMTABLE", log_ts_range:{start_log_ts:1707209832347541976, end_log_ts:1707211523951845333}}, ref:5}]}, schedule_major:false, log_ts_range:{start_log_ts:1707209832347541976, end_log_ts:1707211523951845333}, merge_log_ts:9223372036854775807, read_base_version:0, ls_handle:{ls_map_:0x7fdd02d92040, ls_:0x7fdce89de150, mod_:1}, tablet_handle:{obj:0x7fdce2c578d0, obj_pool:0x7fdd18bdf8b0, wash_priority:0}, merge_progress:NULL, compaction_filter:NULL, time_guard:total=0us, rebuild_seq:0}) [2024-02-19 19:03:31.626270] INFO [STORAGE.COMPACTION] process (ob_tx_table_merge_task.cpp:164) [1107605][T1_TX_TABLE_MER][T1][YB42AC0103F2-000611B9225784E6-0-0] [lt=98] succeed to generate merge task(task={this:0x7fdcbf494150, type:15, status:2, dag:{this:0x7fdcbf492150, type:3, name:"TX_TABLE_MERGE", id:YB42AC0103F2-000611B9225784E6-0-0, dag_ret:0, dag_status:2, start_time:1708340611625667, running_task_cnt:1, indegree:0, hash:-6387791410797819470}}) [2024-02-19 19:03:31.626291] INFO [COMMON] do_work (ob_dag_scheduler.cpp:244) [1107605][T1_TX_TABLE_MER][T1][YB42AC0103F2-000611B9225784E6-0-0] [lt=19] task finish process(ret=0, start_time=1708340611625886, end_time=1708340611626288, runtime=402, *this={this:0x7fdcbf494150, type:15, status:2, dag:{this:0x7fdcbf492150, type:3, name:"TX_TABLE_MERGE", id:YB42AC0103F2-000611B9225784E6-0-0, dag_ret:0, dag_status:2, start_time:1708340611625667, running_task_cnt:1, indegree:0, hash:-6387791410797819470}}) [2024-02-19 19:03:31.628611] INFO [COMMON] schedule_one (ob_dag_scheduler.cpp:2777) [1107630][T1_DagScheduler][T1][YB42AC0103F2-000611B9225784E6-0-0] [lt=53] schedule one task(task={this:0x7fdcbf494290, type:1, status:2, dag:{this:0x7fdcbf492150, type:3, name:"TX_TABLE_MERGE", id:YB42AC0103F2-000611B9225784E6-0-0, dag_ret:0, dag_status:2, start_time:1708340611625667, running_task_cnt:1, indegree:0, hash:-6387791410797819470}}, priority="PRIO_COMPACTION_HIGH", total_running_task_cnt=1, running_task_cnts_[priority]=1, low_limits_[priority]=6, up_limits_[priority]=6, task->get_dag()->get_dag_net()=NULL) [2024-02-19 19:03:31.628848] INFO [STORAGE] prepare_tx_data_list (ob_tx_data_memtable.cpp:245) [1107593][T1_TX_TABLE_MER][T1][YB42AC0103F2-000611B9225784E6-0-0] [lt=17] construct tx data memtable scan iterator more than once(this={ObITable:{this:0x7fdce5eea080, key:{tablet_id:{id:49402}, column_group_idx:0, table_type:"TX_DATA_MEMTABLE", log_ts_range:{start_log_ts:1707209832347541976, end_log_ts:1707211523951845333}}, ref_cnt:4, upper_trans_version:-4007, timestamp:0}, this:0x7fdce5eea080, is_inited:true, is_iterating:true, has_constructed_list:true, min_tx_log_ts:1707209832548318068, max_tx_log_ts:1707211523951845333, 
min_start_log_ts:1707209832548318068, snapshot_version:1707209832548318068, inserted_cnt:9892, write_ref:0, occupied_size:1345312, state:2, tx_data_map:0x7fdcf4082eb0, memtable_mgr:0x7fdce89de180}) [2024-02-19 19:03:31.633318] INFO [STORAGE] init_merge_iters (ob_partition_merger.cpp:1350) [1107593][T1_TX_TABLE_MER][T1][YB42AC0103F2-000611B9225784E6-0-0] [lt=33] Succ to init iter(ret=0, i=0, merge_iter={ObPartitionMinorRowMergeIter:{tablet_id:{id:49402}, iter_end:false, schema_rowkey_column_cnt:2, schema_version:1, merge_range:{start_key:{datum_cnt:1, group_idx:0, hash:0, [idx=0:MIN_OBJ,]store_rowkey:MIN}, end_key:{datum_cnt:1, group_idx:0, hash:0, [idx=0:MAX_OBJ,]store_rowkey:MAX}, group_idx:0, border_flag:{flag:0}}, curr_row_:NULL, store_ctx:{this:0x7fdcd7cd8ef8, ls_id:{id:1}, ls:null, timeout:9223372036854775807, tablet_id:{id:0}, table_iter:null, table_version:9223372036854775807, mvcc_acc_ctx:{type:1, abs_lock_timeout:9223372036854775807, tx_lock_timeout:-1, snapshot:{version:9223372036854775805, tx_id:{txid:0}, scn:-1}, tx_table_guard:{tx_table:0x7fdce89e38d0, epoch:0}, tx_id:{txid:0}, tx_desc:NULL, tx_ctx:null, mem_ctx:null, tx_scn:-1}, log_ts:9223372036854775807}, row_iter_:{type:0, is_sstable_iter:false, block_row_store:null}, iter_row_count:0, iter_idx:0, is_inited:true, last_macro_block_reused:false, is_rowkey_first_row_reused:false, table_:{ObITable:{this:0x7fdce5eea080, key:{tablet_id:{id:49402}, column_group_idx:0, table_type:"TX_DATA_MEMTABLE", log_ts_range:{start_log_ts:1707209832347541976, end_log_ts:1707211523951845333}}, ref_cnt:4, upper_trans_version:-4007, timestamp:0}, this:0x7fdce5eea080, is_inited:true, is_iterating:true, has_constructed_list:true, min_tx_log_ts:1707209832548318068, max_tx_log_ts:1707211523951845333, min_start_log_ts:1707209832548318068, snapshot_version:1707209832548318068, inserted_cnt:9892, write_ref:0, occupied_size:1345312, state:2, tx_data_map:0x7fdcf4082eb0, memtable_mgr:0x7fdce89de180}}, ghost_row_count:0, check_committing_trans_compacted:true, row_queue:{col_cnt:7, cur_pos:0, count:0}}, fuser.get_multi_version_column_ids()=[column_id=16 {type:"BIGINT", collation:"binary", coercibility:"NUMERIC"} order=0, column_id=17 {type:"BIGINT", collation:"binary", coercibility:"NUMERIC"} order=0, column_id=7 {type:"BIGINT", collation:"binary", coercibility:"NUMERIC"} order=0, column_id=8 {type:"BIGINT", collation:"binary", coercibility:"NUMERIC"} order=0, column_id=18 {type:"BIGINT", collation:"binary", coercibility:"NUMERIC"} order=0, column_id=19 {type:"BIGINT", collation:"binary", coercibility:"NUMERIC"} order=0, column_id=20 {type:"CHAR", collation:"binary", coercibility:"INVALID"} order=0]) [2024-02-19 19:03:31.634677] ERROR [SHARE] alloc_block (ob_local_device.cpp:716) [1107593][T1_TX_TABLE_MER][T1][YB42AC0103F2-000611B9225784E6-0-0] [lt=431] Fail to alloc block, (ret=-4184, free_block_cnt_=0, total_block_cnt_=4096) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3c0be9f 0x3c0bba9 0x3c0b9b0 0x3c0b802 0xa8ec31a 0xa8ec068 0x8bdae8d 0x8c8e928 0x8c891b2 0x8c85c90 0x8c84908 0x8c84729 0x95f221d 0x95e465e 0x95f5e89 0x95f55f7 0x95f3dd3 0x962ed0e 0x99df2ec 0x3b0e734 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.634829] WARN [STORAGE.BLKMGR] alloc_block (ob_block_manager.cpp:305) [1107593][T1_TX_TABLE_MER][T1][YB42AC0103F2-000611B9225784E6-0-0] [lt=115] Failed to alloc block from io device(ret=-4184) [2024-02-19 19:03:31.634843] WARN [STORAGE] alloc_block (ob_macro_block_writer.cpp:1132) 
[1107593][T1_TX_TABLE_MER][T1][YB42AC0103F2-000611B9225784E6-0-0] [lt=14] Fail to pre-alloc block for new macro block(ret=-4184, current_index=0, current_macro_seq=0) [2024-02-19 19:03:31.634855] WARN [STORAGE] write_micro_block (ob_macro_block_writer.cpp:910) [1107593][T1_TX_TABLE_MER][T1][YB42AC0103F2-000611B9225784E6-0-0] [lt=11] Fail to pre-alloc block(ret=-4184) [2024-02-19 19:03:31.634866] WARN [STORAGE] build_micro_block (ob_macro_block_writer.cpp:766) [1107593][T1_TX_TABLE_MER][T1][YB42AC0103F2-000611B9225784E6-0-0] [lt=8] fail to write micro block (ret=-4184, micro_block_desc={last_rowkey:{datum_cnt:4, group_idx:0, hash:0, [idx=0:{len: 8, flag: 0, null: 0, ptr: 0x7fdcbf43a070, hex: 3CAD720200000000, int: 41069884},idx=1:{len: 8, flag: 0, null: 0, ptr: 0x7fdcbf43a0a8, hex: 0000000000000000, int: 0},idx=2:{len: 8, flag: 0, null: 0, ptr: 0x7fdcbf43a0e0, hex: 00F0FFFFFFFFFFFF, int: -4096},idx=3:{len: 8, flag: 0, null: 0, ptr: 0x7fdcbf43a118, hex: 0000000000000000, int: 0},]store_rowkey:}, header:{magic:1005, version:1, header_size:64, header_checksum:6347, column_count:7, rowkey_column_count:4, has_column_checksum:0, row_count:164, row_store_type:0, opt:1, var_column_count:0, row_offset:15808, original_length:16404, max_merged_trans_version:4096, data_length:16404, data_zlength:16404, data_checksum:3804533969, column_checksums:null, is_valid():true}, buf:0x7fdca4e04090, buf_size:16404, data_size:16404, row_count:164, column_count:7, max_merged_trans_version:4096, macro_id:[9223372036854775807](ver=0,mode=0,seq=0), block_offset:0, block_checksum:4100783609, row_count_delta:164, contain_uncommitted_row:false, can_mark_deletion:false, has_out_row_column:false, original_size:16404}) [2024-02-19 19:03:31.634915] WARN [STORAGE] append_row (ob_macro_block_writer.cpp:462) [1107593][T1_TX_TABLE_MER][T1][YB42AC0103F2-000611B9225784E6-0-0] [lt=45] Fail to build micro block, (ret=-4184) [2024-02-19 19:03:31.634925] WARN [STORAGE] append_row (ob_macro_block_writer.cpp:391) [1107593][T1_TX_TABLE_MER][T1][YB42AC0103F2-000611B9225784E6-0-0] [lt=9] Fail to append row(ret=-4184) [2024-02-19 19:03:31.634933] WARN [STORAGE] inner_process (ob_partition_merger.cpp:1270) [1107593][T1_TX_TABLE_MER][T1][YB42AC0103F2-000611B9225784E6-0-0] [lt=8] Failed to append row to macro writer(ret=-4184) [2024-02-19 19:03:31.634943] WARN [STORAGE] process (ob_partition_merger.cpp:261) [1107593][T1_TX_TABLE_MER][T1][YB42AC0103F2-000611B9225784E6-0-0] [lt=9] Failed to inner append row(ret=-4184) [2024-02-19 19:03:31.634953] WARN [STORAGE] merge_single_iter (ob_partition_merger.cpp:1496) [1107593][T1_TX_TABLE_MER][T1][YB42AC0103F2-000611B9225784E6-0-0] [lt=9] Failed to process row(ret=-4184, cur_row={row_flag:{flag:"INSERT", flag_type:0}, trans_id:{txid:0}, scan_index:0, mvcc_row_flag:{first:1, uncommitted:0, shadow:0, compact:1, ghost:0, last:1, reserved:0, flag:41}, snapshot_version:0, fast_filter_skipped:false, have_uncommited_row:false, group_idx:0, count:7, datum_buffer:{capacity:32, datums:0x7fdcd7cf0260, local_datums:0x7fdcd7cf0260}[col_id=0:{len: 8, flag: 0, null: 0, ptr: 0x7fdcd7cf0270, hex: 3CAD720200000000, int: 41069884},col_id=1:{len: 8, flag: 0, null: 0, ptr: 0x7fdcd7cf02a8, hex: 0000000000000000, int: 0},col_id=2:{len: 8, flag: 0, null: 0, ptr: 0x7fdcd7cf02e0, hex: 00F0FFFFFFFFFFFF, int: -4096},col_id=3:{len: 8, flag: 0, null: 0, ptr: 0x7fdcd7cf0318, hex: 0000000000000000, int: 0},col_id=4:{len: 8, flag: 0, null: 0, ptr: 0x7fdcd7cf0350, hex: 0100000000000000, int: 1, num_digit0: 0},col_id=5:{len: 8, flag: 0, 
null: 0, ptr: 0x7fdcd7cf0388, hex: 2BC76409533AB117, int: 1707209862064424747},col_id=6:{len: 43, flag: 0, null: 0, ptr: 0x7fdcd1cf0060, hex: 0129018480808000BCDACA1301AB8E93CBB0CACED817AB8E93CBB0CACED817AB8E93CBB0CACED817010100},]}, merge_iter={ObPartitionMinorRowMergeIter:{tablet_id:{id:49402}, iter_end:false, schema_rowkey_column_cnt:2, schema_version:1, merge_range:{start_key:{datum_cnt:1, group_idx:0, hash:0, [idx=0:MIN_OBJ,]store_rowkey:MIN}, end_key:{datum_cnt:1, group_idx:0, hash:0, [idx=0:MAX_OBJ,]store_rowkey:MAX}, group_idx:0, border_flag:{flag:0}}, curr_row_:{row_flag:{flag:"INSERT", flag_type:0}, trans_id:{txid:0}, scan_index:0, mvcc_row_flag:{first:1, uncommitted:0, shadow:0, compact:1, ghost:0, last:1, reserved:0, flag:41}, snapshot_version:0, fast_filter_skipped:false, have_uncommited_row:false, group_idx:0, count:7, datum_buffer:{capacity:32, datums:0x7fdcd7cf0260, local_datums:0x7fdcd7cf0260}[col_id=0:{len: 8, flag: 0, null: 0, ptr: 0x7fdcd7cf0270, hex: 3CAD720200000000, int: 41069884},col_id=1:{len: 8, flag: 0, null: 0, ptr: 0x7fdcd7cf02a8, hex: 0000000000000000, int: 0},col_id=2:{len: 8, flag: 0, null: 0, ptr: 0x7fdcd7cf02e0, hex: 00F0FFFFFFFFFFFF, int: -4096},col_id=3:{len: 8, flag: 0, null: 0, ptr: 0x7fdcd7cf0318, hex: 0000000000000000, int: 0},col_id=4:{len: 8, flag: 0, null: 0, ptr: 0x7fdcd7cf0350, hex: 0100000000000000, int: 1, num_digit0: 0},col_id=5:{len: 8, flag: 0, null: 0, ptr: 0x7fdcd7cf0388, hex: 2BC76409533AB117, int: 1707209862064424747},col_id=6:{len: 43, flag: 0, null: 0, ptr: 0x7fdcd1cf0060, hex: 0129018480808000BCDACA1301AB8E93CBB0CACED817AB8E93CBB0CACED817AB8E93CBB0CACED817010100},]}, store_ctx:{this:0x7fdcd7cd8ef8, ls_id:{id:1}, ls:null, timeout:9223372036854775807, tablet_id:{id:0}, table_iter:null, table_version:9223372036854775807, mvcc_acc_ctx:{type:1, abs_lock_timeout:9223372036854775807, tx_lock_timeout:-1, snapshot:{version:9223372036854775805, tx_id:{txid:0}, scn:-1}, tx_table_guard:{tx_table:0x7fdce89e38d0, epoch:0}, tx_id:{txid:0}, tx_desc:NULL, tx_ctx:null, mem_ctx:null, tx_scn:-1}, log_ts:9223372036854775807}, row_iter_:{type:0, is_sstable_iter:false, block_row_store:null}, iter_row_count:164, iter_idx:0, is_inited:true, last_macro_block_reused:false, is_rowkey_first_row_reused:false, table_:{ObITable:{this:0x7fdce5eea080, key:{tablet_id:{id:49402}, column_group_idx:0, table_type:"TX_DATA_MEMTABLE", log_ts_range:{start_log_ts:1707209832347541976, end_log_ts:1707211523951845333}}, ref_cnt:4, upper_trans_version:-4007, timestamp:0}, this:0x7fdce5eea080, is_inited:true, is_iterating:true, has_constructed_list:true, min_tx_log_ts:1707209832548318068, max_tx_log_ts:1707211523951845333, min_start_log_ts:1707209832548318068, snapshot_version:1707209832548318068, inserted_cnt:9892, write_ref:0, occupied_size:1345312, state:2, tx_data_map:0x7fdcf4082eb0, memtable_mgr:0x7fdce89de180}}, ghost_row_count:0, check_committing_trans_compacted:true, row_queue:{col_cnt:7, cur_pos:0, count:0}}) [2024-02-19 19:03:31.635077] WARN [STORAGE] merge_same_rowkey_iters (ob_partition_merger.cpp:1773) [1107593][T1_TX_TABLE_MER][T1][YB42AC0103F2-000611B9225784E6-0-0] [lt=121] Failed to merge single merge iter(ret=-4184) [2024-02-19 19:03:31.635150] INFO [STORAGE] scheduler_ls_ha_handler_ (ob_storage_ha_service.cpp:186) [1108342][T1_HAService][T1][Y0-0000000000000000-0-0] [lt=46] start do ls ha handler(ls_id_array_=[{id:1}]) [2024-02-19 19:03:31.635088] WARN [STORAGE] merge_partition (ob_partition_merger.cpp:1431) 
[1107593][T1_TX_TABLE_MER][T1][YB42AC0103F2-000611B9225784E6-0-0] [lt=10] Failed to merge iters with same rowkey(ret=-4184, merge_iters=[{ObPartitionMinorRowMergeIter:{tablet_id:{id:49402}, iter_end:false, schema_rowkey_column_cnt:2, schema_version:1, merge_range:{start_key:{datum_cnt:1, group_idx:0, hash:0, [idx=0:MIN_OBJ,]store_rowkey:MIN}, end_key:{datum_cnt:1, group_idx:0, hash:0, [idx=0:MAX_OBJ,]store_rowkey:MAX}, group_idx:0, border_flag:{flag:0}}, curr_row_:{row_flag:{flag:"INSERT", flag_type:0}, trans_id:{txid:0}, scan_index:0, mvcc_row_flag:{first:1, uncommitted:0, shadow:0, compact:1, ghost:0, last:1, reserved:0, flag:41}, snapshot_version:0, fast_filter_skipped:false, have_uncommited_row:false, group_idx:0, count:7, datum_buffer:{capacity:32, datums:0x7fdcd7cf0260, local_datums:0x7fdcd7cf0260}[col_id=0:{len: 8, flag: 0, null: 0, ptr: 0x7fdcd7cf0270, hex: 3CAD720200000000, int: 41069884},col_id=1:{len: 8, flag: 0, null: 0, ptr: 0x7fdcd7cf02a8, hex: 0000000000000000, int: 0},col_id=2:{len: 8, flag: 0, null: 0, ptr: 0x7fdcd7cf02e0, hex: 00F0FFFFFFFFFFFF, int: -4096},col_id=3:{len: 8, flag: 0, null: 0, ptr: 0x7fdcd7cf0318, hex: 0000000000000000, int: 0},col_id=4:{len: 8, flag: 0, null: 0, ptr: 0x7fdcd7cf0350, hex: 0100000000000000, int: 1, num_digit0: 0},col_id=5:{len: 8, flag: 0, null: 0, ptr: 0x7fdcd7cf0388, hex: 2BC76409533AB117, int: 1707209862064424747},col_id=6:{len: 43, flag: 0, null: 0, ptr: 0x7fdcd1cf0060, hex: 0129018480808000BCDACA1301AB8E93CBB0CACED817AB8E93CBB0CACED817AB8E93CBB0CACED817010100},]}, store_ctx:{this:0x7fdcd7cd8ef8, ls_id:{id:1}, ls:null, timeout:9223372036854775807, tablet_id:{id:0}, table_iter:null, table_version:9223372036854775807, mvcc_acc_ctx:{type:1, abs_lock_timeout:9223372036854775807, tx_lock_timeout:-1, snapshot:{version:9223372036854775805, tx_id:{txid:0}, scn:-1}, tx_table_guard:{tx_table:0x7fdce89e38d0, epoch:0}, tx_id:{txid:0}, tx_desc:NULL, tx_ctx:null, mem_ctx:null, tx_scn:-1}, log_ts:9223372036854775807}, row_iter_:{type:0, is_sstable_iter:false, block_row_store:null}, iter_row_count:164, iter_idx:0, is_inited:true, last_macro_block_reused:false, is_rowkey_first_row_reused:false, table_:{ObITable:{this:0x7fdce5eea080, key:{tablet_id:{id:49402}, column_group_idx:0, table_type:"TX_DATA_MEMTABLE", log_ts_range:{start_log_ts:1707209832347541976, end_log_ts:1707211523951845333}}, ref_cnt:4, upper_trans_version:-4007, timestamp:0}, this:0x7fdce5eea080, is_inited:true, is_iterating:true, has_constructed_list:true, min_tx_log_ts:1707209832548318068, max_tx_log_ts:1707211523951845333, min_start_log_ts:1707209832548318068, snapshot_version:1707209832548318068, inserted_cnt:9892, write_ref:0, occupied_size:1345312, state:2, tx_data_map:0x7fdcf4082eb0, memtable_mgr:0x7fdce89de180}}, ghost_row_count:0, check_committing_trans_compacted:true, row_queue:{col_cnt:7, cur_pos:0, count:0}}]) [2024-02-19 19:03:31.635330] INFO [STORAGE.COMPACTION] clean_iters_and_reset (ob_partition_merger.cpp:353) [1107593][T1_TX_TABLE_MER][T1][YB42AC0103F2-000611B9225784E6-0-0] [lt=238] partition merge iter row count(i=0, row_count=164, ghost_row_count=0, pkey={tablet_id:{id:49402}, column_group_idx:0, table_type:"TX_DATA_MEMTABLE", log_ts_range:{start_log_ts:1707209832347541976, end_log_ts:1707211523951845333}}, table={ObITable:{this:0x7fdce5eea080, key:{tablet_id:{id:49402}, column_group_idx:0, table_type:"TX_DATA_MEMTABLE", log_ts_range:{start_log_ts:1707209832347541976, end_log_ts:1707211523951845333}}, ref_cnt:4, upper_trans_version:-4007, timestamp:0}, 
this:0x7fdce5eea080, is_inited:true, is_iterating:true, has_constructed_list:true, min_tx_log_ts:1707209832548318068, max_tx_log_ts:1707211523951845333, min_start_log_ts:1707209832548318068, snapshot_version:1707209832548318068, inserted_cnt:9892, write_ref:0, occupied_size:1345312, state:2, tx_data_map:0x7fdcf4082eb0, memtable_mgr:0x7fdce89de180}) [2024-02-19 19:03:31.635436] WARN [STORAGE] process (ob_tablet_merge_task.cpp:1226) [1107593][T1_TX_TABLE_MER][T1][YB42AC0103F2-000611B9225784E6-0-0] [lt=26] failed to merge partition(ret=-4184) [2024-02-19 19:03:31.635450] WARN [STORAGE] process (ob_tablet_merge_task.cpp:1238) [1107593][T1_TX_TABLE_MER][T1][YB42AC0103F2-000611B9225784E6-0-0] [lt=13] failed to merge(ret=-4184, ctx_->param_={merge_type:"MINI_MERGE", merge_version:0, ls_id:{id:1}, tablet_id:{id:49402}, report_:null, for_diagnose:false}, idx_=0) [2024-02-19 19:03:31.635467] WARN [COMMON] do_work (ob_dag_scheduler.cpp:238) [1107593][T1_TX_TABLE_MER][T1][YB42AC0103F2-000611B9225784E6-0-0] [lt=14] failed to process task(ret=-4184) [2024-02-19 19:03:31.635477] INFO [COMMON] do_work (ob_dag_scheduler.cpp:244) [1107593][T1_TX_TABLE_MER][T1][YB42AC0103F2-000611B9225784E6-0-0] [lt=9] task finish process(ret=-4184, start_time=1708340611628766, end_time=1708340611635475, runtime=6709, *this={this:0x7fdcbf494290, type:1, status:2, dag:{this:0x7fdcbf492150, type:3, name:"TX_TABLE_MERGE", id:YB42AC0103F2-000611B9225784E6-0-0, dag_ret:0, dag_status:2, start_time:1708340611625667, running_task_cnt:1, indegree:0, hash:-6387791410797819470}}) [2024-02-19 19:03:31.635501] WARN [COMMON] run1 (ob_dag_scheduler.cpp:1395) [1107593][T1_TX_TABLE_MER][T1][YB42AC0103F2-000611B9225784E6-0-0] [lt=21] failed to do work(ret=-4184, *task_={this:0x7fdcbf494290, type:1, status:2, dag:{this:0x7fdcbf492150, type:3, name:"TX_TABLE_MERGE", id:YB42AC0103F2-000611B9225784E6-0-0, dag_ret:0, dag_status:2, start_time:1708340611625667, running_task_cnt:1, indegree:0, hash:-6387791410797819470}}, compat_mode=0) [2024-02-19 19:03:31.635535] INFO [COMMON] finish_dag_ (ob_dag_scheduler.cpp:2351) [1107593][T1_TX_TABLE_MER][T1][YB42AC0103F2-000611B9225784E6-0-0] [lt=16] dag finished(dag_ret=-4184, runtime=9865, dag_cnt=0, dag_cnts_[dag.get_type()]=0, &dag=0x7fdcbf492150, dag={this:0x7fdcbf492150, type:3, name:"TX_TABLE_MERGE", id:YB42AC0103F2-000611B9225784E6-0-0, dag_ret:-4184, dag_status:5, start_time:1708340611625667, running_task_cnt:0, indegree:0, hash:-6387791410797819470}) [2024-02-19 19:03:31.635561] INFO [SERVER] del_task (ob_sys_task_stat.cpp:169) [1107593][T1_TX_TABLE_MER][T1][YB42AC0103F2-000611B9225784E6-0-0] [lt=19] succeed to del sys task(removed_task={start_time:1708340611625616, task_id:YB42AC0103F2-000611B9225784E6-0-0, task_type:4, svr_ip:"172.1.3.242:2882", tenant_id:1, is_cancel:false, comment:"MINI_MERGE dag: ls_id=1 tablet_id=49402"}) [2024-02-19 19:03:31.635797] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.635833] ERROR 
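A second, independent resource exhaustion surfaces amid the PALF noise above: ob_local_device reports free_block_cnt_=0 of total_block_cnt_=4096, so the MINI_MERGE dag for tablet 49402 aborts with ret=-4184 at every layer up to finish_dag_ (dag_ret=-4184). If the macro block size is the usual 2 MB default (an assumption; the block size is not printed here), 4096 blocks is an 8 GB data file that is completely allocated. A sketch under that assumption, with datafile_size as the presumed knob and an illustrative value:

    SELECT 4096 * 2 AS assumed_datafile_mb;   -- 8192 MB, with zero blocks free
    ALTER SYSTEM SET datafile_size = '16G';   -- hypothetical resize; grow-only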
[PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.645950] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=29] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.646028] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=80] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.646433] INFO [STORAGE] runTimerTask (ob_locality_manager.cpp:634) [1106760][LocalityReload][T0][Y0-0000000000000000-0-0] [lt=17] runTimer to refresh locality_info(ret=0) [2024-02-19 19:03:31.647462] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1106759][LocltyRefTask][T1][YB42AC0103F2-000611B92337831E-0-0] [lt=7] table not exist(tenant_id=1, database_id=201001, table_name=__all_server, ret=-5019) [2024-02-19 19:03:31.647483] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1106759][LocltyRefTask][T1][YB42AC0103F2-000611B92337831E-0-0] [lt=20] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_server, ret=-5019) [2024-02-19 19:03:31.647494] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1106759][LocltyRefTask][T1][YB42AC0103F2-000611B92337831E-0-0] [lt=9] fail to resolve table relation recursively(tenant_id=1, ret=-5019) [2024-02-19 19:03:31.647504] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1106759][LocltyRefTask][T1][YB42AC0103F2-000611B92337831E-0-0] [lt=9] resolve table relation factor failed(ret=-5019, table_name=__all_server) [2024-02-19 19:03:31.647515] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1106759][LocltyRefTask][T1][YB42AC0103F2-000611B92337831E-0-0] [lt=8] fail to resolve table(ret=-5019) [2024-02-19 19:03:31.647523] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1106759][LocltyRefTask][T1][YB42AC0103F2-000611B92337831E-0-0] [lt=7] fail to resolve sys view(ret=-5019) [2024-02-19 19:03:31.647535] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) 
[1106759][LocltyRefTask][T1][YB42AC0103F2-000611B92337831E-0-0] [lt=7] Table 'oceanbase.__all_server' doesn't exist [2024-02-19 19:03:31.647543] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1106759][LocltyRefTask][T1][YB42AC0103F2-000611B92337831E-0-0] [lt=7] resolve base or alias table factor failed(ret=-5019) [2024-02-19 19:03:31.647552] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1106759][LocltyRefTask][T1][YB42AC0103F2-000611B92337831E-0-0] [lt=8] resolve basic table failed(ret=-5019) [2024-02-19 19:03:31.647560] WARN [SQL.RESV] resolve_joined_table_item (ob_dml_resolver.cpp:2504) [1106759][LocltyRefTask][T1][YB42AC0103F2-000611B92337831E-0-0] [lt=7] resolve table failed(ret=-5019) [2024-02-19 19:03:31.647569] WARN [SQL.RESV] resolve_joined_table (ob_dml_resolver.cpp:2107) [1106759][LocltyRefTask][T1][YB42AC0103F2-000611B92337831E-0-0] [lt=7] resolve joined table item failed(ret=-5019) [2024-02-19 19:03:31.647577] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1980) [1106759][LocltyRefTask][T1][YB42AC0103F2-000611B92337831E-0-0] [lt=8] resolve joined table failed(ret=-5019) [2024-02-19 19:03:31.647585] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1106759][LocltyRefTask][T1][YB42AC0103F2-000611B92337831E-0-0] [lt=7] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-02-19 19:03:31.647593] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1106759][LocltyRefTask][T1][YB42AC0103F2-000611B92337831E-0-0] [lt=7] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-02-19 19:03:31.647601] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1106759][LocltyRefTask][T1][YB42AC0103F2-000611B92337831E-0-0] [lt=7] resolve normal query failed(ret=-5019) [2024-02-19 19:03:31.647609] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1106759][LocltyRefTask][T1][YB42AC0103F2-000611B92337831E-0-0] [lt=7] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073) [2024-02-19 19:03:31.647625] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1106759][LocltyRefTask][T1][YB42AC0103F2-000611B92337831E-0-0] [lt=8] failed to resolve(ret=-5019) [2024-02-19 19:03:31.647634] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1106759][LocltyRefTask][T1][YB42AC0103F2-000611B92337831E-0-0] [lt=9] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:31.647645] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1106759][LocltyRefTask][T1][YB42AC0103F2-000611B92337831E-0-0] [lt=8] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:31.647653] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1106759][LocltyRefTask][T1][YB42AC0103F2-000611B92337831E-0-0] [lt=7] fail to handle physical plan(ret=-5019) [2024-02-19 19:03:31.647663] WARN [SQL] stmt_query (ob_sql.cpp:175) [1106759][LocltyRefTask][T1][YB42AC0103F2-000611B92337831E-0-0] [lt=8] fail to handle text query(stmt=select svr_ip, svr_port, a.zone, info, value, b.name, a.status, a.start_service_time, a.stop_time from __all_server a LEFT JOIN __all_zone b ON a.zone = b.zone WHERE (b.name = 'region' or b.name = 'idc' or b.name = 'status' or b.name = 'zone_type') and a.zone != '' order by svr_ip, svr_port, b.name, ret=-5019) [2024-02-19 19:03:31.647673] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1106759][LocltyRefTask][T1][YB42AC0103F2-000611B92337831E-0-0] [lt=8] executor execute failed(ret=-5019) [2024-02-19 
19:03:31.647682] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1106759][LocltyRefTask][T1][YB42AC0103F2-000611B92337831E-0-0] [lt=9] execute failed(ret=-5019, executor={ObIExecutor:, sql:"select svr_ip, svr_port, a.zone, info, value, b.name, a.status, a.start_service_time, a.stop_time from __all_server a LEFT JOIN __all_zone b ON a.zone = b.zone WHERE (b.name = 'region' or b.name = 'idc' or b.name = 'status' or b.name = 'zone_type') and a.zone != '' order by svr_ip, svr_port, b.name"}, retry_cnt=0) [2024-02-19 19:03:31.647699] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1106759][LocltyRefTask][T1][YB42AC0103F2-000611B92337831E-0-0] [lt=12] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-02-19 19:03:31.647716] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1106759][LocltyRefTask][T1][YB42AC0103F2-000611B92337831E-0-0] [lt=14] result set close failed(ret=-5019) [2024-02-19 19:03:31.647724] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1106759][LocltyRefTask][T1][YB42AC0103F2-000611B92337831E-0-0] [lt=7] result set close failed(ret=-5019) [2024-02-19 19:03:31.647731] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1106759][LocltyRefTask][T1][YB42AC0103F2-000611B92337831E-0-0] [lt=7] failed to close result(close_ret=-5019, ret=-5019) [2024-02-19 19:03:31.647753] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1106759][LocltyRefTask][T1][YB42AC0103F2-000611B92337831E-0-0] [lt=7] failed to process record(executor={ObIExecutor:, sql:"select svr_ip, svr_port, a.zone, info, value, b.name, a.status, a.start_service_time, a.stop_time from __all_server a LEFT JOIN __all_zone b ON a.zone = b.zone WHERE (b.name = 'region' or b.name = 'idc' or b.name = 'status' or b.name = 'zone_type') and a.zone != '' order by svr_ip, svr_port, b.name"}, record_ret=-5019, ret=-5019) [2024-02-19 19:03:31.647765] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1106759][LocltyRefTask][T0][YB42AC0103F2-000611B92337831E-0-0] [lt=10] failed to process final(executor={ObIExecutor:, sql:"select svr_ip, svr_port, a.zone, info, value, b.name, a.status, a.start_service_time, a.stop_time from __all_server a LEFT JOIN __all_zone b ON a.zone = b.zone WHERE (b.name = 'region' or b.name = 'idc' or b.name = 'status' or b.name = 'zone_type') and a.zone != '' order by svr_ip, svr_port, b.name"}, aret=-5019, ret=-5019) [2024-02-19 19:03:31.647775] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1106759][LocltyRefTask][T0][Y0-0000000000000000-0-0] [lt=9] execute sql failed(ret=-5019, tenant_id=1, sql=select svr_ip, svr_port, a.zone, info, value, b.name, a.status, a.start_service_time, a.stop_time from __all_server a LEFT JOIN __all_zone b ON a.zone = b.zone WHERE (b.name = 'region' or b.name = 'idc' or b.name = 'status' or b.name = 'zone_type') and a.zone != '' order by svr_ip, svr_port, b.name) [2024-02-19 19:03:31.647785] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1106759][LocltyRefTask][T0][Y0-0000000000000000-0-0] [lt=9] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-02-19 19:03:31.647793] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1106759][LocltyRefTask][T0][Y0-0000000000000000-0-0] [lt=8] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1) [2024-02-19 19:03:31.647802] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) 
[1106759][LocltyRefTask][T0][Y0-0000000000000000-0-0] [lt=8] query failed(ret=-5019, conn=0x7fdcf4ef4050, start=1708340611647163, sql=select svr_ip, svr_port, a.zone, info, value, b.name, a.status, a.start_service_time, a.stop_time from __all_server a LEFT JOIN __all_zone b ON a.zone = b.zone WHERE (b.name = 'region' or b.name = 'idc' or b.name = 'status' or b.name = 'zone_type') and a.zone != '' order by svr_ip, svr_port, b.name) [2024-02-19 19:03:31.647814] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1106759][LocltyRefTask][T0][Y0-0000000000000000-0-0] [lt=11] read failed(ret=-5019) [2024-02-19 19:03:31.647822] WARN [SHARE] load_region (ob_locality_table_operator.cpp:158) [1106759][LocltyRefTask][T0][Y0-0000000000000000-0-0] [lt=8] execute sql failed(ret=-5019, sql=select svr_ip, svr_port, a.zone, info, value, b.name, a.status, a.start_service_time, a.stop_time from __all_server a LEFT JOIN __all_zone b ON a.zone = b.zone WHERE (b.name = 'region' or b.name = 'idc' or b.name = 'status' or b.name = 'zone_type') and a.zone != '' order by svr_ip, svr_port, b.name) [2024-02-19 19:03:31.647832] INFO [SHARE] load_region (ob_locality_table_operator.cpp:373) [1106759][LocltyRefTask][T0][Y0-0000000000000000-0-0] [lt=9] load region(ret=-5019, locality_info={version:0, local_region:"", local_zone:"", local_idc:"", local_zone_type:3, local_zone_status:3, locality_region_array:[], locality_zone_array:[]}) [2024-02-19 19:03:31.647905] WARN [STORAGE] load_region (ob_locality_manager.cpp:226) [1106759][LocltyRefTask][T0][Y0-0000000000000000-0-0] [lt=14] localitity operator load region error(ret=-5019) [2024-02-19 19:03:31.647918] WARN [STORAGE] process (ob_locality_manager.cpp:688) [1106759][LocltyRefTask][T0][Y0-0000000000000000-0-0] [lt=10] process refresh locality task fail(ret=-5019) [2024-02-19 19:03:31.656277] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=38] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.656328] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=54] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.665684] WARN [STORAGE.TRANS] post_cluster_heartbeat_rpc_ (ob_tenant_weak_read_service.cpp:797) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=21] get cluster service master fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, cluster_service_tablet_id={id:226}) [2024-02-19 19:03:31.665722] WARN [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:438) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=39] tenant weak read 
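The locality refresher fails the same way as the backup-lease cleanup: the resolver reports "Table 'oceanbase.__all_server' doesn't exist" (-5019), so load_region returns an empty locality_info and the refresh task gives up. Once table resolution works again, the failing statement can be replayed as-is; this is the query from the log, only re-wrapped for readability:

    SELECT svr_ip, svr_port, a.zone, info, value, b.name, a.status,
           a.start_service_time, a.stop_time
      FROM __all_server a LEFT JOIN __all_zone b ON a.zone = b.zone
     WHERE (b.name = 'region' OR b.name = 'idc' OR b.name = 'status'
            OR b.name = 'zone_type') AND a.zone != ''
     ORDER BY svr_ip, svr_port, b.name;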
service cluster heartbeat RPC fail(rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.1.3.242:2882", cluster_service_tablet_id={id:226}) [2024-02-19 19:03:31.665744] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:756) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=20] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version=1708336686671726824, valid_part_count=1, total_part_count=1, generate_timestamp=1708340611665658) [2024-02-19 19:03:31.665760] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:766) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1708340611465682, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-02-19 19:03:31.665905] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13] get wrs ts(ls_id={id:1}, delta_ns=-1706042771803629940, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807) [2024-02-19 19:03:31.665924] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1) [2024-02-19 19:03:31.666331] INFO do_work (ob_rl_mgr.cpp:704) [1106705][rl_mgr0][T0][Y0-0000000000000000-0-0] [lt=23] swc wakeup.(stat_period_=1000000, ready=false) [2024-02-19 19:03:31.666455] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=46] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.666479] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=25] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.668785] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=19] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340611668771}) [2024-02-19 19:03:31.668815] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) 
[1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=29] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340611625148}}) [2024-02-19 19:03:31.668967] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:589) [1106796][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=27] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/1, request doing=0/0) [2024-02-19 19:03:31.668992] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:589) [1106798][MysqlUnix][T0][Y0-0000000000000000-0-0] [lt=17] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-02-19 19:03:31.669048] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:589) [1106795][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=27] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/6, request doing=0/0) [2024-02-19 19:03:31.669672] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:565) [1106792][BatchIO][T0][Y0-0000000000000000-0-0] [lt=15] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-02-19 19:03:31.669702] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:527) [1106800][RpcUnix][T0][Y0-0000000000000000-0-0] [lt=10] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-02-19 19:03:31.670657] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:565) [1106791][BatchIO][T0][Y0-0000000000000000-0-0] [lt=17] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-02-19 19:03:31.670690] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:565) [1106793][BatchIO][T0][Y0-0000000000000000-0-0] [lt=28] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-02-19 19:03:31.676615] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=31] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.676653] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=41] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.686956] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=27] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, 
disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.687001] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=47] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.697290] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.697339] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=49] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.698217] INFO [ARCHIVE] do_thread_task_ (ob_archive_fetcher.cpp:261) [1108337][T1_ArcFetcher][T1][YB42AC0103F2-000611B921C78198-0-0] [lt=42] no task exist, just skip(ret=-4018) [2024-02-19 19:03:31.710612] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=36] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:31.710747] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=155] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:31.712703] INFO [SERVER] decide_disk_size (ob_server_utils.cpp:202) 
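(Annotation, not part of the log: the repeating [PALF] WARN/ERROR pair above is internally consistent if the two thresholds are read as plain percentages of the 2048 MB log_disk_size, which the field names suggest. A quick check:

\[
\text{warn\_size} = \lfloor 2048 \times 0.80 \rfloor = 1638\ \text{MB}, \qquad
\text{limit\_size} = \lfloor 2048 \times 0.95 \rfloor = 1945\ \text{MB}.
\]

Since used_size has reached limit_size (1945 of 2048 MB, i.e. 95%), and the base LSN has not advanced, recycle_blocks_ finds nothing to free and the T1_PalfGC loop re-logs the same ERROR on every pass, roughly every 10 ms in this excerpt.)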
[2024-02-19 19:03:31.712703] INFO [SERVER] decide_disk_size (ob_server_utils.cpp:202) [1106802][LogLoop][T0][Y0-0000000000000000-0-0] [lt=42] decide disk size finished(dir="/backup/oceanbase/data/sstable", suggested_disk_size=8589934592, suggested_disk_percentage=0, default_disk_percentage=60, total_space=246944890880, free_space=220974178304, disk_size=8589934592)
[2024-02-19 19:03:31.712740] INFO [SERVER] decide_disk_size (ob_server_utils.cpp:202) [1106802][LogLoop][T0][Y0-0000000000000000-0-0] [lt=38] decide disk size finished(dir="/backup/oceanbase/data/clog", suggested_disk_size=8589934592, suggested_disk_percentage=0, default_disk_percentage=30, total_space=246944890880, free_space=220974178304, disk_size=8589934592)
[2024-02-19 19:03:31.712753] INFO [SERVER] cal_all_part_disk_size (ob_server_utils.cpp:164) [1106802][LogLoop][T0][Y0-0000000000000000-0-0] [lt=11] decide_all_disk_size succ(data_dir="/backup/oceanbase/data/sstable", clog_dir="/backup/oceanbase/data/clog", suggested_data_disk_size=8589934592, suggested_data_disk_percentage=0, data_default_disk_percentage=60, clog_default_disk_percentage=30, shared_mode=true, data_disk_size=8589934592, log_disk_size=8589934592)
[2024-02-19 19:03:31.721824] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=109] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:31.721878] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=56] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:31.725853] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=13] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0)
[2024-02-19 19:03:31.725881] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=28] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340611725842}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:31.725922] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=38] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340611725842}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:31.732230] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=36] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:31.732270] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=42] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:31.733708] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C04-0-0] [lt=27] table not exist(tenant_id=1, database_id=201001, table_name=__all_merge_info, ret=-5019)
[2024-02-19 19:03:31.733743] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C04-0-0] [lt=34] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_merge_info, ret=-5019)
[2024-02-19 19:03:31.733757] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C04-0-0] [lt=11] fail to resolve table relation recursively(tenant_id=1, ret=-5019)
[2024-02-19 19:03:31.733767] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C04-0-0] [lt=10] resolve table relation factor failed(ret=-5019, table_name=__all_merge_info)
[2024-02-19 19:03:31.733780] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C04-0-0] [lt=8] fail to resolve table(ret=-5019)
[2024-02-19 19:03:31.733795] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C04-0-0] [lt=15] fail to resolve sys view(ret=-5019)
[2024-02-19 19:03:31.733809] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C04-0-0] [lt=8] Table 'oceanbase.__all_merge_info' doesn't exist
[2024-02-19 19:03:31.733825] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C04-0-0] [lt=14] resolve base or alias table factor failed(ret=-5019)
[2024-02-19 19:03:31.733835] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C04-0-0] [lt=10] resolve basic table failed(ret=-5019)
[2024-02-19 19:03:31.733851] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C04-0-0] [lt=14] fail to exec resolve_table(*table_node, table_item)(ret=-5019)
[2024-02-19 19:03:31.733873] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C04-0-0] [lt=15] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019)
[2024-02-19 19:03:31.733889] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C04-0-0] [lt=20] resolve normal query failed(ret=-5019)
[2024-02-19 19:03:31.733899] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C04-0-0] [lt=10] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073)
[2024-02-19 19:03:31.733922] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C04-0-0] [lt=14] failed to resolve(ret=-5019)
[2024-02-19 19:03:31.733934] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C04-0-0] [lt=10] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-02-19 19:03:31.733946] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C04-0-0] [lt=9] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-02-19 19:03:31.733957] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C04-0-0] [lt=9] fail to handle physical plan(ret=-5019)
[2024-02-19 19:03:31.733978] WARN [SQL] stmt_query (ob_sql.cpp:175) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C04-0-0] [lt=9] fail to handle text query(stmt=SELECT * FROM __all_merge_info WHERE tenant_id = '1', ret=-5019)
[2024-02-19 19:03:31.733990] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C04-0-0] [lt=10] executor execute failed(ret=-5019)
[2024-02-19 19:03:31.734001] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C04-0-0] [lt=9] execute failed(ret=-5019, executor={ObIExecutor:, sql:"SELECT * FROM __all_merge_info WHERE tenant_id = '1'"}, retry_cnt=0)
[2024-02-19 19:03:31.734020] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C04-0-0] [lt=13] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false)
[2024-02-19 19:03:31.734040] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C04-0-0] [lt=16] result set close failed(ret=-5019)
[2024-02-19 19:03:31.734050] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C04-0-0] [lt=9] result set close failed(ret=-5019)
[2024-02-19 19:03:31.734060] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C04-0-0] [lt=9] failed to close result(close_ret=-5019, ret=-5019)
[2024-02-19 19:03:31.734087] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C04-0-0] [lt=8] failed to process record(executor={ObIExecutor:, sql:"SELECT * FROM __all_merge_info WHERE tenant_id = '1'"}, record_ret=-5019, ret=-5019)
[2024-02-19 19:03:31.734100] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C04-0-0] [lt=12] failed to process final(executor={ObIExecutor:, sql:"SELECT * FROM __all_merge_info WHERE tenant_id = '1'"}, aret=-5019, ret=-5019)
[2024-02-19 19:03:31.734112] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1107631][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=9] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT * FROM __all_merge_info WHERE tenant_id = '1')
[2024-02-19 19:03:31.734123] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1107631][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=9] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1)
[2024-02-19 19:03:31.734137] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1107631][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=13] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1)
[2024-02-19 19:03:31.734150] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1107631][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=11] query failed(ret=-5019, conn=0x7fdcdc89a050, start=1708340611733495, sql=SELECT * FROM __all_merge_info WHERE tenant_id = '1')
[2024-02-19 19:03:31.734165] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1107631][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=14] read failed(ret=-5019)
[2024-02-19 19:03:31.734177] WARN [SHARE] load_global_merge_info (ob_global_merge_table_operator.cpp:48) [1107631][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=9] fail to execute sql(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, meta_tenant_id=1, sql=SELECT * FROM __all_merge_info WHERE tenant_id = '1')
[2024-02-19 19:03:31.734269] WARN [STORAGE] refresh_merge_info (ob_tenant_freeze_info_mgr.cpp:789) [1107631][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=17] failed to load global merge info(ret=-5019, ret="OB_TABLE_NOT_EXIST", global_merge_info={tenant_id:1, cluster:{name:"cluster", value:0, need_update:false}, frozen_scn:{name:"frozen_scn", value:1, need_update:false}, global_broadcast_scn:{name:"global_broadcast_scn", value:1, need_update:false}, last_merged_scn:{name:"last_merged_scn", value:1, need_update:false}, is_merge_error:{name:"is_merge_error", value:0, need_update:false}, merge_status:{name:"merge_status", value:0, need_update:false}, error_type:{name:"error_type", value:0, need_update:false}, suspend_merging:{name:"suspend_merging", value:0, need_update:false}, merge_start_time:{name:"merge_start_time", value:0, need_update:false}, last_merged_time:{name:"last_merged_time", value:0, need_update:false}})
[2024-02-19 19:03:31.734309] WARN [STORAGE] runTimerTask (ob_tenant_freeze_info_mgr.cpp:884) [1107631][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=40] fail to refresh merge info(tmp_ret=-5019, tmp_ret="OB_TABLE_NOT_EXIST")
[2024-02-19 19:03:31.735038] INFO [STORAGE.TRANS] in_leader_serving_state (ob_trans_ctx_mgr_v4.cpp:881) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=11] ObLSTxCtxMgr not master(this={this:0x7fdce3204030, ls_id:{id:1}, tenant_id:1, state:"F_WORKING", total_tx_ctx_count:0, leader_takeover_ts:{mts:0}, is_leader_serving:false, max_replay_commit_version:1707751112415295196, ls_retain_ctx_mgr:{retain_ctx_list_.size():0}, aggre_rec_log_ts:-1, prev_aggre_rec_log_ts:-1, online_ts:0, uref:1073741826})
[2024-02-19 19:03:31.737239] INFO [STORAGE] runTimerTask (ob_tablet_gc_service.cpp:102) [1108334][T1_ObTimer][T1][Y0-0000000000000000-0-0] [lt=15] ====== [tabletgc] timer task ======(GC_CHECK_INTERVAL=5000000)
[2024-02-19 19:03:31.737279] INFO [STORAGE] runTimerTask (ob_tablet_gc_service.cpp:135) [1108334][T1_ObTimer][T1][Y0-0000000000000000-0-0] [lt=29] [tabletgc] task check ls(ls->get_ls_id()={id:1}, tablet_persist_trigger=0)
[2024-02-19 19:03:31.737297] INFO [STORAGE] runTimerTask (ob_tablet_gc_service.cpp:206) [1108334][T1_ObTimer][T1][Y0-0000000000000000-0-0] [lt=15] [tabletgc] succeed to gc_tablet(ret=0, ret="OB_SUCCESS", ls_cnt=1, times=784)
[2024-02-19 19:03:31.742401] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=32] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:31.742435] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=36] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:31.743951] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DE-0-0] [lt=9] table not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, ret=-5019)
[2024-02-19 19:03:31.743978] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DE-0-0] [lt=26] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, ret=-5019)
[2024-02-19 19:03:31.743990] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DE-0-0] [lt=10] fail to resolve table relation recursively(tenant_id=1, ret=-5019)
[2024-02-19 19:03:31.744001] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DE-0-0] [lt=9] resolve table relation factor failed(ret=-5019, table_name=__all_ls_meta_table)
[2024-02-19 19:03:31.744020] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DE-0-0] [lt=15] fail to resolve table(ret=-5019)
[2024-02-19 19:03:31.744032] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DE-0-0] [lt=11] fail to resolve sys view(ret=-5019)
[2024-02-19 19:03:31.744046] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DE-0-0] [lt=9] Table 'oceanbase.__all_ls_meta_table' doesn't exist
[2024-02-19 19:03:31.744063] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DE-0-0] [lt=16] resolve base or alias table factor failed(ret=-5019)
[2024-02-19 19:03:31.744072] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DE-0-0] [lt=8] resolve basic table failed(ret=-5019)
[2024-02-19 19:03:31.744086] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DE-0-0] [lt=12] fail to exec resolve_table(*table_node, table_item)(ret=-5019)
[2024-02-19 19:03:31.744095] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DE-0-0] [lt=8] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019)
[2024-02-19 19:03:31.744119] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DE-0-0] [lt=22] resolve normal query failed(ret=-5019)
[2024-02-19 19:03:31.744129] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DE-0-0] [lt=9] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073)
[2024-02-19 19:03:31.744150] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DE-0-0] [lt=13] failed to resolve(ret=-5019)
[2024-02-19 19:03:31.744166] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DE-0-0] [lt=15] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-02-19 19:03:31.744178] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DE-0-0] [lt=9] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-02-19 19:03:31.744192] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DE-0-0] [lt=12] fail to handle physical plan(ret=-5019)
[2024-02-19 19:03:31.744204] WARN [SQL] stmt_query (ob_sql.cpp:175) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DE-0-0] [lt=10] fail to handle text query(stmt=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port, ret=-5019)
[2024-02-19 19:03:31.744214] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DE-0-0] [lt=9] executor execute failed(ret=-5019)
[2024-02-19 19:03:31.744230] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DE-0-0] [lt=14] execute failed(ret=-5019, executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, retry_cnt=0)
[2024-02-19 19:03:31.744251] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DE-0-0] [lt=13] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false)
[2024-02-19 19:03:31.744270] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DE-0-0] [lt=16] result set close failed(ret=-5019)
[2024-02-19 19:03:31.744280] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DE-0-0] [lt=8] result set close failed(ret=-5019)
[2024-02-19 19:03:31.744288] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DE-0-0] [lt=7] failed to close result(close_ret=-5019, ret=-5019)
[2024-02-19 19:03:31.744314] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DE-0-0] [lt=9] failed to process record(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, record_ret=-5019, ret=-5019)
[2024-02-19 19:03:31.744327] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DE-0-0] [lt=11] failed to process final(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, aret=-5019, ret=-5019)
[2024-02-19 19:03:31.744338] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DE-0-0] [lt=9] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port)
[2024-02-19 19:03:31.744352] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DE-0-0] [lt=12] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1)
[2024-02-19 19:03:31.744361] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DE-0-0] [lt=8] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1)
[2024-02-19 19:03:31.744378] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DE-0-0] [lt=15] query failed(ret=-5019, conn=0x7fdcdc9be050, start=1708340611743775, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port)
[2024-02-19 19:03:31.744391] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DE-0-0] [lt=13] read failed(ret=-5019)
[2024-02-19 19:03:31.744402] WARN [SHARE.PT] get_by_tenant (ob_persistent_ls_table.cpp:612) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DE-0-0] [lt=8] execute sql failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port)
[2024-02-19 19:03:31.744477] WARN [SHARE.PT] get_by_tenant (ob_ls_table_operator.cpp:252) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DE-0-0] [lt=11] get all ls info by persistent_ls_ failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1)
[2024-02-19 19:03:31.744495] WARN [SHARE] inner_open_ (ob_ls_table_iterator.cpp:104) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DE-0-0] [lt=16] fail to get ls infos by tenant(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, inner_table_only=true)
[2024-02-19 19:03:31.744505] WARN [SHARE] next (ob_ls_table_iterator.cpp:71) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DE-0-0] [lt=10] fail to open iterator(ret=-5019, ret="OB_TABLE_NOT_EXIST")
[2024-02-19 19:03:31.744514] WARN [SERVER] build_replica_map_ (ob_tenant_meta_checker.cpp:331) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DE-0-0] [lt=8] ls table iterator next failed(ret=-5019, ret="OB_TABLE_NOT_EXIST")
[2024-02-19 19:03:31.744530] WARN [SERVER] check_ls_table_ (ob_tenant_meta_checker.cpp:213) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DE-0-0] [lt=13] build replica map from ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1)
[2024-02-19 19:03:31.744539] WARN [SERVER] check_ls_table (ob_tenant_meta_checker.cpp:193) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DE-0-0] [lt=7] check ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1)
[2024-02-19 19:03:31.744545] WARN [SERVER] runTimerTask (ob_tenant_meta_checker.cpp:43) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DE-0-0] [lt=6] fail to check ls meta table(ret=-5019, ret="OB_TABLE_NOT_EXIST")
[2024-02-19 19:03:31.752557] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=26] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:31.752617] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=62] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:31.762757] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:31.762810] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=55] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:31.769266] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=14] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340611769243})
[2024-02-19 19:03:31.769309] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=48] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340611725842}})
[2024-02-19 19:03:31.771174] WARN [STORAGE.TRANS] post_cluster_heartbeat_rpc_ (ob_tenant_weak_read_service.cpp:797) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=19] get cluster service master fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, cluster_service_tablet_id={id:226})
[2024-02-19 19:03:31.771206] WARN [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:438) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=33] tenant weak read service cluster heartbeat RPC fail(rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.1.3.242:2882", cluster_service_tablet_id={id:226})
[2024-02-19 19:03:31.771227] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:756) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=18] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version=1708336686671726824, valid_part_count=1, total_part_count=1, generate_timestamp=1708340611771158)
[2024-02-19 19:03:31.771242] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:766) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1708340611665772, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-02-19 19:03:31.771326] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=24] get wrs ts(ls_id={id:1}, delta_ns=-1706042771803524875, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807)
[2024-02-19 19:03:31.771339] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1)
[2024-02-19 19:03:31.771576] WARN [SERVER] get_network_speed_from_sysfs (ob_server.cpp:2113) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=12] get invalid Ethernet speed, use default(devname="ens18")
[2024-02-19 19:03:31.771587] WARN [SERVER] runTimerTask (ob_server.cpp:2632) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=11] ObRefreshNetworkSpeedTask reload bandwidth throttle limit failed(ret=-4002, ret="OB_INVALID_ARGUMENT")
[2024-02-19 19:03:31.772966] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:31.773025] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=54] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:31.783180] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=41] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:31.783235] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=57] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:31.786190] INFO [STORAGE.TRANS] run1 (ob_tx_loop_worker.cpp:100) [1108343][T1_TxLoopWorker][T1][Y0-0000000000000000-0-0] [lt=41] tx gc loop thread is running(MTL_ID()=1)
[2024-02-19 19:03:31.786221] INFO [STORAGE.TRANS] run1 (ob_tx_loop_worker.cpp:107) [1108343][T1_TxLoopWorker][T1][Y0-0000000000000000-0-0] [lt=33] try gc retain ctx
[2024-02-19 19:03:31.788185] INFO [STORAGE.TRANS] do_tx_gc_ (ob_tx_loop_worker.cpp:226) [1108343][T1_TxLoopWorker][T1][Y0-0000000000000000-0-0] [lt=13] [Tx Loop Worker] check tx scheduler success(MTL_ID()=1, *ls_ptr={ls_meta:{tenant_id:1, ls_id:{id:1}, replica_type:0, ls_create_status:1, clog_checkpoint_ts:1707209832548318068, clog_base_lsn:{lsn:23419564032}, rebuild_seq:0, migration_status:0, gc_state_:1, offline_ts_ns_:-1, restore_status:{status:0}, replayable_point:-1, tablet_change_checkpoint_ts:1707751112415295196, all_id_meta:{id_meta:[{limited_id:1707751122157059767, latest_log_ts:1707751105505586716}, {limited_id:46000001, latest_log_ts:1707741702196260609}, {limited_id:290000001, latest_log_ts:1707637636773992411}]}}, log_handler:{role:1, proposal_id:138, palf_env_:0x7fdd02a44030, is_in_stop_state_:false, is_inited_:true}, restore_handler:{is_inited:true, is_in_stop_state:false, id:1, proposal_id:9223372036854775807, role:2, parent:null, context:{issued:false, last_fetch_ts:-1, max_submit_lsn:{lsn:18446744073709551615}, max_fetch_lsn:{lsn:18446744073709551615}, error_context:{ret_code:0, trace_id:Y0-0000000000000000-0-0}}}, is_inited:true, tablet_gc_handler:{tablet_persist_trigger:0, is_inited:true}})
[2024-02-19 19:03:31.789720] INFO [COMMON] replace_fragment_node (ob_kvcache_map.cpp:695) [1106689][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=21] Cache replace map node details(ret=0, replace_node_count=0, replace_time=17743, replace_start_pos=1053776, replace_num=15728)
[2024-02-19 19:03:31.791985] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1009) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=44] Wash compute wash size(is_wash_valid=true, sys_total_wash_size=1386106880, global_cache_size=4161536, tenant_max_wash_size=4161536, tenant_min_wash_size=4161536, tenant_ids_=[512, 500, 999, 506, 507, 508, 509, 510, 1])
[2024-02-19 19:03:31.792087] INFO [COMMON] wash (ob_kvcache_store.cpp:343) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=29] Wash time detail, (compute_wash_size_time=153, refresh_score_time=65, wash_time=8)
[2024-02-19 19:03:31.793355] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=38] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:31.793378] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=24] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:31.803498] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=28] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:31.803543] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=49] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:31.807074] INFO [STORAGE] runTimerTask (ob_checkpoint_service.cpp:326) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=12] ====== check clog disk timer task ======
[2024-02-19 19:03:31.807109] INFO [PALF] get_disk_usage (palf_env_impl.cpp:820) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=32] get_disk_usage(ret=0, capacity(MB):=2048, used(MB):=1945)
[2024-02-19 19:03:31.808683] INFO [STORAGE.TRANS] get_rec_log_ts (ob_trans_ctx_mgr_v4.cpp:1283) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=16] succ to get rec log ts(*this={this:0x7fdce3204030, ls_id:{id:1}, tenant_id:1, state:"F_WORKING", total_tx_ctx_count:0, leader_takeover_ts:{mts:0}, is_leader_serving:false, max_replay_commit_version:1707751112415295196, ls_retain_ctx_mgr:{retain_ctx_list_.size():0}, aggre_rec_log_ts:-1, prev_aggre_rec_log_ts:-1, online_ts:0, uref:1073741825}, aggre_rec_log_ts=9223372036854775807)
[2024-02-19 19:03:31.808718] INFO [STORAGE.TRANS] get_rec_log_ts (ob_tx_ctx_memtable.cpp:231) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=52] tx ctx memtable get rec log ts(this={ObITable:{this:0x7fdce5f6e080, key:{tablet_id:{id:49401}, column_group_idx:0, table_type:"TX_CTX_MEMTABLE", log_ts_range:{start_log_ts:1, end_log_ts:1708337131277985}}, ref_cnt:2, upper_trans_version:-4007, timestamp:0}, this:0x7fdce5f6e080, snapshot_version:1708337131277985, ls_id:{id:1}, is_frozen:false}, rec_log_ts=9223372036854775807)
[2024-02-19 19:03:31.808750] INFO [STORAGE.TABLELOCK] get_rec_log_ts (ob_lock_memtable.cpp:739) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=24] rec_log_ts of ObLockMemtable is (rec_log_ts_=9223372036854775807, flushed_log_ts_=1707033175148098668, freeze_log_ts_=0, max_committed_log_ts_=-1, is_frozen_=false, ls_id_={id:1})
[2024-02-19 19:03:31.808766] INFO [STORAGE.TRANS] get_rec_log_ts (ob_ls_tx_service.cpp:437) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=14] [CHECKPOINT] ObLSTxService::get_rec_log_ts(common_checkpoint_type="TX_DATA_MEMTABLE_TYPE", common_checkpoints_[min_rec_log_ts_common_checkpoint_type_index]={ObIMemtableMgr:{Memtables:this:0x7fdce89de180, ref_cnt:1, is_inited:true, tablet_id:{id:49402}, freezer:0x7fdce89e30d0, table_type:1, memtable_head:0, memtable_tail:2, t3m:0x7fdd18bce030, tables:[0x7fdce5eea080, 0x7fdce5eea360, null, null, null, null, null, null, null, null, null, null, null, null, null, null]}, is_freezing:false, ls_id:{id:1}, tx_data_table:0x7fdce89e4550, ls_tablet_svr:0x7fdce89de160, slice_allocator:0x7fdce89e4590}, min_rec_log_ts=1707209832548318068, ls_id_={id:1})
[2024-02-19 19:03:31.813499] INFO [STORAGE.TRANS] get_rec_log_ts (ob_id_service.cpp:300) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=28] get rec log ts(service_type_=0, rec_log_ts=9223372036854775807)
[2024-02-19 19:03:31.813625] INFO [STORAGE.TRANS] get_rec_log_ts (ob_id_service.cpp:300) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=128] get rec log ts(service_type_=1, rec_log_ts=9223372036854775807)
[2024-02-19 19:03:31.813658] INFO [STORAGE.TRANS] get_rec_log_ts (ob_id_service.cpp:300) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=31] get rec log ts(service_type_=2, rec_log_ts=9223372036854775807)
[2024-02-19 19:03:31.813676] INFO [STORAGE] update_clog_checkpoint (ob_checkpoint_executor.cpp:158) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=12] [CHECKPOINT] clog checkpoint no change(checkpoint_ts=1707209832548318068, checkpoint_ts_in_ls_meta=1707209832548318068, ls_id={id:1}, service_type="TRANS_SERVICE")
[2024-02-19 19:03:31.813701] INFO [STORAGE] cannot_recycle_log_over_threshold_ (ob_checkpoint_service.cpp:239) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=19] cannot_recycle_log_size statistics(cannot_recycle_log_size=1905773194, threshold=644245094)
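(Annotation, not part of the log: the cannot_recycle_log_size record above fits the same disk accounting, assuming the trigger is 30% of the 2048 MB log disk; the factor is my inference from the numbers, not something stated in the log:

\[
2048 \times 2^{20}\,\text{B} \times 0.3 = 644{,}245{,}094.4 \approx 644245094\ \text{bytes},
\]

so cannot_recycle_log_size = 1905773194 bytes, roughly 1817 MB, is well past the threshold, which is why the records that follow show advance_checkpoint_by_flush starting a flush to advance the checkpoint.)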
[2024-02-19 19:03:31.813655] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=30] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:31.813874] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=218] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:31.815360] INFO [PALF] locate_by_lsn_coarsely (palf_handle_impl.cpp:1605) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=13] locate_by_lsn_coarsely(ret=0, ret="OB_SUCCESS", this={palf_id:1, self:"172.1.3.242:2882", has_set_deleted:false}, lsn={lsn:24563027948}, committed_lsn={lsn:25325337226}, result_ts_ns=1707530339417374084)
[2024-02-19 19:03:31.815399] INFO [STORAGE] advance_checkpoint_by_flush (ob_checkpoint_executor.cpp:226) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=39] advance checkpoint by flush to avoid clog disk full(recycle_ts=1707530339417374084, end_lsn={lsn:25325337226}, clog_checkpoint_lsn={lsn:23419564032}, calcu_recycle_lsn={lsn:24563027948}, ls_->get_ls_id()={id:1})
[2024-02-19 19:03:31.815424] INFO [STORAGE] advance_checkpoint_by_flush (ob_checkpoint_executor.cpp:244) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=18] start flush(recycle_ts=1707530339417374084, ls_->get_clog_checkpoint_ts()=1707209832548318068, ls_->get_ls_id()={id:1})
[2024-02-19 19:03:31.816543] INFO [STORAGE.TRANS] get_rec_log_ts (ob_trans_ctx_mgr_v4.cpp:1283) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=13] succ to get rec log ts(*this={this:0x7fdce3204030, ls_id:{id:1}, tenant_id:1, state:"F_WORKING", total_tx_ctx_count:0, leader_takeover_ts:{mts:0}, is_leader_serving:false, max_replay_commit_version:1707751112415295196, ls_retain_ctx_mgr:{retain_ctx_list_.size():0}, aggre_rec_log_ts:-1, prev_aggre_rec_log_ts:-1, online_ts:0, uref:1073741825}, aggre_rec_log_ts=9223372036854775807)
[2024-02-19 19:03:31.816580] INFO [STORAGE.TRANS] get_rec_log_ts (ob_tx_ctx_memtable.cpp:231) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=37] tx ctx memtable get rec log ts(this={ObITable:{this:0x7fdce5f6e080, key:{tablet_id:{id:49401}, column_group_idx:0, table_type:"TX_CTX_MEMTABLE", log_ts_range:{start_log_ts:1, end_log_ts:1708337131277985}}, ref_cnt:2, upper_trans_version:-4007, timestamp:0}, this:0x7fdce5f6e080, snapshot_version:1708337131277985, ls_id:{id:1}, is_frozen:false}, rec_log_ts=9223372036854775807)
[2024-02-19 19:03:31.816624] INFO [STORAGE] freeze (ob_tx_data_memtable_mgr.cpp:192) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=37] start freeze tx data memtable(ls_id_={id:1})
[2024-02-19 19:03:31.816640] INFO [STORAGE] freeze_ (ob_tx_data_memtable_mgr.cpp:228) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=15] There is a freezed memetable existed. Try freeze after flushing it.(ret=-4023, ret="OB_EAGAIN", get_memtable_count_()=2)
[2024-02-19 19:03:31.816655] WARN [STORAGE] freeze (ob_tx_data_memtable_mgr.cpp:206) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=14] freeze tx data memtable fail.(ret=-4023, ret="OB_EAGAIN")
[2024-02-19 19:03:31.816675] WARN [STORAGE] flush (ob_tx_data_memtable_mgr.cpp:430) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=18] freeze failed(ret=-4023, ret="OB_EAGAIN", this=0x7fdce89de180)
[2024-02-19 19:03:31.816696] WARN [STORAGE.TRANS] flush (ob_ls_tx_service.cpp:451) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=20] obCommonCheckpoint flush failed(tmp_ret=-4023, common_checkpoints_[i]=0x7fdce89de250)
[2024-02-19 19:03:31.816782] INFO [STORAGE.TABLELOCK] get_rec_log_ts (ob_lock_memtable.cpp:739) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=83] rec_log_ts of ObLockMemtable is (rec_log_ts_=9223372036854775807, flushed_log_ts_=1707033175148098668, freeze_log_ts_=0, max_committed_log_ts_=-1, is_frozen_=false, ls_id_={id:1})
[2024-02-19 19:03:31.822838] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A025-0-0] [lt=6] table not exist(tenant_id=1, database_id=201001, table_name=__all_server, ret=-5019)
[2024-02-19 19:03:31.822871] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A025-0-0] [lt=34] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_server, ret=-5019)
[2024-02-19 19:03:31.822886] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A025-0-0] [lt=12] fail to resolve table relation recursively(tenant_id=1, ret=-5019)
[2024-02-19 19:03:31.822897] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A025-0-0] [lt=10] resolve table relation factor failed(ret=-5019, table_name=__all_server)
[2024-02-19 19:03:31.822911] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A025-0-0] [lt=10] fail to resolve table(ret=-5019)
[2024-02-19 19:03:31.822921] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A025-0-0] [lt=9] fail to resolve sys view(ret=-5019)
[2024-02-19 19:03:31.822934] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A025-0-0] [lt=9] Table 'oceanbase.__all_server' doesn't exist
[2024-02-19 19:03:31.822944] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A025-0-0] [lt=8] resolve base or alias table factor failed(ret=-5019)
[2024-02-19 19:03:31.822953] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A025-0-0] [lt=9] resolve basic table failed(ret=-5019)
[2024-02-19 19:03:31.822963] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A025-0-0] [lt=8] fail to exec resolve_table(*table_node, table_item)(ret=-5019)
[2024-02-19 19:03:31.822971] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A025-0-0] [lt=8] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019)
[2024-02-19 19:03:31.822980] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A025-0-0] [lt=8] resolve normal query failed(ret=-5019)
[2024-02-19 19:03:31.822991] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A025-0-0] [lt=8] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073)
[2024-02-19 19:03:31.823009] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A025-0-0] [lt=10] failed to resolve(ret=-5019)
[2024-02-19 19:03:31.823022] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A025-0-0] [lt=11] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-02-19 19:03:31.823042] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A025-0-0] [lt=17] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-02-19 19:03:31.823054] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A025-0-0] [lt=11] fail to handle physical plan(ret=-5019)
[2024-02-19 19:03:31.823073] WARN [SQL] stmt_query (ob_sql.cpp:175) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A025-0-0] [lt=13] fail to handle text query(stmt=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882, ret=-5019)
[2024-02-19 19:03:31.823085] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A025-0-0] [lt=10] executor execute failed(ret=-5019)
[2024-02-19 19:03:31.823095] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A025-0-0] [lt=9] execute failed(ret=-5019, executor={ObIExecutor:, sql:"SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882"}, retry_cnt=0)
[2024-02-19 19:03:31.823114] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A025-0-0] [lt=13] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false)
[2024-02-19 19:03:31.823147] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A025-0-0] [lt=29] result set close failed(ret=-5019)
[2024-02-19 19:03:31.823156] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A025-0-0] [lt=9] result set close failed(ret=-5019)
[2024-02-19 19:03:31.823165] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A025-0-0] [lt=8] failed to close result(close_ret=-5019, ret=-5019)
[2024-02-19 19:03:31.823206] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A025-0-0] [lt=8] failed to process record(executor={ObIExecutor:, sql:"SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882"}, record_ret=-5019, ret=-5019)
[2024-02-19 19:03:31.823218] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A025-0-0] [lt=25] failed to process final(executor={ObIExecutor:, sql:"SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882"}, aret=-5019, ret=-5019)
[2024-02-19 19:03:31.823242] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=21] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882)
[2024-02-19 19:03:31.823254] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=9] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1)
[2024-02-19 19:03:31.823263] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=9] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1)
[2024-02-19 19:03:31.823274] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=9] query failed(ret=-5019, conn=0x7fdcdc924050, start=1708340611822618, sql=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882)
[2024-02-19 19:03:31.823286] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=11] read failed(ret=-5019)
[2024-02-19 19:03:31.823296] WARN get_my_sql_result_ (ob_table_access_helper.h:329) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=8] GCTX.sql_proxy_ read failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", MTL_ID()=1, tenant_id=1, columns=0x7fdcfffccd78, table=__all_server, condition=where svr_ip='172.1.3.242' and svr_port=2882, sql=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882, columns_str="zone")
[2024-02-19 19:03:31.823318] WARN read_single_row (ob_table_access_helper.h:178) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=16] get mysql result failed(ret=-4016, ret="OB_ERR_UNEXPECTED", MTL_ID()=1, columns=0x7fdcfffccd78, table=__all_server, where_condition=where svr_ip='172.1.3.242' and svr_port=2882)
[2024-02-19 19:03:31.823403] WARN [COORDINATOR] get_self_zone_name (table_accessor.cpp:517) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=11] get zone from __all_server failed(ret=-4016, ret="OB_ERR_UNEXPECTED", columns=0x7fdcfffccd78, where_condition="where svr_ip='172.1.3.242' and svr_port=2882", zone_name_holder=)
[2024-02-19 19:03:31.823417] WARN [COORDINATOR] get_all_ls_election_reference_info (table_accessor.cpp:450) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=13] get self zone name failed(ret=-4016, ret="OB_ERR_UNEXPECTED", all_ls_election_reference_info=[])
[2024-02-19 19:03:31.823429] WARN [COORDINATOR] get_all_ls_election_reference_info (table_accessor.cpp:459) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=11] zone name is empty(ret=-4016, ret="OB_ERR_UNEXPECTED", all_ls_election_reference_info=[])
[2024-02-19 19:03:31.823441] WARN [COORDINATOR] refresh (ob_leader_coordinator.cpp:107) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=9] get all ls election reference info failed(ret=-4016, ret="OB_ERR_UNEXPECTED", MTL_ID()=1)
[2024-02-19 19:03:31.823998] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:31.824022] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=24] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:31.826681] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=15] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0)
[2024-02-19 19:03:31.826717] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=34] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340611826670}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:31.826742] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=22] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340611826670}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:31.830276] WARN [COORDINATOR] get_ls_election_reference_info (ob_leader_coordinator.cpp:138) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC79-0-0] [lt=133] can not find this ls_id in all_ls_election_reference_info_(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1}, all_ls_election_reference_info=[])
[2024-02-19 19:03:31.830315] WARN [COORDINATOR] refresh_ (election_priority_v1.cpp:144) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC79-0-0] [lt=39] fail to get ls election reference info(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, *this={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807})
[2024-02-19 19:03:31.830339] WARN [COORDINATOR] operator() (election_priority_impl.cpp:246) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC79-0-0] [lt=22] refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id_={id:1}, element={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807})
[2024-02-19 19:03:31.830359] WARN iterate (ob_tuple.h:272) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC79-0-0] [lt=17] assign element failed(ret=-4018, std::get(tuple)={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807})
[2024-02-19 19:03:31.830373] WARN [COORDINATOR] refresh (election_priority_impl.cpp:261) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC79-0-0] [lt=14] refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1},
*this={priority:{is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}}) [2024-02-19 19:03:31.833379] INFO [COMMON] clean_garbage_node (ob_kvcache_map.cpp:645) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=21] Cache wash clean map node details(ret=0, clean_node_count=0, clean_time=41265, clean_start_pos=660597, clean_num=31457) [... the PALF recycle_blocks_ WARN / try_recycle_blocks ERROR pair above repeats 4 more times with identical payload, 19:03:31.834114 through 19:03:31.864805 ...] [2024-02-19 19:03:31.869554] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=26] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340611869535}) [2024-02-19 19:03:31.869584] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=32] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340611826670}}) [2024-02-19 19:03:31.869579] WARN [SHARE.LOCATION] batch_process_tasks (ob_ls_location_service.cpp:485) [1106741][SysLocAsyncUp0][T0][YB42AC0103F2-000611B9212AA0C5-0-0] [lt=33] tenant schema is not ready, need wait(ret=0, ret="OB_SUCCESS", superior_tenant_id=1, tasks=[{cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340611869535}]) [2024-02-19 19:03:31.871288] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15] get wrs ts(ls_id={id:1}, delta_ns=-1706042771803424125, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807) [2024-02-19 19:03:31.871318] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=29] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=1, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0,
last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1) [2024-02-19 19:03:31.873077] INFO [COMMON] print_io_status (ob_io_struct.cpp:619) [1106661][IO_TUNING0][T0][Y0-0000000000000000-0-0] [lt=48] [IO STATUS](tenant_ids=[1, 500], send_thread_count=2, send_queues=[0, 0]) [... the PALF recycle_blocks_ WARN / try_recycle_blocks ERROR pair repeats 5 times with identical payload, 19:03:31.874940 through 19:03:31.917824 ...] [2024-02-19 19:03:31.925922] WARN [SERVER] batch_process_tasks (ob_ls_table_updater.cpp:333) [1106713][LSMetaTblUp0][T0][YB42AC0103F2-000611B9217D2DD0-0-0] [lt=57] tenant schema is not ready, need wait(ret=-4076, ret="OB_NEED_WAIT", superior_tenant_id=1, task={tenant_id:1, ls_id:{id:1}, add_timestamp:1708337390831403}) [2024-02-19 19:03:31.927391] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=15] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:31.927428] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=37] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340611927383}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:31.927446] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=17] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340611927383}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:31.927459] WARN [STORAGE.TRANS] operator() (ob_ts_mgr.h:225) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=9] refresh gts failed(ret=-4038, ret="OB_NOT_MASTER", gts_tenant_info={v:1}) [2024-02-19 19:03:31.927479] INFO [STORAGE.TRANS] operator() (ob_ts_mgr.h:229) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=20] refresh gts functor(ret=-4038, ret="OB_NOT_MASTER", gts_tenant_info={v:1}) [... the PALF recycle_blocks_ WARN / try_recycle_blocks ERROR pair repeats 5 more times with identical payload, 19:03:31.927982 through 19:03:31.969184 ...] [2024-02-19 19:03:31.969845] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=14] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340611969828}) [2024-02-19 19:03:31.969875] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=30] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340611927383}}) [2024-02-19 19:03:31.971336] WARN [STORAGE.TRANS] post_cluster_heartbeat_rpc_ (ob_tenant_weak_read_service.cpp:797) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=38] get cluster service master fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, cluster_service_tablet_id={id:226}) [2024-02-19 19:03:31.971368] WARN [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:438) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=31] tenant weak read service cluster heartbeat RPC fail(rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.1.3.242:2882", cluster_service_tablet_id={id:226}) [2024-02-19 19:03:31.971397] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340611971386}) [2024-02-19 19:03:31.971413] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:756) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version=1708336686671726824, valid_part_count=1, total_part_count=1, generate_timestamp=1708340611971318) [2024-02-19 19:03:31.971431] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:766) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1708340611771265, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-02-19 19:03:31.971457] INFO [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:738) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] [SELF_CHECK] current server is WRS leader, need start CLUSTER weak read service(tenant_id=1, serve_leader_epoch=0, cur_leader_epoch=138, cluster_service_tablet_id_={id:226}, in_service=false, can_update_version=false, start_service_tstamp_=0, error_count_for_change_leader_=0, last_error_tstamp_for_change_leader_=0) [2024-02-19 19:03:31.971491] INFO [STORAGE.TRANS] start_service (ob_tenant_weak_read_cluster_service.cpp:336) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=22] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] begin start service(tenant_id=1, is_in_service()=false, can_update_version=false) [2024-02-19 19:03:31.971509] INFO [STORAGE.TRANS] start_service (ob_tenant_weak_read_cluster_service.cpp:338)
[1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17] start TenantWeakReadClusterService(tenant_id=1) [2024-02-19 19:03:31.972474] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EB-0-0] [lt=9] table not exist(tenant_id=1, database_id=201001, table_name=__all_weak_read_service, ret=-5019) [2024-02-19 19:03:31.972508] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EB-0-0] [lt=32] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_weak_read_service, ret=-5019) [2024-02-19 19:03:31.972521] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EB-0-0] [lt=11] fail to resolve table relation recursively(tenant_id=1, ret=-5019) [2024-02-19 19:03:31.972541] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EB-0-0] [lt=18] resolve table relation factor failed(ret=-5019, table_name=__all_weak_read_service) [2024-02-19 19:03:31.972557] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EB-0-0] [lt=11] fail to resolve table(ret=-5019) [2024-02-19 19:03:31.972572] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EB-0-0] [lt=15] fail to resolve sys view(ret=-5019) [2024-02-19 19:03:31.972589] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EB-0-0] [lt=10] Table 'oceanbase.__all_weak_read_service' doesn't exist [2024-02-19 19:03:31.972606] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EB-0-0] [lt=16] resolve base or alias table factor failed(ret=-5019) [2024-02-19 19:03:31.972617] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EB-0-0] [lt=10] resolve basic table failed(ret=-5019) [2024-02-19 19:03:31.972633] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EB-0-0] [lt=14] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-02-19 19:03:31.972643] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EB-0-0] [lt=9] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-02-19 19:03:31.972659] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EB-0-0] [lt=14] resolve normal query failed(ret=-5019) [2024-02-19 19:03:31.972676] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EB-0-0] [lt=16] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073) [2024-02-19 19:03:31.972705] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EB-0-0] [lt=19] failed to resolve(ret=-5019) [2024-02-19 19:03:31.972722] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EB-0-0] [lt=16] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) 
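Note: every ret=-5019 in this excerpt maps to OB_TABLE_NOT_EXIST. The resolver cannot find core tables of tenant 1 (__all_server earlier, __all_weak_read_service here) because the tenant schema has not been loaded, which matches the repeated "tenant schema is not ready, need wait" entries; the cascade continues below until the inner SQL connection gives up and the weak-read service start aborts. Once the schema becomes available, the failing inner SQL can be replayed by hand to confirm recovery. The statements below are copied verbatim from the log; this is only a sketch, assuming a working mysql client session against this observer's sys tenant:

    SELECT zone FROM oceanbase.__all_server WHERE svr_ip='172.1.3.242' AND svr_port=2882;
    SELECT min_version, max_version FROM oceanbase.__all_weak_read_service WHERE tenant_id = 1 AND level_id = 0 AND level_value = '';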
[2024-02-19 19:03:31.972735] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EB-0-0] [lt=10] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:31.972751] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EB-0-0] [lt=14] fail to handle physical plan(ret=-5019) [2024-02-19 19:03:31.972762] WARN [SQL] stmt_query (ob_sql.cpp:175) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EB-0-0] [lt=9] fail to handle text query(stmt=select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = '', ret=-5019) [2024-02-19 19:03:31.972780] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EB-0-0] [lt=16] executor execute failed(ret=-5019) [2024-02-19 19:03:31.972797] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EB-0-0] [lt=16] execute failed(ret=-5019, executor={ObIExecutor:, sql:"select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = ''"}, retry_cnt=0) [2024-02-19 19:03:31.972823] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EB-0-0] [lt=20] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-02-19 19:03:31.972842] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EB-0-0] [lt=16] result set close failed(ret=-5019) [2024-02-19 19:03:31.972852] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EB-0-0] [lt=8] result set close failed(ret=-5019) [2024-02-19 19:03:31.972858] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EB-0-0] [lt=5] failed to close result(close_ret=-5019, ret=-5019) [2024-02-19 19:03:31.972878] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EB-0-0] [lt=6] failed to process record(executor={ObIExecutor:, sql:"select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = ''"}, record_ret=-5019, ret=-5019) [2024-02-19 19:03:31.972893] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EB-0-0] [lt=13] failed to process final(executor={ObIExecutor:, sql:"select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = ''"}, aret=-5019, ret=-5019) [2024-02-19 19:03:31.972902] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7] execute sql failed(ret=-5019, tenant_id=1, sql=select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = '') [2024-02-19 19:03:31.972913] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-02-19 19:03:31.972920] WARN [SERVER] execute_read 
(ob_inner_sql_connection.cpp:1943) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1) [2024-02-19 19:03:31.972947] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=25] query failed(ret=-5019, conn=0x7fdd189bc050, start=1708340611972249, sql=select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = '') [2024-02-19 19:03:31.972956] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9] read failed(ret=-5019) [2024-02-19 19:03:31.972964] WARN [STORAGE.TRANS] query_cluster_version_range_ (ob_tenant_weak_read_cluster_service.cpp:192) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6] execute sql read fail(ret=-5019, ret="OB_TABLE_NOT_EXIST", exec_tenant_id=1, tenant_id=1, sql=select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = '') [2024-02-19 19:03:31.973022] WARN [STORAGE.TRANS] start_service (ob_tenant_weak_read_cluster_service.cpp:367) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9] query cluster version range from WRS table fail(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-02-19 19:03:31.973031] INFO [STORAGE.TRANS] start_service (ob_tenant_weak_read_cluster_service.cpp:415) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] start service done(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, in_service=false, leader_epoch=0, current_version=0, delta=1708340611973028, min_version=0, max_version=0, max_stale_time=5000000000, all_valid_server_count=0, total_time=1550, wlock_time=39, check_leader_time=3, query_version_time=0, persist_version_time=0) [2024-02-19 19:03:31.973045] WARN [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:781) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12] start CLUSTER weak read service fail(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1) [2024-02-19 19:03:31.973058] INFO [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:791) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] [SELF_CHECK] done(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, need_start_service=true, need_stop_service=false, need_change_leader=false, is_in_service()=false, can_update_version=false, cur_leader_epoch=138, start_service_tstamp_=0, error_count_for_change_leader_=0, last_error_tstamp_for_change_leader_=0) [2024-02-19 19:03:31.973104] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9] get wrs ts(ls_id={id:1}, delta_ns=-1706042771803322595, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807) [2024-02-19 19:03:31.973115] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1) [2024-02-19 19:03:31.979315] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=43] there is not any block can be recycled, 
need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [... the matching try_recycle_blocks ERROR follows at 19:03:31.979350, and the WARN/ERROR pair repeats twice more with identical payload, 19:03:31.989474 through 19:03:31.999793 ...]
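Note: the PALF pair above is the heartbeat of the underlying problem. try_recycle_blocks reports the clog directory at its hard limit, and recycle_blocks_ finds nothing to recycle because the only log stream's base LSN has not advanced. The reported numbers are self-consistent: limit_size = 2048 MB x 95% ~ 1945 MB, and used_size is already 1945 MB, so used_percent = 1945 / 2048 ~ 95%, the threshold at which disk_opts_for_stopping_writing takes effect; the 80% warn level (1638 MB) was passed long before. A worked check of the thresholds, illustrative arithmetic only:

    SELECT 2048 * 0.80 AS warn_mb,                 -- 1638.4 -> warn_size(MB)=1638
           2048 * 0.95 AS limit_mb,                -- 1945.6 -> limit_size(MB)=1945
           ROUND(1945 / 2048 * 100) AS used_pct;   -- 95, equal to limit_percent(%)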
[2024-02-19 19:03:32.006367] INFO [COMMON] replace_fragment_node (ob_kvcache_map.cpp:695) [1106689][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=25] Cache replace map node details(ret=0, replace_node_count=0, replace_time=16531, replace_start_pos=1069504, replace_num=15728) [... the PALF recycle_blocks_ WARN / try_recycle_blocks ERROR pair repeats twice with identical payload, 19:03:32.009929 through 19:03:32.020202 ...] [2024-02-19 19:03:32.028070] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=10] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:32.028113] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=43] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340612028058}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:32.028137] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=22] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340612028058},
range_size:1, sender:"172.1.3.242:2882"}) [... the PALF recycle_blocks_ WARN / try_recycle_blocks ERROR pair repeats with identical payload at 19:03:32.030346/19:03:32.030387 ...] [2024-02-19 19:03:32.034025] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1009) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=37] Wash compute wash size(is_wash_valid=true, sys_total_wash_size=1386106880, global_cache_size=4161536, tenant_max_wash_size=4161536, tenant_min_wash_size=4161536, tenant_ids_=[512, 500, 999, 506, 507, 508, 509, 510, 1]) [2024-02-19 19:03:32.034142] INFO [COMMON] wash (ob_kvcache_store.cpp:343) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=42] Wash time detail, (compute_wash_size_time=189, refresh_score_time=68, wash_time=7) [... the pair repeats 3 more times, 19:03:32.040519 through 19:03:32.060957 ...] [2024-02-19 19:03:32.070253] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=17] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340612070238}) [2024-02-19 19:03:32.070285] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=34] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340612028058}}) [... the pair repeats once more, 19:03:32.071034/19:03:32.071078 ...] [2024-02-19 19:03:32.071461] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13] get wrs ts(ls_id={id:1}, delta_ns=-1706042771803224060, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807) [2024-02-19 19:03:32.071490] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=28] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1) [2024-02-19 19:03:32.074889] INFO [COMMON] clean_garbage_node (ob_kvcache_map.cpp:645) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=31] Cache wash clean map node details(ret=0, clean_node_count=0, clean_time=40712, clean_start_pos=692054, clean_num=31457) [2024-02-19 19:03:32.078779] WARN [COORDINATOR] get_ls_election_reference_info (ob_leader_coordinator.cpp:138) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC7A-0-0] [lt=137] can not find this ls_id in all_ls_election_reference_info_(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1}, all_ls_election_reference_info=[]) [2024-02-19 19:03:32.078821] WARN [COORDINATOR] refresh_ (election_priority_v1.cpp:144) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC7A-0-0] [lt=42] fail to get ls election reference info(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, *this={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}) [2024-02-19 19:03:32.078838] WARN [COORDINATOR] operator() (election_priority_impl.cpp:246) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC7A-0-0] [lt=17] refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id_={id:1}, element={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}) [2024-02-19 19:03:32.078869] WARN iterate (ob_tuple.h:272) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC7A-0-0] [lt=27] assign element failed(ret=-4018, std::get(tuple)={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}) [2024-02-19 19:03:32.078887] WARN [COORDINATOR] refresh (election_priority_impl.cpp:261) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC7A-0-0] [lt=17] refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1}, *this={priority:{is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}}) [... the PALF recycle_blocks_ WARN / try_recycle_blocks ERROR pair repeats 4 more times, 19:03:32.084852 through 19:03:32.115595 ...] [2024-02-19 19:03:32.118272] INFO [SERVER] runTimerTask (ob_eliminate_task.cpp:199) [1107573][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=42] sql audit evict task end(evict_high_mem_level=32212254, evict_high_size_level=90000, evict_batch_count=0, elapse_time=1, size_used=14894, mem_used=31196160) [... the pair repeats once more, 19:03:32.125745/19:03:32.125787 ...] [2024-02-19 19:03:32.128735] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=14] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:32.128762] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=27] post local gts request
failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340612128724}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:32.128782] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=16] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340612128724}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:32.135909] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=33] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:32.135953] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=47] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:32.146073] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=28] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:32.146120] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=51] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:32.156258] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=36] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, 
log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:32.156305] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=50] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:32.166744] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=28] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:32.166781] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=40] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:32.170354] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=14] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340612170341}) [2024-02-19 19:03:32.170382] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=30] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340612128724}}) [2024-02-19 19:03:32.171532] WARN [STORAGE.TRANS] post_cluster_heartbeat_rpc_ (ob_tenant_weak_read_service.cpp:797) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=22] get cluster service master fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, cluster_service_tablet_id={id:226}) [2024-02-19 19:03:32.171575] WARN [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:438) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=43] tenant weak read service cluster heartbeat RPC fail(rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.1.3.242:2882", cluster_service_tablet_id={id:226}) [2024-02-19 19:03:32.171595] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:756) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version=1708336686671726824, valid_part_count=1, total_part_count=1, 
generate_timestamp=1708340612171516) [2024-02-19 19:03:32.171614] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:766) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1708340611971444, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-02-19 19:03:32.171704] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12] get wrs ts(ls_id={id:1}, delta_ns=-1706042771803123915, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807) [2024-02-19 19:03:32.171723] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1) [2024-02-19 19:03:32.176919] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=28] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:32.176969] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=52] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:32.187122] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=38] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:32.187168] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=48] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 
0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:32.197313] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=28] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:32.197369] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=59] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:32.208585] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D6-0-0] [lt=9] table not exist(tenant_id=1, database_id=201001, table_name=__all_sys_parameter, ret=-5019) [2024-02-19 19:03:32.208614] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D6-0-0] [lt=29] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_sys_parameter, ret=-5019) [2024-02-19 19:03:32.208629] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D6-0-0] [lt=12] fail to resolve table relation recursively(tenant_id=1, ret=-5019) [2024-02-19 19:03:32.208640] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D6-0-0] [lt=11] resolve table relation factor failed(ret=-5019, table_name=__all_sys_parameter) [2024-02-19 19:03:32.208655] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D6-0-0] [lt=10] fail to resolve table(ret=-5019) [2024-02-19 19:03:32.208665] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D6-0-0] [lt=10] fail to resolve sys view(ret=-5019) [2024-02-19 19:03:32.208681] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D6-0-0] [lt=8] Table 'oceanbase.__all_sys_parameter' doesn't exist [2024-02-19 19:03:32.208691] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D6-0-0] [lt=9] resolve base or alias table factor failed(ret=-5019) [2024-02-19 19:03:32.208701] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D6-0-0] [lt=9] resolve basic table failed(ret=-5019) [2024-02-19 19:03:32.208711] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D6-0-0] 
[lt=9] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-02-19 19:03:32.208722] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D6-0-0] [lt=9] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-02-19 19:03:32.208849] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D6-0-0] [lt=125] resolve normal query failed(ret=-5019) [2024-02-19 19:03:32.208861] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D6-0-0] [lt=11] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073) [2024-02-19 19:03:32.208881] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D6-0-0] [lt=11] failed to resolve(ret=-5019) [2024-02-19 19:03:32.208894] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D6-0-0] [lt=11] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:32.208907] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D6-0-0] [lt=10] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:32.208917] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D6-0-0] [lt=9] fail to handle physical plan(ret=-5019) [2024-02-19 19:03:32.208946] WARN [SQL] stmt_query (ob_sql.cpp:175) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D6-0-0] [lt=27] fail to handle text query(stmt=select config_version, zone, svr_type, svr_ip, svr_port, name, data_type, value, info, section, scope, source, edit_level from __all_sys_parameter, ret=-5019) [2024-02-19 19:03:32.208960] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D6-0-0] [lt=11] executor execute failed(ret=-5019) [2024-02-19 19:03:32.208971] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D6-0-0] [lt=10] execute failed(ret=-5019, executor={ObIExecutor:, sql:"select config_version, zone, svr_type, svr_ip, svr_port, name, data_type, value, info, section, scope, source, edit_level from __all_sys_parameter"}, retry_cnt=0) [2024-02-19 19:03:32.208991] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D6-0-0] [lt=15] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-02-19 19:03:32.209040] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D6-0-0] [lt=45] result set close failed(ret=-5019) [2024-02-19 19:03:32.209051] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D6-0-0] [lt=9] result set close failed(ret=-5019) [2024-02-19 19:03:32.209060] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D6-0-0] [lt=8] failed to close result(close_ret=-5019, ret=-5019) [2024-02-19 19:03:32.209087] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D6-0-0] [lt=9] failed to process record(executor={ObIExecutor:, 
sql:"select config_version, zone, svr_type, svr_ip, svr_port, name, data_type, value, info, section, scope, source, edit_level from __all_sys_parameter"}, record_ret=-5019, ret=-5019) [2024-02-19 19:03:32.209103] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1106658][ConfigMgr][T0][YB42AC0103F2-000611B922D790D6-0-0] [lt=12] failed to process final(executor={ObIExecutor:, sql:"select config_version, zone, svr_type, svr_ip, svr_port, name, data_type, value, info, section, scope, source, edit_level from __all_sys_parameter"}, aret=-5019, ret=-5019) [2024-02-19 19:03:32.209117] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1106658][ConfigMgr][T0][YB42AC0103F2-000611B922D790D6-0-0] [lt=11] execute sql failed(ret=-5019, tenant_id=1, sql=select config_version, zone, svr_type, svr_ip, svr_port, name, data_type, value, info, section, scope, source, edit_level from __all_sys_parameter) [2024-02-19 19:03:32.209130] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1106658][ConfigMgr][T0][YB42AC0103F2-000611B922D790D6-0-0] [lt=11] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-02-19 19:03:32.209141] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1106658][ConfigMgr][T0][YB42AC0103F2-000611B922D790D6-0-0] [lt=10] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1) [2024-02-19 19:03:32.209152] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1106658][ConfigMgr][T0][YB42AC0103F2-000611B922D790D6-0-0] [lt=10] query failed(ret=-5019, conn=0x7fdcf4ef4050, start=1708340612208355, sql=select config_version, zone, svr_type, svr_ip, svr_port, name, data_type, value, info, section, scope, source, edit_level from __all_sys_parameter) [2024-02-19 19:03:32.209165] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1106658][ConfigMgr][T0][YB42AC0103F2-000611B922D790D6-0-0] [lt=13] read failed(ret=-5019) [2024-02-19 19:03:32.209176] WARN [SHARE] update_local (ob_config_manager.cpp:322) [1106658][ConfigMgr][T0][YB42AC0103F2-000611B922D790D6-0-0] [lt=9] read config from __all_sys_parameter failed(sqlstr="select config_version, zone, svr_type, svr_ip, svr_port, name, data_type, value, info, section, scope, source, edit_level from __all_sys_parameter", ret=-5019) [2024-02-19 19:03:32.209252] WARN [SHARE] update_local (ob_config_manager.cpp:356) [1106658][ConfigMgr][T0][YB42AC0103F2-000611B922D790D6-0-0] [lt=10] Read system config from inner table error(ret=-5019) [2024-02-19 19:03:32.209264] WARN [SHARE] runTimerTask (ob_config_manager.cpp:455) [1106658][ConfigMgr][T0][YB42AC0103F2-000611B922D790D6-0-0] [lt=12] Update local config failed(ret=-5019) [2024-02-19 19:03:32.209694] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:32.209721] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=26] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, 
warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:32.220410] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:32.220458] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=50] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:32.224386] INFO [COMMON] replace_fragment_node (ob_kvcache_map.cpp:695) [1106689][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=30] Cache replace map node details(ret=0, replace_node_count=0, replace_time=17944, replace_start_pos=1085232, replace_num=15728) [2024-02-19 19:03:32.229341] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=9] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:32.229385] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=44] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340612229329}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:32.229405] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=18] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340612229329}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:32.230537] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=38] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:32.230563] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=27] clog disk space is almost full(total_size(MB)=2048, 
used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:32.238874] INFO [LIB] runTimerTask (ob_work_queue.cpp:24) [1106715][ObTimer][T0][Y0-0000000000000000-0-0] [lt=27] add async task(this=tasktype:N9oceanbase10rootserver13ObRootService19ObRefreshServerTaskE) [2024-02-19 19:03:32.239801] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2C-0-0] [lt=14] table not exist(tenant_id=1, database_id=201001, table_name=__all_server, ret=-5019) [2024-02-19 19:03:32.239827] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2C-0-0] [lt=25] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_server, ret=-5019) [2024-02-19 19:03:32.239841] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2C-0-0] [lt=11] fail to resolve table relation recursively(tenant_id=1, ret=-5019) [2024-02-19 19:03:32.239852] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2C-0-0] [lt=10] resolve table relation factor failed(ret=-5019, table_name=__all_server) [2024-02-19 19:03:32.239864] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2C-0-0] [lt=8] fail to resolve table(ret=-5019) [2024-02-19 19:03:32.239881] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2C-0-0] [lt=17] fail to resolve sys view(ret=-5019) [2024-02-19 19:03:32.239896] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2C-0-0] [lt=8] Table 'oceanbase.__all_server' doesn't exist [2024-02-19 19:03:32.239909] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2C-0-0] [lt=12] resolve base or alias table factor failed(ret=-5019) [2024-02-19 19:03:32.239918] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2C-0-0] [lt=9] resolve basic table failed(ret=-5019) [2024-02-19 19:03:32.239926] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2C-0-0] [lt=7] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-02-19 19:03:32.239934] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2C-0-0] [lt=7] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-02-19 19:03:32.239947] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2C-0-0] [lt=11] resolve normal query failed(ret=-5019) [2024-02-19 19:03:32.239958] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2C-0-0] [lt=9] execute stmt_resolver failed(ret=-5019, 
parse_tree.type_=3073) [2024-02-19 19:03:32.239977] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2C-0-0] [lt=10] failed to resolve(ret=-5019) [2024-02-19 19:03:32.239996] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2C-0-0] [lt=18] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:32.240010] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2C-0-0] [lt=10] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:32.240021] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2C-0-0] [lt=11] fail to handle physical plan(ret=-5019) [2024-02-19 19:03:32.240032] WARN [SQL] stmt_query (ob_sql.cpp:175) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2C-0-0] [lt=7] fail to handle text query(stmt=SELECT time_to_usec(gmt_modified) AS last_hb_time, id, zone, svr_ip, svr_port, inner_port, status, with_rootserver, block_migrate_in_time, build_version, stop_time, start_service_time, with_partition FROM __all_server, ret=-5019) [2024-02-19 19:03:32.240043] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2C-0-0] [lt=10] executor execute failed(ret=-5019) [2024-02-19 19:03:32.240054] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2C-0-0] [lt=10] execute failed(ret=-5019, executor={ObIExecutor:, sql:"SELECT time_to_usec(gmt_modified) AS last_hb_time, id, zone, svr_ip, svr_port, inner_port, status, with_rootserver, block_migrate_in_time, build_version, stop_time, start_service_time, with_partition FROM __all_server"}, retry_cnt=0) [2024-02-19 19:03:32.240073] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2C-0-0] [lt=12] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-02-19 19:03:32.240092] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2C-0-0] [lt=16] result set close failed(ret=-5019) [2024-02-19 19:03:32.240100] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2C-0-0] [lt=8] result set close failed(ret=-5019) [2024-02-19 19:03:32.240109] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2C-0-0] [lt=8] failed to close result(close_ret=-5019, ret=-5019) [2024-02-19 19:03:32.240134] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2C-0-0] [lt=8] failed to process record(executor={ObIExecutor:, sql:"SELECT time_to_usec(gmt_modified) AS last_hb_time, id, zone, svr_ip, svr_port, inner_port, status, with_rootserver, block_migrate_in_time, build_version, stop_time, start_service_time, with_partition FROM __all_server"}, record_ret=-5019, ret=-5019) [2024-02-19 19:03:32.240150] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1106717][RSAsyncTask0][T0][YB42AC0103F2-000611B922978A2C-0-0] [lt=12] failed to process final(executor={ObIExecutor:, sql:"SELECT time_to_usec(gmt_modified) AS last_hb_time, id, zone, svr_ip, 
svr_port, inner_port, status, with_rootserver, block_migrate_in_time, build_version, stop_time, start_service_time, with_partition FROM __all_server"}, aret=-5019, ret=-5019) [2024-02-19 19:03:32.240164] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1106717][RSAsyncTask0][T0][Y0-0000000000000000-0-0] [lt=11] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT time_to_usec(gmt_modified) AS last_hb_time, id, zone, svr_ip, svr_port, inner_port, status, with_rootserver, block_migrate_in_time, build_version, stop_time, start_service_time, with_partition FROM __all_server) [2024-02-19 19:03:32.240177] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1106717][RSAsyncTask0][T0][Y0-0000000000000000-0-0] [lt=12] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-02-19 19:03:32.240187] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1106717][RSAsyncTask0][T0][Y0-0000000000000000-0-0] [lt=10] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1) [2024-02-19 19:03:32.240197] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1106717][RSAsyncTask0][T0][Y0-0000000000000000-0-0] [lt=8] query failed(ret=-5019, conn=0x7fdcdc89a050, start=1708340612239594, sql=SELECT time_to_usec(gmt_modified) AS last_hb_time, id, zone, svr_ip, svr_port, inner_port, status, with_rootserver, block_migrate_in_time, build_version, stop_time, start_service_time, with_partition FROM __all_server) [2024-02-19 19:03:32.240210] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1106717][RSAsyncTask0][T0][Y0-0000000000000000-0-0] [lt=12] read failed(ret=-5019) [2024-02-19 19:03:32.240376] WARN [SHARE] run2 (ob_async_task_queue.cpp:148) [1106717][RSAsyncTask0][T0][Y0-0000000000000000-0-0] [lt=8] task process failed, start retry(max retry time=0, retry interval=1000000, ret=-5019) [2024-02-19 19:03:32.240664] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=27] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:32.240687] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=23] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:32.250824] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=24] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, 
log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:32.250866] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=50] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:32.260638] INFO [SHARE] run_loop_ (ob_bg_thread_monitor.cpp:331) [1109111][BGThreadMonitor][T0][Y0-0000000000000000-0-0] [lt=47] current monitor number(seq_=-1) [2024-02-19 19:03:32.260984] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=27] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:32.261020] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:32.270766] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=24] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340612270748}) [2024-02-19 19:03:32.270805] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=41] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340612229329}}) [2024-02-19 19:03:32.270852] INFO [DETECT] record_summary_info_and_logout_when_necessary_ (ob_lcl_batch_sender_thread.cpp:159) [1108324][T1_LCLSender][T1][Y0-0000000000000000-0-0] [lt=55] ObLCLBatchSenderThread periodic report summary info(total_constructed_detector=0, total_destructed_detector=0, total_alived_detector=0, duty_ratio=3.051896207584830004e-02, int64_t(ObServerConfig::get_instance()._lcl_op_interval)=30000, *this={this:0x7fdd02d94eb0, is_inited:true, is_running:true, total_record_time:5010000, over_night_times:0}) [2024-02-19 19:03:32.271274] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=60] there is not any block can be recycled, need 
verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:32.271304] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=29] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:32.272518] WARN [STORAGE.TRANS] post_cluster_heartbeat_rpc_ (ob_tenant_weak_read_service.cpp:797) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=23] get cluster service master fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, cluster_service_tablet_id={id:226}) [2024-02-19 19:03:32.272543] WARN [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:438) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=25] tenant weak read service cluster heartbeat RPC fail(rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.1.3.242:2882", cluster_service_tablet_id={id:226}) [2024-02-19 19:03:32.272565] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:756) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=19] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version=1708336686671726824, valid_part_count=1, total_part_count=1, generate_timestamp=1708340612272500) [2024-02-19 19:03:32.272586] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:766) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=19] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1708340612171625, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-02-19 19:03:32.272667] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13] get wrs ts(ls_id={id:1}, delta_ns=-1706042771803022588, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807) [2024-02-19 19:03:32.272691] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=20] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1) [2024-02-19 19:03:32.275432] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1009) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=27] Wash compute wash size(is_wash_valid=true, sys_total_wash_size=1386106880, global_cache_size=4161536, tenant_max_wash_size=4161536, 
tenant_min_wash_size=4161536, tenant_ids_=[512, 500, 999, 506, 507, 508, 509, 510, 1]) [2024-02-19 19:03:32.275528] INFO [COMMON] wash (ob_kvcache_store.cpp:343) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=38] Wash time detail, (compute_wash_size_time=144, refresh_score_time=53, wash_time=7) [2024-02-19 19:03:32.281426] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=30] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:32.281471] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=48] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:32.291602] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=43] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:32.291645] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=45] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:32.301764] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=32] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:32.301812] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) 
[1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=52] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:32.311932] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=27] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:32.311982] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=54] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:32.317747] INFO [COMMON] clean_garbage_node (ob_kvcache_map.cpp:645) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=242] Cache wash clean map node details(ret=0, clean_node_count=0, clean_time=41970, clean_start_pos=723511, clean_num=31457) [2024-02-19 19:03:32.322992] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A026-0-0] [lt=8] table not exist(tenant_id=1, database_id=201001, table_name=__all_server, ret=-5019) [2024-02-19 19:03:32.323030] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A026-0-0] [lt=39] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_server, ret=-5019) [2024-02-19 19:03:32.323044] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A026-0-0] [lt=13] fail to resolve table relation recursively(tenant_id=1, ret=-5019) [2024-02-19 19:03:32.323055] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A026-0-0] [lt=10] resolve table relation factor failed(ret=-5019, table_name=__all_server) [2024-02-19 19:03:32.323068] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A026-0-0] [lt=9] fail to resolve table(ret=-5019) [2024-02-19 19:03:32.323078] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A026-0-0] [lt=9] fail to resolve sys view(ret=-5019) [2024-02-19 19:03:32.323093] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) 
[2024-02-19 19:03:32.322992] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A026-0-0] [lt=8] table not exist(tenant_id=1, database_id=201001, table_name=__all_server, ret=-5019)
[2024-02-19 19:03:32.323030] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A026-0-0] [lt=39] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_server, ret=-5019)
[2024-02-19 19:03:32.323044] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A026-0-0] [lt=13] fail to resolve table relation recursively(tenant_id=1, ret=-5019)
[2024-02-19 19:03:32.323055] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A026-0-0] [lt=10] resolve table relation factor failed(ret=-5019, table_name=__all_server)
[2024-02-19 19:03:32.323068] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A026-0-0] [lt=9] fail to resolve table(ret=-5019)
[2024-02-19 19:03:32.323078] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A026-0-0] [lt=9] fail to resolve sys view(ret=-5019)
[2024-02-19 19:03:32.323093] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A026-0-0] [lt=8] Table 'oceanbase.__all_server' doesn't exist
[2024-02-19 19:03:32.323102] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A026-0-0] [lt=8] resolve base or alias table factor failed(ret=-5019)
[2024-02-19 19:03:32.323111] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A026-0-0] [lt=8] resolve basic table failed(ret=-5019)
[2024-02-19 19:03:32.323120] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A026-0-0] [lt=7] fail to exec resolve_table(*table_node, table_item)(ret=-5019)
[2024-02-19 19:03:32.323129] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A026-0-0] [lt=7] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019)
[2024-02-19 19:03:32.323139] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A026-0-0] [lt=8] resolve normal query failed(ret=-5019)
[2024-02-19 19:03:32.323149] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A026-0-0] [lt=9] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073)
[2024-02-19 19:03:32.323169] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A026-0-0] [lt=11] failed to resolve(ret=-5019)
[2024-02-19 19:03:32.323181] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A026-0-0] [lt=10] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-02-19 19:03:32.323193] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A026-0-0] [lt=10] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-02-19 19:03:32.323203] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A026-0-0] [lt=8] fail to handle physical plan(ret=-5019)
[2024-02-19 19:03:32.323215] WARN [SQL] stmt_query (ob_sql.cpp:175) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A026-0-0] [lt=10] fail to handle text query(stmt=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882, ret=-5019)
[2024-02-19 19:03:32.323229] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A026-0-0] [lt=12] executor execute failed(ret=-5019)
[2024-02-19 19:03:32.323240] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A026-0-0] [lt=9] execute failed(ret=-5019, executor={ObIExecutor:, sql:"SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882"}, retry_cnt=0)
[2024-02-19 19:03:32.323259] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A026-0-0] [lt=13] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false)
[2024-02-19 19:03:32.323278] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A026-0-0] [lt=17] result set close failed(ret=-5019)
[2024-02-19 19:03:32.323287] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A026-0-0] [lt=8] result set close failed(ret=-5019)
[2024-02-19 19:03:32.323296] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A026-0-0] [lt=8] failed to close result(close_ret=-5019, ret=-5019)
[2024-02-19 19:03:32.323322] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A026-0-0] [lt=8] failed to process record(executor={ObIExecutor:, sql:"SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882"}, record_ret=-5019, ret=-5019)
[2024-02-19 19:03:32.323335] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A026-0-0] [lt=11] failed to process final(executor={ObIExecutor:, sql:"SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882"}, aret=-5019, ret=-5019)
[2024-02-19 19:03:32.323348] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=11] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882)
[2024-02-19 19:03:32.323359] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=11] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1)
[2024-02-19 19:03:32.323369] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=8] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1)
[2024-02-19 19:03:32.323379] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=8] query failed(ret=-5019, conn=0x7fdcdc9be050, start=1708340612322779, sql=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882)
[2024-02-19 19:03:32.323413] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=11] read failed(ret=-5019)
[2024-02-19 19:03:32.323424] WARN get_my_sql_result_ (ob_table_access_helper.h:329) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=9] GCTX.sql_proxy_ read failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", MTL_ID()=1, tenant_id=1, columns=0x7fdcfffccd78, table=__all_server, condition=where svr_ip='172.1.3.242' and svr_port=2882, sql=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882, columns_str="zone")
[2024-02-19 19:03:32.323445] WARN read_single_row (ob_table_access_helper.h:178) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=15] get mysql result failed(ret=-4016, ret="OB_ERR_UNEXPECTED", MTL_ID()=1, columns=0x7fdcfffccd78, table=__all_server, where_condition=where svr_ip='172.1.3.242' and svr_port=2882)
[2024-02-19 19:03:32.323537] WARN [COORDINATOR] get_self_zone_name (table_accessor.cpp:517) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=11] get zone from __all_server failed(ret=-4016, ret="OB_ERR_UNEXPECTED", columns=0x7fdcfffccd78, where_condition="where svr_ip='172.1.3.242' and svr_port=2882", zone_name_holder=)
[2024-02-19 19:03:32.323552] WARN [COORDINATOR] get_all_ls_election_reference_info (table_accessor.cpp:450) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=15] get self zone name failed(ret=-4016, ret="OB_ERR_UNEXPECTED", all_ls_election_reference_info=[])
[2024-02-19 19:03:32.323565] WARN [COORDINATOR] get_all_ls_election_reference_info (table_accessor.cpp:459) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=11] zone name is empty(ret=-4016, ret="OB_ERR_UNEXPECTED", all_ls_election_reference_info=[])
[2024-02-19 19:03:32.323577] WARN [COORDINATOR] refresh (ob_leader_coordinator.cpp:107) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=10] get all ls election reference info failed(ret=-4016, ret="OB_ERR_UNEXPECTED", MTL_ID()=1)
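Everything in this chain is one failure: the inner SQL "SELECT zone FROM __all_server ..." fails with -5019 (OB_TABLE_NOT_EXIST), so the leader coordinator cannot learn its own zone and its election reference info stays empty; the -4016 (OB_ERR_UNEXPECTED) entries are that same error being re-wrapped on the way up. Since __all_server lives in tenant 1's schema, this usually means the observer came up without a usable sys-tenant schema, for example a cluster that never finished bootstrap or a restart on a re-initialized data directory. A quick probe, using table names that appear in this log:

    -- does the sys tenant's bootstrap metadata resolve at all?
    SELECT table_name, row_id, column_name FROM oceanbase.__all_core_table LIMIT 5;
    SELECT svr_ip, svr_port, zone, status FROM oceanbase.__all_server;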
[2024-02-19 19:03:32.328731] WARN [COORDINATOR] get_ls_election_reference_info (ob_leader_coordinator.cpp:138) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC7B-0-0] [lt=172] can not find this ls_id in all_ls_election_reference_info_(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1}, all_ls_election_reference_info=[])
[2024-02-19 19:03:32.328761] WARN [COORDINATOR] refresh_ (election_priority_v1.cpp:144) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC7B-0-0] [lt=32] fail to get ls election reference info(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, *this={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807})
[2024-02-19 19:03:32.328778] WARN [COORDINATOR] operator() (election_priority_impl.cpp:246) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC7B-0-0] [lt=15] refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id_={id:1}, element={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807})
[2024-02-19 19:03:32.328833] WARN iterate (ob_tuple.h:272) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC7B-0-0] [lt=52] assign element failed(ret=-4018, std::get(tuple)={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807})
[2024-02-19 19:03:32.328849] WARN [COORDINATOR] refresh (election_priority_impl.cpp:261) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC7B-0-0] [lt=15] refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1}, *this={priority:{is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}})
[2024-02-19 19:03:32.330334] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=9] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0)
[2024-02-19 19:03:32.330361] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=26] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340612330325}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:32.330377] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=15] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340612330325}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:32.370798] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=17] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340612370770})
[2024-02-19 19:03:32.370830] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=32] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340612330325}})
[2024-02-19 19:03:32.372648] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=35] get wrs ts(ls_id={id:1}, delta_ns=-1706042771802923686, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807)
[2024-02-19 19:03:32.372678] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=31] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1)
[2024-02-19 19:03:32.377199] WARN [STORAGE.TRANS] acquire_global_snapshot__ (ob_trans_service_v4.cpp:1472) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=17] acquire global snapshot fail(ret=-4012, gts_ahead=0, expire_ts=1708340612376061, now={mts:1708340610448186}, now0={mts:1708340610448186}, snapshot=-1, uncertain_bound=0)
[2024-02-19 19:03:32.377236] WARN [STORAGE.TRANS] get_read_snapshot (ob_tx_api.cpp:552) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=39] acquire global snapshot fail(ret=-4012, tx={this:0x7fdcd5ac14a0, tx_id:{txid:0}, state:1, addr:"172.1.3.242:2882", tenant_id:1, session_id:1, xid:NULL, access_mode:0, tx_consistency_type:0, isolation:1, snapshot_version:-1, snapshot_scn:0, active_scn:-1, op_sn:1, alloc_ts:1708340610447159, active_ts:-1, commit_ts:-1, finish_ts:-1, timeout_us:-1, lock_timeout_us:-1, expire_ts:9223372036854775807, coord_id:{id:-1}, parts:[], exec_info_reap_ts:0, commit_version:-1, commit_cb:null, cluster_id:-1, cluster_version:0, flags_.SHADOW:false, flags_.INTERRUPTED:false, flags_.BLOCK:false, flags_.REPLICA:false, can_elr:false, cflict_txs:[], abort_cause:0, commit_expire_ts:0, commit_task_.is_registered():false, ref:1})
[2024-02-19 19:03:32.377282] WARN [SQL.EXE] stmt_setup_snapshot_ (ob_sql_trans_control.cpp:614) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=38] fail to get snapshot(ret=-4012, local_ls_id={id:1}, session={this:0x7fdcd7d060c0, id:1, tenant:"sys", tenant_id:1, effective_tenant:"sys", effective_tenant_id:1, database:"oceanbase", user:"root@%", consistency_level:3, session_state:0, tx:0x7fdcd5ac14a0})
[2024-02-19 19:03:32.377304] WARN [SQL.EXE] start_stmt (ob_sql_trans_control.cpp:481) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=22] fail to exec stmt_setup_snapshot_(session, das_ctx, plan, plan_ctx, txs)(ret=-4012, session_id=1, *tx_desc={this:0x7fdcd5ac14a0, tx_id:{txid:0}, state:1, addr:"172.1.3.242:2882", tenant_id:1, session_id:1, xid:NULL, access_mode:0, tx_consistency_type:0, isolation:1, snapshot_version:-1, snapshot_scn:0, active_scn:-1, op_sn:1, alloc_ts:1708340610447159, active_ts:-1, commit_ts:-1, finish_ts:-1, timeout_us:-1, lock_timeout_us:-1, expire_ts:9223372036854775807, coord_id:{id:-1}, parts:[], exec_info_reap_ts:0, commit_version:-1, commit_cb:null, cluster_id:-1, cluster_version:0, flags_.SHADOW:false, flags_.INTERRUPTED:false, flags_.BLOCK:false, flags_.REPLICA:false, can_elr:false, cflict_txs:[], abort_cause:0, commit_expire_ts:0, commit_task_.is_registered():false, ref:1})
[2024-02-19 19:03:32.377332] INFO [SQL.EXE] start_stmt (ob_sql_trans_control.cpp:530) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=24] start stmt(ret=-4012, auto_commit=true, session_id=1, snapshot={this:0x7fdd2afcbab0, valid:false, source:0, core:{version:-1, tx_id:{txid:0}, scn:-1}, uncertain_bound:0, snapshot_lsid:{id:-1}, parts:[]}, savepoint=0, tx_desc={this:0x7fdcd5ac14a0, tx_id:{txid:0}, state:1, addr:"172.1.3.242:2882", tenant_id:1, session_id:1, xid:NULL, access_mode:0, tx_consistency_type:0, isolation:1, snapshot_version:-1, snapshot_scn:0, active_scn:-1, op_sn:1, alloc_ts:1708340610447159, active_ts:-1, commit_ts:-1, finish_ts:-1, timeout_us:-1, lock_timeout_us:-1, expire_ts:9223372036854775807, coord_id:{id:-1}, parts:[], exec_info_reap_ts:0, commit_version:-1, commit_cb:null, cluster_id:-1, cluster_version:0, flags_.SHADOW:false, flags_.INTERRUPTED:false, flags_.BLOCK:false, flags_.REPLICA:false, can_elr:false, cflict_txs:[], abort_cause:0, commit_expire_ts:0, commit_task_.is_registered():false, ref:1}, plan_type=1, stmt_type=1, has_for_update=false, query_start_time=1708340610447156, use_das=false, session={this:0x7fdcd7d060c0, id:1, tenant:"sys", tenant_id:1, effective_tenant:"sys", effective_tenant_id:1, database:"oceanbase", user:"root@%", consistency_level:3, session_state:0, tx:0x7fdcd5ac14a0}, plan=0x7fdcda010050, consistency_level_in_plan_ctx=3, trans_result={incomplete:false, parts:[], touched_ls_list:[], cflict_txs:[]})
[2024-02-19 19:03:32.377380] WARN [SQL] start_stmt (ob_result_set.cpp:282) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=47] fail to start stmt(ret=-4012, phy_plan->get_dependency_table()=[{table_id:1, schema_version:0, object_type:1, is_db_explicit:false, is_existed:true}])
[2024-02-19 19:03:32.377396] WARN [SQL] do_open_plan (ob_result_set.cpp:451) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=13] fail start stmt(ret=-4012)
[2024-02-19 19:03:32.377407] WARN [SQL] open (ob_result_set.cpp:150) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=9] execute plan failed(ret=-4012)
[2024-02-19 19:03:32.377416] WARN [SERVER] open (ob_inner_sql_result.cpp:146) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=8] open result set failed(ret=-4012)
[2024-02-19 19:03:32.377427] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:607) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=7] result set open failed(ret=-4012, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"})
[2024-02-19 19:03:32.377439] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=11] execute failed(ret=-4012, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=0)
[2024-02-19 19:03:32.377452] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=9] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-4012, err_:"OB_TIMEOUT", retry_type:0, client_ret:-4012}, need_retry=false)
[2024-02-19 19:03:32.377485] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=14] result set close failed(ret=-4012)
[2024-02-19 19:03:32.377494] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=9] result set close failed(ret=-4012)
[2024-02-19 19:03:32.377503] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=8] failed to close result(close_ret=-4012, ret=-4012)
[2024-02-19 19:03:32.377534] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78583-0-0] [lt=9] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4012, ret=-4012)
[2024-02-19 19:03:32.377550] INFO [SERVER] process_final (ob_inner_sql_connection.cpp:574) [1106708][SerScheQueue1][T0][YB42AC0103F2-000611B922B78583-0-0] [lt=12] slow inner sql(last_ret=-4012, sql={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, process_time=1930392)
[2024-02-19 19:03:32.377558] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1106708][SerScheQueue1][T0][YB42AC0103F2-000611B922B78583-0-0] [lt=7] failed to process final(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, aret=-4012, ret=-4012)
[2024-02-19 19:03:32.377570] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1106708][SerScheQueue1][T0][YB42AC0103F2-000611B922B78583-0-0] [lt=7] execute sql failed(ret=-4012, tenant_id=1, sql=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name)
[2024-02-19 19:03:32.377579] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1106708][SerScheQueue1][T0][YB42AC0103F2-000611B922B78583-0-0] [lt=7] retry_while_no_tenant_resource failed(ret=-4012, tenant_id=1)
[2024-02-19 19:03:32.377586] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1106708][SerScheQueue1][T0][YB42AC0103F2-000611B922B78583-0-0] [lt=7] execute_read failed(ret=-4012, cluster_id=1, tenant_id=1)
[2024-02-19 19:03:32.377594] WARN [COMMON.MYSQLP] read_without_check_sys_variable (ob_sql_client_decorator.cpp:119) [1106708][SerScheQueue1][T0][YB42AC0103F2-000611B922B78583-0-0] [lt=6] failed to read without check sys variable(ret=-4012, sql="SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name", tenant_id=1, check_sys_variable=false, snapshot_timestamp=-1)
[2024-02-19 19:03:32.377607] WARN [SHARE] load (ob_core_table_proxy.cpp:436) [1106708][SerScheQueue1][T0][YB42AC0103F2-000611B922B78583-0-0] [lt=8] execute sql failed(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, sql=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name)
[2024-02-19 19:03:32.377691] WARN [SHARE] load (ob_core_table_proxy.cpp:368) [1106708][SerScheQueue1][T0][YB42AC0103F2-000611B922B78583-0-0] [lt=9] load failed(ret=-4012, for_update=false)
[2024-02-19 19:03:32.377702] WARN [SHARE] get (ob_global_stat_proxy.cpp:321) [1106708][SerScheQueue1][T0][YB42AC0103F2-000611B922B78583-0-0] [lt=10] core_table load failed(ret=-4012)
[2024-02-19 19:03:32.377710] WARN [SHARE] get_baseline_schema_version (ob_global_stat_proxy.cpp:287) [1106708][SerScheQueue1][T0][YB42AC0103F2-000611B922B78583-0-0] [lt=7] get failed(ret=-4012)
[2024-02-19 19:03:32.377717] WARN [SHARE.SCHEMA] get_baseline_schema_version (ob_schema_service_sql_impl.cpp:795) [1106708][SerScheQueue1][T0][YB42AC0103F2-000611B922B78583-0-0] [lt=6] get_baseline_schema_version failed(ret=-4012, ret="OB_TIMEOUT", schema_status={tenant_id:1, snapshot_timestamp:-1, readable_schema_version:-1})
[2024-02-19 19:03:32.377728] WARN [SHARE.SCHEMA] get_baseline_schema_version (ob_multi_version_schema_service.cpp:4009) [1106708][SerScheQueue1][T0][YB42AC0103F2-000611B922B78583-0-0] [lt=9] get baseline schema version failed(ret=-4012, ret="OB_TIMEOUT", schema_status={tenant_id:1, snapshot_timestamp:-1, readable_schema_version:-1})
[2024-02-19 19:03:32.377737] WARN [SERVER] try_load_baseline_schema_version_ (ob_server_schema_updater.cpp:512) [1106708][SerScheQueue1][T0][YB42AC0103F2-000611B922B78583-0-0] [lt=8] fail to update baseline schema version(tmp_ret=-4012, tmp_ret="OB_TIMEOUT", *tenant_id=1)
[2024-02-19 19:03:32.377748] WARN [SERVER] batch_process_tasks (ob_server_schema_updater.cpp:229) [1106708][SerScheQueue1][T0][YB42AC0103F2-000611B922B78583-0-0] [lt=6] fail to process refresh task(ret=-4023, ret="OB_EAGAIN", tasks.at(0)={type:1, did_retry:true, schema_info:{schema_version:-1, tenant_id:0, sequence_id:18446744073709551615}})
[2024-02-19 19:03:32.377760] WARN [SERVER] batch_process_tasks (ob_uniq_task_queue.h:498) [1106708][SerScheQueue1][T0][Y0-0000000000000000-0-0] [lt=9] fail to batch process task(ret=-4023)
[2024-02-19 19:03:32.377766] WARN [SERVER] run1 (ob_uniq_task_queue.h:449) [1106708][SerScheQueue1][T0][Y0-0000000000000000-0-0] [lt=6] fail to batch execute task(ret=-4023, tasks.count()=1)
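The -4012 (OB_TIMEOUT) chain above is a downstream casualty: reading __all_core_table needs a read snapshot, the snapshot needs GTS, and GTS is only served by the leader of log stream 1, which does not exist here, so the statement burned its whole budget (process_time=1930392 us) and the schema refresher backed off with OB_EAGAIN. A hedged way to confirm the missing leader from the sys tenant, assuming the standard 4.x view is available in this build:

    -- who, if anyone, currently leads LS 1 of tenant 1?
    SELECT TENANT_ID, LS_ID, SVR_IP, SVR_PORT, ROLE, PROPOSAL_ID
      FROM oceanbase.GV$OB_LOG_STAT
     WHERE TENANT_ID = 1 AND LS_ID = 1;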
[2024-02-19 19:03:32.434751] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=9] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0)
[2024-02-19 19:03:32.434786] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=36] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340612434736}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:32.434830] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=40] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340612434736}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:32.434847] INFO [STORAGE.TRANS] refresh_gts (ob_gts_source.cpp:520) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=14] refresh gts(ret=-4038, ret="OB_NOT_MASTER", tenant_id=1, need_refresh=false, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340612434736}})
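From here on the log is dominated by -4038 (OB_NOT_MASTER): this node is still a FOLLOWER for LS 1 and keeps asking itself for GTS. Note the circularity visible in this excerpt: electing a leader needs the election reference info, which needs the zone name from __all_server, which needs a working sys-tenant SQL layer, which needs GTS, which needs a leader; and the full clog disk independently prevents a new leader from appending logs. Once the core tables are readable again, the coordinator's source table can be checked directly; the table name below is inferred from the table_accessor.cpp calls in this log and its columns are not shown here, so treat it as an assumption:

    SELECT * FROM oceanbase.__all_ls_election_reference_info LIMIT 10;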
[2024-02-19 19:03:32.443023] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=30] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0)
[2024-02-19 19:03:32.443067] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=44] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:32.443086] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=19] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038)
[2024-02-19 19:03:32.443205] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=28] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0)
[2024-02-19 19:03:32.443241] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=36] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:32.443257] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=14] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038)
[... this get_number / get_gts_from_local_timestamp_service_ / get_gts sequence from RSAsyncTask2 and T1_FreInfoReloa (joined from 19:03:32.460 on by SerScheQueue1) repeats every few hundred microseconds with identical payloads through the end of this excerpt; the repeats are elided, and only interleaved records from other modules are kept below ...]
[2024-02-19 19:03:32.447732] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:527) [1106788][RpcIO][T0][Y0-0000000000000000-0-0] [lt=18] [RPC EASY STAT](log_str=conn count=1/1, request done=19511/19512, request doing=1/0)
[2024-02-19 19:03:32.447857] INFO [SERVER] try_reload_schema (ob_server_schema_updater.cpp:435) [1108363][LeaseHB][T0][Y0-0000000000000000-0-0] [lt=12] schedule fetch new schema task(ret=0, ret="OB_SUCCESS", schema_info={schema_version:-1, tenant_id:0, sequence_id:18446744073709551615})
[2024-02-19 19:03:32.447876] INFO [SERVER] do_heartbeat_event (ob_heartbeat.cpp:188) [1108363][LeaseHB][T0][Y0-0000000000000000-0-0] [lt=20] try reload schema success(schema_version=1, refresh_schema_info={schema_version:-1, tenant_id:0, sequence_id:18446744073709551615}, schema_ret=0)
[2024-02-19 19:03:32.450309] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:527) [1106789][RpcIO][T0][Y0-0000000000000000-0-0] [lt=15] [RPC EASY STAT](log_str=conn count=1/1, request done=19511/19511, request doing=0/0)
[2024-02-19 19:03:32.450479] INFO [SERVER] process_refresh_task (ob_server_schema_updater.cpp:254) [1106708][SerScheQueue1][T0][YB42AC0103F2-000611B922B78584-0-0] [lt=7] [REFRESH_SCHEMA] start to process schema refresh task(ret=0, ret="OB_SUCCESS", schema_info={schema_version:-1, tenant_id:0, sequence_id:18446744073709551615})
[2024-02-19 19:03:32.450507] WARN [SERVER] process_refresh_task (ob_server_schema_updater.cpp:267) [1106708][SerScheQueue1][T0][YB42AC0103F2-000611B922B78584-0-0] [lt=25] rootservice is not in full service, try again(ret=-4023, ret="OB_EAGAIN", GCTX.root_service_->in_service()=true, GCTX.root_service_->is_full_service()=false)
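The OB_EAGAIN above is a retry loop, not a new fault: the root service reports in_service()=true but is_full_service()=false, so schema refresh tasks are requeued until RS finishes its startup load, which it cannot do while __all_server reads fail. Progress can be watched from the sys tenant; the dictionary view below is the standard 4.x one and is assumed, not shown in this log:

    SELECT * FROM oceanbase.DBA_OB_ROOTSERVICE_EVENT_HISTORY
    ORDER BY `TIMESTAMP` DESC LIMIT 20;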
ORDER BY row_id, column_name)
[2024-02-19 19:03:32.451542] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=18] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:32.451591] INFO [STORAGE.TRANS] refresh_elr_tenant_config_ (ob_tx_elr_util.cpp:45) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=21] refresh tenant config success(tenant_id=1, *this={last_refresh_ts:0, can_tenant_elr:false})
[2024-02-19 19:03:32.451615] INFO [STORAGE.TRANS] in_leader_serving_state (ob_trans_ctx_mgr_v4.cpp:881) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=17] ObLSTxCtxMgr not master(this={this:0x7fdce3204030, ls_id:{id:1}, tenant_id:1, state:"F_WORKING", total_tx_ctx_count:0, leader_takeover_ts:{mts:0}, is_leader_serving:false, max_replay_commit_version:1707751112415295196, ls_retain_ctx_mgr:{retain_ctx_list_.size():0}, aggre_rec_log_ts:-1, prev_aggre_rec_log_ts:-1, online_ts:0, uref:1073741826})
[... the same get_gts_from_local_timestamp_service_ WARN (ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882") repeats from 19:03:32.451648 through 19:03:32.459963 on threads T1_FreInfoReloa, SerScheQueue1 and RSAsyncTask2; duplicates elided ...]
[2024-02-19 19:03:32.459986] INFO [COMMON] replace_fragment_node (ob_kvcache_map.cpp:695) [1106689][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=34] Cache replace map node details(ret=0, replace_node_count=0, replace_time=35488, replace_start_pos=1100960, replace_num=15728)
[2024-02-19 19:03:32.460214] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=42] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:32.460244] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=30] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
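
The WARN/ERROR pair above is the core failure in this excerpt, and the numbers in the ERROR payload are self-consistent: with log_disk_size = 2048 MB, the 80% recycling threshold works out to 2048 x 0.80 ~ 1638 MB (warn_size) and the 95% stop-writing limit to 2048 x 0.95 ~ 1945 MB (limit_size); used_size has already reached 1945 MB, so PALF stops accepting clog writes, and per the WARN nothing can be recycled while log stream 1 still holds the oldest log. A minimal inspection/remediation sketch in SQL, assuming a sys-tenant session on a 4.x cluster (the parameter names are the ones printed in the log payload; the '4G' value is only an illustration, not a recommendation):

    -- Inspect the clog disk quota and the two utilization thresholds
    -- (these parameter names appear verbatim in the log payload).
    SHOW PARAMETERS LIKE 'log_disk_size';
    SHOW PARAMETERS LIKE 'log_disk_utilization%';

    -- One possible way out: enlarge the quota so used_size (1945 MB)
    -- falls back under the 95% stop limit. '4G' is an arbitrary example.
    ALTER SYSTEM SET log_disk_size = '4G';
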
[... the OB_NOT_MASTER WARN continues at the same cadence from 19:03:32.460367 through 19:03:32.470335 on the same three threads, interleaved with a second, byte-identical recycle_blocks_ WARN / try_recycle_blocks ERROR pair at 19:03:32.470401 and 19:03:32.470427; duplicates elided ...]
[2024-02-19 19:03:32.470815] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=31] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
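
Every OB_NOT_MASTER record in the runs above is the GTS client polling a timestamp service whose log stream (LS 1 of tenant 1) currently has no leader, which is consistent with the PALF errors: a replica whose clog disk is full cannot take over or hold leadership. One way to confirm the missing leader, sketched under the assumption that the 4.x view GV$OB_LOG_STAT (with a ROLE column) is available on this build:

    -- With the clog disk saturated, no replica of LS 1 is expected to
    -- report LEADER here. (View and column names assumed from 4.x docs;
    -- verify against your version.)
    SELECT tenant_id, ls_id, svr_ip, role
    FROM oceanbase.GV$OB_LOG_STAT
    WHERE tenant_id = 1 AND ls_id = 1;
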
[2024-02-19 19:03:32.470851] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=18] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340612470832})
[2024-02-19 19:03:32.470869] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=18] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340612434736}})
[... the OB_NOT_MASTER WARN resumes from 19:03:32.470916 through 19:03:32.475057 on the same three threads; duplicates elided ...]
[2024-02-19 19:03:32.475180] WARN [STORAGE.TRANS] get_cluster_service_master_ (ob_tenant_weak_read_service.cpp:287) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=23] tenant schema is not ready, need wait(ret=-4076, ret="OB_NEED_WAIT", superior_tenant_id=1)
[2024-02-19 19:03:32.475200] WARN [STORAGE.TRANS] post_cluster_heartbeat_rpc_ (ob_tenant_weak_read_service.cpp:797) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=19] get cluster service master fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, cluster_service_tablet_id={id:226})
[2024-02-19 19:03:32.475217] WARN [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:438) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16] tenant weak read service cluster heartbeat RPC fail(rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.1.3.242:2882", cluster_service_tablet_id={id:226})
[2024-02-19 19:03:32.475230] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=18] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:32.475249] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340612475234})
[2024-02-19 19:03:32.475269] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:756) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version=1708336686671726824, valid_part_count=1, total_part_count=1, generate_timestamp=1708340612475163)
[2024-02-19 19:03:32.475284] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:766) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1708340612272597, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-02-19 19:03:32.475321] INFO [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:738) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] [SELF_CHECK] current server is WRS leader, need start CLUSTER weak read service(tenant_id=1, serve_leader_epoch=0, cur_leader_epoch=138, cluster_service_tablet_id_={id:226}, in_service=false, can_update_version=false, start_service_tstamp_=0, error_count_for_change_leader_=0, last_error_tstamp_for_change_leader_=0)
[2024-02-19 19:03:32.475352] INFO [STORAGE.TRANS] start_service (ob_tenant_weak_read_cluster_service.cpp:336) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] begin start service(tenant_id=1, is_in_service()=false, can_update_version=false)
[2024-02-19 19:03:32.475364] INFO [STORAGE.TRANS] start_service (ob_tenant_weak_read_cluster_service.cpp:338) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10] start TenantWeakReadClusterService(tenant_id=1)
[2024-02-19 19:03:32.475458] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=18] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:32.476295] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=233] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:32.476394] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EC-0-0] [lt=9] table not exist(tenant_id=1, database_id=201001, table_name=__all_weak_read_service, ret=-5019)
[2024-02-19 19:03:32.476421] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EC-0-0] [lt=25] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_weak_read_service, ret=-5019)
[2024-02-19 19:03:32.476433] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EC-0-0] [lt=11] fail to resolve table relation recursively(tenant_id=1, ret=-5019)
[2024-02-19 19:03:32.476443] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EC-0-0] [lt=9] resolve table relation factor failed(ret=-5019, table_name=__all_weak_read_service)
[2024-02-19 19:03:32.476442] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=22] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:32.476454] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EC-0-0] [lt=8] fail to resolve table(ret=-5019)
[2024-02-19 19:03:32.476462] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EC-0-0] [lt=7] fail to resolve sys view(ret=-5019)
[2024-02-19 19:03:32.476476] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EC-0-0] [lt=8] Table 'oceanbase.__all_weak_read_service' doesn't exist
[2024-02-19 19:03:32.476484] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EC-0-0] [lt=8] resolve base or alias table factor failed(ret=-5019)
[2024-02-19 19:03:32.476492] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EC-0-0] [lt=7] resolve basic table failed(ret=-5019)
[2024-02-19 19:03:32.476501] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EC-0-0] [lt=8] fail to exec resolve_table(*table_node, table_item)(ret=-5019)
[2024-02-19 19:03:32.476509] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EC-0-0] [lt=7] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019)
[2024-02-19 19:03:32.476517] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EC-0-0] [lt=6] resolve normal query failed(ret=-5019)
[2024-02-19 19:03:32.476525] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EC-0-0] [lt=7] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073)
[2024-02-19 19:03:32.476542] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EC-0-0] [lt=9] failed to resolve(ret=-5019)
[2024-02-19 19:03:32.476551] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EC-0-0] [lt=9] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-02-19 19:03:32.476562] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EC-0-0] [lt=8] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-02-19 19:03:32.476570] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EC-0-0] [lt=7] fail to handle physical plan(ret=-5019)
[2024-02-19 19:03:32.476580] WARN [SQL] stmt_query (ob_sql.cpp:175) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EC-0-0] [lt=7] fail to handle text query(stmt=select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = '', ret=-5019)
[2024-02-19 19:03:32.476590] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EC-0-0] [lt=8] executor execute failed(ret=-5019)
[2024-02-19 19:03:32.476598] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EC-0-0] [lt=8] execute failed(ret=-5019, executor={ObIExecutor:, sql:"select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = ''"}, retry_cnt=0)
[2024-02-19 19:03:32.476616] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EC-0-0] [lt=12] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false)
[2024-02-19 19:03:32.476632] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EC-0-0] [lt=14] result set close failed(ret=-5019)
[2024-02-19 19:03:32.476641] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EC-0-0] [lt=7] result set close failed(ret=-5019)
[2024-02-19 19:03:32.476648] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EC-0-0] [lt=6] failed to close result(close_ret=-5019, ret=-5019)
[2024-02-19 19:03:32.476670] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EC-0-0] [lt=6] failed to process record(executor={ObIExecutor:, sql:"select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = ''"}, record_ret=-5019, ret=-5019)
[2024-02-19 19:03:32.476681] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EC-0-0] [lt=10] failed to process final(executor={ObIExecutor:, sql:"select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = ''"}, aret=-5019, ret=-5019)
[2024-02-19 19:03:32.476691] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9] execute sql failed(ret=-5019, tenant_id=1, sql=select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = '')
[2024-02-19 19:03:32.476701] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1)
[2024-02-19 19:03:32.476709] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1)
[2024-02-19 19:03:32.476718] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7] query failed(ret=-5019, conn=0x7fdd189bc050, start=1708340612476183, sql=select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = '')
[2024-02-19 19:03:32.476729] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11] read failed(ret=-5019)
[2024-02-19 19:03:32.476739] WARN [STORAGE.TRANS] query_cluster_version_range_ (ob_tenant_weak_read_cluster_service.cpp:192) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7] execute sql read fail(ret=-5019, ret="OB_TABLE_NOT_EXIST", exec_tenant_id=1, tenant_id=1, sql=select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = '')
[2024-02-19 19:03:32.476818] WARN [STORAGE.TRANS] start_service (ob_tenant_weak_read_cluster_service.cpp:367) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13] query cluster version range from WRS table fail(ret=-5019, ret="OB_TABLE_NOT_EXIST")
[2024-02-19 19:03:32.476833] INFO [STORAGE.TRANS] start_service (ob_tenant_weak_read_cluster_service.cpp:415) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] start service done(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, in_service=false, leader_epoch=0, current_version=0, delta=1708340612476829, min_version=0, max_version=0, max_stale_time=5000000000, all_valid_server_count=0, total_time=1492, wlock_time=35, check_leader_time=2, query_version_time=0, persist_version_time=0)
[2024-02-19 19:03:32.476851] WARN [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:781) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17] start CLUSTER weak read service fail(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1)
[2024-02-19 19:03:32.476863] INFO [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:791) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] [SELF_CHECK] done(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, need_start_service=true, need_stop_service=false, need_change_leader=false, is_in_service()=false, can_update_version=false, cur_leader_epoch=138, start_service_tstamp_=0, error_count_for_change_leader_=0, last_error_tstamp_for_change_leader_=0)
[2024-02-19 19:03:32.476958] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12] get wrs ts(ls_id={id:1}, delta_ns=-1706042771802818984, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807)
[2024-02-19 19:03:32.476975] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=50] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1)
[2024-02-19 19:03:32.477065] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=19] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[... the recycle_blocks_ WARN / try_recycle_blocks ERROR pair recurs with identical payload at 19:03:32.480541/480586; duplicates elided ...]
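
The resolver cascade above ends in ret=-5019 (OB_TABLE_NOT_EXIST): the inner SQL against __all_weak_read_service cannot even be resolved because tenant 1's core table schema is not readable on this node, so the weak-read cluster service start fails and will simply be retried by the next self_check. A quick visibility probe, assuming any working sys-tenant connection (plain MySQL-mode information_schema, nothing OceanBase-specific):

    -- If the tenant-1 schema were loaded, the internal table would be
    -- visible here; an empty result matches the -5019 in the log.
    SELECT table_name
    FROM information_schema.tables
    WHERE table_schema = 'oceanbase'
      AND table_name = '__all_weak_read_service';
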
[2024-02-19 19:03:32.481078] WARN [RPC.OBRPC] rpc_call (ob_rpc_proxy.ipp:367) [1106656][ServerTracerTim][T0][YB42AC0103F2-000611B9219784A8-0-0] [lt=6] execute rpc fail(ret=-8001, dst="172.1.3.242:2882")
[2024-02-19 19:03:32.481103] WARN log_user_error_and_warn (ob_rpc_proxy.cpp:320) [1106656][ServerTracerTim][T0][YB42AC0103F2-000611B9219784A8-0-0] [lt=23]
[2024-02-19 19:03:32.481147] INFO [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:361) [1106656][ServerTracerTim][T0][YB42AC0103F2-000611B9219784A8-0-0] [lt=10] [RS_MGR] new master rootserver found(rootservice="172.1.3.242:2882", cluster_id=1)
[... the rpc_call(ret=-8001) / log_user_error_and_warn / renew_master_rootserver sequence repeats at 19:03:32.496864-497171 and 19:03:32.501733-501802, and the byte-identical recycle_blocks_ WARN / try_recycle_blocks ERROR pair recurs at 19:03:32.490731/490773 and 19:03:32.501012/501038; duplicates elided ...]
[2024-02-19 19:03:32.503398] WARN [RPC.OBRPC] rpc_call (ob_rpc_proxy.ipp:367) [1106656][ServerTracerTim][T0][YB42AC0103F2-000611B9219784A8-0-0] [lt=12] execute rpc fail(ret=-8001, dst="172.1.3.242:2882")
[2024-02-19 19:03:32.503418] WARN log_user_error_and_warn (ob_rpc_proxy.cpp:320) [1106656][ServerTracerTim][T0][YB42AC0103F2-000611B9219784A8-0-0] [lt=20]
[2024-02-19 19:03:32.503439] WARN [SHARE] refresh (ob_alive_server_tracer.cpp:375) [1106656][ServerTracerTim][T0][YB42AC0103F2-000611B9219784A8-0-0] [lt=10] fetch alive server failed(ret=-8001)
[2024-02-19 19:03:32.503451] WARN [SHARE] runTimerTask (ob_alive_server_tracer.cpp:247) [1106656][ServerTracerTim][T0][YB42AC0103F2-000611B9219784A8-0-0] [lt=11] refresh alive server list failed(ret=-8001)
[... recycle_blocks_ / try_recycle_blocks pair recurs at 19:03:32.512802/512859; duplicates elided ...]
[2024-02-19 19:03:32.518459] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1009) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=58] Wash compute wash size(is_wash_valid=true, sys_total_wash_size=1386106880, global_cache_size=4161536, tenant_max_wash_size=4161536, tenant_min_wash_size=4161536, tenant_ids_=[512, 500, 999, 506, 507, 508, 509, 510, 1])
[2024-02-19 19:03:32.518573] INFO [COMMON] wash (ob_kvcache_store.cpp:343) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=45] Wash time detail, (compute_wash_size_time=157, refresh_score_time=68, wash_time=6)
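
The ServerTracer timer shows the same picture from the RPC side: every fetch of the alive-server list fails with ret=-8001, even though renew_master_rootserver keeps resolving the rootserver to this very node (172.1.3.242:2882). If a client session can still be opened, the server table can be cross-checked directly; a sketch assuming the 4.x dictionary view DBA_OB_SERVERS is present:

    -- Cross-check recorded server liveness. On pre-4.x builds the same
    -- information lives in the __all_server internal table instead.
    SELECT svr_ip, svr_port, zone, status, start_service_time, stop_time
    FROM oceanbase.DBA_OB_SERVERS;
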
log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:32.523035] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=45] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:32.535578] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:32.535516] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=29] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:32.535621] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=42] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:32.535655] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=138] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340612535501}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:32.535680] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=21] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340612535501}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:32.542448] INFO [SQL.PC] update_memory_conf (ob_plan_cache.cpp:1499) [1106739][PlanCacheEvict][T1][Y0-0000000000000000-0-0] [lt=31] update plan cache memory config(ob_plan_cache_percentage=5, ob_plan_cache_evict_high_percentage=90, ob_plan_cache_evict_low_percentage=50, tenant_id=1) [2024-02-19 19:03:32.542487] INFO [SQL.PC] cache_evict (ob_plan_cache.cpp:1130) [1106739][PlanCacheEvict][T1][Y0-0000000000000000-0-0] [lt=36] start lib cache evict(tenant_id=1, mem_hold=2097152, mem_limit=107374180, 
cache_obj_num=2, cache_node_num=2) [2024-02-19 19:03:32.542503] INFO [SQL.PC] cache_evict (ob_plan_cache.cpp:1147) [1106739][PlanCacheEvict][T1][Y0-0000000000000000-0-0] [lt=13] end lib cache evict(tenant_id=1, cache_evict_num=0, mem_hold=2097152, mem_limit=107374180, cache_obj_num=2, cache_node_num=2) [2024-02-19 19:03:32.545747] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:32.545780] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=36] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:32.550097] INFO [ARCHIVE] stop (ob_archive_scheduler_service.cpp:137) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=14] stop archive scheduler service [2024-02-19 19:03:32.551449] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAB-0-0] [lt=8] table not exist(tenant_id=1, database_id=201001, table_name=__all_backup_info, ret=-5019) [2024-02-19 19:03:32.551474] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAB-0-0] [lt=24] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_backup_info, ret=-5019) [2024-02-19 19:03:32.551487] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAB-0-0] [lt=12] fail to resolve table relation recursively(tenant_id=1, ret=-5019) [2024-02-19 19:03:32.551498] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAB-0-0] [lt=10] resolve table relation factor failed(ret=-5019, table_name=__all_backup_info) [2024-02-19 19:03:32.551511] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAB-0-0] [lt=8] fail to resolve table(ret=-5019) [2024-02-19 19:03:32.551521] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAB-0-0] [lt=9] fail to resolve sys view(ret=-5019) [2024-02-19 19:03:32.551535] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAB-0-0] [lt=7] Table 'oceanbase.__all_backup_info' doesn't exist [2024-02-19 19:03:32.551544] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAB-0-0] [lt=9] resolve basic table failed(ret=-5019) 
[2024-02-19 19:03:32.551554] WARN [SQL.RESV] resolve_table_list (ob_update_resolver.cpp:423) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAB-0-0] [lt=8] failed to resolve table(ret=-5019)
[2024-02-19 19:03:32.551563] WARN [SQL.RESV] resolve (ob_update_resolver.cpp:76) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAB-0-0] [lt=8] resolve table failed(ret=-5019)
[2024-02-19 19:03:32.551575] WARN [SQL.RESV] stmt_resolver_func (ob_resolver.cpp:155) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAB-0-0] [lt=9] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3074)
[2024-02-19 19:03:32.551672] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAB-0-0] [lt=11] failed to resolve(ret=-5019)
[2024-02-19 19:03:32.551688] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAB-0-0] [lt=91] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-02-19 19:03:32.551719] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAB-0-0] [lt=10] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-02-19 19:03:32.551730] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAB-0-0] [lt=11] fail to handle physical plan(ret=-5019)
[2024-02-19 19:03:32.551743] WARN [SQL] stmt_query (ob_sql.cpp:175) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAB-0-0] [lt=9] fail to handle text query(stmt=update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882', ret=-5019)
[2024-02-19 19:03:32.551755] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAB-0-0] [lt=10] executor execute failed(ret=-5019)
[2024-02-19 19:03:32.551765] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAB-0-0] [lt=9] execute failed(ret=-5019, executor={ObIExecutor:, sql:"update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882'"}, retry_cnt=0)
[2024-02-19 19:03:32.551786] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAB-0-0] [lt=14] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false)
[2024-02-19 19:03:32.551806] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAB-0-0] [lt=16] result set close failed(ret=-5019)
[2024-02-19 19:03:32.551816] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAB-0-0] [lt=8] result set close failed(ret=-5019)
[2024-02-19 19:03:32.551849] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAB-0-0] [lt=32] failed to close result(close_ret=-5019, ret=-5019)
[2024-02-19 19:03:32.551876] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAB-0-0] [lt=9] failed to process record(executor={ObIExecutor:, sql:"update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882'"}, record_ret=-5019, ret=-5019)
[2024-02-19 19:03:32.551891] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1106733][BackupLease][T0][YB42AC0103F2-000611B923978EAB-0-0] [lt=12] failed to process final(executor={ObIExecutor:, sql:"update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882'"}, aret=-5019, ret=-5019)
[2024-02-19 19:03:32.551904] WARN [SERVER] execute_write_inner (ob_inner_sql_connection.cpp:1818) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=10] execute sql failed(ret=-5019, tenant_id=1, sql=update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882')
[2024-02-19 19:03:32.551915] INFO [SERVER] execute_write_inner (ob_inner_sql_connection.cpp:1900) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=9] execute write sql(ret=-5019, tenant_id=1, affected_rows=0, sql=update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882')
[2024-02-19 19:03:32.552368] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=10] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1)
[2024-02-19 19:03:32.552383] WARN [SERVER] execute_write (ob_inner_sql_connection.cpp:1786) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=15] execute_write failed(ret=-5019, tenant_id=1, sql=update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882', is_user_sql=false)
[2024-02-19 19:03:32.552396] WARN [SERVER] execute_write (ob_inner_sql_connection.cpp:1775) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=11] execute_write failed(ret=-5019, tenant_id=1, sql="update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882'")
[2024-02-19 19:03:32.552407] WARN [COMMON.MYSQLP] write (ob_mysql_proxy.cpp:133) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=9] execute sql failed(ret=-5019, conn=0x7fdcf4ef4050, start=1708340612550245, sql=update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882')
[2024-02-19 19:03:32.552501] WARN [SERVER] clean_backup_scheduler_leader (ob_backup_operator.cpp:348) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=12] execute sql failed(ret=-5019, sql=update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882')
[2024-02-19 19:03:32.552516] WARN [SERVER] clean_backup_scheduler_leader (ob_backup_manager.cpp:517) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=13] failed to clean backup scheduler leader(ret=-5019)
[... recycle_blocks_ WARN / try_recycle_blocks ERROR pair recurs with identical payload at 19:03:32.555921/555970; duplicates elided ...]
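
The backup-lease task hits the same wall: its UPDATE against __all_backup_info fails with -5019 before execution (affected_rows=0), so the stale backup_scheduler_leader entry cannot be cleaned. Nothing here is backup-specific; it is one more consumer of the unreadable tenant-1 schema. To gauge how widespread the schema problem is, a hedged sketch using the same information_schema approach as above:

    -- A healthy node resolves hundreds of oceanbase internal tables;
    -- a node whose schema is not loaded reports few or none.
    SELECT COUNT(*) AS internal_tables_visible
    FROM information_schema.tables
    WHERE table_schema = 'oceanbase';
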
warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:32.558889] INFO [SQL.PC] runTimerTask (ob_plan_cache_manager.cpp:291) [1106739][PlanCacheEvict][T0][Y0-0000000000000000-0-0] [lt=10] schedule next cache evict task(evict_interval=1000000) [2024-02-19 19:03:32.566094] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=45] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:32.566139] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=47] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:32.572897] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=18] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340612572867}) [2024-02-19 19:03:32.572950] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=55] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340612535501}}) [2024-02-19 19:03:32.573363] INFO [SQL.PC] runTimerTask (ob_plan_cache_manager.cpp:299) [1106739][PlanCacheEvict][T0][Y0-0000000000000000-0-0] [lt=34] schedule next cache evict task(evict_interval=1000000) [2024-02-19 19:03:32.576235] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=45] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:32.576277] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=44] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, 
limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:32.578329] WARN [STORAGE.TRANS] post_cluster_heartbeat_rpc_ (ob_tenant_weak_read_service.cpp:797) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=19] get cluster service master fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, cluster_service_tablet_id={id:226}) [2024-02-19 19:03:32.578356] WARN [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:438) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=29] tenant weak read service cluster heartbeat RPC fail(rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.1.3.242:2882", cluster_service_tablet_id={id:226}) [2024-02-19 19:03:32.578377] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:756) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version=1708336686671726824, valid_part_count=1, total_part_count=1, generate_timestamp=1708340612578307) [2024-02-19 19:03:32.578391] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:766) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1708340612475295, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-02-19 19:03:32.578477] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13] get wrs ts(ls_id={id:1}, delta_ns=-1706042771802717803, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807) [2024-02-19 19:03:32.578495] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1) [2024-02-19 19:03:32.586534] WARN [COORDINATOR] get_ls_election_reference_info (ob_leader_coordinator.cpp:138) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC7C-0-0] [lt=162] can not find this ls_id in all_ls_election_reference_info_(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1}, all_ls_election_reference_info=[]) [2024-02-19 19:03:32.586575] WARN [COORDINATOR] refresh_ (election_priority_v1.cpp:144) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC7C-0-0] [lt=43] fail to get ls election reference info(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, *this={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}) [2024-02-19 19:03:32.586599] WARN [COORDINATOR] operator() (election_priority_impl.cpp:246) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC7C-0-0] [lt=22] 
refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id_={id:1}, element={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}) [2024-02-19 19:03:32.586617] WARN iterate (ob_tuple.h:272) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC7C-0-0] [lt=15] assign element failed(ret=-4018, std::get(tuple)={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}) [2024-02-19 19:03:32.586634] WARN [COORDINATOR] refresh (election_priority_impl.cpp:261) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC7C-0-0] [lt=17] refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1}, *this={priority:{is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}}) [2024-02-19 19:03:32.587673] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=32] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:32.587702] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=29] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:32.597841] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:32.597892] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=54] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, 
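The COORDINATOR entries above show why tenant 1 stays leaderless: the election-priority refresh fails with OB_ENTRY_NOT_EXIST because all_ls_election_reference_info is empty for LS 1, and the weak-read service keeps reporting cluster_service_master="0.0.0.0:0". The recycle_blocks_ warning also asks whether the base LSN has advanced; both questions can be checked in one pass, assuming an OceanBase 4.x build where the GV$OB_LOG_STAT view is available (the view name and columns are that assumption, not something shown in this log):

    -- ROLE shows whether this replica ever wins election;
    -- END_LSN - BASE_LSN approximates the log PALF cannot recycle yet.
    SELECT svr_ip, svr_port, role, base_lsn, end_lsn,
           (end_lsn - base_lsn) / 1024 / 1024 AS pinned_mb
    FROM oceanbase.GV$OB_LOG_STAT
    WHERE tenant_id = 1 AND ls_id = 1;

If pinned_mb stays near the 1945 MB used_size reported above, checkpointing is stuck and the GC thread can never free a block.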
[2024-02-19 19:03:32.597892] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=54] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:32.606131] INFO [STORAGE] gc_tables_in_queue (ob_tenant_meta_mem_mgr.cpp:351) [1107037][T1_T3mGC][T1][Y0-0000000000000000-0-0] [lt=44] Recycle 0 table(ret=0, allocator_={used:2532285, total:3058518}, tablet_pool_={typeid(T).name():"N9oceanbase7storage8ObTabletE", sizeof(T):2432, used_obj_cnt:980, free_obj_hold_cnt:1, allocator used:2448576, allocator total:2485504}, sstable_pool_={typeid(T).name():"N9oceanbase12blocksstable9ObSSTableE", sizeof(T):1024, used_obj_cnt:2027, free_obj_hold_cnt:2, allocator used:2207552, allocator total:2289280}, memtable_pool_={typeid(T).name():"N9oceanbase8memtable10ObMemtableE", sizeof(T):1856, used_obj_cnt:0, free_obj_hold_cnt:0, allocator used:0, allocator total:0}, tablet count=980, min_minor_cnt=0, pinned_tablet_cnt=0)
[2024-02-19 19:03:32.608586] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=43] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:32.608635] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=51] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:32.611109] INFO [COMMON] clean_garbage_node (ob_kvcache_map.cpp:645) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=13] Cache wash clean map node details(ret=0, clean_node_count=0, clean_time=92517, clean_start_pos=754968, clean_num=31457)
[2024-02-19 19:03:32.619497] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:32.619543] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=48] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:32.629716] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:32.629762] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=48] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:32.635843] INFO [STORAGE] scheduler_ls_ha_handler_ (ob_storage_ha_service.cpp:186) [1108342][T1_HAService][T1][Y0-0000000000000000-0-0] [lt=16] start do ls ha handler(ls_id_array_=[{id:1}])
[2024-02-19 19:03:32.636201] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1841) [1107630][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=63] dump_dag_status(dag_cnt=0, map_size=0)
[2024-02-19 19:03:32.636231] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1851) [1107630][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=29] dump_dag_status(running_dag_net_map_size=0, blocking_dag_net_list_size=0)
[2024-02-19 19:03:32.636246] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1860) [1107630][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=13] dump_dag_status(priority="PRIO_COMPACTION_HIGH", low_limit=6, up_limit=6, running_task=0, ready_dag_count=0, waiting_dag_count=0)
[2024-02-19 19:03:32.636263] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1860) [1107630][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=16] dump_dag_status(priority="PRIO_HA_HIGH", low_limit=8, up_limit=8, running_task=0, ready_dag_count=0, waiting_dag_count=0)
[2024-02-19 19:03:32.636273] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1860) [1107630][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=10] dump_dag_status(priority="PRIO_COMPACTION_MID", low_limit=6, up_limit=6, running_task=0, ready_dag_count=0, waiting_dag_count=0)
[2024-02-19 19:03:32.636307] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1860) [1107630][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=32] dump_dag_status(priority="PRIO_HA_MID", low_limit=5, up_limit=5, running_task=0, ready_dag_count=0, waiting_dag_count=0)
[2024-02-19 19:03:32.636317] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1860) [1107630][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=11] dump_dag_status(priority="PRIO_COMPACTION_LOW", low_limit=6, up_limit=6, running_task=0, ready_dag_count=0, waiting_dag_count=0)
[2024-02-19 19:03:32.636327] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1860) [1107630][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=9] dump_dag_status(priority="PRIO_HA_LOW", low_limit=2, up_limit=2, running_task=0, ready_dag_count=0, waiting_dag_count=0)
[2024-02-19 19:03:32.636336] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1860) [1107630][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=9] dump_dag_status(priority="PRIO_DDL", low_limit=2, up_limit=2, running_task=0, ready_dag_count=0, waiting_dag_count=0)
[2024-02-19 19:03:32.636345] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1860) [1107630][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=8] dump_dag_status(priority="PRIO_DDL_HIGH", low_limit=6, up_limit=6, running_task=0, ready_dag_count=0, waiting_dag_count=0)
[2024-02-19 19:03:32.636355] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=14] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0)
[2024-02-19 19:03:32.636383] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=20] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340612636346}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:32.636408] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=28] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340612636346}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:32.636359] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1863) [1107630][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=13] dump_dag_status(type={init_dag_prio:0, sys_task_type:3, dag_type_str:"MINI_MERGE", dag_module_str:"COMPACTION"}, dag_count=0)
[2024-02-19 19:03:32.636448] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1863) [1107630][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=88] dump_dag_status(type={init_dag_prio:2, sys_task_type:5, dag_type_str:"MINOR_MERGE", dag_module_str:"COMPACTION"}, dag_count=0)
[2024-02-19 19:03:32.636456] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1863) [1107630][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=7] dump_dag_status(type={init_dag_prio:4, sys_task_type:6, dag_type_str:"MAJOR_MERGE", dag_module_str:"COMPACTION"}, dag_count=0)
[2024-02-19 19:03:32.636462] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1863) [1107630][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=6] dump_dag_status(type={init_dag_prio:0, sys_task_type:4, dag_type_str:"TX_TABLE_MERGE", dag_module_str:"COMPACTION"}, dag_count=0)
[2024-02-19 19:03:32.636469] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1863) [1107630][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=6] dump_dag_status(type={init_dag_prio:4, sys_task_type:7, dag_type_str:"WRITE_CKPT", dag_module_str:"COMPACTION"}, dag_count=0)
[2024-02-19 19:03:32.636475] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1863) [1107630][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=6] dump_dag_status(type={init_dag_prio:6, sys_task_type:2, dag_type_str:"DDL", dag_module_str:"DDL"}, dag_count=0)
[2024-02-19 19:03:32.636482] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1863) [1107630][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=6] dump_dag_status(type={init_dag_prio:6, sys_task_type:2, dag_type_str:"UNIQUE_CHECK", dag_module_str:"DDL"}, dag_count=0)
[2024-02-19 19:03:32.636488] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1863) [1107630][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=6] dump_dag_status(type={init_dag_prio:6, sys_task_type:2, dag_type_str:"SQL_BUILD_INDEX", dag_module_str:"DDL"}, dag_count=0)
[2024-02-19 19:03:32.636494] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1863) [1107630][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=5] dump_dag_status(type={init_dag_prio:7, sys_task_type:12, dag_type_str:"DDL_KV_MERGE", dag_module_str:"DDL"}, dag_count=0)
[2024-02-19 19:03:32.636500] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1863) [1107630][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=6] dump_dag_status(type={init_dag_prio:1, sys_task_type:1, dag_type_str:"MIGRATE", dag_module_str:"MIGRATE"}, dag_count=0)
[2024-02-19 19:03:32.636506] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1863) [1107630][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=6] dump_dag_status(type={init_dag_prio:3, sys_task_type:1, dag_type_str:"FAST_MIGRATE", dag_module_str:"MIGRATE"}, dag_count=0)
[2024-02-19 19:03:32.636512] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1863) [1107630][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=6] dump_dag_status(type={init_dag_prio:5, sys_task_type:1, dag_type_str:"VALIDATE", dag_module_str:"MIGRATE"}, dag_count=0)
[2024-02-19 19:03:32.636518] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1863) [1107630][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=5] dump_dag_status(type={init_dag_prio:1, sys_task_type:16, dag_type_str:"BACKFILL_TX", dag_module_str:"BACKFILL_TX"}, dag_count=0)
[2024-02-19 19:03:32.636525] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1863) [1107630][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=6] dump_dag_status(type={init_dag_prio:5, sys_task_type:8, dag_type_str:"BACKUP", dag_module_str:"BACKUP"}, dag_count=0)
[2024-02-19 19:03:32.636531] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1863) [1107630][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=6] dump_dag_status(type={init_dag_prio:5, sys_task_type:10, dag_type_str:"BACKUP_BACKUPSET", dag_module_str:"BACKUP"}, dag_count=0)
[2024-02-19 19:03:32.636538] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1863) [1107630][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=7] dump_dag_status(type={init_dag_prio:5, sys_task_type:11, dag_type_str:"BACKUP_ARCHIVELOG", dag_module_str:"BACKUP"}, dag_count=0)
[2024-02-19 19:03:32.636545] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1863) [1107630][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=6] dump_dag_status(type={init_dag_prio:5, sys_task_type:14, dag_type_str:"RESTORE", dag_module_str:"RESTORE"}, dag_count=0)
[2024-02-19 19:03:32.636551] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1863) [1107630][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=6] dump_dag_status(type={init_dag_prio:5, sys_task_type:15, dag_type_str:"BACKUP_CLEAN", dag_module_str:"BACKUP_CLEAN"}, dag_count=0)
[2024-02-19 19:03:32.636557] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1863) [1107630][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=6] dump_dag_status(type={init_dag_prio:3, sys_task_type:17, dag_type_str:"REMOVE_MEMBER", dag_module_str:"REMOVE_MEMBER"}, dag_count=0)
[2024-02-19 19:03:32.636564] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1867) [1107630][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=7] dump_dag_status[DAG_NET](type="DAG_NET_MIGRATION", dag_count=0)
[2024-02-19 19:03:32.636571] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1867) [1107630][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=6] dump_dag_status[DAG_NET](type="DAG_NET_PREPARE_MIGRATION", dag_count=0)
[2024-02-19 19:03:32.636577] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1867) [1107630][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=5] dump_dag_status[DAG_NET](type="DAG_NET_COMPLETE_MIGRATION", dag_count=0)
[2024-02-19 19:03:32.636595] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1867) [1107630][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=18] dump_dag_status[DAG_NET](type="DAG_NET_TRANSFER", dag_count=0)
[2024-02-19 19:03:32.636601] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1867) [1107630][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=6] dump_dag_status[DAG_NET](type="DAG_NET_BACKUP", dag_count=0)
[2024-02-19 19:03:32.636607] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1867) [1107630][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=5] dump_dag_status[DAG_NET](type="DAG_NET_RESTORE", dag_count=0)
[2024-02-19 19:03:32.636612] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1867) [1107630][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=5] dump_dag_status[DAG_NET](type="DAG_NET_TYPE_BACKUP_CLEAN", dag_count=0)
[2024-02-19 19:03:32.636619] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1870) [1107630][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=5] dump_dag_status(total_worker_cnt=41, total_running_task_cnt=0, work_thread_num=41)
[2024-02-19 19:03:32.640466] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=36] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:32.640508] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=45] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:32.650650] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=43] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
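The TsMgr entries interleaved with the dag-status dump fail with OB_NOT_MASTER (-4038): the local replica is asked for a GTS timestamp but is serving as a FOLLOWER, consistent with the leaderless election seen earlier. The srr/mts values are microseconds since the Unix epoch; decoding one shows it matches the entry's own wall-clock time. A minimal check, using usec_to_time(), OceanBase's inverse of time_to_usec() (result rendered in the session time zone, so it matches the log only when the zones agree):

    SELECT usec_to_time(1708340612636346) AS gts_request_time;
    -- 2024-02-19 19:03:32.636346 when the session time zone matches the log's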
[2024-02-19 19:03:32.650693] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=46] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:32.660817] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=33] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:32.660864] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=49] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:32.666425] INFO do_work (ob_rl_mgr.cpp:704) [1106705][rl_mgr0][T0][Y0-0000000000000000-0-0] [lt=22] swc wakeup.(stat_period_=1000000, ready=false)
[2024-02-19 19:03:32.668969] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:589) [1106798][MysqlUnix][T0][Y0-0000000000000000-0-0] [lt=24] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-02-19 19:03:32.669035] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:589) [1106796][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=21] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/1, request doing=0/0)
[2024-02-19 19:03:32.670134] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:589) [1106795][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=14] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/6, request doing=0/0)
[2024-02-19 19:03:32.671012] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:565) [1106792][BatchIO][T0][Y0-0000000000000000-0-0] [lt=18] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-02-19 19:03:32.671033] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:565) [1106791][BatchIO][T0][Y0-0000000000000000-0-0] [lt=16] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-02-19 19:03:32.671051] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:527) [1106800][RpcUnix][T0][Y0-0000000000000000-0-0] [lt=10] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-02-19 19:03:32.671062] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:565) [1106793][BatchIO][T0][Y0-0000000000000000-0-0] [lt=11] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-02-19 19:03:32.671573] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:32.671623] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=49] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:32.673316] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=16] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340612673298})
[2024-02-19 19:03:32.673346] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=30] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340612636346}})
[2024-02-19 19:03:32.678443] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=21] get wrs ts(ls_id={id:1}, delta_ns=-1706042771802617593, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807)
[2024-02-19 19:03:32.678471] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=30] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1)
[2024-02-19 19:03:32.681780] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=46] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:32.681818] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=39] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:32.684331] INFO [COMMON] replace_fragment_node (ob_kvcache_map.cpp:695) [1106689][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=16] Cache replace map node details(ret=0, replace_node_count=0, replace_time=23522, replace_start_pos=1116688, replace_num=15728)
[2024-02-19 19:03:32.693622] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=25] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:32.693658] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=39] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:32.703869] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=29] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:32.703928] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=60] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:32.716145] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=43] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:32.716217] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=77] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:32.716498] INFO [SERVER] decide_disk_size (ob_server_utils.cpp:202) [1106802][LogLoop][T0][Y0-0000000000000000-0-0] [lt=15] decide disk size finished(dir="/backup/oceanbase/data/sstable", suggested_disk_size=8589934592, suggested_disk_percentage=0, default_disk_percentage=60, total_space=246944890880, free_space=220974178304, disk_size=8589934592)
[2024-02-19 19:03:32.716522] INFO [SERVER] decide_disk_size (ob_server_utils.cpp:202) [1106802][LogLoop][T0][Y0-0000000000000000-0-0] [lt=27] decide disk size finished(dir="/backup/oceanbase/data/clog", suggested_disk_size=8589934592, suggested_disk_percentage=0, default_disk_percentage=30, total_space=246944890880, free_space=220974178304, disk_size=8589934592)
[2024-02-19 19:03:32.716534] INFO [SERVER] cal_all_part_disk_size (ob_server_utils.cpp:164) [1106802][LogLoop][T0][Y0-0000000000000000-0-0] [lt=10] decide_all_disk_size succ(data_dir="/backup/oceanbase/data/sstable", clog_dir="/backup/oceanbase/data/clog", suggested_data_disk_size=8589934592, suggested_data_disk_percentage=0, data_default_disk_percentage=60, clog_default_disk_percentage=30, shared_mode=true, data_disk_size=8589934592, log_disk_size=8589934592)
[2024-02-19 19:03:32.726983] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:32.727032] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=51] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
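The LogLoop entries above are the key sizing clue: the server decided an 8589934592-byte (8 GB) log disk for /backup/oceanbase/data/clog with roughly 220 GB still free on the volume, yet every PALF entry caps tenant 1 at log_disk_size(MB):2048. The disk is not full; the tenant's 2 GB clog quota is. A hedged remediation sketch, assuming OceanBase 4.x semantics where a tenant's clog quota comes from its resource unit config; the unit config name below is the usual sys-tenant default, does not appear in this log, and must be verified first:

    -- Inspect current unit configs and their log disk quotas:
    SELECT name, log_disk_size FROM oceanbase.DBA_OB_UNIT_CONFIGS;
    -- Raise the sys tenant's clog quota, staying well inside the 8 GB server log disk:
    ALTER RESOURCE UNIT sys_unit_config LOG_DISK_SIZE = '4G';

Note the chicken-and-egg risk: with inner SQL already failing (-5019 throughout this log), the DDL may itself not go through until schema service recovers.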
[2024-02-19 19:03:32.737180] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=52] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:32.737228] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=50] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:32.738783] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=13] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0)
[2024-02-19 19:03:32.738813] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=29] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340612738771}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:32.738837] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=21] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340612738771}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:32.746219] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DF-0-0] [lt=10] table not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, ret=-5019)
[2024-02-19 19:03:32.746257] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DF-0-0] [lt=40] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, ret=-5019)
[2024-02-19 19:03:32.746273] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DF-0-0] [lt=13] fail to resolve table relation recursively(tenant_id=1, ret=-5019)
[2024-02-19 19:03:32.746286] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DF-0-0] [lt=11] resolve table relation factor failed(ret=-5019, table_name=__all_ls_meta_table)
[2024-02-19 19:03:32.746302] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DF-0-0] [lt=10] fail to resolve table(ret=-5019)
[2024-02-19 19:03:32.746312] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DF-0-0] [lt=10] fail to resolve sys view(ret=-5019)
[2024-02-19 19:03:32.746330] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DF-0-0] [lt=9] Table 'oceanbase.__all_ls_meta_table' doesn't exist
[2024-02-19 19:03:32.746340] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DF-0-0] [lt=9] resolve base or alias table factor failed(ret=-5019)
[2024-02-19 19:03:32.746350] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DF-0-0] [lt=9] resolve basic table failed(ret=-5019)
[2024-02-19 19:03:32.746360] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DF-0-0] [lt=8] fail to exec resolve_table(*table_node, table_item)(ret=-5019)
[2024-02-19 19:03:32.746370] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DF-0-0] [lt=9] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019)
[2024-02-19 19:03:32.746381] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DF-0-0] [lt=9] resolve normal query failed(ret=-5019)
[2024-02-19 19:03:32.746392] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DF-0-0] [lt=9] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073)
[2024-02-19 19:03:32.746411] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DF-0-0] [lt=11] failed to resolve(ret=-5019)
[2024-02-19 19:03:32.746423] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DF-0-0] [lt=11] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-02-19 19:03:32.746436] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DF-0-0] [lt=9] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-02-19 19:03:32.746445] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DF-0-0] [lt=9] fail to handle physical plan(ret=-5019)
[2024-02-19 19:03:32.746457] WARN [SQL] stmt_query (ob_sql.cpp:175) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DF-0-0] [lt=8] fail to handle text query(stmt=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port, ret=-5019)
[2024-02-19 19:03:32.746469] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DF-0-0] [lt=9] executor execute failed(ret=-5019)
[2024-02-19 19:03:32.746482] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DF-0-0] [lt=10] execute failed(ret=-5019, executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, retry_cnt=0)
[2024-02-19 19:03:32.746505] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DF-0-0] [lt=15] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false)
[2024-02-19 19:03:32.746526] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DF-0-0] [lt=18] result set close failed(ret=-5019)
[2024-02-19 19:03:32.746536] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DF-0-0] [lt=9] result set close failed(ret=-5019)
[2024-02-19 19:03:32.746545] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DF-0-0] [lt=8] failed to close result(close_ret=-5019, ret=-5019)
[2024-02-19 19:03:32.746588] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DF-0-0] [lt=9] failed to process record(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, record_ret=-5019, ret=-5019)
[2024-02-19 19:03:32.746603] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DF-0-0] [lt=13] failed to process final(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, aret=-5019, ret=-5019)
[2024-02-19 19:03:32.746617] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DF-0-0] [lt=10] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port)
[2024-02-19 19:03:32.746630] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DF-0-0] [lt=11] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1)
[2024-02-19 19:03:32.746640] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DF-0-0] [lt=9] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1)
[2024-02-19 19:03:32.746651] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DF-0-0] [lt=11] query failed(ret=-5019, conn=0x7fdcdc89a050, start=1708340612745932, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port)
[2024-02-19 19:03:32.746665] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DF-0-0] [lt=12] read failed(ret=-5019)
[2024-02-19 19:03:32.746676] WARN [SHARE.PT] get_by_tenant (ob_persistent_ls_table.cpp:612) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DF-0-0] [lt=9] execute sql failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port)
[2024-02-19 19:03:32.746756] WARN [SHARE.PT] get_by_tenant (ob_ls_table_operator.cpp:252) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DF-0-0] [lt=13] get all ls info by persistent_ls_ failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1)
[2024-02-19 19:03:32.746769] WARN [SHARE] inner_open_ (ob_ls_table_iterator.cpp:104) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DF-0-0] [lt=12] fail to get ls infos by tenant(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, inner_table_only=true)
[2024-02-19 19:03:32.746781] WARN [SHARE] next (ob_ls_table_iterator.cpp:71) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DF-0-0] [lt=11] fail to open iterator(ret=-5019, ret="OB_TABLE_NOT_EXIST")
[2024-02-19 19:03:32.746791] WARN [SERVER] build_replica_map_ (ob_tenant_meta_checker.cpp:331) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DF-0-0] [lt=9] ls table iterator next failed(ret=-5019, ret="OB_TABLE_NOT_EXIST")
[2024-02-19 19:03:32.746804] WARN [SERVER] check_ls_table_ (ob_tenant_meta_checker.cpp:213) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DF-0-0] [lt=9] build replica map from ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1)
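Every WARN from check_table_exist_or_not down to check_ls_table_ above is a single statement failing at resolve time: the tenant meta checker's query gets OB_TABLE_NOT_EXIST (-5019) before execution even starts, so nothing here reached storage. The statement is quoted in the log and can be replayed verbatim from a client to rule out a transient schema-refresh issue (the oceanbase. prefix matches the database named in the resolver's own error message; the failure would typically surface to a MySQL client as error 1146):

    SELECT * FROM oceanbase.__all_ls_meta_table
    WHERE tenant_id = 1
    ORDER BY tenant_id, ls_id, svr_ip, svr_port;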
[2024-02-19 19:03:32.746817] WARN [SERVER] check_ls_table (ob_tenant_meta_checker.cpp:193) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DF-0-0] [lt=11] check ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1)
[2024-02-19 19:03:32.746827] WARN [SERVER] runTimerTask (ob_tenant_meta_checker.cpp:43) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790DF-0-0] [lt=10] fail to check ls meta table(ret=-5019, ret="OB_TABLE_NOT_EXIST")
[2024-02-19 19:03:32.747881] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:32.747915] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:32.758109] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=39] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:32.758266] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=161] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:32.768408] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=41] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:32.768472] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=49] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:32.773616] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=18] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340612773587})
[2024-02-19 19:03:32.773664] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=50] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340612738771}})
[2024-02-19 19:03:32.778473] WARN [STORAGE.TRANS] post_cluster_heartbeat_rpc_ (ob_tenant_weak_read_service.cpp:797) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=18] get cluster service master fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, cluster_service_tablet_id={id:226})
[2024-02-19 19:03:32.778514] WARN [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:438) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=42] tenant weak read service cluster heartbeat RPC fail(rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.1.3.242:2882", cluster_service_tablet_id={id:226})
[2024-02-19 19:03:32.778536] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:756) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=19] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version=1708336686671726824, valid_part_count=1, total_part_count=1, generate_timestamp=1708340612778452)
[2024-02-19 19:03:32.778551] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:766) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1708340612578403, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-02-19 19:03:32.778615] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10] get wrs ts(ls_id={id:1}, delta_ns=-1706042771802517562, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807)
[2024-02-19 19:03:32.778611] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=39] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:32.778627] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1)
[2024-02-19 19:03:32.778640] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=30] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:32.789732] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:32.789775] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=45] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:32.799912] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:32.810105] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=42] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:32.810149] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=47] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:32.811732] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1009) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=31] Wash compute wash size(is_wash_valid=true, sys_total_wash_size=1386106880, global_cache_size=4161536, tenant_max_wash_size=4161536, tenant_min_wash_size=4161536, tenant_ids_=[512, 500, 999, 506, 507, 508, 509, 510, 1]) [2024-02-19 19:03:32.811823] INFO [COMMON] wash (ob_kvcache_store.cpp:343) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=37] Wash time detail, (compute_wash_size_time=165, refresh_score_time=49, wash_time=5) [2024-02-19 19:03:32.820341] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=38] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:32.820394] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=54] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:32.829443] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A027-0-0] [lt=9] table not exist(tenant_id=1, database_id=201001, table_name=__all_server, ret=-5019) [2024-02-19 19:03:32.829471] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A027-0-0] [lt=28] synonym not 
exist(tenant_id=1, database_id=201001, table_name=__all_server, ret=-5019) [2024-02-19 19:03:32.829483] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A027-0-0] [lt=11] fail to resolve table relation recursively(tenant_id=1, ret=-5019) [2024-02-19 19:03:32.829493] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A027-0-0] [lt=9] resolve table relation factor failed(ret=-5019, table_name=__all_server) [2024-02-19 19:03:32.829506] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A027-0-0] [lt=9] fail to resolve table(ret=-5019) [2024-02-19 19:03:32.829514] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A027-0-0] [lt=8] fail to resolve sys view(ret=-5019) [2024-02-19 19:03:32.829527] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A027-0-0] [lt=8] Table 'oceanbase.__all_server' doesn't exist [2024-02-19 19:03:32.829536] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A027-0-0] [lt=8] resolve base or alias table factor failed(ret=-5019) [2024-02-19 19:03:32.829544] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A027-0-0] [lt=8] resolve basic table failed(ret=-5019) [2024-02-19 19:03:32.829553] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A027-0-0] [lt=8] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-02-19 19:03:32.829561] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A027-0-0] [lt=7] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-02-19 19:03:32.829570] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A027-0-0] [lt=7] resolve normal query failed(ret=-5019) [2024-02-19 19:03:32.829579] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A027-0-0] [lt=8] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073) [2024-02-19 19:03:32.829595] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A027-0-0] [lt=9] failed to resolve(ret=-5019) [2024-02-19 19:03:32.829605] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A027-0-0] [lt=9] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:32.829616] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A027-0-0] [lt=8] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:32.829624] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A027-0-0] [lt=7] fail to handle physical plan(ret=-5019) [2024-02-19 19:03:32.829635] WARN [SQL] stmt_query (ob_sql.cpp:175) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A027-0-0] [lt=8] fail to handle text query(stmt=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882, ret=-5019) [2024-02-19 19:03:32.829645] WARN [SERVER] 
do_query (ob_inner_sql_connection.cpp:595) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A027-0-0] [lt=8] executor execute failed(ret=-5019) [2024-02-19 19:03:32.829654] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A027-0-0] [lt=8] execute failed(ret=-5019, executor={ObIExecutor:, sql:"SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882"}, retry_cnt=0) [2024-02-19 19:03:32.829671] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A027-0-0] [lt=12] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-02-19 19:03:32.829689] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A027-0-0] [lt=14] result set close failed(ret=-5019) [2024-02-19 19:03:32.829697] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A027-0-0] [lt=8] result set close failed(ret=-5019) [2024-02-19 19:03:32.829705] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A027-0-0] [lt=7] failed to close result(close_ret=-5019, ret=-5019) [2024-02-19 19:03:32.829728] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A027-0-0] [lt=7] failed to process record(executor={ObIExecutor:, sql:"SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882"}, record_ret=-5019, ret=-5019) [2024-02-19 19:03:32.829739] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A027-0-0] [lt=9] failed to process final(executor={ObIExecutor:, sql:"SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882"}, aret=-5019, ret=-5019) [2024-02-19 19:03:32.829749] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=8] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882) [2024-02-19 19:03:32.829759] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=8] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-02-19 19:03:32.829768] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=8] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1) [2024-02-19 19:03:32.829777] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=9] query failed(ret=-5019, conn=0x7fdcdc9be050, start=1708340612829232, sql=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882) [2024-02-19 19:03:32.829788] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=10] read failed(ret=-5019) [2024-02-19 19:03:32.829797] WARN get_my_sql_result_ (ob_table_access_helper.h:329) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=7] GCTX.sql_proxy_ read failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", MTL_ID()=1, tenant_id=1, columns=0x7fdcfffccd78, table=__all_server, condition=where svr_ip='172.1.3.242' and svr_port=2882, sql=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882, columns_str="zone") 
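The retry cascade above bottoms out in a single root cause: tenant 1's inner SQL cannot resolve __all_server yet, so every caller up the stack re-logs the same -5019. A small, hypothetical triage helper (plain Python, not OceanBase tooling; the function name triage and the RET_NAMES table are mine) that pulls the ret= code out of each line and tallies occurrences makes the fan-out obvious. It uses only codes and names that appear verbatim in this excerpt:

    # Hypothetical helper for counting return codes in an observer.log excerpt.
    # RET_NAMES maps only the codes that appear verbatim in the log above.
    import re
    from collections import Counter

    RET_NAMES = {
        -5019: "OB_TABLE_NOT_EXIST",   # root cause: __all_server unresolvable
        -4016: "OB_ERR_UNEXPECTED",    # how read_single_row re-reports the -5019
        -4076: "OB_NEED_WAIT",         # weak-read cluster service not ready
        -4038: "OB_NOT_MASTER",        # local GTS leader not elected yet
        -4018: "OB_ENTRY_NOT_EXIST",   # election reference info still empty
    }
    RET_RE = re.compile(r"ret=(-\d+)")

    def triage(lines):
        """Count the first ret= code on each line; one root cause repeats."""
        counts = Counter()
        for line in lines:
            m = RET_RE.search(line)
            if m is not None:
                code = int(m.group(1))
                counts[RET_NAMES.get(code, str(code))] += 1
        return counts

Run over this excerpt, OB_TABLE_NOT_EXIST dominates the tally: the usual signature of one failed dependency (here, system-table resolution while the tenant schema is not ready) echoed by every layer above it, rather than many independent faults.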
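The recycle_blocks_/try_recycle_blocks pair that repeats every ~10 ms is the PALF GC thread finding nothing to free: there is only one log stream (maximum_log_stream=1, oldest_log_stream=1) whose base LSN has not advanced, so used_size stays pinned at limit_size and the ERROR re-fires on every pass. The threshold arithmetic in the message reproduces exactly from the logged fields; below is a minimal sketch (plain Python, hypothetical function name palf_disk_state; flooring the size thresholds and rounding the percentage are inferences from the logged values 1638, 1945 and 95, not confirmed OceanBase rounding rules):

    # Sketch of the logged PALF disk-threshold arithmetic (not OceanBase source).
    def palf_disk_state(total_mb, used_mb, warn_pct=80, limit_pct=95):
        warn_mb = total_mb * warn_pct // 100        # 2048 * 80% -> 1638 (warn_size)
        limit_mb = total_mb * limit_pct // 100      # 2048 * 95% -> 1945 (limit_size)
        used_pct = round(used_mb * 100 / total_mb)  # 1945/2048  -> 95 (used_percent)
        if used_mb >= limit_mb:
            return "ERROR: clog disk space is almost full", used_pct
        if used_mb >= warn_mb:
            return "WARN: above warn threshold", used_pct
        return "OK", used_pct

    print(palf_disk_state(2048, 1945))  # ('ERROR: clog disk space is almost full', 95)

With a 2048 MB log disk, the 95% limit leaves only about 100 MB of headroom, which is why the GC thread keeps trying and failing: nothing can be recycled until the checkpoint (base LSN) advances, and that is in turn blocked by the same schema-not-ready state visible in the -5019 lines.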
[2024-02-19 19:03:32.829815] WARN read_single_row (ob_table_access_helper.h:178) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=14] get mysql result failed(ret=-4016, ret="OB_ERR_UNEXPECTED", MTL_ID()=1, columns=0x7fdcfffccd78, table=__all_server, where_condition=where svr_ip='172.1.3.242' and svr_port=2882)
[2024-02-19 19:03:32.829882] WARN [COORDINATOR] get_self_zone_name (table_accessor.cpp:517) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=10] get zone from __all_server failed(ret=-4016, ret="OB_ERR_UNEXPECTED", columns=0x7fdcfffccd78, where_condition="where svr_ip='172.1.3.242' and svr_port=2882", zone_name_holder=)
[2024-02-19 19:03:32.829895] WARN [COORDINATOR] get_all_ls_election_reference_info (table_accessor.cpp:450) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=12] get self zone name failed(ret=-4016, ret="OB_ERR_UNEXPECTED", all_ls_election_reference_info=[])
[2024-02-19 19:03:32.829906] WARN [COORDINATOR] get_all_ls_election_reference_info (table_accessor.cpp:459) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=10] zone name is empty(ret=-4016, ret="OB_ERR_UNEXPECTED", all_ls_election_reference_info=[])
[2024-02-19 19:03:32.829916] WARN [COORDINATOR] refresh (ob_leader_coordinator.cpp:107) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=8] get all ls election reference info failed(ret=-4016, ret="OB_ERR_UNEXPECTED", MTL_ID()=1)
[2024-02-19 19:03:32.839437] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=13] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0)
[2024-02-19 19:03:32.839472] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=36] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340612839424}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:32.839496] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=21] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340612839424}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:32.846964] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=38] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:32.846995] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=31] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:32.853945] INFO [COMMON] clean_garbage_node (ob_kvcache_map.cpp:645) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=13] Cache wash clean map node details(ret=0, clean_node_count=0, clean_time=42101, clean_start_pos=786425, clean_num=31457)
[2024-02-19 19:03:32.857170] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=33] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:32.857210] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=43] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:32.858751] WARN [COORDINATOR] get_ls_election_reference_info (ob_leader_coordinator.cpp:138) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC7D-0-0] [lt=286] can not find this ls_id in all_ls_election_reference_info_(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1}, all_ls_election_reference_info=[])
[2024-02-19 19:03:32.858789] WARN [COORDINATOR] refresh_ (election_priority_v1.cpp:144) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC7D-0-0] [lt=38] fail to get ls election reference info(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, *this={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807})
[2024-02-19 19:03:32.858813] WARN [COORDINATOR] operator() (election_priority_impl.cpp:246) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC7D-0-0] [lt=22] refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id_={id:1}, element={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807})
[2024-02-19 19:03:32.858831] WARN iterate (ob_tuple.h:272) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC7D-0-0] [lt=16] assign element failed(ret=-4018, std::get(tuple)={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807})
[2024-02-19 19:03:32.858846] WARN [COORDINATOR] refresh (election_priority_impl.cpp:261) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC7D-0-0] [lt=14] refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1}, *this={priority:{is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}})
[2024-02-19 19:03:32.861302] INFO [SQL.DTL] runTimerTask (ob_dtl_interm_result_manager.cpp:44) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=8] dump dtl interm result cost(us)(dump_cost=100407, ret=0, interm count=0, dump count=389)
[2024-02-19 19:03:32.871640] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=36] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:32.871681] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=45] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:32.874964] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=16] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340612874937})
[2024-02-19 19:03:32.875005] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=43] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340612839424}})
[2024-02-19 19:03:32.874994] WARN [SHARE.LOCATION] batch_process_tasks (ob_ls_location_service.cpp:485) [1106741][SysLocAsyncUp0][T0][YB42AC0103F2-000611B9212AA0D1-0-0] [lt=31] tenant schema is not ready, need wait(ret=0, ret="OB_SUCCESS", superior_tenant_id=1, tasks=[{cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340612874937}])
[2024-02-19 19:03:32.879584] WARN [STORAGE.TRANS] post_cluster_heartbeat_rpc_ (ob_tenant_weak_read_service.cpp:797) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16] get cluster service master fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, cluster_service_tablet_id={id:226})
[2024-02-19 19:03:32.879611] WARN [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:438) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=29] tenant weak read service cluster heartbeat RPC fail(rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.1.3.242:2882", cluster_service_tablet_id={id:226})
[2024-02-19 19:03:32.879627] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:756) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version=1708336686671726824, valid_part_count=1, total_part_count=1, generate_timestamp=1708340612879566)
[2024-02-19 19:03:32.879638] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:766) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1708340612778560, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-02-19 19:03:32.879708] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12] get wrs ts(ls_id={id:1}, delta_ns=-1706042771802416113, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807)
[2024-02-19 19:03:32.879724] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=1, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1)
[2024-02-19 19:03:32.880251] INFO [COMMON] print_io_status (ob_io_struct.cpp:619) [1106661][IO_TUNING0][T0][Y0-0000000000000000-0-0] [lt=31] [IO STATUS](tenant_ids=[1, 500], send_thread_count=2, send_queues=[0, 0])
[2024-02-19 19:03:32.881797] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=33] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:32.881829] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=32] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:32.891953] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:32.892000] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=50] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:32.902153] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=32] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:32.902202] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=52] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:32.911358] INFO [COMMON] replace_fragment_node (ob_kvcache_map.cpp:695) [1106689][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=26] Cache replace map node details(ret=0, replace_node_count=0, replace_time=26907, replace_start_pos=1132416, replace_num=15728)
[2024-02-19 19:03:32.912367] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=70] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:32.912406] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=39] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:32.923360] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=38] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:32.923405] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=47] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:32.926621] WARN [SERVER] batch_process_tasks (ob_ls_table_updater.cpp:333) [1106712][LSSysTblUp0][T0][YB42AC0103F2-000611B9216D2D5F-0-0] [lt=38] tenant schema is not ready, need wait(ret=-4076, ret="OB_NEED_WAIT", superior_tenant_id=1, task={tenant_id:1, ls_id:{id:1}, add_timestamp:1708337390831403})
[2024-02-19 19:03:32.932905] INFO [SQL.DTL] runTimerTask (ob_dtl_interm_result_manager.cpp:59) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=47] clear dtl interm result cost(us)(clear_cost=71550, ret=0, expire_keys_.count()=0, dump count=389, interm count=0, clean count=0)
[2024-02-19 19:03:32.933057] INFO [STORAGE] runTimerTask (ob_tenant_memory_printer.cpp:31) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=113] === Run print tenant memory usage task ===
[2024-02-19 19:03:32.933118] INFO [LIB] operator() (ob_malloc_allocator.cpp:397) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=25] [MEMORY] tenant: 512, limit: 1,073,741,824 hold: 12,582,912 rpc_hold: 0 cache_hold: 0 cache_used: 0 cache_item_count: 0 [MEMORY] ctx_id= DEFAULT_CTX_ID hold_bytes= 2,097,152 limit= 9,223,372,036,854,775,807 [MEMORY] ctx_id= CO_STACK hold_bytes= 10,485,760 limit= 9,223,372,036,854,775,807
[2024-02-19 19:03:32.933175] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:234) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=23] [MEMORY] tenant_id= 512 ctx_id= DEFAULT_CTX_ID hold= 2,097,152 used= 32,768 limit= 9,223,372,036,854,775,807 [MEMORY] idle_size= 0 free_size= 0 [MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0 [MEMORY] hold= 32,768 used= 24,688 count= 1 avg_used= 24,688 block_cnt= 1 chunk_cnt= 1 mod=thread_factor [MEMORY] hold= 32,768 used= 24,688 count= 1 avg_used= 24,688 mod=SUMMARY
[2024-02-19 19:03:32.933340] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:234) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=25] [MEMORY] tenant_id= 512 ctx_id= CO_STACK hold= 10,485,760 used= 9,146,304 limit= 9,223,372,036,854,775,807 [MEMORY] idle_size= 0 free_size= 0 [MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0 [MEMORY] hold= 9,142,272 used= 9,124,848 count= 18 avg_used= 506,936 block_cnt= 18 chunk_cnt= 5 mod=CoStack [MEMORY] hold= 4,032 used= 1,440 count= 36 avg_used= 40 block_cnt= 1 chunk_cnt= 1 mod=Coro [MEMORY] hold= 9,146,304 used= 9,126,288 count= 54 avg_used= 169,005 mod=SUMMARY
[2024-02-19 19:03:32.933539] INFO [LIB] operator() (ob_malloc_allocator.cpp:397) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=52] [MEMORY] tenant: 500, limit: 9,223,372,036,854,775,807 hold: 1,158,397,952 rpc_hold: 0 cache_hold: 0 cache_used: 0 cache_item_count: 0 [MEMORY] ctx_id= DEFAULT_CTX_ID hold_bytes= 959,025,152 limit= 9,223,372,036,854,775,807 [MEMORY] ctx_id= GLIBC hold_bytes= 96,571,392 limit= 9,223,372,036,854,775,807 [MEMORY] ctx_id= CO_STACK hold_bytes= 69,206,016 limit= 9,223,372,036,854,775,807 [MEMORY] ctx_id= LIBEASY hold_bytes= 21,012,480 limit= 9,223,372,036,854,775,807 [MEMORY] ctx_id= LOGGER_CTX_ID hold_bytes= 12,582,912 limit= 9,223,372,036,854,775,807
[2024-02-19 19:03:32.933551] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=56] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:32.933587] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=36] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:32.934008] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:234) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=28] [MEMORY] tenant_id= 500 ctx_id= DEFAULT_CTX_ID hold= 959,025,152 used= 898,494,576 limit= 9,223,372,036,854,775,807 [MEMORY] idle_size= 0 free_size= 0 [MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0 [MEMORY] hold= 113,709,056 used= 113,680,448 count= 2 avg_used= 56,840,224 block_cnt= 2 chunk_cnt= 2 mod=CACHE_INST [MEMORY] hold= 67,141,632 used= 67,107,456 count= 2 avg_used= 33,553,728 block_cnt= 2 chunk_cnt= 2 mod=SstablMergeInfo [MEMORY] hold= 67,129,424 used= 67,108,872 count= 2 avg_used= 33,554,436 block_cnt= 2 chunk_cnt= 2 mod=CACHE_MAP_BKT [MEMORY] hold= 61,599,744 used= 61,573,104 count= 2 avg_used= 30,786,552 block_cnt= 2 chunk_cnt= 2 mod=ObDagWarningHis [MEMORY] hold= 47,943,680 used= 47,925,120 count= 1 avg_used= 47,925,120 block_cnt= 1 chunk_cnt= 1 mod=CACHE_MB_HANDLE [MEMORY] hold= 42,496,000 used= 38,273,024 count= 65,538 avg_used= 583 block_cnt= 4,685 chunk_cnt= 21 mod=TabletLSMap [MEMORY] hold= 34,631,424 used= 34,104,832 count= 132 avg_used= 258,369 block_cnt= 132 chunk_cnt= 32 mod=ModulePageAlloc [MEMORY] hold= 33,603,520 used= 33,562,560 count= 3 avg_used= 11,187,520 block_cnt= 3 chunk_cnt= 3 mod=FixeSizeBlocAll [MEMORY] hold= 33,574,912 used= 33,554,464 count= 1 avg_used= 33,554,464 block_cnt= 1 chunk_cnt= 1 mod=BloomFilter [MEMORY] hold= 33,558,528 used= 33,534,240 count= 5 avg_used= 6,706,848 block_cnt= 5 chunk_cnt= 5 mod=IoControl [MEMORY] hold= 31,518,720 used= 31,457,368 count= 3 avg_used= 10,485,789 block_cnt= 3 chunk_cnt= 3 mod=RebalaTaskMgr [MEMORY] hold= 31,477,760 used= 31,457,664 count= 1 avg_used= 31,457,664 block_cnt= 1 chunk_cnt= 1 mod=ash_list
[MEMORY] hold= 26,425,472 used= 26,219,416 count= 42 avg_used= 624,271 block_cnt= 38 chunk_cnt= 31 mod=OccamThreadPool [MEMORY] hold= 25,206,784 used= 25,165,904 count= 2 avg_used= 12,582,952 block_cnt= 2 chunk_cnt= 2 mod=DRTaskQ [MEMORY] hold= 22,888,448 used= 22,724,608 count= 11 avg_used= 2,065,873 block_cnt= 11 chunk_cnt= 9 mod=LightyQueue [MEMORY] hold= 22,865,488 used= 22,666,632 count= 29 avg_used= 781,608 block_cnt= 27 chunk_cnt= 22 mod=OmtTenant [MEMORY] hold= 16,218,192 used= 16,122,080 count= 226 avg_used= 71,336 block_cnt= 34 chunk_cnt= 14 mod=Omt [MEMORY] hold= 14,641,600 used= 13,905,984 count= 652 avg_used= 21,328 block_cnt= 652 chunk_cnt= 45 mod=TsiFactory [MEMORY] hold= 13,054,144 used= 12,922,320 count= 42 avg_used= 307,674 block_cnt= 22 chunk_cnt= 8 mod=PartitTableTask [MEMORY] hold= 12,603,392 used= 12,582,952 count= 1 avg_used= 12,582,952 block_cnt= 1 chunk_cnt= 1 mod=backupTaskSched [MEMORY] hold= 12,603,392 used= 12,582,928 count= 1 avg_used= 12,582,928 block_cnt= 1 chunk_cnt= 1 mod=HashBuckDTLINT [MEMORY] hold= 10,403,840 used= 10,398,720 count= 5 avg_used= 2,079,744 block_cnt= 5 chunk_cnt= 5 mod=ObTxDesc [MEMORY] hold= 9,787,200 used= 9,778,560 count= 57 avg_used= 171,553 block_cnt= 57 chunk_cnt= 31 mod=LinearHashMap [MEMORY] hold= 9,170,944 used= 9,109,888 count= 6 avg_used= 1,518,314 block_cnt= 6 chunk_cnt= 3 mod=KvstCachWashStr [MEMORY] hold= 9,158,656 used= 9,141,616 count= 1 avg_used= 9,141,616 block_cnt= 1 chunk_cnt= 1 mod=MemDumpContext [MEMORY] hold= 8,337,472 used= 8,272,704 count= 6 avg_used= 1,378,784 block_cnt= 6 chunk_cnt= 6 mod=HashBuckTaskMap [MEMORY] hold= 7,098,368 used= 7,078,824 count= 1 avg_used= 7,078,824 block_cnt= 1 chunk_cnt= 1 mod=SchemaIdVersion [MEMORY] hold= 6,426,560 used= 6,408,096 count= 2 avg_used= 3,204,048 block_cnt= 2 chunk_cnt= 2 mod=TenantInfo [MEMORY] hold= 4,739,072 used= 4,718,712 count= 1 avg_used= 4,718,712 block_cnt= 1 chunk_cnt= 1 mod=HashBucTenComMo [MEMORY] hold= 4,677,632 used= 4,651,872 count= 5 avg_used= 930,374 block_cnt= 5 chunk_cnt= 4 mod=DedupQueue [MEMORY] hold= 4,414,400 used= 4,377,591 count= 4 avg_used= 1,094,397 block_cnt= 4 chunk_cnt= 3 mod=SqlDtlMgr [MEMORY] hold= 4,161,536 used= 4,159,488 count= 2 avg_used= 2,079,744 block_cnt= 2 chunk_cnt= 2 mod=SstaMicrBlocAll [MEMORY] hold= 3,566,992 used= 3,450,680 count= 28 avg_used= 123,238 block_cnt= 27 chunk_cnt= 18 mod=HashBucket [MEMORY] hold= 3,166,208 used= 3,145,808 count= 1 avg_used= 3,145,808 block_cnt= 1 chunk_cnt= 1 mod=HashPxBlooFilKe [MEMORY] hold= 3,145,728 used= 2,195,456 count= 128 avg_used= 17,152 block_cnt= 128 chunk_cnt= 3 mod=OB_SQL_TASK [MEMORY] hold= 2,718,976 used= 2,701,864 count= 141 avg_used= 19,162 block_cnt= 141 chunk_cnt= 7 mod=TabletMap [MEMORY] hold= 2,580,480 used= 2,560,192 count= 1 avg_used= 2,560,192 block_cnt= 1 chunk_cnt= 1 mod=GEleTimer [MEMORY] hold= 2,322,432 used= 2,301,952 count= 1 avg_used= 2,301,952 block_cnt= 1 chunk_cnt= 1 mod=ServerObjecPool [MEMORY] hold= 2,276,672 used= 2,247,439 count= 12 avg_used= 187,286 block_cnt= 12 chunk_cnt= 6 mod=FixedQueue [MEMORY] hold= 2,117,728 used= 2,097,176 count= 2 avg_used= 1,048,588 block_cnt= 2 chunk_cnt= 2 mod=SerFuncRegHT [MEMORY] hold= 2,080,768 used= 2,079,744 count= 1 avg_used= 2,079,744 block_cnt= 1 chunk_cnt= 1 mod=CachLineSegrArr [MEMORY] hold= 2,080,768 used= 2,079,744 count= 1 avg_used= 2,079,744 block_cnt= 1 chunk_cnt= 1 mod=TenantSchemMgr [MEMORY] hold= 2,080,768 used= 2,079,744 count= 1 avg_used= 2,079,744 block_cnt= 1 chunk_cnt= 1 mod=ConFifoAlloc [MEMORY] hold= 
2,034,560 used= 2,020,352 count= 86 avg_used= 23,492 block_cnt= 86 chunk_cnt= 27 mod=CommonArray [MEMORY] hold= 1,581,056 used= 1,572,904 count= 1 avg_used= 1,572,904 block_cnt= 1 chunk_cnt= 1 mod=DInsSstMgr [MEMORY] hold= 1,581,056 used= 1,572,904 count= 1 avg_used= 1,572,904 block_cnt= 1 chunk_cnt= 1 mod=IdConnMap [MEMORY] hold= 1,581,056 used= 1,573,072 count= 1 avg_used= 1,573,072 block_cnt= 1 chunk_cnt= 1 mod=HashBucIdUnitMa [MEMORY] hold= 1,575,488 used= 1,554,368 count= 330 avg_used= 4,710 block_cnt= 328 chunk_cnt= 13 mod=TenantCtxAlloca [MEMORY] hold= 1,081,344 used= 1,064,960 count= 2 avg_used= 532,480 block_cnt= 2 chunk_cnt= 2 mod=SlogWriteBuffer [MEMORY] hold= 1,056,768 used= 1,048,608 count= 1 avg_used= 1,048,608 block_cnt= 1 chunk_cnt= 1 mod=TenantResCtrl [MEMORY] hold= 1,056,768 used= 1,048,608 count= 1 avg_used= 1,048,608 block_cnt= 1 chunk_cnt= 1 mod=ConcurHashMap [MEMORY] hold= 1,056,768 used= 1,048,608 count= 1 avg_used= 1,048,608 block_cnt= 1 chunk_cnt= 1 mod=TsSourceInfoMap [MEMORY] hold= 1,056,768 used= 1,048,608 count= 1 avg_used= 1,048,608 block_cnt= 1 chunk_cnt= 1 mod=PxResMgr [MEMORY] hold= 1,056,768 used= 1,048,608 count= 1 avg_used= 1,048,608 block_cnt= 1 chunk_cnt= 1 mod=HashBucParTraCt [MEMORY] hold= 941,824 used= 925,440 count= 3 avg_used= 308,480 block_cnt= 3 chunk_cnt= 3 mod=CACHE_INST_MAP [MEMORY] hold= 794,624 used= 786,436 count= 1 avg_used= 786,436 block_cnt= 1 chunk_cnt= 1 mod=CACHE_MAP_LOCK [MEMORY] hold= 777,088 used= 747,200 count= 467 avg_used= 1,600 block_cnt= 121 chunk_cnt= 25 mod=CreateContext [MEMORY] hold= 622,592 used= 620,928 count= 1 avg_used= 620,928 block_cnt= 1 chunk_cnt= 1 mod=SysTaskStatus [MEMORY] hold= 514,544 used= 460,688 count= 52 avg_used= 8,859 block_cnt= 52 chunk_cnt= 20 mod=HashNode [MEMORY] hold= 472,240 used= 455,736 count= 246 avg_used= 1,852 block_cnt= 122 chunk_cnt= 19 mod=SstableReader [MEMORY] hold= 401,408 used= 393,256 count= 1 avg_used= 393,256 block_cnt= 1 chunk_cnt= 1 mod=TablStorStatMgr [MEMORY] hold= 364,496 used= 107,485 count= 3,429 avg_used= 31 block_cnt= 88 chunk_cnt= 1 mod=Buffer [MEMORY] hold= 335,776 used= 278,512 count= 8 avg_used= 34,814 block_cnt= 8 chunk_cnt= 5 mod=ConcurObjPool [MEMORY] hold= 305,728 used= 281,152 count= 14 avg_used= 20,082 block_cnt= 12 chunk_cnt= 6 mod=BucketLock [MEMORY] hold= 303,104 used= 294,936 count= 1 avg_used= 294,936 block_cnt= 1 chunk_cnt= 1 mod=HashBuckInteChe [MEMORY] hold= 294,912 used= 256,248 count= 9 avg_used= 28,472 block_cnt= 9 chunk_cnt= 5 mod=InneSqlConnPool [MEMORY] hold= 229,376 used= 224,000 count= 1 avg_used= 224,000 block_cnt= 1 chunk_cnt= 1 mod=BGTMonitor [MEMORY] hold= 221,184 used= 212,992 count= 1 avg_used= 212,992 block_cnt= 1 chunk_cnt= 1 mod=TSWorker [MEMORY] hold= 221,184 used= 214,432 count= 1 avg_used= 214,432 block_cnt= 1 chunk_cnt= 1 mod=CompSuggestMgr [MEMORY] hold= 212,992 used= 196,624 count= 2 avg_used= 98,312 block_cnt= 2 chunk_cnt= 1 mod=DdlQue [MEMORY] hold= 196,608 used= 196,224 count= 3 avg_used= 65,408 block_cnt= 3 chunk_cnt= 3 mod=TransAuditRecor [MEMORY] hold= 174,144 used= 149,504 count= 258 avg_used= 579 block_cnt= 21 chunk_cnt= 1 mod=LSLocationMap [MEMORY] hold= 163,840 used= 147,792 count= 2 avg_used= 73,896 block_cnt= 2 chunk_cnt= 1 mod=HashBucNexWaiMa [MEMORY] hold= 155,648 used= 148,032 count= 1 avg_used= 148,032 block_cnt= 1 chunk_cnt= 1 mod=CompEventMgr [MEMORY] hold= 122,624 used= 118,968 count= 2 avg_used= 59,484 block_cnt= 2 chunk_cnt= 2 mod=RefrFullScheMap [MEMORY] hold= 122,624 used= 118,968 count= 2 avg_used= 59,484 
block_cnt= 2 chunk_cnt= 2 mod=MemMgrMap [MEMORY] hold= 122,624 used= 118,968 count= 2 avg_used= 59,484 block_cnt= 2 chunk_cnt= 2 mod=MemMgrForLiboMa [MEMORY] hold= 122,624 used= 118,968 count= 2 avg_used= 59,484 block_cnt= 2 chunk_cnt= 1 mod=TenaSchForCacMa [MEMORY] hold= 122,224 used= 121,192 count= 16 avg_used= 7,574 block_cnt= 16 chunk_cnt= 3 mod=TenaSpaTabIdSet [MEMORY] hold= 114,432 used= 106,176 count= 2 avg_used= 53,088 block_cnt= 2 chunk_cnt= 2 mod=RetryCtrl [MEMORY] hold= 106,496 used= 98,416 count= 1 avg_used= 98,416 block_cnt= 1 chunk_cnt= 1 mod=OB_DISK_REP [MEMORY] hold= 106,496 used= 98,312 count= 1 avg_used= 98,312 block_cnt= 1 chunk_cnt= 1 mod=TmpFileManager [MEMORY] hold= 106,496 used= 98,416 count= 1 avg_used= 98,416 block_cnt= 1 chunk_cnt= 1 mod=UsrRuleMap [MEMORY] hold= 106,496 used= 103,552 count= 1 avg_used= 103,552 block_cnt= 1 chunk_cnt= 1 mod=TenantMutilAllo [MEMORY] hold= 106,352 used= 105,448 count= 14 avg_used= 7,532 block_cnt= 14 chunk_cnt= 2 mod=SysTableNameMap [MEMORY] hold= 98,304 used= 97,512 count= 3 avg_used= 32,504 block_cnt= 3 chunk_cnt= 3 mod=CommSysVarFac [MEMORY] hold= 95,728 used= 74,232 count= 207 avg_used= 358 block_cnt= 68 chunk_cnt= 14 mod=tg [MEMORY] hold= 85,088 used= 82,172 count= 5 avg_used= 16,434 block_cnt= 3 chunk_cnt= 1 mod=SchemaSysCache [MEMORY] hold= 81,920 used= 61,720 count= 4 avg_used= 15,430 block_cnt= 4 chunk_cnt= 1 mod=HashBuckConfCon [MEMORY] hold= 78,928 used= 78,288 count= 10 avg_used= 7,828 block_cnt= 10 chunk_cnt= 1 mod=HashNodeConfCon [MEMORY] hold= 73,728 used= 69,664 count= 1 avg_used= 69,664 block_cnt= 1 chunk_cnt= 1 mod=SuperBlockBuffe [MEMORY] hold= 73,728 used= 65,600 count= 1 avg_used= 65,600 block_cnt= 1 chunk_cnt= 1 mod=TCREF [MEMORY] hold= 71,072 used= 59,904 count= 47 avg_used= 1,274 block_cnt= 29 chunk_cnt= 16 mod=timer [MEMORY] hold= 65,536 used= 65,408 count= 1 avg_used= 65,408 block_cnt= 1 chunk_cnt= 1 mod=ObLSLocation [MEMORY] hold= 65,536 used= 65,408 count= 1 avg_used= 65,408 block_cnt= 1 chunk_cnt= 1 mod=IO_MGR [MEMORY] hold= 65,536 used= 65,408 count= 1 avg_used= 65,408 block_cnt= 1 chunk_cnt= 1 mod=KVCACHE_HAZARD [MEMORY] hold= 65,536 used= 49,376 count= 2 avg_used= 24,688 block_cnt= 2 chunk_cnt= 2 mod=io_trace_map [MEMORY] hold= 65,024 used= 64,512 count= 8 avg_used= 8,064 block_cnt= 8 chunk_cnt= 6 mod=SqlSession [MEMORY] hold= 65,024 used= 64,512 count= 8 avg_used= 8,064 block_cnt= 8 chunk_cnt= 1 mod=ScheLabeSeCompo [MEMORY] hold= 57,344 used= 55,368 count= 1 avg_used= 55,368 block_cnt= 1 chunk_cnt= 1 mod=ScheCacSysCacMa [MEMORY] hold= 57,024 used= 56,448 count= 6 avg_used= 9,408 block_cnt= 6 chunk_cnt= 5 mod=SeArray [MEMORY] hold= 49,152 used= 32,768 count= 2 avg_used= 16,384 block_cnt= 2 chunk_cnt= 1 mod=CACHE_TNT_LST [MEMORY] hold= 48,768 used= 48,384 count= 6 avg_used= 8,064 block_cnt= 6 chunk_cnt= 1 mod=ScheLabeSeLabel [MEMORY] hold= 48,768 used= 48,384 count= 6 avg_used= 8,064 block_cnt= 6 chunk_cnt= 1 mod=ScheLabeSePolic [MEMORY] hold= 40,960 used= 37,032 count= 1 avg_used= 37,032 block_cnt= 1 chunk_cnt= 1 mod=SqlPlanMonMap [MEMORY] hold= 40,960 used= 37,032 count= 1 avg_used= 37,032 block_cnt= 1 chunk_cnt= 1 mod=TmpFileStoreMap [MEMORY] hold= 40,960 used= 37,032 count= 1 avg_used= 37,032 block_cnt= 1 chunk_cnt= 1 mod=SqlLoadData [MEMORY] hold= 33,040 used= 24,888 count= 2 avg_used= 12,444 block_cnt= 2 chunk_cnt= 2 mod=TaskRunnerSer [MEMORY] hold= 32,768 used= 24,688 count= 1 avg_used= 24,688 block_cnt= 1 chunk_cnt= 1 mod=Autoincrement [MEMORY] hold= 32,768 used= 24,688 count= 1 avg_used= 
24,688 block_cnt= 1 chunk_cnt= 1 mod=HashBucNamPooMa [MEMORY] hold= 32,768 used= 24,688 count= 1 avg_used= 24,688 block_cnt= 1 chunk_cnt= 1 mod=HashBucIdPoolMa [MEMORY] hold= 32,768 used= 24,688 count= 1 avg_used= 24,688 block_cnt= 1 chunk_cnt= 1 mod=HashBucConPooMa [MEMORY] hold= 32,768 used= 24,688 count= 1 avg_used= 24,688 block_cnt= 1 chunk_cnt= 1 mod=HasBucConRefCoM [MEMORY] hold= 32,768 used= 24,688 count= 1 avg_used= 24,688 block_cnt= 1 chunk_cnt= 1 mod=HashBucNamConMa [MEMORY] hold= 32,768 used= 24,688 count= 1 avg_used= 24,688 block_cnt= 1 chunk_cnt= 1 mod=HashBucSerUniMa [MEMORY] hold= 32,768 used= 24,688 count= 1 avg_used= 24,688 block_cnt= 1 chunk_cnt= 1 mod=HashBucTenPooMa [MEMORY] hold= 32,768 used= 24,688 count= 1 avg_used= 24,688 block_cnt= 1 chunk_cnt= 1 mod=HasBucSerMigUnM [MEMORY] hold= 32,768 used= 24,608 count= 2 avg_used= 12,304 block_cnt= 2 chunk_cnt= 2 mod=HashBuckPlanCac [MEMORY] hold= 32,768 used= 24,688 count= 1 avg_used= 24,688 block_cnt= 1 chunk_cnt= 1 mod=SessHoldMapBuck [MEMORY] hold= 32,768 used= 16,384 count= 2 avg_used= 8,192 block_cnt= 2 chunk_cnt= 2 mod=SlogNopLog [MEMORY] hold= 32,768 used= 24,688 count= 1 avg_used= 24,688 block_cnt= 1 chunk_cnt= 1 mod=HashBucPooUniMa [MEMORY] hold= 32,768 used= 24,688 count= 1 avg_used= 24,688 block_cnt= 1 chunk_cnt= 1 mod=HashBucIdConfMa [MEMORY] hold= 32,768 used= 24,688 count= 1 avg_used= 24,688 block_cnt= 1 chunk_cnt= 1 mod=scheSuspectInfo [MEMORY] hold= 32,768 used= 24,688 count= 1 avg_used= 24,688 block_cnt= 1 chunk_cnt= 1 mod=DDLSpeedCtrl [MEMORY] hold= 32,768 used= 24,608 count= 2 avg_used= 12,304 block_cnt= 2 chunk_cnt= 1 mod=HashBuckSysConf [MEMORY] hold= 32,640 used= 32,320 count= 4 avg_used= 8,080 block_cnt= 4 chunk_cnt= 3 mod=UpgProcSet [MEMORY] hold= 32,512 used= 32,256 count= 4 avg_used= 8,064 block_cnt= 4 chunk_cnt= 1 mod=DIRECTORY_MGR [MEMORY] hold= 32,512 used= 32,256 count= 4 avg_used= 8,064 block_cnt= 4 chunk_cnt= 1 mod=DBLINK_MGR [MEMORY] hold= 32,512 used= 32,256 count= 4 avg_used= 8,064 block_cnt= 4 chunk_cnt= 1 mod=ScheLabSeUserLe [MEMORY] hold= 32,512 used= 32,256 count= 4 avg_used= 8,064 block_cnt= 4 chunk_cnt= 1 mod=SchemaProfile [MEMORY] hold= 32,512 used= 32,256 count= 4 avg_used= 8,064 block_cnt= 4 chunk_cnt= 1 mod=ScheOutlSqlMap [MEMORY] hold= 32,512 used= 32,256 count= 4 avg_used= 8,064 block_cnt= 4 chunk_cnt= 1 mod=SchemaSynonym [MEMORY] hold= 32,448 used= 32,192 count= 4 avg_used= 8,048 block_cnt= 4 chunk_cnt= 1 mod=SchemaSysVariab [MEMORY] hold= 25,152 used= 24,600 count= 6 avg_used= 4,100 block_cnt= 6 chunk_cnt= 2 mod=ScheObSchemAren [MEMORY] hold= 24,576 used= 16,416 count= 1 avg_used= 16,416 block_cnt= 1 chunk_cnt= 1 mod=TableLockMapEle [MEMORY] hold= 24,384 used= 20,232 count= 2 avg_used= 10,116 block_cnt= 2 chunk_cnt= 2 mod=ServerCkptSlogH [MEMORY] hold= 24,304 used= 21,752 count= 2 avg_used= 10,876 block_cnt= 2 chunk_cnt= 2 mod=SchemaStatuMap [MEMORY] hold= 24,288 used= 24,096 count= 3 avg_used= 8,032 block_cnt= 3 chunk_cnt= 1 mod=ScheTenaInfoVec [MEMORY] hold= 17,696 used= 17,432 count= 4 avg_used= 4,358 block_cnt= 3 chunk_cnt= 2 mod=DeviceMng [MEMORY] hold= 16,384 used= 12,304 count= 1 avg_used= 12,304 block_cnt= 1 chunk_cnt= 1 mod=leakMap [MEMORY] hold= 16,384 used= 12,344 count= 1 avg_used= 12,344 block_cnt= 1 chunk_cnt= 1 mod=HasBucTimZonInM [MEMORY] hold= 16,384 used= 11,136 count= 1 avg_used= 11,136 block_cnt= 1 chunk_cnt= 1 mod=SqlPlanMon [MEMORY] hold= 16,384 used= 8,192 count= 1 avg_used= 8,192 block_cnt= 1 chunk_cnt= 1 mod=ServerLogPool [MEMORY] hold= 16,384 used= 
12,344 count= 1 avg_used= 12,344 block_cnt= 1 chunk_cnt= 1 mod=MemDumpMap [MEMORY] hold= 16,384 used= 8,992 count= 1 avg_used= 8,992 block_cnt= 1 chunk_cnt= 1 mod=IO_HEALTH [MEMORY] hold= 16,384 used= 10,944 count= 1 avg_used= 10,944 block_cnt= 1 chunk_cnt= 1 mod=MysqlRequesReco [MEMORY] hold= 16,384 used= 12,344 count= 1 avg_used= 12,344 block_cnt= 1 chunk_cnt= 1 mod=GenSchemVersMap [MEMORY] hold= 16,384 used= 12,504 count= 1 avg_used= 12,504 block_cnt= 1 chunk_cnt= 1 mod=BitSet [MEMORY] hold= 16,384 used= 8,192 count= 1 avg_used= 8,192 block_cnt= 1 chunk_cnt= 1 mod=LinkArray [MEMORY] hold= 16,384 used= 12,344 count= 1 avg_used= 12,344 block_cnt= 1 chunk_cnt= 1 mod=SstaLongOpsMoni [MEMORY] hold= 16,384 used= 12,304 count= 1 avg_used= 12,304 block_cnt= 1 chunk_cnt= 1 mod=GrpIdNameMap [MEMORY] hold= 16,256 used= 16,128 count= 2 avg_used= 8,064 block_cnt= 2 chunk_cnt= 1 mod=SchemaSequence [MEMORY] hold= 16,256 used= 16,128 count= 2 avg_used= 8,064 block_cnt= 2 chunk_cnt= 1 mod=SchePackNameMap [MEMORY] hold= 16,256 used= 16,128 count= 2 avg_used= 8,064 block_cnt= 2 chunk_cnt= 1 mod=HiddenTblNames [MEMORY] hold= 16,256 used= 16,128 count= 2 avg_used= 8,064 block_cnt= 2 chunk_cnt= 1 mod=ScheTrigNameMap [MEMORY] hold= 16,256 used= 16,128 count= 2 avg_used= 8,064 block_cnt= 2 chunk_cnt= 1 mod=SchemaUdf [MEMORY] hold= 16,256 used= 16,128 count= 2 avg_used= 8,064 block_cnt= 2 chunk_cnt= 1 mod=SchemaContext [MEMORY] hold= 16,256 used= 16,128 count= 2 avg_used= 8,064 block_cnt= 2 chunk_cnt= 1 mod=ScheUdtIdMap [MEMORY] hold= 16,256 used= 16,128 count= 2 avg_used= 8,064 block_cnt= 2 chunk_cnt= 1 mod=ScheUdtNameMap [MEMORY] hold= 16,256 used= 16,128 count= 2 avg_used= 8,064 block_cnt= 2 chunk_cnt= 2 mod=RsEventQueue [MEMORY] hold= 16,256 used= 16,128 count= 2 avg_used= 8,064 block_cnt= 2 chunk_cnt= 1 mod=SchemaSecurAudi [MEMORY] hold= 16,256 used= 16,128 count= 2 avg_used= 8,064 block_cnt= 2 chunk_cnt= 1 mod=MockFkParentTab [MEMORY] hold= 16,256 used= 16,128 count= 2 avg_used= 8,064 block_cnt= 2 chunk_cnt= 1 mod=SchemaKeystore [MEMORY] hold= 16,256 used= 16,128 count= 2 avg_used= 8,064 block_cnt= 2 chunk_cnt= 1 mod=ScheTrigIdMap [MEMORY] hold= 16,256 used= 16,128 count= 2 avg_used= 8,064 block_cnt= 2 chunk_cnt= 2 mod=SqlSessiVarMap [MEMORY] hold= 16,256 used= 16,128 count= 2 avg_used= 8,064 block_cnt= 2 chunk_cnt= 1 mod=SchemaTablespac [MEMORY] hold= 16,256 used= 16,128 count= 2 avg_used= 8,064 block_cnt= 2 chunk_cnt= 1 mod=SchePriObjPriMa [MEMORY] hold= 16,256 used= 16,128 count= 2 avg_used= 8,064 block_cnt= 2 chunk_cnt= 1 mod=SchePackIdMap [MEMORY] hold= 16,256 used= 16,128 count= 2 avg_used= 8,064 block_cnt= 2 chunk_cnt= 2 mod=ObOBJLockHashNo [MEMORY] hold= 16,256 used= 16,128 count= 2 avg_used= 8,064 block_cnt= 2 chunk_cnt= 1 mod=ScheDataNameMap [MEMORY] hold= 16,256 used= 16,128 count= 2 avg_used= 8,064 block_cnt= 2 chunk_cnt= 1 mod=ScheTablIdMap [MEMORY] hold= 16,256 used= 16,128 count= 2 avg_used= 8,064 block_cnt= 2 chunk_cnt= 1 mod=ScheTablNameMap [MEMORY] hold= 16,256 used= 16,128 count= 2 avg_used= 8,064 block_cnt= 2 chunk_cnt= 1 mod=ScheIndeNameMap [MEMORY] hold= 16,256 used= 16,128 count= 2 avg_used= 8,064 block_cnt= 2 chunk_cnt= 1 mod=ScheAuxVpNameVe [MEMORY] hold= 16,256 used= 16,128 count= 2 avg_used= 8,064 block_cnt= 2 chunk_cnt= 1 mod=ScheConsNameMap [MEMORY] hold= 16,256 used= 16,128 count= 2 avg_used= 8,064 block_cnt= 2 chunk_cnt= 1 mod=ScheForKeyNamMa [MEMORY] hold= 16,256 used= 16,128 count= 2 avg_used= 8,064 block_cnt= 2 chunk_cnt= 1 mod=SchePriTabPriMa [MEMORY] hold= 16,256 used= 
16,128 count= 2 avg_used= 8,064 block_cnt= 2 chunk_cnt= 1 mod=ScheRoutNameMap [MEMORY] hold= 16,256 used= 16,128 count= 2 avg_used= 8,064 block_cnt= 2 chunk_cnt= 1 mod=ScheRoutIdMap [MEMORY] hold= 16,256 used= 16,128 count= 2 avg_used= 8,064 block_cnt= 2 chunk_cnt= 1 mod=ScheOutlNameMap [MEMORY] hold= 16,256 used= 16,128 count= 2 avg_used= 8,064 block_cnt= 2 chunk_cnt= 1 mod=ScheOutlIdMap [MEMORY] hold= 16,192 used= 16,064 count= 2 avg_used= 8,032 block_cnt= 2 chunk_cnt= 1 mod=ScheTablInfoVec [MEMORY] hold= 15,760 used= 15,440 count= 5 avg_used= 3,088 block_cnt= 3 chunk_cnt= 2 mod=SqlPx [MEMORY] hold= 12,960 used= 12,320 count= 10 avg_used= 1,232 block_cnt= 7 chunk_cnt= 3 mod=ObGuard [MEMORY] hold= 10,496 used= 7,872 count= 41 avg_used= 192 block_cnt= 17 chunk_cnt= 6 mod=Scheduler [MEMORY] hold= 9,472 used= 8,384 count= 16 avg_used= 524 block_cnt= 10 chunk_cnt= 4 mod=RpcBuffer [MEMORY] hold= 8,128 used= 8,064 count= 1 avg_used= 8,064 block_cnt= 1 chunk_cnt= 1 mod=ServerIdcMap [MEMORY] hold= 8,128 used= 8,064 count= 1 avg_used= 8,064 block_cnt= 1 chunk_cnt= 1 mod=DiskReport [MEMORY] hold= 8,128 used= 8,064 count= 1 avg_used= 8,064 block_cnt= 1 chunk_cnt= 1 mod=OB_MICB_DECODER [MEMORY] hold= 8,128 used= 8,064 count= 1 avg_used= 8,064 block_cnt= 1 chunk_cnt= 1 mod=ObLSTxCtxMgrHas [MEMORY] hold= 8,128 used= 8,064 count= 1 avg_used= 8,064 block_cnt= 1 chunk_cnt= 1 mod=ObTsTenantInfoN [MEMORY] hold= 8,128 used= 8,064 count= 1 avg_used= 8,064 block_cnt= 1 chunk_cnt= 1 mod=ServerRegioMap [MEMORY] hold= 8,128 used= 8,064 count= 1 avg_used= 8,064 block_cnt= 1 chunk_cnt= 1 mod=PoolFreeList [MEMORY] hold= 8,128 used= 8,064 count= 1 avg_used= 8,064 block_cnt= 1 chunk_cnt= 1 mod=CommSysVarDefVa [MEMORY] hold= 8,128 used= 8,064 count= 1 avg_used= 8,064 block_cnt= 1 chunk_cnt= 1 mod=SchemaRowKey [MEMORY] hold= 8,128 used= 8,064 count= 1 avg_used= 8,064 block_cnt= 1 chunk_cnt= 1 mod=ServerCidMap [MEMORY] hold= 8,128 used= 8,064 count= 1 avg_used= 8,064 block_cnt= 1 chunk_cnt= 1 mod=SessionInfoHash [MEMORY] hold= 8,128 used= 8,064 count= 1 avg_used= 8,064 block_cnt= 1 chunk_cnt= 1 mod=TenCompProgMgr [MEMORY] hold= 8,128 used= 8,064 count= 1 avg_used= 8,064 block_cnt= 1 chunk_cnt= 1 mod=ServerBlacklist [MEMORY] hold= 8,128 used= 8,064 count= 1 avg_used= 8,064 block_cnt= 1 chunk_cnt= 1 mod=RetryTask [MEMORY] hold= 8,096 used= 8,032 count= 1 avg_used= 8,032 block_cnt= 1 chunk_cnt= 1 mod=LuaHandler [MEMORY] hold= 8,096 used= 8,032 count= 1 avg_used= 8,032 block_cnt= 1 chunk_cnt= 1 mod=ScheUserInfoVec [MEMORY] hold= 7,936 used= 7,872 count= 1 avg_used= 7,872 block_cnt= 1 chunk_cnt= 1 mod=HashNodePlanCac [MEMORY] hold= 7,936 used= 7,872 count= 1 avg_used= 7,872 block_cnt= 1 chunk_cnt= 1 mod=SessHoldMapNode [MEMORY] hold= 7,936 used= 7,872 count= 1 avg_used= 7,872 block_cnt= 1 chunk_cnt= 1 mod=HasNodTzInfM [MEMORY] hold= 7,504 used= 7,368 count= 2 avg_used= 3,684 block_cnt= 2 chunk_cnt= 2 mod=DeadLock [MEMORY] hold= 6,288 used= 6,224 count= 1 avg_used= 6,224 block_cnt= 1 chunk_cnt= 1 mod=InnerLobHash [MEMORY] hold= 4,672 used= 4,608 count= 1 avg_used= 4,608 block_cnt= 1 chunk_cnt= 1 mod=MemstoreAllocat [MEMORY] hold= 4,576 used= 2,992 count= 22 avg_used= 136 block_cnt= 11 chunk_cnt= 7 mod=ObTxLSLogCb [MEMORY] hold= 3,888 used= 3,816 count= 1 avg_used= 3,816 block_cnt= 1 chunk_cnt= 1 mod=RecScheHisMap [MEMORY] hold= 3,888 used= 3,816 count= 1 avg_used= 3,816 block_cnt= 1 chunk_cnt= 1 mod=RemMasterMap [MEMORY] hold= 3,840 used= 3,712 count= 2 avg_used= 1,856 block_cnt= 1 chunk_cnt= 1 mod=RootContext [MEMORY] hold= 
3,408 used= 3,272 count= 2 avg_used= 1,636 block_cnt= 2 chunk_cnt= 2 mod=ColUsagHashMap [MEMORY] hold= 3,152 used= 3,088 count= 1 avg_used= 3,088 block_cnt= 1 chunk_cnt= 1 mod=PxPoolBkt [MEMORY] hold= 3,152 used= 3,088 count= 1 avg_used= 3,088 block_cnt= 1 chunk_cnt= 1 mod=DmlStatsHashMap [MEMORY] hold= 2,768 used= 2,704 count= 1 avg_used= 2,704 block_cnt= 1 chunk_cnt= 1 mod=LoggerAlloc [MEMORY] hold= 2,736 used= 2,544 count= 3 avg_used= 848 block_cnt= 1 chunk_cnt= 1 mod=IO_TENANT_MAP [MEMORY] hold= 2,112 used= 1,632 count= 7 avg_used= 233 block_cnt= 6 chunk_cnt= 2 mod=Rpc [MEMORY] hold= 2,016 used= 1,952 count= 1 avg_used= 1,952 block_cnt= 1 chunk_cnt= 1 mod=GtsRequestRpc [MEMORY] hold= 1,984 used= 1,920 count= 1 avg_used= 1,920 block_cnt= 1 chunk_cnt= 1 mod=GtiRequestRpc [MEMORY] hold= 1,920 used= 1,848 count= 1 avg_used= 1,848 block_cnt= 1 chunk_cnt= 1 mod=GtsRpcProxy [MEMORY] hold= 1,920 used= 1,848 count= 1 avg_used= 1,848 block_cnt= 1 chunk_cnt= 1 mod=GtiRpcProxy [MEMORY] hold= 1,920 used= 1,856 count= 1 avg_used= 1,856 block_cnt= 1 chunk_cnt= 1 mod=WrsTenantServic [MEMORY] hold= 1,728 used= 1,296 count= 6 avg_used= 216 block_cnt= 5 chunk_cnt= 4 mod=ObFuture [MEMORY] hold= 1,536 used= 1,472 count= 1 avg_used= 1,472 block_cnt= 1 chunk_cnt= 1 mod=TZInfoMap [MEMORY] hold= 1,200 used= 1,024 count= 2 avg_used= 512 block_cnt= 2 chunk_cnt= 2 mod=SqlSessiQuerSql [MEMORY] hold= 1,168 used= 1,092 count= 1 avg_used= 1,092 block_cnt= 1 chunk_cnt= 1 mod=LogRegionMap [MEMORY] hold= 1,152 used= 1,088 count= 1 avg_used= 1,088 block_cnt= 1 chunk_cnt= 1 mod=memdumpqueue [MEMORY] hold= 1,152 used= 768 count= 6 avg_used= 128 block_cnt= 5 chunk_cnt= 5 mod=BaseLogWriter [MEMORY] hold= 1,152 used= 1,024 count= 2 avg_used= 512 block_cnt= 2 chunk_cnt= 2 mod=SqlString [MEMORY] hold= 1,104 used= 1,040 count= 1 avg_used= 1,040 block_cnt= 1 chunk_cnt= 1 mod=SqlPlanCache [MEMORY] hold= 1,024 used= 960 count= 1 avg_used= 960 block_cnt= 1 chunk_cnt= 1 mod=SchemaService [MEMORY] hold= 912 used= 848 count= 1 avg_used= 848 block_cnt= 1 chunk_cnt= 1 mod=IO_CHANNEL_MAP [MEMORY] hold= 768 used= 704 count= 1 avg_used= 704 block_cnt= 1 chunk_cnt= 1 mod=ScheMgrCacheMap [MEMORY] hold= 752 used= 688 count= 1 avg_used= 688 block_cnt= 1 chunk_cnt= 1 mod=TenaScheMemMgr [MEMORY] hold= 752 used= 688 count= 1 avg_used= 688 block_cnt= 1 chunk_cnt= 1 mod=TenSchMemMgrFoL [MEMORY] hold= 640 used= 128 count= 8 avg_used= 16 block_cnt= 2 chunk_cnt= 2 mod=CreateEntity [MEMORY] hold= 640 used= 200 count= 5 avg_used= 40 block_cnt= 3 chunk_cnt= 2 mod=Log [MEMORY] hold= 576 used= 512 count= 1 avg_used= 512 block_cnt= 1 chunk_cnt= 1 mod=TsMgr [MEMORY] hold= 480 used= 120 count= 5 avg_used= 24 block_cnt= 5 chunk_cnt= 1 mod=FreezeTask [MEMORY] hold= 416 used= 272 count= 2 avg_used= 136 block_cnt= 1 chunk_cnt= 1 mod=unknonw [MEMORY] hold= 336 used= 256 count= 1 avg_used= 256 block_cnt= 1 chunk_cnt= 1 mod=rpc_server [MEMORY] hold= 256 used= 152 count= 1 avg_used= 152 block_cnt= 1 chunk_cnt= 1 mod=OccamTimeGuard [MEMORY] hold= 240 used= 112 count= 2 avg_used= 56 block_cnt= 2 chunk_cnt= 2 mod=UserResourceMgr [MEMORY] hold= 160 used= 81 count= 1 avg_used= 81 block_cnt= 1 chunk_cnt= 1 mod=TxLSLogBuf [MEMORY] hold= 144 used= 28 count= 1 avg_used= 28 block_cnt= 1 chunk_cnt= 1 mod=KeepAliveServer [MEMORY] hold= 144 used= 80 count= 1 avg_used= 80 block_cnt= 1 chunk_cnt= 1 mod=TZInfoMgr [MEMORY] hold= 128 used= 56 count= 1 avg_used= 56 block_cnt= 1 chunk_cnt= 1 mod=TenantTZ [MEMORY] hold= 128 used= 56 count= 1 avg_used= 56 block_cnt= 1 chunk_cnt= 1 
mod=PxTargetMgr [MEMORY] hold= 112 used= 41 count= 1 avg_used= 41 block_cnt= 1 chunk_cnt= 1 mod=KeepAliveBuf [MEMORY] hold= 80 used= 7 count= 1 avg_used= 7 block_cnt= 1 chunk_cnt= 1 mod=SqlExpr [MEMORY] hold= 898,494,576 used= 889,212,828 count= 72,743 avg_used= 12,224 mod=SUMMARY [2024-02-19 19:03:32.934234] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:234) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=91] [MEMORY] tenant_id= 500 ctx_id= GLIBC hold= 96,571,392 used= 92,461,488 limit= 9,223,372,036,854,775,807 [MEMORY] idle_size= 0 free_size= 0 [MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0 [MEMORY] hold= 92,461,488 used= 85,227,464 count= 29,533 avg_used= 2,885 block_cnt= 1,349 chunk_cnt= 43 mod=glibc_malloc [MEMORY] hold= 92,461,488 used= 85,227,464 count= 29,533 avg_used= 2,885 mod=SUMMARY [2024-02-19 19:03:32.934276] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:234) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=26] [MEMORY] tenant_id= 500 ctx_id= CO_STACK hold= 69,206,016 used= 65,547,312 limit= 9,223,372,036,854,775,807 [MEMORY] idle_size= 0 free_size= 0 [MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0 [MEMORY] hold= 65,519,616 used= 65,394,744 count= 129 avg_used= 506,936 block_cnt= 129 chunk_cnt= 33 mod=CoStack [MEMORY] hold= 27,696 used= 10,320 count= 242 avg_used= 42 block_cnt= 4 chunk_cnt= 1 mod=Coro [MEMORY] hold= 65,547,312 used= 65,405,064 count= 371 avg_used= 176,293 mod=SUMMARY [2024-02-19 19:03:32.934312] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:234) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=26] [MEMORY] tenant_id= 500 ctx_id= LIBEASY hold= 21,012,480 used= 20,216,352 limit= 9,223,372,036,854,775,807 [MEMORY] idle_size= 0 free_size= 0 [MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0 [MEMORY] hold= 20,216,352 used= 20,069,608 count= 129 avg_used= 155,578 block_cnt= 24 chunk_cnt= 4 mod=OB_TEST2_PCODE [MEMORY] hold= 20,216,352 used= 20,069,608 count= 129 avg_used= 155,578 mod=SUMMARY [2024-02-19 19:03:32.934344] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:234) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=19] [MEMORY] tenant_id= 500 ctx_id= LOGGER_CTX_ID hold= 12,582,912 used= 12,484,608 limit= 9,223,372,036,854,775,807 [MEMORY] idle_size= 0 free_size= 0 [MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0 [MEMORY] hold= 8,323,072 used= 8,318,976 count= 4 avg_used= 2,079,744 block_cnt= 4 chunk_cnt= 4 mod=Logger [MEMORY] hold= 4,161,536 used= 4,159,488 count= 2 avg_used= 2,079,744 block_cnt= 2 chunk_cnt= 2 mod=ErrorLogger [MEMORY] hold= 12,484,608 used= 12,478,464 count= 6 avg_used= 2,079,744 mod=SUMMARY [2024-02-19 19:03:32.934392] ERROR [COMMON] print_tenant_usage_ (ob_tenant_mgr.cpp:438) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=20] the hold of observer tenant is over the system_memory(observer_tenant_hold=1158397952, system_memory=1073741824) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3c12f81 0x3c12c84 0x3c12a99 0x3bf0efb 0xadf6807 0xadf64a9 0xadf6294 0x8a6fee1 0x8a6fbba 0x3a391f1 0xb5fc3ac 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:32.934443] INFO [LIB] operator() (ob_malloc_allocator.cpp:397) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=36] [MEMORY] tenant: 999, limit: 2,147,483,648 hold: 12,582,912 rpc_hold: 0 cache_hold: 0 cache_used: 0 cache_item_count: 0 [MEMORY] ctx_id= DEFAULT_CTX_ID hold_bytes= 2,097,152 limit= 9,223,372,036,854,775,807 [MEMORY] ctx_id= CO_STACK hold_bytes= 10,485,760 limit= 
9,223,372,036,854,775,807 [2024-02-19 19:03:32.934473] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:234) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=17] [MEMORY] tenant_id= 999 ctx_id= DEFAULT_CTX_ID hold= 2,097,152 used= 32,768 limit= 9,223,372,036,854,775,807 [MEMORY] idle_size= 0 free_size= 0 [MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0 [MEMORY] hold= 32,768 used= 24,688 count= 1 avg_used= 24,688 block_cnt= 1 chunk_cnt= 1 mod=thread_factor [MEMORY] hold= 32,768 used= 24,688 count= 1 avg_used= 24,688 mod=SUMMARY [2024-02-19 19:03:32.934627] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:234) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=18] [MEMORY] tenant_id= 999 ctx_id= CO_STACK hold= 10,485,760 used= 9,146,304 limit= 9,223,372,036,854,775,807 [MEMORY] idle_size= 0 free_size= 0 [MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0 [MEMORY] hold= 9,142,272 used= 9,124,848 count= 18 avg_used= 506,936 block_cnt= 18 chunk_cnt= 5 mod=CoStack [MEMORY] hold= 4,032 used= 1,440 count= 36 avg_used= 40 block_cnt= 1 chunk_cnt= 1 mod=Coro [MEMORY] hold= 9,146,304 used= 9,126,288 count= 54 avg_used= 169,005 mod=SUMMARY [2024-02-19 19:03:32.934712] INFO [LIB] operator() (ob_malloc_allocator.cpp:397) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=26] [MEMORY] tenant: 506, limit: 4,294,967,296 hold: 27,262,976 rpc_hold: 0 cache_hold: 0 cache_used: 0 cache_item_count: 0 [MEMORY] ctx_id= DEFAULT_CTX_ID hold_bytes= 2,097,152 limit= 9,223,372,036,854,775,807 [MEMORY] ctx_id= CO_STACK hold_bytes= 25,165,824 limit= 9,223,372,036,854,775,807 [2024-02-19 19:03:32.934745] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:234) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=20] [MEMORY] tenant_id= 506 ctx_id= DEFAULT_CTX_ID hold= 2,097,152 used= 32,768 limit= 9,223,372,036,854,775,807 [MEMORY] idle_size= 0 free_size= 0 [MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0 [MEMORY] hold= 32,768 used= 24,688 count= 1 avg_used= 24,688 block_cnt= 1 chunk_cnt= 1 mod=thread_factor [MEMORY] hold= 32,768 used= 24,688 count= 1 avg_used= 24,688 mod=SUMMARY [2024-02-19 19:03:32.934906] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:234) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=19] [MEMORY] tenant_id= 506 ctx_id= CO_STACK hold= 25,165,824 used= 24,390,192 limit= 9,223,372,036,854,775,807 [MEMORY] idle_size= 0 free_size= 0 [MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0 [MEMORY] hold= 24,379,392 used= 24,332,928 count= 48 avg_used= 506,936 block_cnt= 48 chunk_cnt= 12 mod=CoStack [MEMORY] hold= 10,800 used= 3,840 count= 96 avg_used= 40 block_cnt= 2 chunk_cnt= 1 mod=Coro [MEMORY] hold= 24,390,192 used= 24,336,768 count= 144 avg_used= 169,005 mod=SUMMARY [2024-02-19 19:03:32.934992] INFO [LIB] operator() (ob_malloc_allocator.cpp:397) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=27] [MEMORY] tenant: 507, limit: 1,073,741,824 hold: 12,582,912 rpc_hold: 0 cache_hold: 0 cache_used: 0 cache_item_count: 0 [MEMORY] ctx_id= DEFAULT_CTX_ID hold_bytes= 2,097,152 limit= 9,223,372,036,854,775,807 [MEMORY] ctx_id= CO_STACK hold_bytes= 10,485,760 limit= 9,223,372,036,854,775,807 [2024-02-19 19:03:32.935024] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:234) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=20] [MEMORY] tenant_id= 507 ctx_id= DEFAULT_CTX_ID hold= 2,097,152 used= 32,768 limit= 9,223,372,036,854,775,807 [MEMORY] idle_size= 0 free_size= 0 [MEMORY] wash_related_chunks= 0 
washed_blocks= 0 washed_size= 0 [MEMORY] hold= 32,768 used= 24,688 count= 1 avg_used= 24,688 block_cnt= 1 chunk_cnt= 1 mod=thread_factor [MEMORY] hold= 32,768 used= 24,688 count= 1 avg_used= 24,688 mod=SUMMARY [2024-02-19 19:03:32.935184] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:234) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=21] [MEMORY] tenant_id= 507 ctx_id= CO_STACK hold= 10,485,760 used= 9,146,304 limit= 9,223,372,036,854,775,807 [MEMORY] idle_size= 0 free_size= 0 [MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0 [MEMORY] hold= 9,142,272 used= 9,124,848 count= 18 avg_used= 506,936 block_cnt= 18 chunk_cnt= 5 mod=CoStack [MEMORY] hold= 4,032 used= 1,440 count= 36 avg_used= 40 block_cnt= 1 chunk_cnt= 1 mod=Coro [MEMORY] hold= 9,146,304 used= 9,126,288 count= 54 avg_used= 169,005 mod=SUMMARY [2024-02-19 19:03:32.935273] INFO [LIB] operator() (ob_malloc_allocator.cpp:397) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=23] [MEMORY] tenant: 508, limit: 1,073,741,824 hold: 33,554,432 rpc_hold: 0 cache_hold: 0 cache_used: 0 cache_item_count: 0 [MEMORY] ctx_id= DEFAULT_CTX_ID hold_bytes= 2,097,152 limit= 9,223,372,036,854,775,807 [MEMORY] ctx_id= CO_STACK hold_bytes= 31,457,280 limit= 9,223,372,036,854,775,807 [2024-02-19 19:03:32.935315] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:234) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=28] [MEMORY] tenant_id= 508 ctx_id= DEFAULT_CTX_ID hold= 2,097,152 used= 32,768 limit= 9,223,372,036,854,775,807 [MEMORY] idle_size= 0 free_size= 0 [MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0 [MEMORY] hold= 32,768 used= 24,688 count= 1 avg_used= 24,688 block_cnt= 1 chunk_cnt= 1 mod=thread_factor [MEMORY] hold= 32,768 used= 24,688 count= 1 avg_used= 24,688 mod=SUMMARY [2024-02-19 19:03:32.935476] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:234) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=23] [MEMORY] tenant_id= 508 ctx_id= CO_STACK hold= 31,457,280 used= 29,471,472 limit= 9,223,372,036,854,775,807 [MEMORY] idle_size= 0 free_size= 0 [MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0 [MEMORY] hold= 29,458,432 used= 29,402,288 count= 58 avg_used= 506,936 block_cnt= 58 chunk_cnt= 15 mod=CoStack [MEMORY] hold= 13,040 used= 4,640 count= 116 avg_used= 40 block_cnt= 2 chunk_cnt= 1 mod=Coro [MEMORY] hold= 29,471,472 used= 29,406,928 count= 174 avg_used= 169,005 mod=SUMMARY [2024-02-19 19:03:32.935552] INFO [LIB] operator() (ob_malloc_allocator.cpp:397) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=27] [MEMORY] tenant: 509, limit: 1,073,741,824 hold: 12,582,912 rpc_hold: 0 cache_hold: 0 cache_used: 0 cache_item_count: 0 [MEMORY] ctx_id= DEFAULT_CTX_ID hold_bytes= 2,097,152 limit= 9,223,372,036,854,775,807 [MEMORY] ctx_id= CO_STACK hold_bytes= 10,485,760 limit= 9,223,372,036,854,775,807 [2024-02-19 19:03:32.935585] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:234) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=20] [MEMORY] tenant_id= 509 ctx_id= DEFAULT_CTX_ID hold= 2,097,152 used= 32,768 limit= 9,223,372,036,854,775,807 [MEMORY] idle_size= 0 free_size= 0 [MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0 [MEMORY] hold= 32,768 used= 24,688 count= 1 avg_used= 24,688 block_cnt= 1 chunk_cnt= 1 mod=thread_factor [MEMORY] hold= 32,768 used= 24,688 count= 1 avg_used= 24,688 mod=SUMMARY [2024-02-19 19:03:32.935747] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:234) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] 
[lt=20] [MEMORY] tenant_id= 509 ctx_id= CO_STACK hold= 10,485,760 used= 9,146,304 limit= 9,223,372,036,854,775,807 [MEMORY] idle_size= 0 free_size= 0 [MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0 [MEMORY] hold= 9,142,272 used= 9,124,848 count= 18 avg_used= 506,936 block_cnt= 18 chunk_cnt= 5 mod=CoStack [MEMORY] hold= 4,032 used= 1,440 count= 36 avg_used= 40 block_cnt= 1 chunk_cnt= 1 mod=Coro [MEMORY] hold= 9,146,304 used= 9,126,288 count= 54 avg_used= 169,005 mod=SUMMARY [2024-02-19 19:03:32.935821] INFO [LIB] operator() (ob_malloc_allocator.cpp:397) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=23] [MEMORY] tenant: 510, limit: 2,147,483,648 hold: 12,582,912 rpc_hold: 0 cache_hold: 0 cache_used: 0 cache_item_count: 0 [MEMORY] ctx_id= DEFAULT_CTX_ID hold_bytes= 2,097,152 limit= 9,223,372,036,854,775,807 [MEMORY] ctx_id= CO_STACK hold_bytes= 10,485,760 limit= 9,223,372,036,854,775,807 [2024-02-19 19:03:32.935861] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:234) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=27] [MEMORY] tenant_id= 510 ctx_id= DEFAULT_CTX_ID hold= 2,097,152 used= 32,768 limit= 9,223,372,036,854,775,807 [MEMORY] idle_size= 0 free_size= 0 [MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0 [MEMORY] hold= 32,768 used= 24,688 count= 1 avg_used= 24,688 block_cnt= 1 chunk_cnt= 1 mod=thread_factor [MEMORY] hold= 32,768 used= 24,688 count= 1 avg_used= 24,688 mod=SUMMARY [2024-02-19 19:03:32.936020] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:234) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=21] [MEMORY] tenant_id= 510 ctx_id= CO_STACK hold= 10,485,760 used= 9,146,304 limit= 9,223,372,036,854,775,807 [MEMORY] idle_size= 0 free_size= 0 [MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0 [MEMORY] hold= 9,142,272 used= 9,124,848 count= 18 avg_used= 506,936 block_cnt= 18 chunk_cnt= 5 mod=CoStack [MEMORY] hold= 4,032 used= 1,440 count= 36 avg_used= 40 block_cnt= 1 chunk_cnt= 1 mod=Coro [MEMORY] hold= 9,146,304 used= 9,126,288 count= 54 avg_used= 169,005 mod=SUMMARY [2024-02-19 19:03:32.936122] INFO [LIB] operator() (ob_malloc_allocator.cpp:397) [1106653][ServerGTimer][T1][Y0-0000000000000000-0-0] [lt=24] [MEMORY] tenant: 1, limit: 2,147,483,648 hold: 1,718,894,592 rpc_hold: 0 cache_hold: 4,194,304 cache_used: 4,194,304 cache_item_count: 2 [MEMORY] ctx_id= DEFAULT_CTX_ID hold_bytes= 1,624,522,752 limit= 9,223,372,036,854,775,807 [MEMORY] ctx_id= TRANS_CTX_MGR_ID hold_bytes= 2,097,152 limit= 9,223,372,036,854,775,807 [MEMORY] ctx_id= PLAN_CACHE_CTX_ID hold_bytes= 2,097,152 limit= 9,223,372,036,854,775,807 [MEMORY] ctx_id= CO_STACK hold_bytes= 71,303,168 limit= 9,223,372,036,854,775,807 [MEMORY] ctx_id= META_OBJ_CTX_ID hold_bytes= 10,485,760 limit= 429,496,720 [MEMORY] ctx_id= TX_CALLBACK_CTX_ID hold_bytes= 2,097,152 limit= 9,223,372,036,854,775,807 [MEMORY] ctx_id= LOB_CTX_ID hold_bytes= 2,097,152 limit= 9,223,372,036,854,775,807 [2024-02-19 19:03:32.936273] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:234) [1106653][ServerGTimer][T1][Y0-0000000000000000-0-0] [lt=24] [MEMORY] tenant_id= 1 ctx_id= DEFAULT_CTX_ID hold= 1,624,522,752 used= 827,545,680 limit= 9,223,372,036,854,775,807 [MEMORY] idle_size= 0 free_size= 0 [MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0 [MEMORY] hold= 608,827,648 used= 604,277,696 count= 70,648 avg_used= 8,553 block_cnt= 70,648 chunk_cnt= 663 mod=TX_DATA_TABLE [MEMORY] hold= 41,963,520 used= 41,943,040 count= 1 avg_used= 41,943,040 block_cnt= 1 
chunk_cnt= 1 mod=LogGroupBuffer [MEMORY] hold= 35,639,296 used= 35,613,984 count= 6 avg_used= 5,935,664 block_cnt= 6 chunk_cnt= 6 mod=IoControl [MEMORY] hold= 32,014,336 used= 31,996,160 count= 16 avg_used= 1,999,760 block_cnt= 16 chunk_cnt= 16 mod=MysqlRequesReco [MEMORY] hold= 21,626,432 used= 20,549,230 count= 132 avg_used= 155,675 block_cnt= 132 chunk_cnt= 11 mod=SqlDtl [MEMORY] hold= 8,540,160 used= 8,519,680 count= 1 avg_used= 8,519,680 block_cnt= 1 chunk_cnt= 1 mod=ArcFetchQueue [MEMORY] hold= 8,540,160 used= 8,519,680 count= 1 avg_used= 8,519,680 block_cnt= 1 chunk_cnt= 1 mod=RCSrv [MEMORY] hold= 8,540,160 used= 8,519,680 count= 1 avg_used= 8,519,680 block_cnt= 1 chunk_cnt= 1 mod=RFLTaskQueue [MEMORY] hold= 8,151,040 used= 8,131,072 count= 1 avg_used= 8,131,072 block_cnt= 1 chunk_cnt= 1 mod=LogIOCb [MEMORY] hold= 5,160,960 used= 5,120,384 count= 2 avg_used= 2,560,192 block_cnt= 2 chunk_cnt= 2 mod=XATimeWheel [MEMORY] hold= 5,160,960 used= 5,120,384 count= 2 avg_used= 2,560,192 block_cnt= 2 chunk_cnt= 2 mod=TransTimeWheel [MEMORY] hold= 4,345,856 used= 4,325,376 count= 1 avg_used= 4,325,376 block_cnt= 1 chunk_cnt= 1 mod=TransService [MEMORY] hold= 4,292,608 used= 4,251,648 count= 2 avg_used= 2,125,824 block_cnt= 2 chunk_cnt= 2 mod=LogDIOAligned [MEMORY] hold= 4,161,536 used= 4,159,488 count= 2 avg_used= 2,079,744 block_cnt= 2 chunk_cnt= 2 mod=CACHE_MAP_NODE [MEMORY] hold= 2,580,480 used= 2,560,192 count= 1 avg_used= 2,560,192 block_cnt= 1 chunk_cnt= 1 mod=ElectTimer [MEMORY] hold= 2,580,480 used= 2,560,192 count= 1 avg_used= 2,560,192 block_cnt= 1 chunk_cnt= 1 mod=CoordTimer [MEMORY] hold= 2,580,480 used= 2,560,192 count= 1 avg_used= 2,560,192 block_cnt= 1 chunk_cnt= 1 mod=DupTbLease [MEMORY] hold= 2,580,480 used= 2,560,192 count= 1 avg_used= 2,560,192 block_cnt= 1 chunk_cnt= 1 mod=FrzTrigger [MEMORY] hold= 2,580,480 used= 2,560,192 count= 1 avg_used= 2,560,192 block_cnt= 1 chunk_cnt= 1 mod=DetectorTimer [MEMORY] hold= 2,321,120 used= 2,298,764 count= 40 avg_used= 57,469 block_cnt= 14 chunk_cnt= 3 mod=OmtTenant [MEMORY] hold= 2,158,880 used= 408,600 count= 24,952 avg_used= 16 block_cnt= 264 chunk_cnt= 2 mod=Number [MEMORY] hold= 2,146,304 used= 2,125,824 count= 1 avg_used= 2,125,824 block_cnt= 1 chunk_cnt= 1 mod=DiskIteratorSto [MEMORY] hold= 1,589,248 used= 1,573,024 count= 2 avg_used= 786,512 block_cnt= 2 chunk_cnt= 1 mod=HashBuckLCSta [MEMORY] hold= 1,429,568 used= 1,412,823 count= 6 avg_used= 235,470 block_cnt= 6 chunk_cnt= 1 mod=LOCALDEVICE [MEMORY] hold= 958,464 used= 950,272 count= 1 avg_used= 950,272 block_cnt= 1 chunk_cnt= 1 mod=IOWorkerLQ [MEMORY] hold= 933,888 used= 931,072 count= 1 avg_used= 931,072 block_cnt= 1 chunk_cnt= 1 mod=ArcSenderQueue [MEMORY] hold= 802,816 used= 800,000 count= 1 avg_used= 800,000 block_cnt= 1 chunk_cnt= 1 mod=SqlPlanMon [MEMORY] hold= 794,624 used= 786,512 count= 1 avg_used= 786,512 block_cnt= 1 chunk_cnt= 1 mod=HashBuckPlanCac [MEMORY] hold= 788,736 used= 526,016 count= 41 avg_used= 12,829 block_cnt= 34 chunk_cnt= 1 mod=LogAlloc [MEMORY] hold= 401,408 used= 393,488 count= 1 avg_used= 393,488 block_cnt= 1 chunk_cnt= 1 mod=DagNetIdMap [MEMORY] hold= 393,216 used= 392,448 count= 6 avg_used= 65,408 block_cnt= 6 chunk_cnt= 2 mod=MemTblMgrObj [MEMORY] hold= 368,640 used= 360,736 count= 1 avg_used= 360,736 block_cnt= 1 chunk_cnt= 1 mod=ClogGe [MEMORY] hold= 352,064 used= 344,288 count= 6 avg_used= 57,381 block_cnt= 6 chunk_cnt= 2 mod=PoolFreeList [MEMORY] hold= 235,712 used= 233,856 count= 29 avg_used= 8,064 block_cnt= 29 chunk_cnt= 5 mod=BlockMap 
[MEMORY] hold= 212,736 used= 204,488 count= 2 avg_used= 102,244 block_cnt= 2 chunk_cnt= 2 mod=ColUsagHashMap [MEMORY] hold= 204,800 used= 196,744 count= 1 avg_used= 196,744 block_cnt= 1 chunk_cnt= 1 mod=DagNetMap [MEMORY] hold= 204,800 used= 196,744 count= 1 avg_used= 196,744 block_cnt= 1 chunk_cnt= 1 mod=DagMap [MEMORY] hold= 174,144 used= 149,504 count= 258 avg_used= 579 block_cnt= 24 chunk_cnt= 3 mod=LSMap [MEMORY] hold= 139,264 used= 131,904 count= 1 avg_used= 131,904 block_cnt= 1 chunk_cnt= 1 mod=GtsTaskQueue [MEMORY] hold= 130,944 used= 130,816 count= 2 avg_used= 65,408 block_cnt= 2 chunk_cnt= 2 mod=ResultSet [MEMORY] hold= 81,920 used= 80,288 count= 1 avg_used= 80,288 block_cnt= 1 chunk_cnt= 1 mod=bf_queue [MEMORY] hold= 73,728 used= 70,656 count= 1 avg_used= 70,656 block_cnt= 1 chunk_cnt= 1 mod=FetchLog [MEMORY] hold= 73,664 used= 72,544 count= 2 avg_used= 36,272 block_cnt= 2 chunk_cnt= 2 mod=LSSvr [MEMORY] hold= 65,536 used= 65,408 count= 1 avg_used= 65,408 block_cnt= 1 chunk_cnt= 1 mod=LockMemObj [MEMORY] hold= 65,536 used= 65,408 count= 1 avg_used= 65,408 block_cnt= 1 chunk_cnt= 1 mod=DEVICE_MANAGER [MEMORY] hold= 65,536 used= 65,408 count= 1 avg_used= 65,408 block_cnt= 1 chunk_cnt= 1 mod=FlushMeta [MEMORY] hold= 65,536 used= 65,408 count= 1 avg_used= 65,408 block_cnt= 1 chunk_cnt= 1 mod=TxDataMemObj [MEMORY] hold= 65,536 used= 65,408 count= 1 avg_used= 65,408 block_cnt= 1 chunk_cnt= 1 mod=DDLKvMgrObj [MEMORY] hold= 65,536 used= 65,408 count= 1 avg_used= 65,408 block_cnt= 1 chunk_cnt= 1 mod=TxCtxMemObj [MEMORY] hold= 51,360 used= 768 count= 640 avg_used= 1 block_cnt= 7 chunk_cnt= 1 mod=CharsetUtil [MEMORY] hold= 40,960 used= 35,408 count= 1 avg_used= 35,408 block_cnt= 1 chunk_cnt= 1 mod=ReplaySrv [MEMORY] hold= 40,960 used= 35,408 count= 1 avg_used= 35,408 block_cnt= 1 chunk_cnt= 1 mod=ApplySrv [MEMORY] hold= 32,768 used= 24,688 count= 1 avg_used= 24,688 block_cnt= 1 chunk_cnt= 1 mod=thread_factor [MEMORY] hold= 16,384 used= 9,024 count= 1 avg_used= 9,024 block_cnt= 1 chunk_cnt= 1 mod=LogApplyStatus [MEMORY] hold= 16,384 used= 13,632 count= 1 avg_used= 13,632 block_cnt= 1 chunk_cnt= 1 mod=LogReplayStatus [MEMORY] hold= 15,872 used= 15,744 count= 2 avg_used= 7,872 block_cnt= 2 chunk_cnt= 2 mod=HashNodeLCSta [MEMORY] hold= 8,128 used= 8,064 count= 1 avg_used= 8,064 block_cnt= 1 chunk_cnt= 1 mod=APPLY_STATUS [MEMORY] hold= 8,128 used= 8,064 count= 1 avg_used= 8,064 block_cnt= 1 chunk_cnt= 1 mod=MatchOffsetMap [MEMORY] hold= 8,128 used= 8,064 count= 1 avg_used= 8,064 block_cnt= 1 chunk_cnt= 1 mod=REPLAY_STATUS [MEMORY] hold= 8,128 used= 8,064 count= 1 avg_used= 8,064 block_cnt= 1 chunk_cnt= 1 mod=LogReplayTask [MEMORY] hold= 8,128 used= 8,064 count= 1 avg_used= 8,064 block_cnt= 1 chunk_cnt= 1 mod=FetchLogTask [MEMORY] hold= 8,128 used= 8,064 count= 1 avg_used= 8,064 block_cnt= 1 chunk_cnt= 1 mod=Scheduler [MEMORY] hold= 8,128 used= 8,064 count= 1 avg_used= 8,064 block_cnt= 1 chunk_cnt= 1 mod=LockWaitMgr [MEMORY] hold= 8,128 used= 8,064 count= 1 avg_used= 8,064 block_cnt= 1 chunk_cnt= 1 mod=DeadLock [MEMORY] hold= 7,968 used= 7,896 count= 1 avg_used= 7,896 block_cnt= 1 chunk_cnt= 1 mod=HashNode [MEMORY] hold= 7,936 used= 7,872 count= 1 avg_used= 7,872 block_cnt= 1 chunk_cnt= 1 mod=DagNode [MEMORY] hold= 7,936 used= 7,872 count= 1 avg_used= 7,872 block_cnt= 1 chunk_cnt= 1 mod=HashNodePlanCac [MEMORY] hold= 5,120 used= 5,056 count= 1 avg_used= 5,056 block_cnt= 1 chunk_cnt= 1 mod=PalfEnv [MEMORY] hold= 2,624 used= 2,560 count= 1 avg_used= 2,560 block_cnt= 1 chunk_cnt= 1 mod=PxResMgr 
[MEMORY] hold= 1,120 used= 1,056 count= 1 avg_used= 1,056 block_cnt= 1 chunk_cnt= 1 mod=LOG_HASH_MAP [MEMORY] hold= 624 used= 552 count= 1 avg_used= 552 block_cnt= 1 chunk_cnt= 1 mod=LSFreeze [MEMORY] hold= 576 used= 512 count= 1 avg_used= 512 block_cnt= 1 chunk_cnt= 1 mod=TsSourceInfo [MEMORY] hold= 480 used= 408 count= 1 avg_used= 408 block_cnt= 1 chunk_cnt= 1 mod=Election [MEMORY] hold= 192 used= 64 count= 2 avg_used= 32 block_cnt= 2 chunk_cnt= 1 mod=PalfFSCbNode [MEMORY] hold= 160 used= 96 count= 1 avg_used= 96 block_cnt= 1 chunk_cnt= 1 mod=Coordinator [MEMORY] hold= 128 used= 32 count= 1 avg_used= 32 block_cnt= 1 chunk_cnt= 1 mod=PalfRCCbNode [MEMORY] hold= 128 used= 32 count= 1 avg_used= 32 block_cnt= 1 chunk_cnt= 1 mod=RebuildCbNode [MEMORY] hold= 827,545,680 used= 819,245,513 count= 96,854 avg_used= 8,458 mod=SUMMARY [2024-02-19 19:03:32.936366] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:234) [1106653][ServerGTimer][T1][Y0-0000000000000000-0-0] [lt=54] [MEMORY] tenant_id= 1 ctx_id= TRANS_CTX_MGR_ID hold= 2,097,152 used= 401,408 limit= 9,223,372,036,854,775,807 [MEMORY] idle_size= 0 free_size= 0 [MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0 [MEMORY] hold= 401,408 used= 394,688 count= 1 avg_used= 394,688 block_cnt= 1 chunk_cnt= 1 mod=PartTranCtxMgr [MEMORY] hold= 401,408 used= 394,688 count= 1 avg_used= 394,688 mod=SUMMARY [2024-02-19 19:03:32.936467] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:234) [1106653][ServerGTimer][T1][Y0-0000000000000000-0-0] [lt=21] [MEMORY] tenant_id= 1 ctx_id= PLAN_CACHE_CTX_ID hold= 2,097,152 used= 154,816 limit= 9,223,372,036,854,775,807 [MEMORY] idle_size= 0 free_size= 0 [MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0 [MEMORY] hold= 138,560 used= 130,144 count= 13 avg_used= 10,011 block_cnt= 13 chunk_cnt= 1 mod=SqlPhyPlan [MEMORY] hold= 16,256 used= 16,128 count= 2 avg_used= 8,064 block_cnt= 2 chunk_cnt= 1 mod=SqlPlanCache [MEMORY] hold= 154,816 used= 146,272 count= 15 avg_used= 9,751 mod=SUMMARY [2024-02-19 19:03:32.936519] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:234) [1106653][ServerGTimer][T1][Y0-0000000000000000-0-0] [lt=21] [MEMORY] tenant_id= 1 ctx_id= CO_STACK hold= 71,303,168 used= 69,105,296 limit= 9,223,372,036,854,775,807 [MEMORY] idle_size= 0 free_size= 0 [MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0 [MEMORY] hold= 69,074,944 used= 68,943,296 count= 136 avg_used= 506,936 block_cnt= 136 chunk_cnt= 34 mod=CoStack [MEMORY] hold= 30,352 used= 10,880 count= 270 avg_used= 40 block_cnt= 4 chunk_cnt= 1 mod=Coro [MEMORY] hold= 69,105,296 used= 68,954,176 count= 406 avg_used= 169,837 mod=SUMMARY [2024-02-19 19:03:32.936572] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:234) [1106653][ServerGTimer][T1][Y0-0000000000000000-0-0] [lt=31] [MEMORY] tenant_id= 1 ctx_id= KVSTORE_CACHE_ID hold= 4,194,304 used= 4,194,304 limit= 9,223,372,036,854,775,807 [MEMORY] idle_size= 0 free_size= 0 [MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0 [MEMORY] hold= 4,194,304 used= 0 count= 2 avg_used= 0 block_cnt= 0 chunk_cnt= 0 mod=KvstorCacheMb [MEMORY] hold= 4,194,304 used= 0 count= 2 avg_used= 0 mod=SUMMARY [2024-02-19 19:03:32.936610] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:234) [1106653][ServerGTimer][T1][Y0-0000000000000000-0-0] [lt=23] [MEMORY] tenant_id= 1 ctx_id= META_OBJ_CTX_ID hold= 10,485,760 used= 8,364,672 limit= 429,496,720 [MEMORY] idle_size= 0 free_size= 0 [MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0 [MEMORY] hold= 3,097,216 
used= 3,058,518 count= 376 avg_used= 8,134 block_cnt= 376 chunk_cnt= 4 mod=MetaAllocator [MEMORY] hold= 2,490,368 used= 2,485,504 count= 38 avg_used= 65,408 block_cnt= 38 chunk_cnt= 4 mod=TabletObj [MEMORY] hold= 2,293,760 used= 2,289,280 count= 35 avg_used= 65,408 block_cnt= 35 chunk_cnt= 5 mod=SSTblObj [MEMORY] hold= 483,328 used= 480,064 count= 2 avg_used= 240,032 block_cnt= 2 chunk_cnt= 1 mod=PoolFreeList [MEMORY] hold= 8,364,672 used= 8,313,366 count= 451 avg_used= 18,433 mod=SUMMARY
[2024-02-19 19:03:32.936643] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:234) [1106653][ServerGTimer][T1][Y0-0000000000000000-0-0] [lt=21] [MEMORY] tenant_id= 1 ctx_id= TX_CALLBACK_CTX_ID hold= 2,097,152 used= 24,384 limit= 9,223,372,036,854,775,807 [MEMORY] idle_size= 0 free_size= 0 [MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0 [MEMORY] hold= 24,384 used= 24,192 count= 3 avg_used= 8,064 block_cnt= 3 chunk_cnt= 1 mod=MemtableCallbac [MEMORY] hold= 24,384 used= 24,192 count= 3 avg_used= 8,064 mod=SUMMARY
[2024-02-19 19:03:32.936672] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:234) [1106653][ServerGTimer][T1][Y0-0000000000000000-0-0] [lt=19] [MEMORY] tenant_id= 1 ctx_id= LOB_CTX_ID hold= 2,097,152 used= 65,536 limit= 9,223,372,036,854,775,807 [MEMORY] idle_size= 0 free_size= 0 [MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0 [MEMORY] hold= 65,536 used= 65,408 count= 1 avg_used= 65,408 block_cnt= 1 chunk_cnt= 1 mod=LobAllocator [MEMORY] hold= 65,536 used= 65,408 count= 1 avg_used= 65,408 mod=SUMMARY
[2024-02-19 19:03:32.936704] INFO [STORAGE] print_tenant_usage (ob_tenant_memory_printer.cpp:125) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=19] ====== tenants memory info ====== === TENANTS MEMORY INFO === all_tenants_memstore_used= 0
[TENANT_MEMORY] tenant_id= 512 mem_tenant_limit= 1,073,741,824 mem_tenant_hold= 12,582,912 kv_cache_mem= 0
[TENANT_MEMORY] tenant_id= 500 mem_tenant_limit= 9,223,372,036,854,775,807 mem_tenant_hold= 1,158,397,952 kv_cache_mem= 0
[TENANT_MEMORY] tenant_id= 999 mem_tenant_limit= 2,147,483,648 mem_tenant_hold= 12,582,912 kv_cache_mem= 0
[TENANT_MEMORY] tenant_id= 506 mem_tenant_limit= 4,294,967,296 mem_tenant_hold= 27,262,976 kv_cache_mem= 0
[TENANT_MEMORY] tenant_id= 507 mem_tenant_limit= 1,073,741,824 mem_tenant_hold= 12,582,912 kv_cache_mem= 0
[TENANT_MEMORY] tenant_id= 508 mem_tenant_limit= 1,073,741,824 mem_tenant_hold= 33,554,432 kv_cache_mem= 0
[TENANT_MEMORY] tenant_id= 509 mem_tenant_limit= 1,073,741,824 mem_tenant_hold= 12,582,912 kv_cache_mem= 0
[TENANT_MEMORY] tenant_id= 510 mem_tenant_limit= 2,147,483,648 mem_tenant_hold= 12,582,912 kv_cache_mem= 0
[TENANT_MEMORY] tenant_id= 1 active_memstore_used= 0 total_memstore_used= 0 total_memstore_hold= 0 memstore_freeze_trigger_limit= 86,556,660 memstore_limit= 1,073,741,800 mem_tenant_limit= 2,147,483,648 mem_tenant_hold= 1,718,894,592 kv_cache_mem= 4,194,304 max_mem_memstore_can_get_now= 432,783,360
[2024-02-19 19:03:32.936808] INFO [STORAGE] print_tenant_usage (ob_tenant_memory_printer.cpp:157) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=37] [CHUNK_MGR] free=4 pushes=64826 pops=64822 limit= 6,442,450,944 hold= 3,017,691,136 total_hold= 3,158,310,912 used= 3,009,302,528 freelist_hold= 8,388,608 maps= 18,316 unmaps= 17,225 large_maps= 17,306 large_unmaps= 17,225 memalign=0 virtual_memory_used= 4,172,034,048
[2024-02-19 19:03:32.936856] INFO dump (ob_concurrency_objpool.cpp:852) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=30] [MEMORY] dump object freelist statistic:
[MEMORY] allocated | in-use | type size | cache type | free list name
[MEMORY] --------------------|--------------------|------------|------------|----------------------------------
[MEMORY] 112,064 | 0 | 28,016 | reclaim | N9oceanbase3sql16ObSQLSessionInfoE
[MEMORY] 7,920 | 80 | 80 | reclaim | N9oceanbase6common12LinkHashNodeINS_4palf5LSKeyEEE
[MEMORY] 112,704 | 28,176 | 28,176 | reclaim | N9oceanbase4palf14PalfHandleImplE
[MEMORY] 23,760 | 576 | 144 | reclaim | ObThreadCache
[MEMORY] 8,192 | 768 | 256 | global | ObjFreeList
[2024-02-19 19:03:32.937950] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1106653][ServerGTimer][T1][YB42AC0103F2-000611B9224784A0-0-0] [lt=19] table not exist(tenant_id=1, database_id=201001, table_name=__all_server, ret=-5019)
[2024-02-19 19:03:32.938012] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1106653][ServerGTimer][T1][YB42AC0103F2-000611B9224784A0-0-0] [lt=60] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_server, ret=-5019)
[2024-02-19 19:03:32.938925] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1106653][ServerGTimer][T1][YB42AC0103F2-000611B9224784A0-0-0] [lt=909] fail to resolve table relation recursively(tenant_id=1, ret=-5019)
[2024-02-19 19:03:32.938948] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1106653][ServerGTimer][T1][YB42AC0103F2-000611B9224784A0-0-0] [lt=22] resolve table relation factor failed(ret=-5019, table_name=__all_server)
[2024-02-19 19:03:32.938972] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1106653][ServerGTimer][T1][YB42AC0103F2-000611B9224784A0-0-0] [lt=18] fail to resolve table(ret=-5019)
[2024-02-19 19:03:32.938989] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1106653][ServerGTimer][T1][YB42AC0103F2-000611B9224784A0-0-0] [lt=17] fail to resolve sys view(ret=-5019)
[2024-02-19 19:03:32.939010] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1106653][ServerGTimer][T1][YB42AC0103F2-000611B9224784A0-0-0] [lt=15] Table 'oceanbase.__all_server' doesn't exist
[2024-02-19 19:03:32.939027] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1106653][ServerGTimer][T1][YB42AC0103F2-000611B9224784A0-0-0] [lt=16] resolve base or alias table factor failed(ret=-5019)
[2024-02-19 19:03:32.939061] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1106653][ServerGTimer][T1][YB42AC0103F2-000611B9224784A0-0-0] [lt=33] resolve basic table failed(ret=-5019)
[2024-02-19 19:03:32.939079] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1106653][ServerGTimer][T1][YB42AC0103F2-000611B9224784A0-0-0] [lt=16] fail to exec resolve_table(*table_node, table_item)(ret=-5019)
[2024-02-19 19:03:32.939095] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1106653][ServerGTimer][T1][YB42AC0103F2-000611B9224784A0-0-0] [lt=16] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019)
[2024-02-19 19:03:32.939111] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1106653][ServerGTimer][T1][YB42AC0103F2-000611B9224784A0-0-0] [lt=15] resolve normal query failed(ret=-5019)
[2024-02-19 19:03:32.939128] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1106653][ServerGTimer][T1][YB42AC0103F2-000611B9224784A0-0-0] [lt=15] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073)
[2024-02-19 19:03:32.939154] WARN [SQL] generate_stmt
(ob_sql.cpp:2167) [1106653][ServerGTimer][T1][YB42AC0103F2-000611B9224784A0-0-0] [lt=17] failed to resolve(ret=-5019) [2024-02-19 19:03:32.939172] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1106653][ServerGTimer][T1][YB42AC0103F2-000611B9224784A0-0-0] [lt=17] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:32.939192] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1106653][ServerGTimer][T1][YB42AC0103F2-000611B9224784A0-0-0] [lt=16] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:32.939209] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1106653][ServerGTimer][T1][YB42AC0103F2-000611B9224784A0-0-0] [lt=17] fail to handle physical plan(ret=-5019) [2024-02-19 19:03:32.939229] WARN [SQL] stmt_query (ob_sql.cpp:175) [1106653][ServerGTimer][T1][YB42AC0103F2-000611B9224784A0-0-0] [lt=16] fail to handle text query(stmt=SELECT time_to_usec(gmt_modified) AS last_hb_time, id, zone, svr_ip, svr_port, inner_port, status, with_rootserver, block_migrate_in_time, build_version, stop_time, start_service_time, with_partition FROM __all_server, ret=-5019) [2024-02-19 19:03:32.939249] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1106653][ServerGTimer][T1][YB42AC0103F2-000611B9224784A0-0-0] [lt=18] executor execute failed(ret=-5019) [2024-02-19 19:03:32.939268] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1106653][ServerGTimer][T1][YB42AC0103F2-000611B9224784A0-0-0] [lt=18] execute failed(ret=-5019, executor={ObIExecutor:, sql:"SELECT time_to_usec(gmt_modified) AS last_hb_time, id, zone, svr_ip, svr_port, inner_port, status, with_rootserver, block_migrate_in_time, build_version, stop_time, start_service_time, with_partition FROM __all_server"}, retry_cnt=0) [2024-02-19 19:03:32.939304] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1106653][ServerGTimer][T1][YB42AC0103F2-000611B9224784A0-0-0] [lt=24] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-02-19 19:03:32.939332] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1106653][ServerGTimer][T1][YB42AC0103F2-000611B9224784A0-0-0] [lt=28] result set close failed(ret=-5019) [2024-02-19 19:03:32.939352] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1106653][ServerGTimer][T1][YB42AC0103F2-000611B9224784A0-0-0] [lt=18] result set close failed(ret=-5019) [2024-02-19 19:03:32.939370] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1106653][ServerGTimer][T1][YB42AC0103F2-000611B9224784A0-0-0] [lt=16] failed to close result(close_ret=-5019, ret=-5019) [2024-02-19 19:03:32.939448] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1106653][ServerGTimer][T1][YB42AC0103F2-000611B9224784A0-0-0] [lt=16] failed to process record(executor={ObIExecutor:, sql:"SELECT time_to_usec(gmt_modified) AS last_hb_time, id, zone, svr_ip, svr_port, inner_port, status, with_rootserver, block_migrate_in_time, build_version, stop_time, start_service_time, with_partition FROM __all_server"}, record_ret=-5019, ret=-5019) [2024-02-19 19:03:32.939489] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1106653][ServerGTimer][T0][YB42AC0103F2-000611B9224784A0-0-0] [lt=36] failed to process final(executor={ObIExecutor:, sql:"SELECT time_to_usec(gmt_modified) AS last_hb_time, id, zone, svr_ip, svr_port, inner_port, status, with_rootserver, block_migrate_in_time, 
build_version, stop_time, start_service_time, with_partition FROM __all_server"}, aret=-5019, ret=-5019) [2024-02-19 19:03:32.939514] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=21] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT time_to_usec(gmt_modified) AS last_hb_time, id, zone, svr_ip, svr_port, inner_port, status, with_rootserver, block_migrate_in_time, build_version, stop_time, start_service_time, with_partition FROM __all_server) [2024-02-19 19:03:32.939533] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=17] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-02-19 19:03:32.939548] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=15] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1) [2024-02-19 19:03:32.939564] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=14] query failed(ret=-5019, conn=0x7fdcd7d06050, start=1708340612937716, sql=SELECT time_to_usec(gmt_modified) AS last_hb_time, id, zone, svr_ip, svr_port, inner_port, status, with_rootserver, block_migrate_in_time, build_version, stop_time, start_service_time, with_partition FROM __all_server) [2024-02-19 19:03:32.939582] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=17] read failed(ret=-5019) [2024-02-19 19:03:32.939693] WARN [SHARE] refresh (ob_all_server_tracer.cpp:159) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=24] fail to get server status(ret=-5019) [2024-02-19 19:03:32.939713] WARN [SHARE] runTimerTask (ob_all_server_tracer.cpp:210) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=19] fail to refresh all server map(ret=-5019) [2024-02-19 19:03:32.939909] WARN [SERVER] get_network_speed_from_sysfs (ob_server.cpp:2113) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=14] get invalid Ethernet speed, use default(devname="ens18") [2024-02-19 19:03:32.939933] WARN [SERVER] runTimerTask (ob_server.cpp:2632) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=23] ObRefreshNetworkSpeedTask reload bandwidth throttle limit failed(ret=-4002, ret="OB_INVALID_ARGUMENT") [2024-02-19 19:03:32.940546] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=13] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:32.940575] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=27] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340612940537}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:32.940594] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=17] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340612940537}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:32.940608] WARN [STORAGE.TRANS] operator() (ob_ts_mgr.h:225) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=11] refresh gts failed(ret=-4038, ret="OB_NOT_MASTER", gts_tenant_info={v:1}) [2024-02-19 19:03:32.940620] INFO [STORAGE.TRANS] operator() (ob_ts_mgr.h:229) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] 
[lt=11] refresh gts functor(ret=-4038, ret="OB_NOT_MASTER", gts_tenant_info={v:1}) [2024-02-19 19:03:32.943738] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=36] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:32.943790] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=52] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:32.954134] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=33] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:32.954173] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=41] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:32.964359] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=36] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:32.964395] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, 
maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:32.974749] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=28] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:32.974797] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=52] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:32.976252] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=27] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340612976229}) [2024-02-19 19:03:32.976284] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=33] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340612940537}}) [2024-02-19 19:03:32.979672] INFO [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:738) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] [SELF_CHECK] current server is WRS leader, need start CLUSTER weak read service(tenant_id=1, serve_leader_epoch=0, cur_leader_epoch=138, cluster_service_tablet_id_={id:226}, in_service=false, can_update_version=false, start_service_tstamp_=0, error_count_for_change_leader_=0, last_error_tstamp_for_change_leader_=0) [2024-02-19 19:03:32.979714] INFO [STORAGE.TRANS] start_service (ob_tenant_weak_read_cluster_service.cpp:336) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=32] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] begin start service(tenant_id=1, is_in_service()=false, can_update_version=false) [2024-02-19 19:03:32.979724] INFO [STORAGE.TRANS] start_service (ob_tenant_weak_read_cluster_service.cpp:338) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10] start TenantWeakReadClusterService(tenant_id=1) [2024-02-19 19:03:32.980619] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797ED-0-0] [lt=6] table not exist(tenant_id=1, database_id=201001, table_name=__all_weak_read_service, ret=-5019) [2024-02-19 19:03:32.980652] WARN [SQL.RESV] 
resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797ED-0-0] [lt=31] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_weak_read_service, ret=-5019) [2024-02-19 19:03:32.980666] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797ED-0-0] [lt=11] fail to resolve table relation recursively(tenant_id=1, ret=-5019) [2024-02-19 19:03:32.980678] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797ED-0-0] [lt=10] resolve table relation factor failed(ret=-5019, table_name=__all_weak_read_service) [2024-02-19 19:03:32.980706] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797ED-0-0] [lt=24] fail to resolve table(ret=-5019) [2024-02-19 19:03:32.980715] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797ED-0-0] [lt=9] fail to resolve sys view(ret=-5019) [2024-02-19 19:03:32.980729] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797ED-0-0] [lt=8] Table 'oceanbase.__all_weak_read_service' doesn't exist [2024-02-19 19:03:32.980738] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797ED-0-0] [lt=8] resolve base or alias table factor failed(ret=-5019) [2024-02-19 19:03:32.980762] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797ED-0-0] [lt=22] resolve basic table failed(ret=-5019) [2024-02-19 19:03:32.980773] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797ED-0-0] [lt=9] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-02-19 19:03:32.980782] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797ED-0-0] [lt=9] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-02-19 19:03:32.980792] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797ED-0-0] [lt=8] resolve normal query failed(ret=-5019) [2024-02-19 19:03:32.980803] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797ED-0-0] [lt=9] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073) [2024-02-19 19:03:32.980838] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797ED-0-0] [lt=26] failed to resolve(ret=-5019) [2024-02-19 19:03:32.980849] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797ED-0-0] [lt=10] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:32.980862] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797ED-0-0] [lt=9] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:32.980893] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797ED-0-0] [lt=30] fail to handle physical plan(ret=-5019) 
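NOTE: Every record in this stretch carries ret=-5019 (OB_TABLE_NOT_EXIST). The resolver cannot find oceanbase.__all_server above, and oceanbase.__all_weak_read_service here, in tenant 1 (database_id=201001), so each layer from the resolver down through ObMySQLProxy::read re-logs the same code and the retry controller gives up (need_retry=false); the records that follow are the tail of the same cascade for the weak-read query. Because these are core inner tables, the likely cause is that the sys tenant's schema is not readable yet (bootstrap or upgrade unfinished, or replay stalled by the full clog disk reported in the PALF records below) rather than a genuinely dropped table. A minimal visibility check, as a sketch only: it assumes a 4.x-style deployment where the inner table __all_virtual_table exists and you can connect to the sys tenant.

    -- Sketch: confirm whether the inner tables are visible to tenant 1 at all.
    -- Assumes sys-tenant access and the 4.x inner table __all_virtual_table.
    SELECT tenant_id, table_id, table_name
    FROM oceanbase.__all_virtual_table
    WHERE table_name IN ('__all_server', '__all_weak_read_service');

If no rows come back here either, the schema service itself has not loaded, and chasing the individual failing queries is pointless until that is fixed.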
[2024-02-19 19:03:32.980906] WARN [SQL] stmt_query (ob_sql.cpp:175) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797ED-0-0] [lt=10] fail to handle text query(stmt=select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = '', ret=-5019) [2024-02-19 19:03:32.980918] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797ED-0-0] [lt=10] executor execute failed(ret=-5019) [2024-02-19 19:03:32.980948] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797ED-0-0] [lt=29] execute failed(ret=-5019, executor={ObIExecutor:, sql:"select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = ''"}, retry_cnt=0) [2024-02-19 19:03:32.980968] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797ED-0-0] [lt=14] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-02-19 19:03:32.980986] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797ED-0-0] [lt=15] result set close failed(ret=-5019) [2024-02-19 19:03:32.980996] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797ED-0-0] [lt=8] result set close failed(ret=-5019) [2024-02-19 19:03:32.981003] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797ED-0-0] [lt=7] failed to close result(close_ret=-5019, ret=-5019) [2024-02-19 19:03:32.981040] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797ED-0-0] [lt=8] failed to process record(executor={ObIExecutor:, sql:"select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = ''"}, record_ret=-5019, ret=-5019) [2024-02-19 19:03:32.981057] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797ED-0-0] [lt=16] failed to process final(executor={ObIExecutor:, sql:"select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = ''"}, aret=-5019, ret=-5019) [2024-02-19 19:03:32.981070] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10] execute sql failed(ret=-5019, tenant_id=1, sql=select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = '') [2024-02-19 19:03:32.981083] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-02-19 19:03:32.981093] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1) [2024-02-19 19:03:32.981104] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10] query failed(ret=-5019, conn=0x7fdd189bc050, start=1708340612980390, sql=select min_version, max_version from __all_weak_read_service 
where tenant_id = 1 and level_id = 0 and level_value = '') [2024-02-19 19:03:32.981118] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13] read failed(ret=-5019) [2024-02-19 19:03:32.981129] WARN [STORAGE.TRANS] query_cluster_version_range_ (ob_tenant_weak_read_cluster_service.cpp:192) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9] execute sql read fail(ret=-5019, ret="OB_TABLE_NOT_EXIST", exec_tenant_id=1, tenant_id=1, sql=select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = '') [2024-02-19 19:03:32.981221] WARN [STORAGE.TRANS] start_service (ob_tenant_weak_read_cluster_service.cpp:367) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15] query cluster version range from WRS table fail(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-02-19 19:03:32.981233] INFO [STORAGE.TRANS] start_service (ob_tenant_weak_read_cluster_service.cpp:415) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] start service done(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, in_service=false, leader_epoch=0, current_version=0, delta=1708340612981230, min_version=0, max_version=0, max_stale_time=5000000000, all_valid_server_count=0, total_time=1530, wlock_time=30, check_leader_time=2, query_version_time=0, persist_version_time=0) [2024-02-19 19:03:32.981253] WARN [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:781) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=18] start CLUSTER weak read service fail(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1) [2024-02-19 19:03:32.981271] INFO [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:791) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] [SELF_CHECK] done(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, need_start_service=true, need_stop_service=false, need_change_leader=false, is_in_service()=false, can_update_version=false, cur_leader_epoch=138, start_service_tstamp_=0, error_count_for_change_leader_=0, last_error_tstamp_for_change_leader_=0) [2024-02-19 19:03:32.981333] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14] get wrs ts(ls_id={id:1}, delta_ns=-1706042771802314819, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807) [2024-02-19 19:03:32.981359] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1) [2024-02-19 19:03:32.984979] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=28] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, 
log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:32.985026] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=49] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:32.995173] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=40] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:32.995219] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=49] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.006104] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=36] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:33.006156] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=57] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.016307] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=40] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, 
log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:33.016349] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=44] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:33.026553] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=36] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:33.026605] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=56] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:33.036789] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=38] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:33.036837] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=50] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:33.041235] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=8] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0)
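The recycle_blocks_/try_recycle_blocks pair that recurs every ~10 ms above is self-consistent with the disk options it prints: with log_disk_size = 2048 MB, the 80% recycling threshold works out to 2048 * 0.80 = 1638 MB (the logged warn_size) and the 95% stop-writing limit to 2048 * 0.95 = 1945 MB (the logged limit_size); used_size has already reached 1945 MB, and nothing is recyclable because, as the WARN text says, no PalfHandleImpl base LSN has advanced. A minimal sketch of how one might verify the thresholds and buy headroom, assuming a working MySQL-mode session on an OceanBase 4.x cluster; the parameter names are taken from the entries themselves, while sys_unit_config is a hypothetical placeholder for the tenant's actual resource unit name:

    -- Recompute the thresholds PALF prints (matches warn_size/limit_size above).
    SELECT 2048 * 0.80 AS warn_size_mb, 2048 * 0.95 AS limit_size_mb;
    -- Inspect the two utilization parameters named in the entries.
    SHOW PARAMETERS LIKE 'log_disk_utilization%';
    -- Possible remedy: grant the tenant a larger log disk (unit name is a placeholder).
    ALTER RESOURCE UNIT sys_unit_config LOG_DISK_SIZE = '4G';

Enlarging the disk only buys room; nothing is reclaimed until checkpoints let the base LSN advance again.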
[2024-02-19 19:03:33.041269] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=35] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340613041220}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:33.041293] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=22] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340613041220}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:33.047007] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=38] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:33.047068] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=63] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:33.054703] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1009) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=136] Wash compute wash size(is_wash_valid=true, sys_total_wash_size=1386106880, global_cache_size=4161536, tenant_max_wash_size=4161536, tenant_min_wash_size=4161536, tenant_ids_=[512, 500, 999, 506, 507, 508, 509, 510, 1])
[2024-02-19 19:03:33.054826] INFO [COMMON] wash (ob_kvcache_store.cpp:343) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=42] Wash time detail, (compute_wash_size_time=151, refresh_score_time=65, wash_time=17)
[2024-02-19 19:03:33.057461] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=38] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:33.057513] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=53] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93
0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.067722] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=33] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:33.067771] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=51] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.076261] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=17] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340613076241}) [2024-02-19 19:03:33.076292] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=33] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340613041220}}) [2024-02-19 19:03:33.077909] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=36] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:33.077950] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=41] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.079748] WARN [STORAGE.TRANS] post_cluster_heartbeat_rpc_ (ob_tenant_weak_read_service.cpp:797) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=25] get cluster service master fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, cluster_service_tablet_id={id:226}) [2024-02-19 
19:03:33.079776] WARN [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:438) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=29] tenant weak read service cluster heartbeat RPC fail(rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.1.3.242:2882", cluster_service_tablet_id={id:226}) [2024-02-19 19:03:33.079807] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=18] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340613079795}) [2024-02-19 19:03:33.079827] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:756) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version=1708336686671726824, valid_part_count=1, total_part_count=1, generate_timestamp=1708340613079729) [2024-02-19 19:03:33.079841] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:766) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1708340612879649, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-02-19 19:03:33.079920] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12] get wrs ts(ls_id={id:1}, delta_ns=-1706042771802216043, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807) [2024-02-19 19:03:33.079935] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1) [2024-02-19 19:03:33.085399] WARN [COORDINATOR] get_ls_election_reference_info (ob_leader_coordinator.cpp:138) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC7E-0-0] [lt=129] can not find this ls_id in all_ls_election_reference_info_(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1}, all_ls_election_reference_info=[]) [2024-02-19 19:03:33.085442] WARN [COORDINATOR] refresh_ (election_priority_v1.cpp:144) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC7E-0-0] [lt=43] fail to get ls election reference info(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, *this={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}) [2024-02-19 19:03:33.085468] WARN [COORDINATOR] operator() (election_priority_impl.cpp:246) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC7E-0-0] [lt=24] refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id_={id:1}, element={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, 
is_manual_leader:false, zone_priority:9223372036854775807}) [2024-02-19 19:03:33.085488] WARN iterate (ob_tuple.h:272) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC7E-0-0] [lt=17] assign element failed(ret=-4018, std::get(tuple)={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}) [2024-02-19 19:03:33.085503] WARN [COORDINATOR] refresh (election_priority_impl.cpp:261) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC7E-0-0] [lt=15] refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1}, *this={priority:{is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}}) [2024-02-19 19:03:33.088627] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=60] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:33.088675] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=48] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.091783] INFO [COMMON] clean_garbage_node (ob_kvcache_map.cpp:645) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=25] Cache wash clean map node details(ret=0, clean_node_count=0, clean_time=36927, clean_start_pos=817882, clean_num=31457) [2024-02-19 19:03:33.098845] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=42] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:33.098889] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=46] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, 
maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.109119] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=135] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:33.109167] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=49] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.118424] INFO [SERVER] runTimerTask (ob_eliminate_task.cpp:199) [1107573][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=28] sql audit evict task end(evict_high_mem_level=32212254, evict_high_size_level=90000, evict_batch_count=0, elapse_time=1, size_used=14904, mem_used=31196160) [2024-02-19 19:03:33.119292] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=36] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:33.119338] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=46] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.128580] INFO [COMMON] replace_fragment_node (ob_kvcache_map.cpp:695) [1106689][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=31] Cache replace map node details(ret=0, replace_node_count=0, replace_time=17094, replace_start_pos=1148144, replace_num=15728) [2024-02-19 19:03:33.129637] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=36] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been 
advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:33.129676] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=39] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.139825] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=54] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:33.139871] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=46] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.142432] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=30] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:33.142461] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=31] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340613142421}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:33.142487] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=22] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340613142421}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:33.150233] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, 
log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:33.150285] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=55] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.160492] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:33.160545] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=55] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.170684] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=39] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:33.170729] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=47] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.176829] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=23] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, 
ls_id:{id:1}, add_timestamp:1708340613176809}) [2024-02-19 19:03:33.176866] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=40] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340613142421}}) [2024-02-19 19:03:33.180148] WARN [STORAGE.TRANS] post_cluster_heartbeat_rpc_ (ob_tenant_weak_read_service.cpp:797) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=19] get cluster service master fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, cluster_service_tablet_id={id:226}) [2024-02-19 19:03:33.180280] WARN [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:438) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=132] tenant weak read service cluster heartbeat RPC fail(rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.1.3.242:2882", cluster_service_tablet_id={id:226}) [2024-02-19 19:03:33.180302] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:756) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version=1708336686671726824, valid_part_count=1, total_part_count=1, generate_timestamp=1708340613180130) [2024-02-19 19:03:33.180313] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:766) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1708340613079853, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-02-19 19:03:33.180373] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8] get wrs ts(ls_id={id:1}, delta_ns=-1706042771802115685, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807) [2024-02-19 19:03:33.180394] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1) [2024-02-19 19:03:33.180900] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:33.180981] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=81] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, 
maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.191111] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:33.191168] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=57] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.201358] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:33.201409] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=52] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.211287] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D7-0-0] [lt=9] table not exist(tenant_id=1, database_id=201001, table_name=__all_sys_parameter, ret=-5019) [2024-02-19 19:03:33.211311] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D7-0-0] [lt=25] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_sys_parameter, ret=-5019) [2024-02-19 19:03:33.211321] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D7-0-0] [lt=8] fail to resolve table relation recursively(tenant_id=1, ret=-5019) [2024-02-19 19:03:33.211329] WARN [SQL.RESV] 
resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D7-0-0] [lt=8] resolve table relation factor failed(ret=-5019, table_name=__all_sys_parameter) [2024-02-19 19:03:33.211339] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D7-0-0] [lt=7] fail to resolve table(ret=-5019) [2024-02-19 19:03:33.211346] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D7-0-0] [lt=6] fail to resolve sys view(ret=-5019) [2024-02-19 19:03:33.211357] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D7-0-0] [lt=6] Table 'oceanbase.__all_sys_parameter' doesn't exist [2024-02-19 19:03:33.211364] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D7-0-0] [lt=7] resolve base or alias table factor failed(ret=-5019) [2024-02-19 19:03:33.211371] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D7-0-0] [lt=6] resolve basic table failed(ret=-5019) [2024-02-19 19:03:33.211378] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D7-0-0] [lt=6] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-02-19 19:03:33.211384] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D7-0-0] [lt=5] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-02-19 19:03:33.211392] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D7-0-0] [lt=7] resolve normal query failed(ret=-5019) [2024-02-19 19:03:33.211399] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D7-0-0] [lt=6] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073) [2024-02-19 19:03:33.211413] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D7-0-0] [lt=7] failed to resolve(ret=-5019) [2024-02-19 19:03:33.211422] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D7-0-0] [lt=8] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:33.211431] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D7-0-0] [lt=7] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:33.211438] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D7-0-0] [lt=6] fail to handle physical plan(ret=-5019) [2024-02-19 19:03:33.211447] WARN [SQL] stmt_query (ob_sql.cpp:175) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D7-0-0] [lt=6] fail to handle text query(stmt=select config_version, zone, svr_type, svr_ip, svr_port, name, data_type, value, info, section, scope, source, edit_level from __all_sys_parameter, ret=-5019) [2024-02-19 19:03:33.211456] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D7-0-0] [lt=7] executor execute failed(ret=-5019) [2024-02-19 19:03:33.211463] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) 
[1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D7-0-0] [lt=7] execute failed(ret=-5019, executor={ObIExecutor:, sql:"select config_version, zone, svr_type, svr_ip, svr_port, name, data_type, value, info, section, scope, source, edit_level from __all_sys_parameter"}, retry_cnt=0) [2024-02-19 19:03:33.211479] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D7-0-0] [lt=10] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-02-19 19:03:33.211493] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D7-0-0] [lt=12] result set close failed(ret=-5019) [2024-02-19 19:03:33.211501] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D7-0-0] [lt=6] result set close failed(ret=-5019) [2024-02-19 19:03:33.211506] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D7-0-0] [lt=5] failed to close result(close_ret=-5019, ret=-5019) [2024-02-19 19:03:33.211525] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D7-0-0] [lt=6] failed to process record(executor={ObIExecutor:, sql:"select config_version, zone, svr_type, svr_ip, svr_port, name, data_type, value, info, section, scope, source, edit_level from __all_sys_parameter"}, record_ret=-5019, ret=-5019) [2024-02-19 19:03:33.211535] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1106658][ConfigMgr][T0][YB42AC0103F2-000611B922D790D7-0-0] [lt=8] failed to process final(executor={ObIExecutor:, sql:"select config_version, zone, svr_type, svr_ip, svr_port, name, data_type, value, info, section, scope, source, edit_level from __all_sys_parameter"}, aret=-5019, ret=-5019) [2024-02-19 19:03:33.211544] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1106658][ConfigMgr][T0][YB42AC0103F2-000611B922D790D7-0-0] [lt=7] execute sql failed(ret=-5019, tenant_id=1, sql=select config_version, zone, svr_type, svr_ip, svr_port, name, data_type, value, info, section, scope, source, edit_level from __all_sys_parameter) [2024-02-19 19:03:33.211552] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1106658][ConfigMgr][T0][YB42AC0103F2-000611B922D790D7-0-0] [lt=7] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-02-19 19:03:33.211559] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1106658][ConfigMgr][T0][YB42AC0103F2-000611B922D790D7-0-0] [lt=6] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1) [2024-02-19 19:03:33.211566] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1106658][ConfigMgr][T0][YB42AC0103F2-000611B922D790D7-0-0] [lt=6] query failed(ret=-5019, conn=0x7fdcf4ef4050, start=1708340613211067, sql=select config_version, zone, svr_type, svr_ip, svr_port, name, data_type, value, info, section, scope, source, edit_level from __all_sys_parameter) [2024-02-19 19:03:33.211556] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, 
log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:33.211575] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1106658][ConfigMgr][T0][YB42AC0103F2-000611B922D790D7-0-0] [lt=8] read failed(ret=-5019) [2024-02-19 19:03:33.211582] WARN [SHARE] update_local (ob_config_manager.cpp:322) [1106658][ConfigMgr][T0][YB42AC0103F2-000611B922D790D7-0-0] [lt=6] read config from __all_sys_parameter failed(sqlstr="select config_version, zone, svr_type, svr_ip, svr_port, name, data_type, value, info, section, scope, source, edit_level from __all_sys_parameter", ret=-5019) [2024-02-19 19:03:33.211585] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=28] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.211641] WARN [SHARE] update_local (ob_config_manager.cpp:356) [1106658][ConfigMgr][T0][YB42AC0103F2-000611B922D790D7-0-0] [lt=7] Read system config from inner table error(ret=-5019) [2024-02-19 19:03:33.211651] WARN [SHARE] runTimerTask (ob_config_manager.cpp:455) [1106658][ConfigMgr][T0][YB42AC0103F2-000611B922D790D7-0-0] [lt=9] Update local config failed(ret=-5019) [2024-02-19 19:03:33.221834] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=31] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:33.221877] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=46] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.232011] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, 
log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:33.232061] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=56] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.240447] INFO [LIB] runTimerTask (ob_work_queue.cpp:24) [1106715][ObTimer][T0][Y0-0000000000000000-0-0] [lt=39] add async task(this=tasktype:N9oceanbase10rootserver13ObRootService19ObRefreshServerTaskE) [2024-02-19 19:03:33.241374] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78805-0-0] [lt=10] table not exist(tenant_id=1, database_id=201001, table_name=__all_server, ret=-5019) [2024-02-19 19:03:33.241397] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78805-0-0] [lt=20] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_server, ret=-5019) [2024-02-19 19:03:33.241407] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78805-0-0] [lt=8] fail to resolve table relation recursively(tenant_id=1, ret=-5019) [2024-02-19 19:03:33.241415] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78805-0-0] [lt=7] resolve table relation factor failed(ret=-5019, table_name=__all_server) [2024-02-19 19:03:33.241424] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78805-0-0] [lt=6] fail to resolve table(ret=-5019) [2024-02-19 19:03:33.241436] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78805-0-0] [lt=11] fail to resolve sys view(ret=-5019) [2024-02-19 19:03:33.241447] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78805-0-0] [lt=6] Table 'oceanbase.__all_server' doesn't exist [2024-02-19 19:03:33.241454] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78805-0-0] [lt=6] resolve base or alias table factor failed(ret=-5019) [2024-02-19 19:03:33.241463] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78805-0-0] [lt=8] resolve basic table failed(ret=-5019) [2024-02-19 19:03:33.241472] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78805-0-0] [lt=8] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-02-19 19:03:33.241479] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78805-0-0] [lt=7] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-02-19 19:03:33.241486] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) 
[1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78805-0-0] [lt=6] resolve normal query failed(ret=-5019)
[2024-02-19 19:03:33.241503] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78805-0-0] [lt=15] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073)
[2024-02-19 19:03:33.241522] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78805-0-0] [lt=11] failed to resolve(ret=-5019)
[2024-02-19 19:03:33.241537] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78805-0-0] [lt=14] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-02-19 19:03:33.241550] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78805-0-0] [lt=10] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-02-19 19:03:33.241560] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78805-0-0] [lt=8] fail to handle physical plan(ret=-5019)
[2024-02-19 19:03:33.241568] WARN [SQL] stmt_query (ob_sql.cpp:175) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78805-0-0] [lt=6] fail to handle text query(stmt=SELECT time_to_usec(gmt_modified) AS last_hb_time, id, zone, svr_ip, svr_port, inner_port, status, with_rootserver, block_migrate_in_time, build_version, stop_time, start_service_time, with_partition FROM __all_server, ret=-5019)
[2024-02-19 19:03:33.241578] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78805-0-0] [lt=9] executor execute failed(ret=-5019)
[2024-02-19 19:03:33.241586] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78805-0-0] [lt=6] execute failed(ret=-5019, executor={ObIExecutor:, sql:"SELECT time_to_usec(gmt_modified) AS last_hb_time, id, zone, svr_ip, svr_port, inner_port, status, with_rootserver, block_migrate_in_time, build_version, stop_time, start_service_time, with_partition FROM __all_server"}, retry_cnt=0)
[2024-02-19 19:03:33.241605] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78805-0-0] [lt=12] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false)
[2024-02-19 19:03:33.241622] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78805-0-0] [lt=15] result set close failed(ret=-5019)
[2024-02-19 19:03:33.241629] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78805-0-0] [lt=6] result set close failed(ret=-5019)
[2024-02-19 19:03:33.241635] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78805-0-0] [lt=5] failed to close result(close_ret=-5019, ret=-5019)
[2024-02-19 19:03:33.241659] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78805-0-0] [lt=8] failed to process record(executor={ObIExecutor:, sql:"SELECT time_to_usec(gmt_modified) AS last_hb_time, id, zone, svr_ip, svr_port, inner_port, status, with_rootserver, block_migrate_in_time, build_version, stop_time, start_service_time, with_partition FROM __all_server"}, record_ret=-5019, ret=-5019)
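Because the failing inner statement is logged verbatim, the fault is straightforward to reproduce by hand; a sketch, assuming a MySQL-mode connection to the sys tenant, where OB_TABLE_NOT_EXIST (-5019) surfaces as the MySQL-compatible error the resolver already printed:

    -- Same statement the RSAsyncTask1 executor keeps submitting.
    SELECT time_to_usec(gmt_modified) AS last_hb_time, id, zone, svr_ip, svr_port,
           inner_port, status, with_rootserver, block_migrate_in_time,
           build_version, stop_time, start_service_time, with_partition
    FROM __all_server;
    -- Expected: ERROR 1146 (42S02): Table 'oceanbase.__all_server' doesn't exist

The retry-control entry above (retry_type:0, need_retry=false for err_:"OB_TABLE_NOT_EXIST") shows why each attempt fails straight through: table-not-exist is not locally retryable, so recovery is left to the queue-level retry that follows below.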
[2024-02-19 19:03:33.241672] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1106718][RSAsyncTask1][T0][YB42AC0103F2-000611B922A78805-0-0] [lt=10] failed to process final(executor={ObIExecutor:, sql:"SELECT time_to_usec(gmt_modified) AS last_hb_time, id, zone, svr_ip, svr_port, inner_port, status, with_rootserver, block_migrate_in_time, build_version, stop_time, start_service_time, with_partition FROM __all_server"}, aret=-5019, ret=-5019)
[2024-02-19 19:03:33.241688] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1106718][RSAsyncTask1][T0][Y0-0000000000000000-0-0] [lt=14] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT time_to_usec(gmt_modified) AS last_hb_time, id, zone, svr_ip, svr_port, inner_port, status, with_rootserver, block_migrate_in_time, build_version, stop_time, start_service_time, with_partition FROM __all_server)
[2024-02-19 19:03:33.241700] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1106718][RSAsyncTask1][T0][Y0-0000000000000000-0-0] [lt=10] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1)
[2024-02-19 19:03:33.241706] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1106718][RSAsyncTask1][T0][Y0-0000000000000000-0-0] [lt=5] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1)
[2024-02-19 19:03:33.241715] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1106718][RSAsyncTask1][T0][Y0-0000000000000000-0-0] [lt=7] query failed(ret=-5019, conn=0x7fdcdc89a050, start=1708340613241146, sql=SELECT time_to_usec(gmt_modified) AS last_hb_time, id, zone, svr_ip, svr_port, inner_port, status, with_rootserver, block_migrate_in_time, build_version, stop_time, start_service_time, with_partition FROM __all_server)
[2024-02-19 19:03:33.241733] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1106718][RSAsyncTask1][T0][Y0-0000000000000000-0-0] [lt=18] read failed(ret=-5019)
[2024-02-19 19:03:33.241882] WARN [SHARE] run2 (ob_async_task_queue.cpp:148) [1106718][RSAsyncTask1][T0][Y0-0000000000000000-0-0] [lt=12] task process failed, start retry(max retry time=0, retry interval=1000000, ret=-5019)
[2024-02-19 19:03:33.243066] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=15] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0)
[2024-02-19 19:03:33.243087] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=20] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340613243057}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:33.243110] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=20] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340613243057}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:33.245129] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:33.245155] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=27] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:33.255317] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=29] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:33.255361] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=46] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:33.260771] INFO [SHARE] run_loop_ (ob_bg_thread_monitor.cpp:331) [1109111][BGThreadMonitor][T0][Y0-0000000000000000-0-0] [lt=31] current monitor number(seq_=-1)
[2024-02-19 19:03:33.265512] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=43] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:33.265545] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=33] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:33.271953] INFO [STORAGE] operator() (ob_tenant_freezer.cpp:124) [1107575][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=13] ====== tenant freeze timer task ======
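The staleness of this tenant is quantifiable from values already printed: generate_weak_read_timestamp_ keeps reporting timestamp=1707751112415295196 ns, and every PALF error carries oldest_timestamp=1707200283752293320 ns. Converting both is a quick sanity check; a sketch assuming MySQL-mode functions and a UTC+8 server clock, under which epoch second 1708340613 matches the 19:03:33 wall time of this capture:

    -- Weak-read timestamp: ~2024-02-12 23:18:32 (+08:00), ~6.8 days behind the capture.
    SELECT FROM_UNIXTIME(1707751112415295196 / 1000000000);
    -- Oldest retained clog block: ~2024-02-06 14:18:03 (+08:00).
    SELECT FROM_UNIXTIME(1707200283752293320 / 1000000000);

The ~7-day-old weak-read timestamp and the ~13-day-old oldest clog block fit one picture: replay and checkpointing stalled once the 2 GB clog quota filled, and everything downstream in these entries (weak read, GTS mastership, inner-table access) has been failing since.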
19:03:33.273446] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D7-0-0] [lt=27] table not exist(tenant_id=1, database_id=201001, table_name=__all_freeze_info, ret=-5019) [2024-02-19 19:03:33.273479] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D7-0-0] [lt=31] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_freeze_info, ret=-5019) [2024-02-19 19:03:33.273492] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D7-0-0] [lt=12] fail to resolve table relation recursively(tenant_id=1, ret=-5019) [2024-02-19 19:03:33.273501] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D7-0-0] [lt=7] resolve table relation factor failed(ret=-5019, table_name=__all_freeze_info) [2024-02-19 19:03:33.273510] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D7-0-0] [lt=6] fail to resolve table(ret=-5019) [2024-02-19 19:03:33.273518] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D7-0-0] [lt=7] fail to resolve sys view(ret=-5019) [2024-02-19 19:03:33.273549] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D7-0-0] [lt=8] Table 'oceanbase.__all_freeze_info' doesn't exist [2024-02-19 19:03:33.273559] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D7-0-0] [lt=9] resolve base or alias table factor failed(ret=-5019) [2024-02-19 19:03:33.273569] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D7-0-0] [lt=9] resolve basic table failed(ret=-5019) [2024-02-19 19:03:33.273578] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D7-0-0] [lt=8] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-02-19 19:03:33.273588] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D7-0-0] [lt=8] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-02-19 19:03:33.273598] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D7-0-0] [lt=8] resolve normal query failed(ret=-5019) [2024-02-19 19:03:33.273606] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D7-0-0] [lt=7] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073) [2024-02-19 19:03:33.273620] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D7-0-0] [lt=7] failed to resolve(ret=-5019) [2024-02-19 19:03:33.273627] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D7-0-0] [lt=7] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:33.273636] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D7-0-0] [lt=6] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:33.273643] WARN 
[SQL] handle_text_query (ob_sql.cpp:1917) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D7-0-0] [lt=6] fail to handle physical plan(ret=-5019) [2024-02-19 19:03:33.273651] WARN [SQL] stmt_query (ob_sql.cpp:175) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D7-0-0] [lt=6] fail to handle text query(stmt=SELECT * FROM __all_freeze_info ORDER BY frozen_scn DESC LIMIT 1, ret=-5019) [2024-02-19 19:03:33.273660] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D7-0-0] [lt=6] executor execute failed(ret=-5019) [2024-02-19 19:03:33.273667] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D7-0-0] [lt=7] execute failed(ret=-5019, executor={ObIExecutor:, sql:"SELECT * FROM __all_freeze_info ORDER BY frozen_scn DESC LIMIT 1"}, retry_cnt=0) [2024-02-19 19:03:33.273683] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D7-0-0] [lt=11] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-02-19 19:03:33.273698] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D7-0-0] [lt=12] result set close failed(ret=-5019) [2024-02-19 19:03:33.273705] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D7-0-0] [lt=6] result set close failed(ret=-5019) [2024-02-19 19:03:33.273710] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D7-0-0] [lt=5] failed to close result(close_ret=-5019, ret=-5019) [2024-02-19 19:03:33.273730] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D7-0-0] [lt=5] failed to process record(executor={ObIExecutor:, sql:"SELECT * FROM __all_freeze_info ORDER BY frozen_scn DESC LIMIT 1"}, record_ret=-5019, ret=-5019) [2024-02-19 19:03:33.273739] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D7-0-0] [lt=8] failed to process final(executor={ObIExecutor:, sql:"SELECT * FROM __all_freeze_info ORDER BY frozen_scn DESC LIMIT 1"}, aret=-5019, ret=-5019) [2024-02-19 19:03:33.273747] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1107575][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=7] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT * FROM __all_freeze_info ORDER BY frozen_scn DESC LIMIT 1) [2024-02-19 19:03:33.273755] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1107575][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=7] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-02-19 19:03:33.273764] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1107575][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=8] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1) [2024-02-19 19:03:33.273775] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1107575][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=9] query failed(ret=-5019, conn=0x7fdcdc9be050, start=1708340613273253, sql=SELECT * FROM __all_freeze_info ORDER BY frozen_scn DESC LIMIT 1) [2024-02-19 19:03:33.273788] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1107575][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=13] read failed(ret=-5019) [2024-02-19 19:03:33.273796] WARN [SHARE] get_freeze_info 
(ob_freeze_info_proxy.cpp:68) [1107575][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=6] fail to execute sql(ret=-5019, ret="OB_TABLE_NOT_EXIST", sql=SELECT * FROM __all_freeze_info ORDER BY frozen_scn DESC LIMIT 1, tenant_id=1) [2024-02-19 19:03:33.273888] WARN [STORAGE] get_global_frozen_scn_ (ob_tenant_freezer.cpp:1086) [1107575][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=9] get_frozen_scn failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1) [2024-02-19 19:03:33.273899] WARN [STORAGE] do_major_if_need_ (ob_tenant_freezer.cpp:1188) [1107575][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=9] fail to get global frozen version(ret=-5019) [2024-02-19 19:03:33.273908] WARN [STORAGE] check_and_freeze_normal_data_ (ob_tenant_freezer.cpp:379) [1107575][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=9] [TenantFreezer] fail to do major freeze(tmp_ret=-5019) [2024-02-19 19:03:33.273932] INFO [STORAGE] check_and_freeze_tx_data_ (ob_tenant_freezer.cpp:419) [1107575][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=7] [TenantFreezer] Trigger Tx Data Table Self Freeze. (tenant_info_.tenant_id_=1, tenant_tx_data_mem_used=430988896, self_freeze_max_limit_=214748364, hold_memory=1718894592, self_freeze_tenant_hold_limit_=429496729, self_freeze_min_limit_=21474836) [2024-02-19 19:03:33.274593] INFO [STORAGE] do_tx_data_table_freeze_ (ob_tenant_freezer_rpc.cpp:73) [1108354][T1_TNT_L0_G0][T1][YB42AC0103F2-000611B9223790D8-0-0] [lt=7] start tx data table self freeze task in rpc handle thread(arg_=freeze_type:3) [2024-02-19 19:03:33.274636] INFO [STORAGE] self_freeze_task (ob_tx_data_table.cpp:794) [1108354][T1_TNT_L0_G0][T1][YB42AC0103F2-000611B9223790D8-0-0] [lt=36] start tx data table self freeze task(get_ls_id()={id:1}) [2024-02-19 19:03:33.274658] INFO [STORAGE] freeze (ob_tx_data_memtable_mgr.cpp:192) [1108354][T1_TNT_L0_G0][T1][YB42AC0103F2-000611B9223790D8-0-0] [lt=15] start freeze tx data memtable(ls_id_={id:1}) [2024-02-19 19:03:33.274671] INFO [STORAGE] freeze_ (ob_tx_data_memtable_mgr.cpp:228) [1108354][T1_TNT_L0_G0][T1][YB42AC0103F2-000611B9223790D8-0-0] [lt=12] There is a freezed memetable existed. 
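
Annotation: the records above show one inner SQL failing and the whole resolver/executor stack unwinding with the same return code. check_table_exist_or_not reports __all_freeze_info missing (ret=-5019, OB_TABLE_NOT_EXIST), every frame up through stmt_query and the proxy read repeats that code, and the tenant freezer then loses its input: get_global_frozen_scn_ and do_major_if_need_ fail with the same -5019, so the major freeze is skipped. Each such cascade shares one trace id (here YB42AC0103F2-000611B9223790D7-0-0), so it can be collapsed to one summary per failed statement. A minimal sketch of that collapse, assuming one record per physical line and a placeholder file name observer.log; the regex is fitted to the record layout shown above and may miss other layouts:

    import re
    from collections import defaultdict

    # Matches the layout seen above:
    # [ts] LEVEL [MODULE] func (file:line) [thread][name][tenant][trace] [lt=N] msg
    REC = re.compile(
        r"\[(?P<ts>\d{4}-\d{2}-\d{2} [\d:.]+)\]\s+\w+\s+.*?"
        r"\[(?P<trace>Y[0-9A-Fa-f-]+)\]\s+\[lt=\d+\]\s+(?P<msg>.*)"
    )

    cascades = defaultdict(list)
    with open("observer.log") as f:          # placeholder path
        for line in f:
            m = REC.search(line)
            if m and "ret=-5019" in m.group("msg"):
                cascades[m.group("trace")].append(m.group("msg"))

    for trace, msgs in cascades.items():
        # The earliest record in a cascade names the missing table; the rest
        # are the call chain unwinding with the same return code.
        table = re.search(r"table_name=(\w+)", " ".join(msgs))
        print(trace, f"{len(msgs)} records,", "table:",
              table.group(1) if table else "?")

Run over this excerpt it would report the __all_freeze_info statement above and the __all_server reads that appear further down, one line each instead of a dozen WARNs apiece.
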
Try freeze after flushing it.(ret=-4023, ret="OB_EAGAIN", get_memtable_count_()=2) [2024-02-19 19:03:33.274685] WARN [STORAGE] freeze (ob_tx_data_memtable_mgr.cpp:206) [1108354][T1_TNT_L0_G0][T1][YB42AC0103F2-000611B9223790D8-0-0] [lt=13] freeze tx data memtable fail.(ret=-4023, ret="OB_EAGAIN") [2024-02-19 19:03:33.274696] WARN [STORAGE] flush (ob_tx_data_memtable_mgr.cpp:430) [1108354][T1_TNT_L0_G0][T1][YB42AC0103F2-000611B9223790D8-0-0] [lt=9] freeze failed(ret=-4023, ret="OB_EAGAIN", this=0x7fdce89de180) [2024-02-19 19:03:33.274708] WARN [STORAGE] self_freeze_task (ob_tx_data_table.cpp:798) [1108354][T1_TNT_L0_G0][T1][YB42AC0103F2-000611B9223790D8-0-0] [lt=10] self freeze of tx data memtable failed.(ret=-4023, ret="OB_EAGAIN", ls_id={id:1}, memtable_mgr_={ObIMemtableMgr:{Memtables:this:0x7fdce89de180, ref_cnt:1, is_inited:true, tablet_id:{id:49402}, freezer:0x7fdce89e30d0, table_type:1, memtable_head:0, memtable_tail:2, t3m:0x7fdd18bce030, tables:[0x7fdce5eea080, 0x7fdce5eea360, null, null, null, null, null, null, null, null, null, null, null, null, null, null]}, is_freezing:false, ls_id:{id:1}, tx_data_table:0x7fdce89e4550, ls_tablet_svr:0x7fdce89de160, slice_allocator:0x7fdce89e4590}) [2024-02-19 19:03:33.274740] INFO [STORAGE] self_freeze_task (ob_tx_data_table.cpp:801) [1108354][T1_TNT_L0_G0][T1][YB42AC0103F2-000611B9223790D8-0-0] [lt=32] finish tx data table self freeze task(ret=-4023, ret="OB_EAGAIN", get_ls_id()={id:1}) [2024-02-19 19:03:33.274751] WARN [STORAGE] do_tx_data_table_freeze_ (ob_tenant_freezer_rpc.cpp:102) [1108354][T1_TNT_L0_G0][T1][YB42AC0103F2-000611B9223790D8-0-0] [lt=10] freeze tx data table failed.(ret=-4023, ret="OB_EAGAIN", arg_=freeze_type:3) [2024-02-19 19:03:33.274761] INFO [STORAGE] do_tx_data_table_freeze_ (ob_tenant_freezer_rpc.cpp:115) [1108354][T1_TNT_L0_G0][T1][YB42AC0103F2-000611B9223790D8-0-0] [lt=9] finish self freeze task in rpc handle thread(ret=-4023, ret="OB_EAGAIN", arg_=freeze_type:3) [2024-02-19 19:03:33.274775] WARN [STORAGE] process (ob_tenant_freezer_rpc.cpp:56) [1108354][T1_TNT_L0_G0][T1][YB42AC0103F2-000611B9223790D8-0-0] [lt=9] do tx data table freeze failed.(ret=-4023, ret="OB_EAGAIN", arg_=freeze_type:3) [2024-02-19 19:03:33.275200] INFO [STORAGE] rpc_callback (ob_tenant_freezer.cpp:990) [1106789][RpcIO][T0][Y0-0000000000000000-0-0] [lt=16] [TenantFreezer] call back of tenant freezer request [2024-02-19 19:03:33.275694] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=39] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:33.275721] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=29] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 
0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.277045] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=18] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340613277026}) [2024-02-19 19:03:33.277069] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=25] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340613243057}}) [2024-02-19 19:03:33.280251] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=20] get wrs ts(ls_id={id:1}, delta_ns=-1706042771802015765, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807) [2024-02-19 19:03:33.280288] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=37] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1) [2024-02-19 19:03:33.285876] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:33.285924] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=50] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.293922] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1009) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=33] Wash compute wash size(is_wash_valid=true, sys_total_wash_size=1386106880, global_cache_size=4161536, tenant_max_wash_size=4161536, tenant_min_wash_size=4161536, tenant_ids_=[512, 500, 999, 506, 507, 508, 509, 510, 1]) [2024-02-19 19:03:33.294047] INFO [COMMON] wash (ob_kvcache_store.cpp:343) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=44] Wash time detail, (compute_wash_size_time=148, refresh_score_time=80, wash_time=4) [2024-02-19 19:03:33.297729] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=39] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", 
log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:33.297770] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=43] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.307907] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:33.307952] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=47] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.318088] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:33.318136] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=51] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.328327] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) 
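
Annotation: the PALF pair repeating through this stretch is self-consistent arithmetic, not noise. With log_disk_size = 2048 MB, the 80% warn threshold is 1638 MB and the 95% limit is 1945 MB, and used_size sits pinned at exactly 1945 MB, so PALF is at the point where it stops accepting writes (the options struct above is literally named disk_opts_for_stopping_writing). recycle_blocks_ frees nothing because the single log stream's base LSN has not advanced, as the WARN itself says, so the pair will repeat every scan until either the checkpoint advances or the tenant gets more log disk. A minimal sketch that restates the thresholds from the ERROR record; the values are copied from the entries above and nothing here queries a live cluster:

    import re

    line = ('clog disk space is almost full(total_size(MB)=2048, '
            'used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, '
            'warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95)')
    fields = {k: int(v) for k, v in re.findall(r'(\w+)\((?:MB|%)\)=(\d+)', line)}

    # 2048 MB * 80% = 1638 MB (warn), 2048 MB * 95% = 1945 MB (limit)
    assert fields["warn_size"] == fields["total_size"] * fields["warn_percent"] // 100
    assert fields["limit_size"] == fields["total_size"] * fields["limit_percent"] // 100
    # used_size has reached limit_size, so writes stay blocked until blocks
    # can be recycled, which in turn needs the base LSN to advance.
    print("writes blocked:", fields["used_size"] >= fields["limit_size"])

The remediation the record itself points at is one of two knobs: raise the 2048 MB log_disk_size, or restore checkpoint progress so old blocks become recyclable; which applies depends on the deployment and is deliberately left open here.
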
[1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:33.328371] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=45] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.328668] WARN [COORDINATOR] get_ls_election_reference_info (ob_leader_coordinator.cpp:138) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC7F-0-0] [lt=130] can not find this ls_id in all_ls_election_reference_info_(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1}, all_ls_election_reference_info=[]) [2024-02-19 19:03:33.328701] WARN [COORDINATOR] refresh_ (election_priority_v1.cpp:144) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC7F-0-0] [lt=32] fail to get ls election reference info(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, *this={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}) [2024-02-19 19:03:33.328735] WARN [COORDINATOR] operator() (election_priority_impl.cpp:246) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC7F-0-0] [lt=23] refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id_={id:1}, element={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}) [2024-02-19 19:03:33.328755] WARN iterate (ob_tuple.h:272) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC7F-0-0] [lt=16] assign element failed(ret=-4018, std::get(tuple)={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}) [2024-02-19 19:03:33.328771] WARN [COORDINATOR] refresh (election_priority_impl.cpp:261) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC7F-0-0] [lt=16] refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1}, *this={priority:{is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}}) [2024-02-19 
19:03:33.334924] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A028-0-0] [lt=8] table not exist(tenant_id=1, database_id=201001, table_name=__all_server, ret=-5019) [2024-02-19 19:03:33.334958] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A028-0-0] [lt=33] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_server, ret=-5019) [2024-02-19 19:03:33.334972] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A028-0-0] [lt=12] fail to resolve table relation recursively(tenant_id=1, ret=-5019) [2024-02-19 19:03:33.334983] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A028-0-0] [lt=11] resolve table relation factor failed(ret=-5019, table_name=__all_server) [2024-02-19 19:03:33.335002] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A028-0-0] [lt=14] fail to resolve table(ret=-5019) [2024-02-19 19:03:33.335013] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A028-0-0] [lt=10] fail to resolve sys view(ret=-5019) [2024-02-19 19:03:33.335027] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A028-0-0] [lt=9] Table 'oceanbase.__all_server' doesn't exist [2024-02-19 19:03:33.335037] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A028-0-0] [lt=9] resolve base or alias table factor failed(ret=-5019) [2024-02-19 19:03:33.335047] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A028-0-0] [lt=9] resolve basic table failed(ret=-5019) [2024-02-19 19:03:33.335061] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A028-0-0] [lt=12] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-02-19 19:03:33.335074] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A028-0-0] [lt=11] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-02-19 19:03:33.335088] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A028-0-0] [lt=11] resolve normal query failed(ret=-5019) [2024-02-19 19:03:33.335101] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A028-0-0] [lt=13] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073) [2024-02-19 19:03:33.335123] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A028-0-0] [lt=13] failed to resolve(ret=-5019) [2024-02-19 19:03:33.335139] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A028-0-0] [lt=14] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:33.335154] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A028-0-0] [lt=13] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:33.335167] WARN [SQL] 
handle_text_query (ob_sql.cpp:1917) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A028-0-0] [lt=11] fail to handle physical plan(ret=-5019) [2024-02-19 19:03:33.335182] WARN [SQL] stmt_query (ob_sql.cpp:175) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A028-0-0] [lt=11] fail to handle text query(stmt=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882, ret=-5019) [2024-02-19 19:03:33.335197] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A028-0-0] [lt=14] executor execute failed(ret=-5019) [2024-02-19 19:03:33.335212] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A028-0-0] [lt=14] execute failed(ret=-5019, executor={ObIExecutor:, sql:"SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882"}, retry_cnt=0) [2024-02-19 19:03:33.335235] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A028-0-0] [lt=16] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-02-19 19:03:33.335258] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A028-0-0] [lt=20] result set close failed(ret=-5019) [2024-02-19 19:03:33.335271] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A028-0-0] [lt=12] result set close failed(ret=-5019) [2024-02-19 19:03:33.335283] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A028-0-0] [lt=10] failed to close result(close_ret=-5019, ret=-5019) [2024-02-19 19:03:33.335312] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A028-0-0] [lt=12] failed to process record(executor={ObIExecutor:, sql:"SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882"}, record_ret=-5019, ret=-5019) [2024-02-19 19:03:33.335329] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A028-0-0] [lt=15] failed to process final(executor={ObIExecutor:, sql:"SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882"}, aret=-5019, ret=-5019) [2024-02-19 19:03:33.335345] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=13] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882) [2024-02-19 19:03:33.335360] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=13] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-02-19 19:03:33.335373] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=12] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1) [2024-02-19 19:03:33.335388] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=12] query failed(ret=-5019, conn=0x7fdcd7d06050, start=1708340613334687, sql=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882) [2024-02-19 19:03:33.335404] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=15] read 
failed(ret=-5019) [2024-02-19 19:03:33.335419] WARN get_my_sql_result_ (ob_table_access_helper.h:329) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=11] GCTX.sql_proxy_ read failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", MTL_ID()=1, tenant_id=1, columns=0x7fdcfffccd78, table=__all_server, condition=where svr_ip='172.1.3.242' and svr_port=2882, sql=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882, columns_str="zone") [2024-02-19 19:03:33.335444] WARN read_single_row (ob_table_access_helper.h:178) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=20] get mysql result failed(ret=-4016, ret="OB_ERR_UNEXPECTED", MTL_ID()=1, columns=0x7fdcfffccd78, table=__all_server, where_condition=where svr_ip='172.1.3.242' and svr_port=2882) [2024-02-19 19:03:33.335536] WARN [COORDINATOR] get_self_zone_name (table_accessor.cpp:517) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=16] get zone from __all_server failed(ret=-4016, ret="OB_ERR_UNEXPECTED", columns=0x7fdcfffccd78, where_condition="where svr_ip='172.1.3.242' and svr_port=2882", zone_name_holder=) [2024-02-19 19:03:33.335557] WARN [COORDINATOR] get_all_ls_election_reference_info (table_accessor.cpp:450) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=20] get self zone name failed(ret=-4016, ret="OB_ERR_UNEXPECTED", all_ls_election_reference_info=[]) [2024-02-19 19:03:33.335573] WARN [COORDINATOR] get_all_ls_election_reference_info (table_accessor.cpp:459) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=15] zone name is empty(ret=-4016, ret="OB_ERR_UNEXPECTED", all_ls_election_reference_info=[]) [2024-02-19 19:03:33.335587] WARN [COORDINATOR] refresh (ob_leader_coordinator.cpp:107) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=10] get all ls election reference info failed(ret=-4016, ret="OB_ERR_UNEXPECTED", MTL_ID()=1) [2024-02-19 19:03:33.339153] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:33.339198] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=46] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.344921] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=10] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:33.344978] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=58] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, 
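
Annotation: this block shows how one missing table degrades leader election. On thread 1107047 the __all_server read fails with -5019 (OB_TABLE_NOT_EXIST), read_single_row surfaces it as -4016 (OB_ERR_UNEXPECTED), the coordinator is then left with an empty zone name, get_all_ls_election_reference_info fails, and refresh gives up; that is consistent with the earlier -4018 "can not find this ls_id in all_ls_election_reference_info_" records and with the node staying FOLLOWER for the timestamp service. The trace id on these records is all zeros (Y0-0000000000000000-0-0), so correlation has to fall back to the thread id in the first bracket after the file:line token. A minimal sketch of that correlation, assuming one record per line, a placeholder path, and an arbitrary look-back window:

    import re
    from collections import defaultdict, deque

    REC = re.compile(r"\[(\d{4}-\d{2}-\d{2} [\d:.]+)\]\s+\w+\s+.*?\[(\d+)\]\[")

    recent_5019 = defaultdict(lambda: deque(maxlen=8))  # thread id -> (ts, ...)
    with open("observer.log") as f:                     # placeholder path
        for line in f:
            m = REC.search(line)
            if not m:
                continue
            ts, tid = m.groups()
            if "ret=-5019" in line:
                recent_5019[tid].append(ts)
            elif "ret=-4016" in line and recent_5019[tid]:
                # The -4016 is a translation of the -5019 just seen on this thread.
                print(f"thread {tid}: -4016 at {ts} follows -5019 at {recent_5019[tid][-1]}")
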
srr:{mts:1708340613344909}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:33.345003] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=22] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340613344909}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:33.349405] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:33.349449] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=47] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.355205] INFO [COMMON] replace_fragment_node (ob_kvcache_map.cpp:695) [1106689][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=92] Cache replace map node details(ret=0, replace_node_count=0, replace_time=26088, replace_start_pos=1163872, replace_num=15728) [2024-02-19 19:03:33.359586] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=40] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:33.359640] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=58] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.364388] INFO [COMMON] clean_garbage_node (ob_kvcache_map.cpp:645) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=12] Cache wash clean map node details(ret=0, clean_node_count=0, clean_time=70318, clean_start_pos=849339, clean_num=31457) [2024-02-19 19:03:33.369783] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=42] there is 
not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:33.369852] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=71] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.377189] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=14] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340613377173}) [2024-02-19 19:03:33.377230] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=42] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340613344909}}) [2024-02-19 19:03:33.379998] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=44] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:33.380044] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=47] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.380267] WARN [STORAGE.TRANS] post_cluster_heartbeat_rpc_ (ob_tenant_weak_read_service.cpp:797) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=20] get cluster service master fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, cluster_service_tablet_id={id:226}) [2024-02-19 19:03:33.380291] WARN [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:438) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=23] tenant weak read service cluster heartbeat RPC fail(rcode={code:-4076, msg:"post cluster heartbeat rpc failed, 
tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.1.3.242:2882", cluster_service_tablet_id={id:226}) [2024-02-19 19:03:33.380307] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:756) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version=1708336686671726824, valid_part_count=1, total_part_count=1, generate_timestamp=1708340613380252) [2024-02-19 19:03:33.380317] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:766) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1708340613180321, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-02-19 19:03:33.380381] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9] get wrs ts(ls_id={id:1}, delta_ns=-1706042771801915024, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807) [2024-02-19 19:03:33.380398] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1) [2024-02-19 19:03:33.390215] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=56] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:33.390273] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=59] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.400456] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:33.400507] ERROR [PALF] 
try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=65] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.432451] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=43] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95},
disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:33.432503] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=55] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.443200] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=14] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:33.443237] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=36] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.443256] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=17] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:33.443541] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=42] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:33.443573] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=32] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.443636] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=17] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:33.443650] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=13] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.443664] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=12] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", 
tmp_ret=-4038) [2024-02-19 19:03:33.443956] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=13] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:33.443972] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=15] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.443985] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=12] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:33.444257] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=12] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:33.444273] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=15] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.444285] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=11] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:33.444545] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=16] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:33.444573] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=27] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.444587] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=13] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:33.444883] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=10] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:33.444910] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=25] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.444943] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=31] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:33.445410] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=11] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:33.445428] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) 
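
Annotation: from here to the end of the excerpt the same three-record GTS loop repeats across threads every few milliseconds: get_number reports the timestamp service as FOLLOWER, get_gts_from_local_timestamp_service_ fails with -4038 (OB_NOT_MASTER), and get_gts records the same code. Floods like this read much better condensed to one line per (function, return code) pair with a count and first/last timestamps. A minimal sketch, again assuming one record per line and a placeholder path; the regexes are fitted to the layout shown and may miss variants:

    import re
    from collections import OrderedDict

    REC = re.compile(
        r"\[(?P<ts>[\d: .-]+)\]\s+\w+\s+(?:\[[\w.]+\]\s+)?(?P<fn>\w+)\s+\([\w.]+:\d+\)"
    )
    RET = re.compile(r"ret=(-\d+)")

    seen = OrderedDict()                     # (function, ret) -> (first, last, count)
    with open("observer.log") as f:          # placeholder path
        for line in f:
            m = REC.search(line)
            r = RET.search(line)
            if not (m and r):
                continue
            key = (m.group("fn"), r.group(1))
            first, _, n = seen.get(key, (m.group("ts"), None, 0))
            seen[key] = (first, m.group("ts"), n + 1)

    for (fn, ret), (first, last, n) in seen.items():
        print(f"{fn} ret={ret}: {n}x between {first} and {last}")
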
[1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=17] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.445441] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=12] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:33.445863] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=13] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:33.445891] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=25] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.445905] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=13] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:33.446040] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=10] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:33.446058] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=17] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.446074] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=12] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:33.446678] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=14] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:33.446695] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=17] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.446708] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=12] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:33.446792] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=10] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:33.446813] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=21] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.446827] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=12] get_gts_from_local_timestamp_service 
fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:33.447449] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=10] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:33.447466] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=16] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.447480] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=12] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:33.447503] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=10] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:33.447530] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=27] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.447543] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=12] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:33.448033] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=11] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:33.448064] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=29] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.448076] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=11] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:33.448115] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=11] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:33.448136] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=19] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.448151] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=15] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:33.448232] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=8] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:33.448298] WARN [STORAGE.TRANS] 
get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=65] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.448357] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=57] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:33.448729] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=18] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:33.448749] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:527) [1106788][RpcIO][T0][Y0-0000000000000000-0-0] [lt=14] [RPC EASY STAT](log_str=conn count=1/1, request done=19518/19518, request doing=0/0) [2024-02-19 19:03:33.448748] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=18] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340613448719}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:33.448770] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=18] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340613448719}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:33.448803] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=12] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.449302] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=16] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.449398] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=17] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.449918] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=26] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.450194] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=14] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.450803] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=21] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.450881] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=20] global_timestamp_service get gts 
fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.451419] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=19] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.451486] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=20] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.451642] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:527) [1106789][RpcIO][T0][Y0-0000000000000000-0-0] [lt=15] [RPC EASY STAT](log_str=conn count=1/1, request done=19518/19518, request doing=0/0) [2024-02-19 19:03:33.451948] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=9] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.452049] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=19] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.452738] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=91] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.452842] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=19] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.453343] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=23] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.453437] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=20] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.454111] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=26] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.454752] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, 
log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:33.454774] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=23] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.454856] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=25] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.455241] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=16] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.455452] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=9] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.455851] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=20] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.456055] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=21] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.456461] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=20] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.456662] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=19] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.457106] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=20] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.457269] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=20] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.457819] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=24] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, 
ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.457871] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=23] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.458118] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=22] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.458490] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=29] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.458732] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=23] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.458830] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=19] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.459084] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=16] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.459452] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=16] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.461917] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=19] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.462154] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=18] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.462758] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=19] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.463356] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=18] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.463966] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=19] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 
19:03:33.464702] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=18] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.465085] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=26] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:33.465115] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=31] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.465317] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=18] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.468164] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=30] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.469737] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=16] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.470068] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=29] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.470166] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=18] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.470344] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=22] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.470696] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=19] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, 
ret="OB_NOT_MASTER") [2024-02-19 19:03:33.470763] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=19] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.470955] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=21] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.471292] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=14] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.471357] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=15] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.471691] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=26] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.471957] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=15] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.472332] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=24] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.472549] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=12] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.473126] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=16] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.473194] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=53] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.473727] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=17] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.473848] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=15] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.473964] 
WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=12] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.474330] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=16] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.474440] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=15] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.474935] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=19] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.475026] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=12] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.475565] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=20] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.475638] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=13] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.476176] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=22] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.476261] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=13] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.476758] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:33.476780] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=24] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 
19:03:33.476797] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=39] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.476871] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=46] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.477379] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=12] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.477414] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=16] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340613477395}) [2024-02-19 19:03:33.477432] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=19] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340613448719}}) [2024-02-19 19:03:33.477460] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=10] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.477835] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=17] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.478035] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=17] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.478045] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=12] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.478481] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=14] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.478680] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=12] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 
19:03:33.479105] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=21] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.479280] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=19] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.479715] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=20] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.479878] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=17] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.479993] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=22] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.480491] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=17] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.480570] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=24] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.480590] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=20] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:33.480635] WARN [STORAGE.TRANS] get_cluster_service_master_ (ob_tenant_weak_read_service.cpp:287) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17] tenant schema is not ready, need wait(ret=-4076, ret="OB_NEED_WAIT", superior_tenant_id=1) [2024-02-19 19:03:33.480651] WARN [STORAGE.TRANS] post_cluster_heartbeat_rpc_ (ob_tenant_weak_read_service.cpp:797) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15] get cluster service master fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, cluster_service_tablet_id={id:226}) [2024-02-19 19:03:33.480667] WARN [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:438) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14] tenant weak read service cluster heartbeat RPC fail(rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.1.3.242:2882", cluster_service_tablet_id={id:226}) [2024-02-19 19:03:33.480685] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:756) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, 
local_server_version=1708336686671726824, valid_part_count=1, total_part_count=1, generate_timestamp=1708340613480622) [2024-02-19 19:03:33.480700] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:766) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1708340613380325, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-02-19 19:03:33.480791] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13] get wrs ts(ls_id={id:1}, delta_ns=-1706042771801814722, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807) [2024-02-19 19:03:33.480810] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1) [2024-02-19 19:03:33.487029] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=52] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:33.487087] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=60] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.497218] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=36] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:33.497255] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=38] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, 
oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.507388] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=31] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:33.507435] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=47] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.517572] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=36] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:33.517621] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=50] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.527722] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:33.527756] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, 
used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.539004] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=42] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:33.539054] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=53] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.549316] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=36] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:33.549363] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=49] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.549546] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=12] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:33.549570] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=23] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340613549536}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:33.549594] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=21] post gts request 
failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340613549536}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:33.552761] INFO [ARCHIVE] stop (ob_archive_scheduler_service.cpp:137) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=14] stop archive scheduler service [2024-02-19 19:03:33.556036] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAC-0-0] [lt=7] table not exist(tenant_id=1, database_id=201001, table_name=__all_backup_info, ret=-5019) [2024-02-19 19:03:33.556066] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAC-0-0] [lt=30] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_backup_info, ret=-5019) [2024-02-19 19:03:33.556081] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAC-0-0] [lt=12] fail to resolve table relation recursively(tenant_id=1, ret=-5019) [2024-02-19 19:03:33.556092] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAC-0-0] [lt=10] resolve table relation factor failed(ret=-5019, table_name=__all_backup_info) [2024-02-19 19:03:33.556105] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAC-0-0] [lt=10] fail to resolve table(ret=-5019) [2024-02-19 19:03:33.556112] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAC-0-0] [lt=7] fail to resolve sys view(ret=-5019) [2024-02-19 19:03:33.556123] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAC-0-0] [lt=6] Table 'oceanbase.__all_backup_info' doesn't exist [2024-02-19 19:03:33.556131] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAC-0-0] [lt=7] resolve basic table failed(ret=-5019) [2024-02-19 19:03:33.556140] WARN [SQL.RESV] resolve_table_list (ob_update_resolver.cpp:423) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAC-0-0] [lt=8] failed to resolve table(ret=-5019) [2024-02-19 19:03:33.556147] WARN [SQL.RESV] resolve (ob_update_resolver.cpp:76) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAC-0-0] [lt=7] resolve table failed(ret=-5019) [2024-02-19 19:03:33.556157] WARN [SQL.RESV] stmt_resolver_func (ob_resolver.cpp:155) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAC-0-0] [lt=8] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3074) [2024-02-19 19:03:33.556177] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAC-0-0] [lt=11] failed to resolve(ret=-5019) [2024-02-19 19:03:33.556190] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAC-0-0] [lt=12] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:33.556201] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAC-0-0] [lt=8] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:33.556210] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAC-0-0] [lt=8] fail to 
handle physical plan(ret=-5019) [2024-02-19 19:03:33.556222] WARN [SQL] stmt_query (ob_sql.cpp:175) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAC-0-0] [lt=9] fail to handle text query(stmt=update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882', ret=-5019) [2024-02-19 19:03:33.556231] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAC-0-0] [lt=8] executor execute failed(ret=-5019) [2024-02-19 19:03:33.556240] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAC-0-0] [lt=7] execute failed(ret=-5019, executor={ObIExecutor:, sql:"update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882'"}, retry_cnt=0) [2024-02-19 19:03:33.556263] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAC-0-0] [lt=13] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-02-19 19:03:33.556282] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAC-0-0] [lt=16] result set close failed(ret=-5019) [2024-02-19 19:03:33.556292] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAC-0-0] [lt=8] result set close failed(ret=-5019) [2024-02-19 19:03:33.556300] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAC-0-0] [lt=7] failed to close result(close_ret=-5019, ret=-5019) [2024-02-19 19:03:33.556324] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAC-0-0] [lt=6] failed to process record(executor={ObIExecutor:, sql:"update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882'"}, record_ret=-5019, ret=-5019) [2024-02-19 19:03:33.556336] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1106733][BackupLease][T0][YB42AC0103F2-000611B923978EAC-0-0] [lt=9] failed to process final(executor={ObIExecutor:, sql:"update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882'"}, aret=-5019, ret=-5019) [2024-02-19 19:03:33.556346] WARN [SERVER] execute_write_inner (ob_inner_sql_connection.cpp:1818) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=9] execute sql failed(ret=-5019, tenant_id=1, sql=update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882') [2024-02-19 19:03:33.556356] INFO [SERVER] execute_write_inner (ob_inner_sql_connection.cpp:1900) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=7] execute write sql(ret=-5019, tenant_id=1, affected_rows=0, sql=update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882') [2024-02-19 19:03:33.556399] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=7] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-02-19 19:03:33.556407] WARN [SERVER] execute_write (ob_inner_sql_connection.cpp:1786) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=7] execute_write failed(ret=-5019, tenant_id=1, sql=update __all_backup_info set value 
= '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882', is_user_sql=false)
[2024-02-19 19:03:33.556415] WARN [SERVER] execute_write (ob_inner_sql_connection.cpp:1775) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=7] execute_write failed(ret=-5019, tenant_id=1, sql="update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882'")
[2024-02-19 19:03:33.556422] WARN [COMMON.MYSQLP] write (ob_mysql_proxy.cpp:133) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=6] execute sql failed(ret=-5019, conn=0x7fdd189bc050, start=1708340613552874, sql=update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882')
[2024-02-19 19:03:33.556463] WARN [SERVER] clean_backup_scheduler_leader (ob_backup_operator.cpp:348) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=8] execute sql failed(ret=-5019, sql=update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882')
[2024-02-19 19:03:33.556472] WARN [SERVER] clean_backup_scheduler_leader (ob_backup_manager.cpp:517) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=9] failed to clean backup scheduler leader(ret=-5019)
[2024-02-19 19:03:33.563004] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=44] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:33.563039] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=38] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:33.565070] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1009) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=44] Wash compute wash size(is_wash_valid=true, sys_total_wash_size=1386106880, global_cache_size=4161536, tenant_max_wash_size=4161536, tenant_min_wash_size=4161536, tenant_ids_=[512, 500, 999, 506, 507, 508, 509, 510, 1])
[2024-02-19 19:03:33.565222] INFO [COMMON] wash (ob_kvcache_store.cpp:343) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=40] Wash time detail, (compute_wash_size_time=199, refresh_score_time=103, wash_time=9)
[2024-02-19 19:03:33.574823] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=28] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:33.574874] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=56] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:33.575568] INFO [SQL.PC] update_memory_conf (ob_plan_cache.cpp:1499) [1106739][PlanCacheEvict][T1][Y0-0000000000000000-0-0] [lt=26] update plan cache memory config(ob_plan_cache_percentage=5, ob_plan_cache_evict_high_percentage=90, ob_plan_cache_evict_low_percentage=50, tenant_id=1)
[2024-02-19 19:03:33.575602] INFO [SQL.PC] cache_evict (ob_plan_cache.cpp:1130) [1106739][PlanCacheEvict][T1][Y0-0000000000000000-0-0] [lt=28] start lib cache evict(tenant_id=1, mem_hold=2097152, mem_limit=107374180, cache_obj_num=2, cache_node_num=2)
[2024-02-19 19:03:33.575631] INFO [SQL.PC] cache_evict (ob_plan_cache.cpp:1147) [1106739][PlanCacheEvict][T1][Y0-0000000000000000-0-0] [lt=26] end lib cache evict(tenant_id=1, cache_evict_num=0, mem_hold=2097152, mem_limit=107374180, cache_obj_num=2, cache_node_num=2)
[2024-02-19 19:03:33.577811] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=19] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340613577791})
[2024-02-19 19:03:33.577850] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=39] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340613549536}})
[2024-02-19 19:03:33.578719] WARN [COORDINATOR] get_ls_election_reference_info (ob_leader_coordinator.cpp:138) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC80-0-0] [lt=141] can not find this ls_id in all_ls_election_reference_info_(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1}, all_ls_election_reference_info=[])
[2024-02-19 19:03:33.578742] WARN [COORDINATOR] refresh_ (election_priority_v1.cpp:144) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC80-0-0] [lt=22] fail to get ls election reference info(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, *this={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807})
[2024-02-19 19:03:33.578763] WARN [COORDINATOR] operator() (election_priority_impl.cpp:246) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC80-0-0] [lt=19] refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id_={id:1}, element={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807})
[2024-02-19 19:03:33.578778] WARN iterate (ob_tuple.h:272) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC80-0-0] [lt=13] assign element failed(ret=-4018, std::get(tuple)={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807})
[2024-02-19 19:03:33.578791] WARN [COORDINATOR] refresh (election_priority_impl.cpp:261) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC80-0-0] [lt=12] refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1}, *this={priority:{is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}})
[2024-02-19 19:03:33.579207] INFO [SQL.PC] runTimerTask (ob_plan_cache_manager.cpp:291) [1106739][PlanCacheEvict][T0][Y0-0000000000000000-0-0] [lt=28] schedule next cache evict task(evict_interval=1000000)
[2024-02-19 19:03:33.580707] INFO [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:738) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=20] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] [SELF_CHECK] current server is WRS leader, need start CLUSTER weak read service(tenant_id=1, serve_leader_epoch=0, cur_leader_epoch=138, cluster_service_tablet_id_={id:226}, in_service=false, can_update_version=false, start_service_tstamp_=0, error_count_for_change_leader_=0, last_error_tstamp_for_change_leader_=0)
[2024-02-19 19:03:33.580751] INFO [STORAGE.TRANS] start_service (ob_tenant_weak_read_cluster_service.cpp:336) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=30] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] begin start service(tenant_id=1, is_in_service()=false, can_update_version=false)
[2024-02-19 19:03:33.580763] INFO [STORAGE.TRANS] start_service (ob_tenant_weak_read_cluster_service.cpp:338) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12] start TenantWeakReadClusterService(tenant_id=1)
[2024-02-19 19:03:33.582157] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EE-0-0] [lt=9] table not exist(tenant_id=1, database_id=201001, table_name=__all_weak_read_service, ret=-5019)
[2024-02-19 19:03:33.582184] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EE-0-0] [lt=25] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_weak_read_service, ret=-5019)
[2024-02-19 19:03:33.582196] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EE-0-0] [lt=11] fail to resolve table relation recursively(tenant_id=1, ret=-5019)
[2024-02-19 19:03:33.582209] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EE-0-0] [lt=11] resolve table relation factor failed(ret=-5019, table_name=__all_weak_read_service)
[2024-02-19 19:03:33.582223] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EE-0-0] [lt=9] fail to resolve table(ret=-5019)
[2024-02-19 19:03:33.582233] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EE-0-0] [lt=9] fail to resolve sys view(ret=-5019)
[2024-02-19 19:03:33.582247] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EE-0-0] [lt=8] Table 'oceanbase.__all_weak_read_service' doesn't exist
[2024-02-19 19:03:33.582257] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EE-0-0] [lt=9] resolve base or alias table factor failed(ret=-5019)
[2024-02-19 19:03:33.582266] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EE-0-0] [lt=8] resolve basic table failed(ret=-5019)
[2024-02-19 19:03:33.582276] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EE-0-0] [lt=8] fail to exec resolve_table(*table_node, table_item)(ret=-5019)
[2024-02-19 19:03:33.582285] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EE-0-0] [lt=9] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019)
[2024-02-19 19:03:33.582295] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EE-0-0] [lt=8] resolve normal query failed(ret=-5019)
[2024-02-19 19:03:33.582305] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EE-0-0] [lt=9] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073)
[2024-02-19 19:03:33.582324] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EE-0-0] [lt=9] failed to resolve(ret=-5019)
[2024-02-19 19:03:33.582335] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EE-0-0] [lt=12] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-02-19 19:03:33.582347] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EE-0-0] [lt=9] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-02-19 19:03:33.582358] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EE-0-0] [lt=10] fail to handle physical plan(ret=-5019)
[2024-02-19 19:03:33.582370] WARN [SQL] stmt_query (ob_sql.cpp:175) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EE-0-0] [lt=8] fail to handle text query(stmt=select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = '', ret=-5019)
[2024-02-19 19:03:33.582382] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EE-0-0] [lt=10] executor execute failed(ret=-5019)
[2024-02-19 19:03:33.582393] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EE-0-0] [lt=9] execute failed(ret=-5019, executor={ObIExecutor:, sql:"select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = ''"}, retry_cnt=0)
[2024-02-19 19:03:33.582413] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EE-0-0] [lt=13] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false)
[2024-02-19 19:03:33.582433] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EE-0-0] [lt=17] result set close failed(ret=-5019)
[2024-02-19 19:03:33.582444] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EE-0-0] [lt=9] result set close failed(ret=-5019)
[2024-02-19 19:03:33.582452] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EE-0-0] [lt=8] failed to close result(close_ret=-5019, ret=-5019)
[2024-02-19 19:03:33.582479] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EE-0-0] [lt=9] failed to process record(executor={ObIExecutor:, sql:"select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = ''"}, record_ret=-5019, ret=-5019)
[2024-02-19 19:03:33.582492] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EE-0-0] [lt=11] failed to process final(executor={ObIExecutor:, sql:"select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = ''"}, aret=-5019, ret=-5019)
[2024-02-19 19:03:33.582504] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10] execute sql failed(ret=-5019, tenant_id=1, sql=select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = '')
[2024-02-19 19:03:33.582515] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1)
[2024-02-19 19:03:33.582525] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1)
[2024-02-19 19:03:33.582535] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9] query failed(ret=-5019, conn=0x7fdcf4ef4050, start=1708340613581880, sql=select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = '')
[2024-02-19 19:03:33.582548] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12] read failed(ret=-5019)
[2024-02-19 19:03:33.582560] WARN [STORAGE.TRANS] query_cluster_version_range_ (ob_tenant_weak_read_cluster_service.cpp:192) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9] execute sql read fail(ret=-5019, ret="OB_TABLE_NOT_EXIST", exec_tenant_id=1, tenant_id=1, sql=select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = '')
[2024-02-19 19:03:33.582648] WARN [STORAGE.TRANS] start_service (ob_tenant_weak_read_cluster_service.cpp:367) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13] query cluster version range from WRS table fail(ret=-5019, ret="OB_TABLE_NOT_EXIST")
[2024-02-19 19:03:33.582665] INFO [STORAGE.TRANS] start_service (ob_tenant_weak_read_cluster_service.cpp:415) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] start service done(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, in_service=false, leader_epoch=0, current_version=0, delta=1708340613582659, min_version=0, max_version=0, max_stale_time=5000000000, all_valid_server_count=0, total_time=1925, wlock_time=38, check_leader_time=2, query_version_time=0, persist_version_time=0)
[2024-02-19 19:03:33.582689] WARN [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:781) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=23] start CLUSTER weak read service fail(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1)
[2024-02-19 19:03:33.582704] INFO [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:791) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] [SELF_CHECK] done(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, need_start_service=true, need_stop_service=false, need_change_leader=false, is_in_service()=false, can_update_version=false, cur_leader_epoch=138, start_service_tstamp_=0, error_count_for_change_leader_=0, last_error_tstamp_for_change_leader_=0)
[2024-02-19 19:03:33.582770] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15] get wrs ts(ls_id={id:1}, delta_ns=-1706042771801714198, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807)
[2024-02-19 19:03:33.582785] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1)
[2024-02-19 19:03:33.584401] INFO [SQL.PC] runTimerTask (ob_plan_cache_manager.cpp:299) [1106739][PlanCacheEvict][T0][Y0-0000000000000000-0-0] [lt=20] schedule next cache evict task(evict_interval=1000000)
[2024-02-19 19:03:33.584519] INFO [COMMON] replace_fragment_node (ob_kvcache_map.cpp:695) [1106689][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=74] Cache replace map node details(ret=0, replace_node_count=0, replace_time=26392, replace_start_pos=1179600, replace_num=15728)
[2024-02-19 19:03:33.584981] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=36] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:33.585015] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:33.595147] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=33] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:33.595200] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=58] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:33.605841] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:33.605893] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=53] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:33.616181] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=45] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:33.616240] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=61] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:33.626578] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=36] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:33.626622] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=65] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:33.633757] INFO [COMMON] clean_garbage_node (ob_kvcache_map.cpp:645) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=20] Cache wash clean map node details(ret=0, clean_node_count=0, clean_time=68509, clean_start_pos=880796, clean_num=31457)
[2024-02-19 19:03:33.636849] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:33.636864] INFO [STORAGE] scheduler_ls_ha_handler_ (ob_storage_ha_service.cpp:186) [1108342][T1_HAService][T1][Y0-0000000000000000-0-0] [lt=49] start do ls ha handler(ls_id_array_=[{id:1}])
[2024-02-19 19:03:33.636897] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=51] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:33.647032] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=50] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:33.647077] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=46] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:33.652633] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=14] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0)
[2024-02-19 19:03:33.652667] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=34] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340613652621}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:33.652690] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=21] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340613652621}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:33.658881] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:33.658928] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=49] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:33.664645] WARN [STORAGE.TRANS] acquire_global_snapshot__ (ob_trans_service_v4.cpp:1472) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=15] acquire global snapshot fail(ret=-4012, gts_ahead=0, expire_ts=1708340613664318, now={mts:1708340611735066}, now0={mts:1708340611735066}, snapshot=-1, uncertain_bound=0)
[2024-02-19 19:03:33.664682] WARN [STORAGE.TRANS] get_read_snapshot (ob_tx_api.cpp:552) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=38] acquire global snapshot fail(ret=-4012, tx={this:0x7fdcd5932f30, tx_id:{txid:0}, state:1, addr:"172.1.3.242:2882", tenant_id:1, session_id:1, xid:NULL, access_mode:0, tx_consistency_type:0, isolation:1, snapshot_version:-1, snapshot_scn:0, active_scn:-1, op_sn:1, alloc_ts:1708340611734172, active_ts:-1, commit_ts:-1, finish_ts:-1, timeout_us:-1, lock_timeout_us:-1, expire_ts:9223372036854775807, coord_id:{id:-1}, parts:[], exec_info_reap_ts:0, commit_version:-1, commit_cb:null, cluster_id:-1, cluster_version:0, flags_.SHADOW:false, flags_.INTERRUPTED:false, flags_.BLOCK:false, flags_.REPLICA:false, can_elr:false, cflict_txs:[], abort_cause:0, commit_expire_ts:0, commit_task_.is_registered():false, ref:1})
[2024-02-19 19:03:33.664737] WARN [SQL.EXE] stmt_setup_snapshot_ (ob_sql_trans_control.cpp:614) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=46] fail to get snapshot(ret=-4012, local_ls_id={id:1}, session={this:0x7fdcf4e200c0, id:1, tenant:"sys", tenant_id:1, effective_tenant:"sys", effective_tenant_id:1, database:"oceanbase", user:"root@%", consistency_level:3, session_state:0, tx:0x7fdcd5932f30})
[2024-02-19 19:03:33.664759] WARN [SQL.EXE] start_stmt (ob_sql_trans_control.cpp:481) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=22] fail to exec stmt_setup_snapshot_(session, das_ctx, plan, plan_ctx, txs)(ret=-4012, session_id=1, *tx_desc={this:0x7fdcd5932f30, tx_id:{txid:0}, state:1, addr:"172.1.3.242:2882", tenant_id:1, session_id:1, xid:NULL, access_mode:0, tx_consistency_type:0, isolation:1, snapshot_version:-1, snapshot_scn:0, active_scn:-1, op_sn:1, alloc_ts:1708340611734172, active_ts:-1, commit_ts:-1, finish_ts:-1, timeout_us:-1, lock_timeout_us:-1, expire_ts:9223372036854775807, coord_id:{id:-1}, parts:[], exec_info_reap_ts:0, commit_version:-1, commit_cb:null, cluster_id:-1, cluster_version:0, flags_.SHADOW:false, flags_.INTERRUPTED:false, flags_.BLOCK:false, flags_.REPLICA:false, can_elr:false, cflict_txs:[], abort_cause:0, commit_expire_ts:0, commit_task_.is_registered():false, ref:1})
[2024-02-19 19:03:33.664786] INFO [SQL.EXE] start_stmt (ob_sql_trans_control.cpp:530) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=24] start stmt(ret=-4012, auto_commit=true, session_id=1, snapshot={this:0x7fdce42d3e80, valid:false, source:0, core:{version:-1, tx_id:{txid:0}, scn:-1}, uncertain_bound:0, snapshot_lsid:{id:-1}, parts:[]}, savepoint=0, tx_desc={this:0x7fdcd5932f30, tx_id:{txid:0}, state:1, addr:"172.1.3.242:2882", tenant_id:1, session_id:1, xid:NULL, access_mode:0, tx_consistency_type:0, isolation:1, snapshot_version:-1, snapshot_scn:0, active_scn:-1, op_sn:1, alloc_ts:1708340611734172, active_ts:-1, commit_ts:-1, finish_ts:-1, timeout_us:-1, lock_timeout_us:-1, expire_ts:9223372036854775807, coord_id:{id:-1}, parts:[], exec_info_reap_ts:0, commit_version:-1, commit_cb:null, cluster_id:-1, cluster_version:0, flags_.SHADOW:false, flags_.INTERRUPTED:false, flags_.BLOCK:false, flags_.REPLICA:false, can_elr:false, cflict_txs:[], abort_cause:0, commit_expire_ts:0, commit_task_.is_registered():false, ref:1}, plan_type=1, stmt_type=1, has_for_update=false, query_start_time=1708340611734920, use_das=false, session={this:0x7fdcf4e200c0, id:1, tenant:"sys", tenant_id:1, effective_tenant:"sys", effective_tenant_id:1, database:"oceanbase", user:"root@%", consistency_level:3, session_state:0, tx:0x7fdcd5932f30}, plan=0x7fdcda010050, consistency_level_in_plan_ctx=3, trans_result={incomplete:false, parts:[], touched_ls_list:[], cflict_txs:[]})
[2024-02-19 19:03:33.664837] WARN [SQL] start_stmt (ob_result_set.cpp:282) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=50] fail to start stmt(ret=-4012, phy_plan->get_dependency_table()=[{table_id:1, schema_version:0, object_type:1, is_db_explicit:false, is_existed:true}])
[2024-02-19 19:03:33.664856] WARN [SQL] do_open_plan (ob_result_set.cpp:451) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=15] fail start stmt(ret=-4012)
[2024-02-19 19:03:33.664867] WARN [SQL] open (ob_result_set.cpp:150) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=10] execute plan failed(ret=-4012)
[2024-02-19 19:03:33.664878] WARN [SERVER] open (ob_inner_sql_result.cpp:146) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=9] open result set failed(ret=-4012)
[2024-02-19 19:03:33.664890] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:607) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=10] result set open failed(ret=-4012, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"})
[2024-02-19 19:03:33.664905] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=14] execute failed(ret=-4012, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=0)
[2024-02-19 19:03:33.664921] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=12] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-4012, err_:"OB_TIMEOUT", retry_type:0, client_ret:-4012}, need_retry=false)
[2024-02-19 19:03:33.664976] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=38] result set close failed(ret=-4012)
[2024-02-19 19:03:33.664986] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=9] result set close failed(ret=-4012)
[2024-02-19 19:03:33.664995] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=8] failed to close result(close_ret=-4012, ret=-4012)
[2024-02-19 19:03:33.665024] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=8] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4012, ret=-4012)
[2024-02-19 19:03:33.665036] INFO [SERVER] process_final (ob_inner_sql_connection.cpp:574) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=11] slow inner sql(last_ret=-4012, sql={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, process_time=1930114)
[2024-02-19 19:03:33.665047] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C05-0-0] [lt=10] failed to process final(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, aret=-4012, ret=-4012)
[2024-02-19 19:03:33.665061] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1107631][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=9] execute sql failed(ret=-4012, tenant_id=1, sql=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name)
[2024-02-19 19:03:33.665073] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1107631][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=10] retry_while_no_tenant_resource failed(ret=-4012, tenant_id=1)
[2024-02-19 19:03:33.665083] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1107631][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=10] execute_read failed(ret=-4012, cluster_id=1, tenant_id=1)
[2024-02-19 19:03:33.665094] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1107631][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=7] query failed(ret=-4012, conn=0x7fdcf4e20050, start=1708340611734908, sql=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name)
[2024-02-19 19:03:33.665106] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1107631][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=11] read failed(ret=-4012)
[2024-02-19 19:03:33.665117] WARN [SHARE] load (ob_core_table_proxy.cpp:436) [1107631][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=7] execute sql failed(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, sql=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name)
[2024-02-19 19:03:33.665211] WARN [SHARE] load (ob_core_table_proxy.cpp:368) [1107631][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=12] load failed(ret=-4012, for_update=false)
[2024-02-19 19:03:33.665223] WARN [SHARE] get (ob_global_stat_proxy.cpp:321) [1107631][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=11] core_table load failed(ret=-4012)
[2024-02-19 19:03:33.665232] WARN [SHARE] get_snapshot_gc_scn (ob_global_stat_proxy.cpp:165) [1107631][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=8] get failed(ret=-4012)
[2024-02-19 19:03:33.665242] WARN [STORAGE] get_global_info (ob_tenant_freeze_info_mgr.cpp:721) [1107631][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=8] fail to get global info(ret=-4012, tenant_id=1)
[2024-02-19 19:03:33.665253] WARN [STORAGE] try_update_info (ob_tenant_freeze_info_mgr.cpp:838) [1107631][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=10] failed to get global info(ret=-4012)
[2024-02-19 19:03:33.665263] WARN [STORAGE] runTimerTask (ob_tenant_freeze_info_mgr.cpp:889) [1107631][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=8] fail to try update info(tmp_ret=-4012, tmp_ret="OB_TIMEOUT")
[2024-02-19 19:03:33.665279] WARN run1 (ob_timer.cpp:396) [1107631][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=9] timer task cost too much time(task="tasktype:N9oceanbase7storage21ObTenantFreezeInfoMgr10ReloadTaskE", start_time=1708340611731611, end_time=1708340613665271, elapsed_time=1933660, this=0x7fdd191ad4f0, thread_id=1107631)
[2024-02-19 19:03:33.666809] INFO do_work (ob_rl_mgr.cpp:704) [1106705][rl_mgr0][T0][Y0-0000000000000000-0-0] [lt=31] swc wakeup.(stat_period_=1000000, ready=false)
[2024-02-19 19:03:33.668970] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:589) [1106798][MysqlUnix][T0][Y0-0000000000000000-0-0] [lt=35] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-02-19 19:03:33.670051] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:565) [1106792][BatchIO][T0][Y0-0000000000000000-0-0] [lt=22] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-02-19 19:03:33.670084] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:527) [1106800][RpcUnix][T0][Y0-0000000000000000-0-0] [lt=14] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-02-19 19:03:33.670459] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:589) [1106795][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=19] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/6, request doing=0/0)
[2024-02-19 19:03:33.670496] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:565) [1106791][BatchIO][T0][Y0-0000000000000000-0-0] [lt=16] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-02-19 19:03:33.670516] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:565) [1106793][BatchIO][T0][Y0-0000000000000000-0-0] [lt=11] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-02-19 19:03:33.670640] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:589) [1106796][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=18] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/1, request doing=0/0)
[2024-02-19 19:03:33.670715] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=40] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:33.670750] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=33] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:33.678837] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=16] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340613678815})
[2024-02-19 19:03:33.678877] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=43] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340613652621}})
[2024-02-19 19:03:33.681638] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=41] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:33.681681] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=55] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:33.683419] WARN [STORAGE.TRANS] post_cluster_heartbeat_rpc_ (ob_tenant_weak_read_service.cpp:797) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=18] get cluster service master fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, cluster_service_tablet_id={id:226})
[2024-02-19 19:03:33.683451] WARN [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:438) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=33] tenant weak read service cluster heartbeat RPC fail(rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.1.3.242:2882", cluster_service_tablet_id={id:226})
[2024-02-19 19:03:33.683492] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=19] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340613683471})
[2024-02-19 19:03:33.683519] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:756) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=25] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version=1708336686671726824, valid_part_count=1, total_part_count=1, generate_timestamp=1708340613683397)
[2024-02-19 19:03:33.683543] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:766) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=21] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1708340613480712, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-02-19 19:03:33.683710] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=77] get wrs ts(ls_id={id:1}, delta_ns=-1706042771801611939, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807)
[2024-02-19 19:03:33.683732] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=20] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1)
[2024-02-19 19:03:33.691805] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=33] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:33.691842] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=38] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:33.702013] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=36] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:33.702074] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=66] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:33.712374] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=138] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:33.712733] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=362] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:33.716575] INFO [SERVER] decide_disk_size (ob_server_utils.cpp:202) [1106802][LogLoop][T0][Y0-0000000000000000-0-0] [lt=15] decide disk size finished(dir="/backup/oceanbase/data/sstable", suggested_disk_size=8589934592, suggested_disk_percentage=0, default_disk_percentage=60, total_space=246944890880, free_space=220974178304, disk_size=8589934592)
[2024-02-19 19:03:33.716617] INFO [SERVER] decide_disk_size (ob_server_utils.cpp:202) [1106802][LogLoop][T0][Y0-0000000000000000-0-0] [lt=43] decide disk size finished(dir="/backup/oceanbase/data/clog", suggested_disk_size=8589934592, suggested_disk_percentage=0, default_disk_percentage=30, total_space=246944890880, free_space=220974178304, disk_size=8589934592)
[2024-02-19 19:03:33.716630] INFO [SERVER] cal_all_part_disk_size (ob_server_utils.cpp:164) [1106802][LogLoop][T0][Y0-0000000000000000-0-0] [lt=12] decide_all_disk_size succ(data_dir="/backup/oceanbase/data/sstable", clog_dir="/backup/oceanbase/data/clog", suggested_data_disk_size=8589934592, suggested_data_disk_percentage=0, data_default_disk_percentage=60, clog_default_disk_percentage=30, shared_mode=true, data_disk_size=8589934592, log_disk_size=8589934592)
[2024-02-19 19:03:33.722879] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=39] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:33.722934] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=58] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:33.733734] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:33.733770] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=39] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:33.746093] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=28] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:33.746140] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=49] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:33.747896] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E0-0-0] [lt=9] table not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, ret=-5019)
[2024-02-19 19:03:33.747922] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E0-0-0] [lt=24] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, ret=-5019)
[2024-02-19 19:03:33.747932] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E0-0-0] [lt=8] fail to resolve table relation recursively(tenant_id=1, ret=-5019)
[2024-02-19 19:03:33.747940] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E0-0-0] [lt=9] resolve table relation factor failed(ret=-5019, table_name=__all_ls_meta_table)
[2024-02-19 19:03:33.747951] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E0-0-0] [lt=7] fail to resolve table(ret=-5019)
[2024-02-19 19:03:33.747957] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E0-0-0] [lt=7] fail to resolve sys view(ret=-5019)
[2024-02-19 19:03:33.747969] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E0-0-0] [lt=5] Table 'oceanbase.__all_ls_meta_table' doesn't exist
[2024-02-19 19:03:33.747976] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E0-0-0] [lt=7] resolve base or alias table factor failed(ret=-5019)
[2024-02-19 19:03:33.747983] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E0-0-0] [lt=6] resolve basic table failed(ret=-5019)
[2024-02-19 19:03:33.747990] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E0-0-0] [lt=6] fail to exec resolve_table(*table_node, table_item)(ret=-5019)
[2024-02-19 19:03:33.747996] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E0-0-0] [lt=6] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019)
[2024-02-19 19:03:33.748004] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E0-0-0] [lt=6] resolve normal query failed(ret=-5019)
[2024-02-19 19:03:33.748011] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E0-0-0] [lt=6] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073)
[2024-02-19 19:03:33.748024] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E0-0-0] [lt=7] failed to resolve(ret=-5019)
[2024-02-19 19:03:33.748032] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E0-0-0] [lt=7] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-02-19 19:03:33.748041] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E0-0-0] [lt=7] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-02-19 19:03:33.748048] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E0-0-0] [lt=6] fail to handle physical plan(ret=-5019)
[2024-02-19 19:03:33.748056] WARN [SQL] stmt_query (ob_sql.cpp:175) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E0-0-0] [lt=5] fail to handle text query(stmt=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port, ret=-5019)
[2024-02-19 19:03:33.748064] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E0-0-0] [lt=6] executor execute failed(ret=-5019)
[2024-02-19 19:03:33.748073] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E0-0-0] [lt=8] execute failed(ret=-5019, executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, retry_cnt=0)
[2024-02-19 19:03:33.748091] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E0-0-0] [lt=12] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false)
[2024-02-19 19:03:33.748112] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E0-0-0] [lt=16] result set close failed(ret=-5019)
[2024-02-19 19:03:33.748122] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E0-0-0] [lt=8] result set close failed(ret=-5019)
[2024-02-19 19:03:33.748130] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E0-0-0] [lt=7] failed to close result(close_ret=-5019, ret=-5019)
[2024-02-19 19:03:33.748156] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E0-0-0] [lt=8] failed to process record(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, record_ret=-5019, ret=-5019)
[2024-02-19 19:03:33.748168] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E0-0-0] [lt=11] failed to process final(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, aret=-5019, ret=-5019)
[2024-02-19 19:03:33.748180] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E0-0-0] [lt=10] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port)
[2024-02-19 19:03:33.748193] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E0-0-0] [lt=11] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1)
[2024-02-19 19:03:33.748202] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E0-0-0] [lt=9] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1)
[2024-02-19 19:03:33.748212] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E0-0-0] [lt=9] query failed(ret=-5019, conn=0x7fdcdc89a050, start=1708340613747671, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port)
[2024-02-19 19:03:33.748224] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E0-0-0] [lt=11] read failed(ret=-5019)
[2024-02-19 19:03:33.748234] WARN [SHARE.PT] get_by_tenant (ob_persistent_ls_table.cpp:612) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E0-0-0] [lt=8] execute sql failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port)
[2024-02-19 19:03:33.748330] WARN [SHARE.PT] get_by_tenant (ob_ls_table_operator.cpp:252) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E0-0-0] [lt=11] get all ls info by persistent_ls_ failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1)
[2024-02-19 19:03:33.748343] WARN [SHARE] inner_open_ (ob_ls_table_iterator.cpp:104) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E0-0-0] [lt=12] fail to get ls infos by tenant(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, inner_table_only=true)
[2024-02-19 19:03:33.748353] WARN [SHARE] next (ob_ls_table_iterator.cpp:71) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E0-0-0] [lt=10] fail to open iterator(ret=-5019, ret="OB_TABLE_NOT_EXIST")
[2024-02-19 19:03:33.748363] WARN [SERVER] build_replica_map_ (ob_tenant_meta_checker.cpp:331) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E0-0-0] [lt=8] ls table iterator next failed(ret=-5019, ret="OB_TABLE_NOT_EXIST")
[2024-02-19 19:03:33.748375] WARN [SERVER] check_ls_table_ (ob_tenant_meta_checker.cpp:213) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E0-0-0] [lt=10] build replica map from ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1)
[2024-02-19 19:03:33.748387] WARN [SERVER] check_ls_table (ob_tenant_meta_checker.cpp:193) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E0-0-0] [lt=10] check ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1)
[2024-02-19 19:03:33.748396] WARN [SERVER] runTimerTask (ob_tenant_meta_checker.cpp:43) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E0-0-0] [lt=8] fail to check ls meta table(ret=-5019, ret="OB_TABLE_NOT_EXIST")
[2024-02-19 19:03:33.753251] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=12] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0)
[2024-02-19 19:03:33.753332] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=79] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340613753237}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:33.753362] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=26] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340613753237}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:33.756427] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=29] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:33.756462] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:33.763374] INFO [SERVER.OMT] calibrate_worker_count (ob_tenant.cpp:1397) [1106806][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=22] tenant calibrate worker(id=506, ass_token_cnt=40, new_ass_token_cnt=40)
[2024-02-19 19:03:33.763417] INFO [SERVER.OMT] calibrate_worker_count (ob_tenant.cpp:1397) [1106806][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=39] tenant calibrate worker(id=507, ass_token_cnt=10, new_ass_token_cnt=10)
[2024-02-19 19:03:33.763453] INFO [SERVER.OMT] calibrate_worker_count (ob_tenant.cpp:1397) [1106806][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=8] tenant calibrate worker(id=508, ass_token_cnt=50, new_ass_token_cnt=50)
[2024-02-19 19:03:33.763471] INFO [SERVER.OMT] calibrate_worker_count (ob_tenant.cpp:1397) [1106806][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=7] tenant calibrate worker(id=509, ass_token_cnt=10, new_ass_token_cnt=10)
[2024-02-19 19:03:33.766634] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=26] there is not any block can be recycled, need verify the
baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:33.766670] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=33] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.778847] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=18] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340613778829}) [2024-02-19 19:03:33.778880] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=33] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340613753237}}) [2024-02-19 19:03:33.783580] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=20] get wrs ts(ls_id={id:1}, delta_ns=-1706042771801511831, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807) [2024-02-19 19:03:33.783623] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=46] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1) [2024-02-19 19:03:33.786138] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=27] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:33.786180] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=43] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 
0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.777190] INFO [SERVER.OMT] run1 (ob_multi_tenant.cpp:1945) [1106806][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=14] dump tenant info(tenant={id:1, tenant_meta:{unit:{tenant_id:1, unit_id:1, has_memstore:true, unit_status:"NORMAL", config:{unit_config_id:1, name:"sys_unit_config", resource:{min_cpu:1, max_cpu:1, memory_size:"2GB", log_disk_size:"2GB", min_iops:10000, max_iops:10000, iops_weight:1}}, mode:0, create_timestamp:1700448815550045, is_removed:false}, super_block:{tenant_id:1, replay_start_point:ObLogCursor{file_id=11, log_id=254712, offset=6914}, ls_meta_entry:[89](ver=0,mode=0,seq=15469007), tablet_meta_entry:[92](ver=0,mode=0,seq=15469010), is_hidden:false}, create_status:1}, unit_min_cpu:"1.000000000000000000e+00", unit_max_cpu:"1.000000000000000000e+00", slice:"0.000000000000000000e+00", slice_remain:"0.000000000000000000e+00", token_cnt:10, sug_token_cnt:10, ass_token_cnt:10, lq_tokens:3, used_lq_tokens:0, stopped:false, idle_us:6137462, recv_hp_rpc_cnt:4926, recv_np_rpc_cnt:1950, recv_lp_rpc_cnt:0, recv_mysql_cnt:4, recv_task_cnt:1, recv_large_req_cnt:72, tt_large_quries:0, pop_normal_cnt:2887115, actives:10, workers:10, nesting workers:8, lq waiting workers:0, req_queue:total_size=0 queue[0]=0 queue[1]=0 queue[2]=0 queue[3]=0 queue[4]=0 queue[5]=0 , large queued:0, multi_level_queue:total_size=0 queue[0]=0 queue[1]=0 queue[2]=0 queue[3]=0 queue[4]=0 queue[5]=0 queue[6]=0 queue[7]=0 queue[8]=0 queue[9]=0 , recv_level_rpc_cnt:cnt[0]=0 cnt[1]=0 cnt[2]=0 cnt[3]=0 cnt[4]=0 cnt[5]=3 cnt[6]=0 cnt[7]=0 cnt[8]=0 cnt[9]=0 , group_map:group_id = 2,queue_size = 0,recv_req_cnt = 30198,pop_req_cnt = 30198,token_cnt = 1,min_token_cnt = 1,max_token_cnt = 1,ass_token_cnt = 1 group_id = 1,queue_size = 0,recv_req_cnt = 1,pop_req_cnt = 1,token_cnt = 1,min_token_cnt = 1,max_token_cnt = 1,ass_token_cnt = 1 , rpc_stat_info: pcode=0x150a:cnt=1213 pcode=0x150b:cnt=1181 pcode=0x11b:cnt=257 pcode=0x113:cnt=155}) [2024-02-19 19:03:33.786875] INFO [SERVER.OMT] run1 (ob_multi_tenant.cpp:1945) [1106806][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=9686] dump tenant info(tenant={id:506, tenant_meta:{unit:{tenant_id:506, unit_id:1000, has_memstore:true, unit_status:"NORMAL", config:{unit_config_id:1000, name:"virtual_tenant_unit", resource:{min_cpu:4, max_cpu:4, memory_size:"4GB", log_disk_size:"2GB", min_iops:10000, max_iops:50000, iops_weight:0}}, mode:0, create_timestamp:1708336682994319, is_removed:false}, super_block:{tenant_id:506, replay_start_point:ObLogCursor{file_id=1, log_id=1, offset=0}, ls_meta_entry:[-1](ver=0,mode=0,seq=0), tablet_meta_entry:[-1](ver=0,mode=0,seq=0), is_hidden:true}, create_status:1}, unit_min_cpu:"4.000000000000000000e+00", unit_max_cpu:"4.000000000000000000e+00", slice:"0.000000000000000000e+00", slice_remain:"0.000000000000000000e+00", token_cnt:40, sug_token_cnt:40, ass_token_cnt:40, lq_tokens:12, used_lq_tokens:0, stopped:false, idle_us:15184971, recv_hp_rpc_cnt:0, recv_np_rpc_cnt:0, recv_lp_rpc_cnt:0, recv_mysql_cnt:0, recv_task_cnt:0, recv_large_req_cnt:0, tt_large_quries:0, pop_normal_cnt:14189082, actives:40, workers:40, nesting workers:8, lq waiting workers:0, req_queue:total_size=0 queue[0]=0 queue[1]=0 queue[2]=0 queue[3]=0 queue[4]=0 queue[5]=0 , large queued:0, multi_level_queue:total_size=0 queue[0]=0 queue[1]=0 queue[2]=0 queue[3]=0 queue[4]=0 queue[5]=0 queue[6]=0 queue[7]=0 queue[8]=0 queue[9]=0 , recv_level_rpc_cnt:cnt[0]=0 cnt[1]=0 cnt[2]=0 cnt[3]=0 cnt[4]=0 
cnt[5]=0 cnt[6]=0 cnt[7]=0 cnt[8]=0 cnt[9]=0 , group_map:, rpc_stat_info:}) [2024-02-19 19:03:33.787236] INFO [SERVER.OMT] run1 (ob_multi_tenant.cpp:1945) [1106806][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=361] dump tenant info(tenant={id:507, tenant_meta:{unit:{tenant_id:507, unit_id:1000, has_memstore:true, unit_status:"NORMAL", config:{unit_config_id:1000, name:"virtual_tenant_unit", resource:{min_cpu:1, max_cpu:1, memory_size:"1GB", log_disk_size:"2GB", min_iops:10000, max_iops:50000, iops_weight:0}}, mode:0, create_timestamp:1708336682998600, is_removed:false}, super_block:{tenant_id:507, replay_start_point:ObLogCursor{file_id=1, log_id=1, offset=0}, ls_meta_entry:[-1](ver=0,mode=0,seq=0), tablet_meta_entry:[-1](ver=0,mode=0,seq=0), is_hidden:true}, create_status:1}, unit_min_cpu:"1.000000000000000000e+00", unit_max_cpu:"1.000000000000000000e+00", slice:"0.000000000000000000e+00", slice_remain:"0.000000000000000000e+00", token_cnt:10, sug_token_cnt:10, ass_token_cnt:10, lq_tokens:3, used_lq_tokens:0, stopped:false, idle_us:7007658, recv_hp_rpc_cnt:0, recv_np_rpc_cnt:0, recv_lp_rpc_cnt:0, recv_mysql_cnt:0, recv_task_cnt:0, recv_large_req_cnt:0, tt_large_quries:0, pop_normal_cnt:2986043, actives:10, workers:10, nesting workers:8, lq waiting workers:0, req_queue:total_size=0 queue[0]=0 queue[1]=0 queue[2]=0 queue[3]=0 queue[4]=0 queue[5]=0 , large queued:0, multi_level_queue:total_size=0 queue[0]=0 queue[1]=0 queue[2]=0 queue[3]=0 queue[4]=0 queue[5]=0 queue[6]=0 queue[7]=0 queue[8]=0 queue[9]=0 , recv_level_rpc_cnt:cnt[0]=0 cnt[1]=0 cnt[2]=0 cnt[3]=0 cnt[4]=0 cnt[5]=0 cnt[6]=0 cnt[7]=0 cnt[8]=0 cnt[9]=0 , group_map:, rpc_stat_info:}) [2024-02-19 19:03:33.787771] INFO [SERVER.OMT] run1 (ob_multi_tenant.cpp:1945) [1106806][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=534] dump tenant info(tenant={id:508, tenant_meta:{unit:{tenant_id:508, unit_id:1000, has_memstore:true, unit_status:"NORMAL", config:{unit_config_id:1000, name:"virtual_tenant_unit", resource:{min_cpu:5, max_cpu:5, memory_size:"1GB", log_disk_size:"2GB", min_iops:10000, max_iops:50000, iops_weight:0}}, mode:0, create_timestamp:1708336683007848, is_removed:false}, super_block:{tenant_id:508, replay_start_point:ObLogCursor{file_id=1, log_id=1, offset=0}, ls_meta_entry:[-1](ver=0,mode=0,seq=0), tablet_meta_entry:[-1](ver=0,mode=0,seq=0), is_hidden:true}, create_status:1}, unit_min_cpu:"5.000000000000000000e+00", unit_max_cpu:"5.000000000000000000e+00", slice:"0.000000000000000000e+00", slice_remain:"0.000000000000000000e+00", token_cnt:50, sug_token_cnt:50, ass_token_cnt:50, lq_tokens:15, used_lq_tokens:0, stopped:false, idle_us:22517632, recv_hp_rpc_cnt:0, recv_np_rpc_cnt:0, recv_lp_rpc_cnt:0, recv_mysql_cnt:0, recv_task_cnt:0, recv_large_req_cnt:0, tt_large_quries:0, pop_normal_cnt:17925138, actives:50, workers:50, nesting workers:8, lq waiting workers:0, req_queue:total_size=0 queue[0]=0 queue[1]=0 queue[2]=0 queue[3]=0 queue[4]=0 queue[5]=0 , large queued:0, multi_level_queue:total_size=0 queue[0]=0 queue[1]=0 queue[2]=0 queue[3]=0 queue[4]=0 queue[5]=0 queue[6]=0 queue[7]=0 queue[8]=0 queue[9]=0 , recv_level_rpc_cnt:cnt[0]=0 cnt[1]=0 cnt[2]=0 cnt[3]=0 cnt[4]=0 cnt[5]=0 cnt[6]=0 cnt[7]=0 cnt[8]=0 cnt[9]=0 , group_map:, rpc_stat_info:}) [2024-02-19 19:03:33.788262] INFO [SERVER.OMT] run1 (ob_multi_tenant.cpp:1945) [1106806][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=490] dump tenant info(tenant={id:509, tenant_meta:{unit:{tenant_id:509, unit_id:1000, has_memstore:true, unit_status:"NORMAL", 
config:{unit_config_id:1000, name:"virtual_tenant_unit", resource:{min_cpu:2.5, max_cpu:2.5, memory_size:"1GB", log_disk_size:"2GB", min_iops:10000, max_iops:50000, iops_weight:0}}, mode:0, create_timestamp:1708336683002985, is_removed:false}, super_block:{tenant_id:509, replay_start_point:ObLogCursor{file_id=1, log_id=1, offset=0}, ls_meta_entry:[-1](ver=0,mode=0,seq=0), tablet_meta_entry:[-1](ver=0,mode=0,seq=0), is_hidden:true}, create_status:1}, unit_min_cpu:"2.500000000000000000e+00", unit_max_cpu:"2.500000000000000000e+00", slice:"0.000000000000000000e+00", slice_remain:"0.000000000000000000e+00", token_cnt:10, sug_token_cnt:10, ass_token_cnt:10, lq_tokens:3, used_lq_tokens:0, stopped:false, idle_us:6975890, recv_hp_rpc_cnt:0, recv_np_rpc_cnt:0, recv_lp_rpc_cnt:0, recv_mysql_cnt:0, recv_task_cnt:0, recv_large_req_cnt:0, tt_large_quries:0, pop_normal_cnt:2985510, actives:10, workers:10, nesting workers:8, lq waiting workers:0, req_queue:total_size=0 queue[0]=0 queue[1]=0 queue[2]=0 queue[3]=0 queue[4]=0 queue[5]=0 , large queued:0, multi_level_queue:total_size=0 queue[0]=0 queue[1]=0 queue[2]=0 queue[3]=0 queue[4]=0 queue[5]=0 queue[6]=0 queue[7]=0 queue[8]=0 queue[9]=0 , recv_level_rpc_cnt:cnt[0]=0 cnt[1]=0 cnt[2]=0 cnt[3]=0 cnt[4]=0 cnt[5]=0 cnt[6]=0 cnt[7]=0 cnt[8]=0 cnt[9]=0 , group_map:, rpc_stat_info:}) [2024-02-19 19:03:33.788688] INFO [SERVER.OMT] run1 (ob_multi_tenant.cpp:1945) [1106806][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=425] dump tenant info(tenant={id:510, tenant_meta:{unit:{tenant_id:510, unit_id:1000, has_memstore:true, unit_status:"NORMAL", config:{unit_config_id:1000, name:"virtual_tenant_unit", resource:{min_cpu:1, max_cpu:1, memory_size:"2GB", log_disk_size:"2GB", min_iops:10000, max_iops:50000, iops_weight:0}}, mode:0, create_timestamp:1708336683015518, is_removed:false}, super_block:{tenant_id:510, replay_start_point:ObLogCursor{file_id=1, log_id=1, offset=0}, ls_meta_entry:[-1](ver=0,mode=0,seq=0), tablet_meta_entry:[-1](ver=0,mode=0,seq=0), is_hidden:true}, create_status:1}, unit_min_cpu:"1.000000000000000000e+00", unit_max_cpu:"1.000000000000000000e+00", slice:"0.000000000000000000e+00", slice_remain:"0.000000000000000000e+00", token_cnt:10, sug_token_cnt:10, ass_token_cnt:10, lq_tokens:3, used_lq_tokens:0, stopped:false, idle_us:7013451, recv_hp_rpc_cnt:0, recv_np_rpc_cnt:0, recv_lp_rpc_cnt:0, recv_mysql_cnt:0, recv_task_cnt:0, recv_large_req_cnt:0, tt_large_quries:0, pop_normal_cnt:2985264, actives:10, workers:10, nesting workers:8, lq waiting workers:0, req_queue:total_size=0 queue[0]=0 queue[1]=0 queue[2]=0 queue[3]=0 queue[4]=0 queue[5]=0 , large queued:0, multi_level_queue:total_size=0 queue[0]=0 queue[1]=0 queue[2]=0 queue[3]=0 queue[4]=0 queue[5]=0 queue[6]=0 queue[7]=0 queue[8]=0 queue[9]=0 , recv_level_rpc_cnt:cnt[0]=0 cnt[1]=0 cnt[2]=0 cnt[3]=0 cnt[4]=0 cnt[5]=0 cnt[6]=0 cnt[7]=0 cnt[8]=0 cnt[9]=0 , group_map:, rpc_stat_info:}) [2024-02-19 19:03:33.789182] INFO [SERVER.OMT] run1 (ob_multi_tenant.cpp:1945) [1106806][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=493] dump tenant info(tenant={id:512, tenant_meta:{unit:{tenant_id:512, unit_id:1000, has_memstore:true, unit_status:"NORMAL", config:{unit_config_id:1000, name:"virtual_tenant_unit", resource:{min_cpu:1, max_cpu:1, memory_size:"1GB", log_disk_size:"2GB", min_iops:10000, max_iops:50000, iops_weight:0}}, mode:0, create_timestamp:1708336683025571, is_removed:false}, super_block:{tenant_id:512, replay_start_point:ObLogCursor{file_id=1, log_id=1, offset=0}, 
ls_meta_entry:[-1](ver=0,mode=0,seq=0), tablet_meta_entry:[-1](ver=0,mode=0,seq=0), is_hidden:true}, create_status:1}, unit_min_cpu:"1.000000000000000000e+00", unit_max_cpu:"1.000000000000000000e+00", slice:"0.000000000000000000e+00", slice_remain:"0.000000000000000000e+00", token_cnt:10, sug_token_cnt:10, ass_token_cnt:10, lq_tokens:3, used_lq_tokens:0, stopped:false, idle_us:7035543, recv_hp_rpc_cnt:0, recv_np_rpc_cnt:0, recv_lp_rpc_cnt:0, recv_mysql_cnt:0, recv_task_cnt:0, recv_large_req_cnt:0, tt_large_quries:0, pop_normal_cnt:2985918, actives:10, workers:10, nesting workers:8, lq waiting workers:0, req_queue:total_size=0 queue[0]=0 queue[1]=0 queue[2]=0 queue[3]=0 queue[4]=0 queue[5]=0 , large queued:0, multi_level_queue:total_size=0 queue[0]=0 queue[1]=0 queue[2]=0 queue[3]=0 queue[4]=0 queue[5]=0 queue[6]=0 queue[7]=0 queue[8]=0 queue[9]=0 , recv_level_rpc_cnt:cnt[0]=0 cnt[1]=0 cnt[2]=0 cnt[3]=0 cnt[4]=0 cnt[5]=0 cnt[6]=0 cnt[7]=0 cnt[8]=0 cnt[9]=0 , group_map:, rpc_stat_info:}) [2024-02-19 19:03:33.789602] INFO [SERVER.OMT] run1 (ob_multi_tenant.cpp:1945) [1106806][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=394] dump tenant info(tenant={id:999, tenant_meta:{unit:{tenant_id:999, unit_id:1000, has_memstore:true, unit_status:"NORMAL", config:{unit_config_id:1000, name:"virtual_tenant_unit", resource:{min_cpu:1, max_cpu:1, memory_size:"2GB", log_disk_size:"2GB", min_iops:10000, max_iops:50000, iops_weight:0}}, mode:0, create_timestamp:1708336683020360, is_removed:false}, super_block:{tenant_id:999, replay_start_point:ObLogCursor{file_id=1, log_id=1, offset=0}, ls_meta_entry:[-1](ver=0,mode=0,seq=0), tablet_meta_entry:[-1](ver=0,mode=0,seq=0), is_hidden:true}, create_status:1}, unit_min_cpu:"1.000000000000000000e+00", unit_max_cpu:"1.000000000000000000e+00", slice:"0.000000000000000000e+00", slice_remain:"0.000000000000000000e+00", token_cnt:10, sug_token_cnt:10, ass_token_cnt:10, lq_tokens:3, used_lq_tokens:0, stopped:false, idle_us:7067754, recv_hp_rpc_cnt:0, recv_np_rpc_cnt:0, recv_lp_rpc_cnt:0, recv_mysql_cnt:0, recv_task_cnt:0, recv_large_req_cnt:0, tt_large_quries:0, pop_normal_cnt:2985550, actives:10, workers:10, nesting workers:8, lq waiting workers:0, req_queue:total_size=0 queue[0]=0 queue[1]=0 queue[2]=0 queue[3]=0 queue[4]=0 queue[5]=0 , large queued:0, multi_level_queue:total_size=0 queue[0]=0 queue[1]=0 queue[2]=0 queue[3]=0 queue[4]=0 queue[5]=0 queue[6]=0 queue[7]=0 queue[8]=0 queue[9]=0 , recv_level_rpc_cnt:cnt[0]=0 cnt[1]=0 cnt[2]=0 cnt[3]=0 cnt[4]=0 cnt[5]=0 cnt[6]=0 cnt[7]=0 cnt[8]=0 cnt[9]=0 , group_map:, rpc_stat_info:}) [2024-02-19 19:03:33.796382] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=29] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:33.796424] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=44] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, 
limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.805929] INFO [COMMON] replace_fragment_node (ob_kvcache_map.cpp:695) [1106689][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=28] Cache replace map node details(ret=0, replace_node_count=0, replace_time=21277, replace_start_pos=1195328, replace_num=15728) [2024-02-19 19:03:33.806051] INFO [SERVER.OMT] calibrate_worker_count (ob_tenant.cpp:1397) [1106806][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=6080] tenant calibrate worker(id=510, ass_token_cnt=10, new_ass_token_cnt=10) [2024-02-19 19:03:33.806089] INFO [SERVER.OMT] calibrate_worker_count (ob_tenant.cpp:1397) [1106806][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=30] tenant calibrate worker(id=512, ass_token_cnt=10, new_ass_token_cnt=10) [2024-02-19 19:03:33.806303] INFO [SERVER.OMT] calibrate_worker_count (ob_tenant.cpp:1397) [1106806][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=203] tenant calibrate worker(id=999, ass_token_cnt=10, new_ass_token_cnt=10) [2024-02-19 19:03:33.806556] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=39] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:33.806590] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=33] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.816876] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=41] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:33.816973] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=47] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, 
oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.819044] INFO [STORAGE] runTimerTask (ob_checkpoint_service.cpp:326) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=57] ====== check clog disk timer task ====== [2024-02-19 19:03:33.819077] INFO [PALF] get_disk_usage (palf_env_impl.cpp:820) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=30] get_disk_usage(ret=0, capacity(MB):=2048, used(MB):=1945) [2024-02-19 19:03:33.820995] INFO [STORAGE.TRANS] get_rec_log_ts (ob_trans_ctx_mgr_v4.cpp:1283) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=14] succ to get rec log ts(*this={this:0x7fdce3204030, ls_id:{id:1}, tenant_id:1, state:"F_WORKING", total_tx_ctx_count:0, leader_takeover_ts:{mts:0}, is_leader_serving:false, max_replay_commit_version:1707751112415295196, ls_retain_ctx_mgr:{retain_ctx_list_.size():0}, aggre_rec_log_ts:-1, prev_aggre_rec_log_ts:-1, online_ts:0, uref:1073741825}, aggre_rec_log_ts=9223372036854775807) [2024-02-19 19:03:33.821031] INFO [STORAGE.TRANS] get_rec_log_ts (ob_tx_ctx_memtable.cpp:231) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=37] tx ctx memtable get rec log ts(this={ObITable:{this:0x7fdce5f6e080, key:{tablet_id:{id:49401}, column_group_idx:0, table_type:"TX_CTX_MEMTABLE", log_ts_range:{start_log_ts:1, end_log_ts:1708337131277985}}, ref_cnt:2, upper_trans_version:-4007, timestamp:0}, this:0x7fdce5f6e080, snapshot_version:1708337131277985, ls_id:{id:1}, is_frozen:false}, rec_log_ts=9223372036854775807) [2024-02-19 19:03:33.821066] INFO [STORAGE.TABLELOCK] get_rec_log_ts (ob_lock_memtable.cpp:739) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=25] rec_log_ts of ObLockMemtable is (rec_log_ts_=9223372036854775807, flushed_log_ts_=1707033175148098668, freeze_log_ts_=0, max_committed_log_ts_=-1, is_frozen_=false, ls_id_={id:1}) [2024-02-19 19:03:33.821088] INFO [STORAGE.TRANS] get_rec_log_ts (ob_ls_tx_service.cpp:437) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=19] [CHECKPOINT] ObLSTxService::get_rec_log_ts(common_checkpoint_type="TX_DATA_MEMTABLE_TYPE", common_checkpoints_[min_rec_log_ts_common_checkpoint_type_index]={ObIMemtableMgr:{Memtables:this:0x7fdce89de180, ref_cnt:1, is_inited:true, tablet_id:{id:49402}, freezer:0x7fdce89e30d0, table_type:1, memtable_head:0, memtable_tail:2, t3m:0x7fdd18bce030, tables:[0x7fdce5eea080, 0x7fdce5eea360, null, null, null, null, null, null, null, null, null, null, null, null, null, null]}, is_freezing:false, ls_id:{id:1}, tx_data_table:0x7fdce89e4550, ls_tablet_svr:0x7fdce89de160, slice_allocator:0x7fdce89e4590}, min_rec_log_ts=1707209832548318068, ls_id_={id:1}) [2024-02-19 19:03:33.823471] INFO [STORAGE.TRANS] get_rec_log_ts (ob_id_service.cpp:300) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=33] get rec log ts(service_type_=0, rec_log_ts=9223372036854775807) [2024-02-19 19:03:33.823497] INFO [STORAGE.TRANS] get_rec_log_ts (ob_id_service.cpp:300) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=27] get rec log ts(service_type_=1, rec_log_ts=9223372036854775807) [2024-02-19 19:03:33.823509] INFO [STORAGE.TRANS] get_rec_log_ts (ob_id_service.cpp:300) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=9] get rec log ts(service_type_=2, rec_log_ts=9223372036854775807) [2024-02-19 19:03:33.823525] INFO [STORAGE] update_clog_checkpoint 
(ob_checkpoint_executor.cpp:158) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=8] [CHECKPOINT] clog checkpoint no change(checkpoint_ts=1707209832548318068, checkpoint_ts_in_ls_meta=1707209832548318068, ls_id={id:1}, service_type="TRANS_SERVICE") [2024-02-19 19:03:33.823568] INFO [STORAGE] cannot_recycle_log_over_threshold_ (ob_checkpoint_service.cpp:239) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=34] cannot_recycle_log_size statistics(cannot_recycle_log_size=1905773194, threshold=644245094) [2024-02-19 19:03:33.825256] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A029-0-0] [lt=10] table not exist(tenant_id=1, database_id=201001, table_name=__all_server, ret=-5019) [2024-02-19 19:03:33.825285] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A029-0-0] [lt=28] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_server, ret=-5019) [2024-02-19 19:03:33.825296] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A029-0-0] [lt=9] fail to resolve table relation recursively(tenant_id=1, ret=-5019) [2024-02-19 19:03:33.825307] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A029-0-0] [lt=9] resolve table relation factor failed(ret=-5019, table_name=__all_server) [2024-02-19 19:03:33.825319] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A029-0-0] [lt=8] fail to resolve table(ret=-5019) [2024-02-19 19:03:33.825344] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A029-0-0] [lt=24] fail to resolve sys view(ret=-5019) [2024-02-19 19:03:33.825356] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A029-0-0] [lt=8] Table 'oceanbase.__all_server' doesn't exist [2024-02-19 19:03:33.825365] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A029-0-0] [lt=7] resolve base or alias table factor failed(ret=-5019) [2024-02-19 19:03:33.825373] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A029-0-0] [lt=7] resolve basic table failed(ret=-5019) [2024-02-19 19:03:33.825382] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A029-0-0] [lt=8] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-02-19 19:03:33.825390] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A029-0-0] [lt=7] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-02-19 19:03:33.825411] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A029-0-0] [lt=20] resolve normal query failed(ret=-5019) [2024-02-19 19:03:33.825420] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A029-0-0] [lt=8] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073) [2024-02-19 19:03:33.825438] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A029-0-0] [lt=9] failed to 
resolve(ret=-5019) [2024-02-19 19:03:33.825457] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A029-0-0] [lt=9] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:33.825484] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A029-0-0] [lt=34] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:33.825494] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A029-0-0] [lt=8] fail to handle physical plan(ret=-5019) [2024-02-19 19:03:33.825504] WARN [SQL] stmt_query (ob_sql.cpp:175) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A029-0-0] [lt=7] fail to handle text query(stmt=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882, ret=-5019) [2024-02-19 19:03:33.825514] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A029-0-0] [lt=8] executor execute failed(ret=-5019) [2024-02-19 19:03:33.825524] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A029-0-0] [lt=8] execute failed(ret=-5019, executor={ObIExecutor:, sql:"SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882"}, retry_cnt=0) [2024-02-19 19:03:33.825526] INFO [PALF] locate_by_lsn_coarsely (palf_handle_impl.cpp:1605) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=18] locate_by_lsn_coarsely(ret=0, ret="OB_SUCCESS", this={palf_id:1, self:"172.1.3.242:2882", has_set_deleted:false}, lsn={lsn:24563027948}, committed_lsn={lsn:25325337226}, result_ts_ns=1707530339417374084) [2024-02-19 19:03:33.825542] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A029-0-0] [lt=12] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-02-19 19:03:33.825559] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A029-0-0] [lt=14] result set close failed(ret=-5019) [2024-02-19 19:03:33.825555] INFO [STORAGE] advance_checkpoint_by_flush (ob_checkpoint_executor.cpp:226) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=28] advance checkpoint by flush to avoid clog disk full(recycle_ts=1707530339417374084, end_lsn={lsn:25325337226}, clog_checkpoint_lsn={lsn:23419564032}, calcu_recycle_lsn={lsn:24563027948}, ls_->get_ls_id()={id:1}) [2024-02-19 19:03:33.825568] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A029-0-0] [lt=8] result set close failed(ret=-5019) [2024-02-19 19:03:33.825576] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A029-0-0] [lt=7] failed to close result(close_ret=-5019, ret=-5019) [2024-02-19 19:03:33.825579] INFO [STORAGE] advance_checkpoint_by_flush (ob_checkpoint_executor.cpp:244) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=17] start flush(recycle_ts=1707530339417374084, ls_->get_clog_checkpoint_ts()=1707209832548318068, ls_->get_ls_id()={id:1}) [2024-02-19 19:03:33.825601] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A029-0-0] [lt=7] failed to process record(executor={ObIExecutor:, sql:"SELECT zone FROM 
__all_server where svr_ip='172.1.3.242' and svr_port=2882"}, record_ret=-5019, ret=-5019) [2024-02-19 19:03:33.825612] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A029-0-0] [lt=9] failed to process final(executor={ObIExecutor:, sql:"SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882"}, aret=-5019, ret=-5019) [2024-02-19 19:03:33.825623] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=9] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882) [2024-02-19 19:03:33.825633] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=8] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-02-19 19:03:33.825641] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=7] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1) [2024-02-19 19:03:33.825651] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=8] query failed(ret=-5019, conn=0x7fdcdc9be050, start=1708340613824867, sql=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882) [2024-02-19 19:03:33.825678] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=11] read failed(ret=-5019) [2024-02-19 19:03:33.825690] WARN get_my_sql_result_ (ob_table_access_helper.h:329) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=23] GCTX.sql_proxy_ read failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", MTL_ID()=1, tenant_id=1, columns=0x7fdcfffccd78, table=__all_server, condition=where svr_ip='172.1.3.242' and svr_port=2882, sql=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882, columns_str="zone") [2024-02-19 19:03:33.825708] WARN read_single_row (ob_table_access_helper.h:178) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=14] get mysql result failed(ret=-4016, ret="OB_ERR_UNEXPECTED", MTL_ID()=1, columns=0x7fdcfffccd78, table=__all_server, where_condition=where svr_ip='172.1.3.242' and svr_port=2882) [2024-02-19 19:03:33.825818] WARN [COORDINATOR] get_self_zone_name (table_accessor.cpp:517) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=9] get zone from __all_server failed(ret=-4016, ret="OB_ERR_UNEXPECTED", columns=0x7fdcfffccd78, where_condition="where svr_ip='172.1.3.242' and svr_port=2882", zone_name_holder=) [2024-02-19 19:03:33.825831] WARN [COORDINATOR] get_all_ls_election_reference_info (table_accessor.cpp:450) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=12] get self zone name failed(ret=-4016, ret="OB_ERR_UNEXPECTED", all_ls_election_reference_info=[]) [2024-02-19 19:03:33.825854] WARN [COORDINATOR] get_all_ls_election_reference_info (table_accessor.cpp:459) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=22] zone name is empty(ret=-4016, ret="OB_ERR_UNEXPECTED", all_ls_election_reference_info=[]) [2024-02-19 19:03:33.825864] WARN [COORDINATOR] refresh (ob_leader_coordinator.cpp:107) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=7] get all ls election reference info failed(ret=-4016, ret="OB_ERR_UNEXPECTED", MTL_ID()=1) [2024-02-19 19:03:33.827067] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=40] there is not any block can be recycled, need 
verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:33.827093] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=26] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.827900] INFO [STORAGE.TRANS] get_rec_log_ts (ob_trans_ctx_mgr_v4.cpp:1283) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=11] succ to get rec log ts(*this={this:0x7fdce3204030, ls_id:{id:1}, tenant_id:1, state:"F_WORKING", total_tx_ctx_count:0, leader_takeover_ts:{mts:0}, is_leader_serving:false, max_replay_commit_version:1707751112415295196, ls_retain_ctx_mgr:{retain_ctx_list_.size():0}, aggre_rec_log_ts:-1, prev_aggre_rec_log_ts:-1, online_ts:0, uref:1073741825}, aggre_rec_log_ts=9223372036854775807) [2024-02-19 19:03:33.827929] INFO [STORAGE.TRANS] get_rec_log_ts (ob_tx_ctx_memtable.cpp:231) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=29] tx ctx memtable get rec log ts(this={ObITable:{this:0x7fdce5f6e080, key:{tablet_id:{id:49401}, column_group_idx:0, table_type:"TX_CTX_MEMTABLE", log_ts_range:{start_log_ts:1, end_log_ts:1708337131277985}}, ref_cnt:2, upper_trans_version:-4007, timestamp:0}, this:0x7fdce5f6e080, snapshot_version:1708337131277985, ls_id:{id:1}, is_frozen:false}, rec_log_ts=9223372036854775807) [2024-02-19 19:03:33.827961] INFO [STORAGE] freeze (ob_tx_data_memtable_mgr.cpp:192) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=25] start freeze tx data memtable(ls_id_={id:1}) [2024-02-19 19:03:33.827973] INFO [STORAGE] freeze_ (ob_tx_data_memtable_mgr.cpp:228) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=10] There is a freezed memetable existed. 
Try freeze after flushing it.(ret=-4023, ret="OB_EAGAIN", get_memtable_count_()=2) [2024-02-19 19:03:33.827985] WARN [STORAGE] freeze (ob_tx_data_memtable_mgr.cpp:206) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=10] freeze tx data memtable fail.(ret=-4023, ret="OB_EAGAIN") [2024-02-19 19:03:33.827994] WARN [STORAGE] flush (ob_tx_data_memtable_mgr.cpp:430) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=8] freeze failed(ret=-4023, ret="OB_EAGAIN", this=0x7fdce89de180) [2024-02-19 19:03:33.828005] WARN [STORAGE.TRANS] flush (ob_ls_tx_service.cpp:451) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=10] obCommonCheckpoint flush failed(tmp_ret=-4023, common_checkpoints_[i]=0x7fdce89de250) [2024-02-19 19:03:33.828019] INFO [STORAGE.TABLELOCK] get_rec_log_ts (ob_lock_memtable.cpp:739) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=10] rec_log_ts of ObLockMemtable is (rec_log_ts_=9223372036854775807, flushed_log_ts_=1707033175148098668, freeze_log_ts_=0, max_committed_log_ts_=-1, is_frozen_=false, ls_id_={id:1}) [2024-02-19 19:03:33.829047] WARN [COORDINATOR] get_ls_election_reference_info (ob_leader_coordinator.cpp:138) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC81-0-0] [lt=108] can not find this ls_id in all_ls_election_reference_info_(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1}, all_ls_election_reference_info=[]) [2024-02-19 19:03:33.829069] WARN [COORDINATOR] refresh_ (election_priority_v1.cpp:144) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC81-0-0] [lt=22] fail to get ls election reference info(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, *this={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}) [2024-02-19 19:03:33.829085] WARN [COORDINATOR] operator() (election_priority_impl.cpp:246) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC81-0-0] [lt=14] refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id_={id:1}, element={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}) [2024-02-19 19:03:33.829104] WARN iterate (ob_tuple.h:272) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC81-0-0] [lt=16] assign element failed(ret=-4018, std::get(tuple)={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}) [2024-02-19 19:03:33.829123] WARN [COORDINATOR] refresh (election_priority_impl.cpp:261) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC81-0-0] [lt=18] refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1}, *this={priority:{is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}}) [2024-02-19 19:03:33.834341] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1009) 
[1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=41] Wash compute wash size(is_wash_valid=true, sys_total_wash_size=1386106880, global_cache_size=4161536, tenant_max_wash_size=4161536, tenant_min_wash_size=4161536, tenant_ids_=[512, 500, 999, 506, 507, 508, 509, 510, 1]) [2024-02-19 19:03:33.834451] INFO [COMMON] wash (ob_kvcache_store.cpp:343) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=37] Wash time detail, (compute_wash_size_time=145, refresh_score_time=67, wash_time=7) [2024-02-19 19:03:33.837398] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=31] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:33.837437] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=39] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.848844] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:33.848885] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=44] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.854247] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=14] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:33.854285] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=39] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340613854235}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:33.854308] WARN 
[STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=21] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340613854235}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:33.860159] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:33.860204] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=47] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.870455] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:33.870499] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=47] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:33.873252] INFO [COMMON] clean_garbage_node (ob_kvcache_map.cpp:645) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=14] Cache wash clean map node details(ret=0, clean_node_count=0, clean_time=38780, clean_start_pos=912253, clean_num=31457) [2024-02-19 19:03:33.879040] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=18] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340613879021}) [2024-02-19 19:03:33.879079] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=41] gts nonblock renew success(ret=0, 
tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340613854235}})
[2024-02-19 19:03:33.879139] WARN [SHARE.LOCATION] batch_process_tasks (ob_ls_location_service.cpp:485) [1106741][SysLocAsyncUp0][T0][YB42AC0103F2-000611B9212AA0DD-0-0] [lt=31] tenant schema is not ready, need wait(ret=0, ret="OB_SUCCESS", superior_tenant_id=1, tasks=[{cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340613879021}])
[2024-02-19 19:03:33.880662] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:33.880693] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=32] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:33.883905] WARN [STORAGE.TRANS] post_cluster_heartbeat_rpc_ (ob_tenant_weak_read_service.cpp:797) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=24] get cluster service master fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, cluster_service_tablet_id={id:226})
[2024-02-19 19:03:33.883936] WARN [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:438) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=33] tenant weak read service cluster heartbeat RPC fail(rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.1.3.242:2882", cluster_service_tablet_id={id:226})
[2024-02-19 19:03:33.883957] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:756) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=18] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version=1708336686671726824, valid_part_count=1, total_part_count=1, generate_timestamp=1708340613883886)
[2024-02-19 19:03:33.883971] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:766) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1708340613683619, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-02-19 19:03:33.884045] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12] get wrs ts(ls_id={id:1}, delta_ns=-1706042771801411895, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807)
[2024-02-19 19:03:33.884061] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=1, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1)
[2024-02-19 19:03:33.885241] INFO [COMMON] print_io_status (ob_io_struct.cpp:619) [1106661][IO_TUNING0][T0][Y0-0000000000000000-0-0] [lt=23] [IO STATUS](tenant_ids=[1, 500], send_thread_count=2, send_queues=[0, 0])
[2024-02-19 19:03:33.890928] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=31] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:33.890964] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:33.901982] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:33.902036] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=56] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:33.918000] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=44] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:33.918044] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=75] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:33.928204] WARN [SERVER] batch_process_tasks (ob_ls_table_updater.cpp:333) [1106713][LSMetaTblUp0][T0][YB42AC0103F2-000611B9217D2E88-0-0] [lt=42] tenant schema is not ready, need wait(ret=-4076, ret="OB_NEED_WAIT", superior_tenant_id=1, task={tenant_id:1, ls_id:{id:1}, add_timestamp:1708337390831403})
[2024-02-19 19:03:33.928206] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=54] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:33.928242] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:33.938452] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:33.938507] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=58] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:33.940474] WARN [SERVER] get_network_speed_from_sysfs (ob_server.cpp:2113) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=16] get invalid Ethernet speed, use default(devname="ens18")
[2024-02-19 19:03:33.940501] WARN [SERVER] runTimerTask (ob_server.cpp:2632) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=28] ObRefreshNetworkSpeedTask reload bandwidth throttle limit failed(ret=-4002, ret="OB_INVALID_ARGUMENT")
[2024-02-19 19:03:33.948754] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:33.948811] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=86] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:33.957329] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=12] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0)
[2024-02-19 19:03:33.957368] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=52] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340613957303}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:33.957408] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=36] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340613957303}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:33.959442] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:33.959502] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=47] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:33.957426] WARN [STORAGE.TRANS] operator() (ob_ts_mgr.h:225) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=13] refresh gts failed(ret=-4038, ret="OB_NOT_MASTER", gts_tenant_info={v:1})
[2024-02-19 19:03:33.960068] INFO [STORAGE.TRANS] operator() (ob_ts_mgr.h:229) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=2638] refresh gts functor(ret=-4038, ret="OB_NOT_MASTER", gts_tenant_info={v:1})
[2024-02-19 19:03:33.970049] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=53] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:33.970157] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=108] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:33.979063] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=19] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340613979042})
[2024-02-19 19:03:33.979106] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=46] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340613957303}})
[2024-02-19 19:03:33.980674] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=46] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:33.980702] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=30] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:33.984044] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=20] get wrs ts(ls_id={id:1}, delta_ns=-1706042771801311396, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807)
[2024-02-19 19:03:33.984079] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=37] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1)
[2024-02-19 19:03:33.991868] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=27] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:33.991926] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=63] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:34.002136] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=41] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:34.002176] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=42] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:34.012455] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:34.012501] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=47] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:34.023194] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=32] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:34.023240] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=49] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:34.025800] INFO [COMMON] replace_fragment_node (ob_kvcache_map.cpp:695) [1106689][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=46] Cache replace map node details(ret=0, replace_node_count=0, replace_time=18559, replace_start_pos=1211056, replace_num=15728)
[2024-02-19 19:03:34.033390] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:34.033441] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=53] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:34.043658] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=63] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:34.043718] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=64] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:34.053885] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=56] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:34.053933] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=51] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:34.060721] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=21] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0)
[2024-02-19 19:03:34.060760] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=40] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340614060706}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:34.060793] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=25] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340614060706}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:34.064109] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=80] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:34.064145] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=39] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:34.073955] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1009) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=30] Wash compute wash size(is_wash_valid=true, sys_total_wash_size=1386106880, global_cache_size=4161536, tenant_max_wash_size=4161536, tenant_min_wash_size=4161536, tenant_ids_=[512, 500, 999, 506, 507, 508, 509, 510, 1])
[2024-02-19 19:03:34.074181] INFO [COMMON] wash (ob_kvcache_store.cpp:343) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=154] Wash time detail, (compute_wash_size_time=302, refresh_score_time=66, wash_time=7)
[2024-02-19 19:03:34.074812] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:34.074854] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=41] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:34.079302] WARN [COORDINATOR] get_ls_election_reference_info (ob_leader_coordinator.cpp:138) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC82-0-0] [lt=89] can not find this ls_id in all_ls_election_reference_info_(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1}, all_ls_election_reference_info=[])
[2024-02-19 19:03:34.079343] WARN [COORDINATOR] refresh_ (election_priority_v1.cpp:144) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC82-0-0] [lt=42] fail to get ls election reference info(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, *this={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807})
[2024-02-19 19:03:34.079348] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=19] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340614079324})
[2024-02-19 19:03:34.079370] WARN [COORDINATOR] operator() (election_priority_impl.cpp:246) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC82-0-0] [lt=26] refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id_={id:1}, element={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807})
[2024-02-19 19:03:34.079383] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=35] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340614060706}})
[2024-02-19 19:03:34.079390] WARN iterate (ob_tuple.h:272) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC82-0-0] [lt=16] assign element failed(ret=-4018, std::get(tuple)={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807})
[2024-02-19 19:03:34.079419] WARN [COORDINATOR] refresh (election_priority_impl.cpp:261) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC82-0-0] [lt=29] refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1}, *this={priority:{is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}})
[2024-02-19 19:03:34.084278] WARN [STORAGE.TRANS] post_cluster_heartbeat_rpc_ (ob_tenant_weak_read_service.cpp:797) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=29] get cluster service master fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, cluster_service_tablet_id={id:226})
[2024-02-19 19:03:34.084313] WARN [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:438) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=36] tenant weak read service cluster heartbeat RPC fail(rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.1.3.242:2882", cluster_service_tablet_id={id:226})
[2024-02-19 19:03:34.084332] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:756) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version=1708336686671726824, valid_part_count=1, total_part_count=1, generate_timestamp=1708340614084257)
[2024-02-19 19:03:34.084346] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:766) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1708340613883983, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-02-19 19:03:34.084375] INFO [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:738) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] [SELF_CHECK] current server is WRS leader, need start CLUSTER weak read service(tenant_id=1, serve_leader_epoch=0, cur_leader_epoch=138, cluster_service_tablet_id_={id:226}, in_service=false, can_update_version=false, start_service_tstamp_=0, error_count_for_change_leader_=0, last_error_tstamp_for_change_leader_=0)
[2024-02-19 19:03:34.084414] INFO [STORAGE.TRANS] start_service (ob_tenant_weak_read_cluster_service.cpp:336) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=19] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] begin start service(tenant_id=1, is_in_service()=false, can_update_version=false)
[2024-02-19 19:03:34.084427] INFO [STORAGE.TRANS] start_service (ob_tenant_weak_read_cluster_service.cpp:338) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11] start TenantWeakReadClusterService(tenant_id=1)
[2024-02-19 19:03:34.085022] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:34.085050] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=29] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:34.085545] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EF-0-0] [lt=8] table not exist(tenant_id=1, database_id=201001, table_name=__all_weak_read_service, ret=-5019)
[2024-02-19 19:03:34.085577] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EF-0-0] [lt=29] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_weak_read_service, ret=-5019)
[2024-02-19 19:03:34.085590] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EF-0-0] [lt=12] fail to resolve table relation recursively(tenant_id=1, ret=-5019)
[2024-02-19 19:03:34.085602] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EF-0-0] [lt=11] resolve table relation factor failed(ret=-5019, table_name=__all_weak_read_service)
[2024-02-19 19:03:34.085622] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EF-0-0] [lt=14] fail to resolve table(ret=-5019)
[2024-02-19 19:03:34.085637] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EF-0-0] [lt=14] fail to resolve sys view(ret=-5019)
[2024-02-19 19:03:34.085655] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EF-0-0] [lt=11] Table 'oceanbase.__all_weak_read_service' doesn't exist
[2024-02-19 19:03:34.085670] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EF-0-0] [lt=13] resolve base or alias table factor failed(ret=-5019)
[2024-02-19 19:03:34.085683] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EF-0-0] [lt=12] resolve basic table failed(ret=-5019)
[2024-02-19 19:03:34.085696] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EF-0-0] [lt=12] fail to exec resolve_table(*table_node, table_item)(ret=-5019)
[2024-02-19 19:03:34.085708] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EF-0-0] [lt=10] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019)
[2024-02-19 19:03:34.085721] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EF-0-0] [lt=11] resolve normal query failed(ret=-5019)
[2024-02-19 19:03:34.085734] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EF-0-0] [lt=12] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073)
[2024-02-19 19:03:34.085778] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EF-0-0] [lt=32] failed to resolve(ret=-5019)
[2024-02-19 19:03:34.085793] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EF-0-0] [lt=15] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-02-19 19:03:34.085816] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EF-0-0] [lt=19] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-02-19 19:03:34.085829] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EF-0-0] [lt=11] fail to handle physical plan(ret=-5019)
[2024-02-19 19:03:34.085843] WARN [SQL] stmt_query (ob_sql.cpp:175) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EF-0-0] [lt=11] fail to handle text query(stmt=select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = '', ret=-5019)
[2024-02-19 19:03:34.085857] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EF-0-0] [lt=12] executor execute failed(ret=-5019)
[2024-02-19 19:03:34.085869] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EF-0-0] [lt=10] execute failed(ret=-5019, executor={ObIExecutor:, sql:"select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = ''"}, retry_cnt=0)
[2024-02-19 19:03:34.085897] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EF-0-0] [lt=18] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false)
[2024-02-19 19:03:34.085921] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EF-0-0] [lt=21] result set close failed(ret=-5019)
[2024-02-19 19:03:34.085936] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EF-0-0] [lt=13] result set close failed(ret=-5019)
[2024-02-19 19:03:34.085948] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EF-0-0] [lt=11] failed to close result(close_ret=-5019, ret=-5019)
[2024-02-19 19:03:34.086018] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EF-0-0] [lt=12] failed to process record(executor={ObIExecutor:, sql:"select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = ''"}, record_ret=-5019, ret=-5019)
[2024-02-19 19:03:34.086036] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797EF-0-0] [lt=17] failed to process final(executor={ObIExecutor:, sql:"select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = ''"}, aret=-5019, ret=-5019)
[2024-02-19 19:03:34.086052] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13] execute sql failed(ret=-5019, tenant_id=1, sql=select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = '')
[2024-02-19 19:03:34.086068] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1)
[2024-02-19 19:03:34.086081] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1)
[2024-02-19 19:03:34.086092] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9] query failed(ret=-5019, conn=0x7fdcd7d06050, start=1708340614085310, sql=select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = '')
[2024-02-19 19:03:34.086109] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16] read failed(ret=-5019)
[2024-02-19 19:03:34.086122] WARN [STORAGE.TRANS] query_cluster_version_range_ (ob_tenant_weak_read_cluster_service.cpp:192) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9] execute sql read fail(ret=-5019, ret="OB_TABLE_NOT_EXIST", exec_tenant_id=1, tenant_id=1, sql=select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = '')
[2024-02-19 19:03:34.086214] WARN [STORAGE.TRANS] start_service (ob_tenant_weak_read_cluster_service.cpp:367) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17] query cluster version range from WRS table fail(ret=-5019, ret="OB_TABLE_NOT_EXIST")
[2024-02-19 19:03:34.086262] INFO [STORAGE.TRANS] start_service (ob_tenant_weak_read_cluster_service.cpp:415) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=45] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] start service done(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, in_service=false, leader_epoch=0, current_version=0, delta=1708340614086258, min_version=0, max_version=0, max_stale_time=5000000000, all_valid_server_count=0, total_time=1865, wlock_time=41, check_leader_time=2, query_version_time=0, persist_version_time=0)
[2024-02-19 19:03:34.086286] WARN [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:781) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=23] start CLUSTER weak read service fail(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1)
[2024-02-19 19:03:34.086302] INFO [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:791) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] [SELF_CHECK] done(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, need_start_service=true, need_stop_service=false, need_change_leader=false, is_in_service()=false, can_update_version=false, cur_leader_epoch=138, start_service_tstamp_=0, error_count_for_change_leader_=0, last_error_tstamp_for_change_leader_=0)
[2024-02-19 19:03:34.086371] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16] get wrs ts(ls_id={id:1}, delta_ns=-1706042771801210033, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807)
[2024-02-19 19:03:34.086389] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1)
[2024-02-19 19:03:34.095245] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=26] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:34.095281] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=39] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:34.105489] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=26] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:34.105533] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=48] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:34.115728] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=28] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:34.115776] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=50] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:34.119375] INFO [SERVER] runTimerTask (ob_eliminate_task.cpp:199) [1107573][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=45] sql audit evict task end(evict_high_mem_level=32212254, evict_high_size_level=90000, evict_batch_count=0, elapse_time=2, size_used=14914, mem_used=31196160)
[2024-02-19 19:03:34.126907] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=41] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:34.126950] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=46] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:34.132473] INFO [COMMON] clean_garbage_node (ob_kvcache_map.cpp:645) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=44] Cache wash clean map node details(ret=0, clean_node_count=0, clean_time=58239, clean_start_pos=943710, clean_num=31457)
[2024-02-19 19:03:34.137574] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=32] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:34.137624] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=52] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:34.148559] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:34.148606] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=49] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:34.158748] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:34.158811] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=65] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:34.161404] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=15] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0)
[2024-02-19 19:03:34.161430] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=26] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340614161377}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:34.161460] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=17] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340614161377}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:34.169246] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:34.169294] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=49] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:34.179430] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=36] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:34.179488] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=61] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:34.179611] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=19] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340614179597})
[2024-02-19 19:03:34.179634] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=23] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340614161377}})
[2024-02-19 19:03:34.184372] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=18] get wrs ts(ls_id={id:1}, delta_ns=-1706042771801111351, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807)
[2024-02-19 19:03:34.184403] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=31] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1)
[2024-02-19 19:03:34.189613] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=36] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:34.189716] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=106] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:34.199893] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=30] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:34.199936] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=54] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:34.211152] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=48] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:34.211193] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=43] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:34.212607] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D8-0-0] [lt=6] table not exist(tenant_id=1, database_id=201001, table_name=__all_sys_parameter, ret=-5019)
[2024-02-19 19:03:34.212634] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D8-0-0] [lt=25] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_sys_parameter, ret=-5019)
[2024-02-19 19:03:34.212647] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D8-0-0] [lt=11] fail to resolve table relation recursively(tenant_id=1, ret=-5019)
[2024-02-19 19:03:34.212658] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D8-0-0] [lt=9] resolve table relation factor failed(ret=-5019, table_name=__all_sys_parameter)
[2024-02-19 19:03:34.212671] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D8-0-0] [lt=8] fail to resolve table(ret=-5019)
[2024-02-19 19:03:34.212682] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D8-0-0] [lt=11] fail to resolve sys view(ret=-5019)
[2024-02-19 19:03:34.212697] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D8-0-0] [lt=8] Table 'oceanbase.__all_sys_parameter' doesn't exist
[2024-02-19 19:03:34.212707] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D8-0-0] [lt=8] resolve base or alias table factor failed(ret=-5019)
[2024-02-19 19:03:34.212716] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D8-0-0] [lt=8] resolve basic table failed(ret=-5019)
[2024-02-19 19:03:34.212726] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D8-0-0] [lt=9] fail to exec resolve_table(*table_node, table_item)(ret=-5019)
[2024-02-19 19:03:34.212736] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D8-0-0] [lt=9] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019)
[2024-02-19 19:03:34.212749] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D8-0-0] [lt=11] resolve normal query failed(ret=-5019)
[2024-02-19 19:03:34.212760] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D8-0-0] [lt=9] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073)
[2024-02-19 19:03:34.212786] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D8-0-0] [lt=15] failed to resolve(ret=-5019)
[2024-02-19 19:03:34.212799] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D8-0-0] [lt=12] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-02-19 19:03:34.212815] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D8-0-0] [lt=12] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-02-19 19:03:34.212825] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D8-0-0] [lt=8] fail to handle physical plan(ret=-5019)
[2024-02-19 19:03:34.212836] WARN [SQL] stmt_query (ob_sql.cpp:175) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D8-0-0] [lt=8] fail to handle text query(stmt=select config_version, zone, svr_type, svr_ip, svr_port, name, data_type, value, info, section, scope, source, edit_level from __all_sys_parameter, ret=-5019)
[2024-02-19 19:03:34.212854] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D8-0-0] [lt=15] executor execute failed(ret=-5019)
[2024-02-19 19:03:34.212865] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D8-0-0] [lt=10] execute failed(ret=-5019, executor={ObIExecutor:, sql:"select config_version, zone, svr_type, svr_ip, svr_port, name, data_type, value,
info, section, scope, source, edit_level from __all_sys_parameter"}, retry_cnt=0) [2024-02-19 19:03:34.212886] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D8-0-0] [lt=13] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-02-19 19:03:34.212904] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D8-0-0] [lt=14] result set close failed(ret=-5019) [2024-02-19 19:03:34.212914] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D8-0-0] [lt=10] result set close failed(ret=-5019) [2024-02-19 19:03:34.212942] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D8-0-0] [lt=8] failed to close result(close_ret=-5019, ret=-5019) [2024-02-19 19:03:34.212969] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D8-0-0] [lt=29] failed to process record(executor={ObIExecutor:, sql:"select config_version, zone, svr_type, svr_ip, svr_port, name, data_type, value, info, section, scope, source, edit_level from __all_sys_parameter"}, record_ret=-5019, ret=-5019) [2024-02-19 19:03:34.212984] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1106658][ConfigMgr][T0][YB42AC0103F2-000611B922D790D8-0-0] [lt=12] failed to process final(executor={ObIExecutor:, sql:"select config_version, zone, svr_type, svr_ip, svr_port, name, data_type, value, info, section, scope, source, edit_level from __all_sys_parameter"}, aret=-5019, ret=-5019) [2024-02-19 19:03:34.213003] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1106658][ConfigMgr][T0][YB42AC0103F2-000611B922D790D8-0-0] [lt=16] execute sql failed(ret=-5019, tenant_id=1, sql=select config_version, zone, svr_type, svr_ip, svr_port, name, data_type, value, info, section, scope, source, edit_level from __all_sys_parameter) [2024-02-19 19:03:34.213015] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1106658][ConfigMgr][T0][YB42AC0103F2-000611B922D790D8-0-0] [lt=9] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-02-19 19:03:34.213028] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1106658][ConfigMgr][T0][YB42AC0103F2-000611B922D790D8-0-0] [lt=13] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1) [2024-02-19 19:03:34.213039] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1106658][ConfigMgr][T0][YB42AC0103F2-000611B922D790D8-0-0] [lt=9] query failed(ret=-5019, conn=0x7fdd189bc050, start=1708340614212429, sql=select config_version, zone, svr_type, svr_ip, svr_port, name, data_type, value, info, section, scope, source, edit_level from __all_sys_parameter) [2024-02-19 19:03:34.213055] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1106658][ConfigMgr][T0][YB42AC0103F2-000611B922D790D8-0-0] [lt=16] read failed(ret=-5019) [2024-02-19 19:03:34.213066] WARN [SHARE] update_local (ob_config_manager.cpp:322) [1106658][ConfigMgr][T0][YB42AC0103F2-000611B922D790D8-0-0] [lt=8] read config from __all_sys_parameter failed(sqlstr="select config_version, zone, svr_type, svr_ip, svr_port, name, data_type, value, info, section, scope, source, edit_level from __all_sys_parameter", ret=-5019) [2024-02-19 19:03:34.213134] WARN [SHARE] update_local 
(ob_config_manager.cpp:356) [1106658][ConfigMgr][T0][YB42AC0103F2-000611B922D790D8-0-0] [lt=14] Read system config from inner table error(ret=-5019) [2024-02-19 19:03:34.213149] WARN [SHARE] runTimerTask (ob_config_manager.cpp:455) [1106658][ConfigMgr][T0][YB42AC0103F2-000611B922D790D8-0-0] [lt=14] Update local config failed(ret=-5019) [2024-02-19 19:03:34.221545] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=28] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:34.221578] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:34.232743] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=33] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:34.232789] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=50] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:34.241910] INFO [LIB] runTimerTask (ob_work_queue.cpp:24) [1106715][ObTimer][T0][Y0-0000000000000000-0-0] [lt=27] add async task(this=tasktype:N9oceanbase10rootserver13ObRootService19ObRefreshServerTaskE) [2024-02-19 19:03:34.242930] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, 
disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:34.242965] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=42] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:34.243080] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78807-0-0] [lt=24] table not exist(tenant_id=1, database_id=201001, table_name=__all_server, ret=-5019) [2024-02-19 19:03:34.243106] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78807-0-0] [lt=24] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_server, ret=-5019) [2024-02-19 19:03:34.243115] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78807-0-0] [lt=8] fail to resolve table relation recursively(tenant_id=1, ret=-5019) [2024-02-19 19:03:34.243123] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78807-0-0] [lt=7] resolve table relation factor failed(ret=-5019, table_name=__all_server) [2024-02-19 19:03:34.243132] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78807-0-0] [lt=6] fail to resolve table(ret=-5019) [2024-02-19 19:03:34.243138] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78807-0-0] [lt=6] fail to resolve sys view(ret=-5019) [2024-02-19 19:03:34.243148] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78807-0-0] [lt=5] Table 'oceanbase.__all_server' doesn't exist [2024-02-19 19:03:34.243154] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78807-0-0] [lt=6] resolve base or alias table factor failed(ret=-5019) [2024-02-19 19:03:34.243161] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78807-0-0] [lt=6] resolve basic table failed(ret=-5019) [2024-02-19 19:03:34.243168] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78807-0-0] [lt=6] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-02-19 19:03:34.243178] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78807-0-0] [lt=9] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-02-19 19:03:34.243185] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78807-0-0] [lt=6] resolve normal query failed(ret=-5019) [2024-02-19 19:03:34.243192] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) 
[1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78807-0-0] [lt=6] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073) [2024-02-19 19:03:34.243205] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78807-0-0] [lt=7] failed to resolve(ret=-5019) [2024-02-19 19:03:34.243214] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78807-0-0] [lt=8] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:34.243223] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78807-0-0] [lt=6] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:34.243230] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78807-0-0] [lt=6] fail to handle physical plan(ret=-5019) [2024-02-19 19:03:34.243238] WARN [SQL] stmt_query (ob_sql.cpp:175) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78807-0-0] [lt=6] fail to handle text query(stmt=SELECT time_to_usec(gmt_modified) AS last_hb_time, id, zone, svr_ip, svr_port, inner_port, status, with_rootserver, block_migrate_in_time, build_version, stop_time, start_service_time, with_partition FROM __all_server, ret=-5019) [2024-02-19 19:03:34.243247] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78807-0-0] [lt=7] executor execute failed(ret=-5019) [2024-02-19 19:03:34.243254] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78807-0-0] [lt=7] execute failed(ret=-5019, executor={ObIExecutor:, sql:"SELECT time_to_usec(gmt_modified) AS last_hb_time, id, zone, svr_ip, svr_port, inner_port, status, with_rootserver, block_migrate_in_time, build_version, stop_time, start_service_time, with_partition FROM __all_server"}, retry_cnt=0) [2024-02-19 19:03:34.243270] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78807-0-0] [lt=10] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-02-19 19:03:34.243284] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78807-0-0] [lt=12] result set close failed(ret=-5019) [2024-02-19 19:03:34.243291] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78807-0-0] [lt=6] result set close failed(ret=-5019) [2024-02-19 19:03:34.243297] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78807-0-0] [lt=5] failed to close result(close_ret=-5019, ret=-5019) [2024-02-19 19:03:34.243316] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1106718][RSAsyncTask1][T1][YB42AC0103F2-000611B922A78807-0-0] [lt=6] failed to process record(executor={ObIExecutor:, sql:"SELECT time_to_usec(gmt_modified) AS last_hb_time, id, zone, svr_ip, svr_port, inner_port, status, with_rootserver, block_migrate_in_time, build_version, stop_time, start_service_time, with_partition FROM __all_server"}, record_ret=-5019, ret=-5019) [2024-02-19 19:03:34.243326] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1106718][RSAsyncTask1][T0][YB42AC0103F2-000611B922A78807-0-0] [lt=8] failed to process 
final(executor={ObIExecutor:, sql:"SELECT time_to_usec(gmt_modified) AS last_hb_time, id, zone, svr_ip, svr_port, inner_port, status, with_rootserver, block_migrate_in_time, build_version, stop_time, start_service_time, with_partition FROM __all_server"}, aret=-5019, ret=-5019) [2024-02-19 19:03:34.243335] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1106718][RSAsyncTask1][T0][Y0-0000000000000000-0-0] [lt=7] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT time_to_usec(gmt_modified) AS last_hb_time, id, zone, svr_ip, svr_port, inner_port, status, with_rootserver, block_migrate_in_time, build_version, stop_time, start_service_time, with_partition FROM __all_server) [2024-02-19 19:03:34.243344] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1106718][RSAsyncTask1][T0][Y0-0000000000000000-0-0] [lt=8] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-02-19 19:03:34.243351] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1106718][RSAsyncTask1][T0][Y0-0000000000000000-0-0] [lt=6] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1) [2024-02-19 19:03:34.243359] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1106718][RSAsyncTask1][T0][Y0-0000000000000000-0-0] [lt=7] query failed(ret=-5019, conn=0x7fdcf4ef4050, start=1708340614242865, sql=SELECT time_to_usec(gmt_modified) AS last_hb_time, id, zone, svr_ip, svr_port, inner_port, status, with_rootserver, block_migrate_in_time, build_version, stop_time, start_service_time, with_partition FROM __all_server) [2024-02-19 19:03:34.243368] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1106718][RSAsyncTask1][T0][Y0-0000000000000000-0-0] [lt=9] read failed(ret=-5019) [2024-02-19 19:03:34.243495] WARN [SHARE] run2 (ob_async_task_queue.cpp:148) [1106718][RSAsyncTask1][T0][Y0-0000000000000000-0-0] [lt=5] task process failed, start retry(max retry time=0, retry interval=1000000, ret=-5019) [2024-02-19 19:03:34.246461] INFO [COMMON] replace_fragment_node (ob_kvcache_map.cpp:695) [1106689][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=35] Cache replace map node details(ret=0, replace_node_count=0, replace_time=20513, replace_start_pos=1226784, replace_num=15728) [2024-02-19 19:03:34.253914] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=31] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:34.253958] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=46] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:34.260905] INFO [SHARE] run_loop_ 
(ob_bg_thread_monitor.cpp:331) [1109111][BGThreadMonitor][T0][Y0-0000000000000000-0-0] [lt=28] current monitor number(seq_=-1) [2024-02-19 19:03:34.262304] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=10] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:34.262337] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=31] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340614262291}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:34.262362] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=23] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340614262291}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:34.264263] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=61] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:34.264357] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=95] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:34.274489] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:34.274538] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=51] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:34.280105] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) 
[1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=15] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340614280084}) [2024-02-19 19:03:34.280146] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=43] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340614262291}}) [2024-02-19 19:03:34.284417] WARN [STORAGE.TRANS] post_cluster_heartbeat_rpc_ (ob_tenant_weak_read_service.cpp:797) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=23] get cluster service master fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, cluster_service_tablet_id={id:226}) [2024-02-19 19:03:34.284456] WARN [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:438) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=41] tenant weak read service cluster heartbeat RPC fail(rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.1.3.242:2882", cluster_service_tablet_id={id:226}) [2024-02-19 19:03:34.284486] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=19] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340614284476}) [2024-02-19 19:03:34.284505] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:756) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version=1708336686671726824, valid_part_count=1, total_part_count=1, generate_timestamp=1708340614284397) [2024-02-19 19:03:34.284520] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:766) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1708340614084360, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-02-19 19:03:34.284599] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13] get wrs ts(ls_id={id:1}, delta_ns=-1706042771801011427, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807) [2024-02-19 19:03:34.284625] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=23] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1) [2024-02-19 19:03:34.284684] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=29] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, 
disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:34.284706] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=24] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:34.294837] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=28] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:34.294882] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=47] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:34.305289] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=36] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:34.305335] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=49] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:34.315467] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", 
log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:34.315510] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=46] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:34.323241] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02A-0-0] [lt=8] table not exist(tenant_id=1, database_id=201001, table_name=__all_server, ret=-5019) [2024-02-19 19:03:34.323276] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02A-0-0] [lt=36] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_server, ret=-5019) [2024-02-19 19:03:34.323349] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02A-0-0] [lt=12] fail to resolve table relation recursively(tenant_id=1, ret=-5019) [2024-02-19 19:03:34.323363] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02A-0-0] [lt=72] resolve table relation factor failed(ret=-5019, table_name=__all_server) [2024-02-19 19:03:34.323374] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02A-0-0] [lt=7] fail to resolve table(ret=-5019) [2024-02-19 19:03:34.323381] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02A-0-0] [lt=7] fail to resolve sys view(ret=-5019) [2024-02-19 19:03:34.323391] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02A-0-0] [lt=6] Table 'oceanbase.__all_server' doesn't exist [2024-02-19 19:03:34.323398] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02A-0-0] [lt=6] resolve base or alias table factor failed(ret=-5019) [2024-02-19 19:03:34.323405] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02A-0-0] [lt=6] resolve basic table failed(ret=-5019) [2024-02-19 19:03:34.323411] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02A-0-0] [lt=5] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-02-19 19:03:34.323417] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02A-0-0] [lt=6] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-02-19 19:03:34.323424] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) 
[1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02A-0-0] [lt=6] resolve normal query failed(ret=-5019) [2024-02-19 19:03:34.323431] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02A-0-0] [lt=6] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073) [2024-02-19 19:03:34.323445] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02A-0-0] [lt=7] failed to resolve(ret=-5019) [2024-02-19 19:03:34.323454] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02A-0-0] [lt=8] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:34.323463] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02A-0-0] [lt=6] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:34.323470] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02A-0-0] [lt=6] fail to handle physical plan(ret=-5019) [2024-02-19 19:03:34.323478] WARN [SQL] stmt_query (ob_sql.cpp:175) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02A-0-0] [lt=6] fail to handle text query(stmt=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882, ret=-5019) [2024-02-19 19:03:34.323487] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02A-0-0] [lt=7] executor execute failed(ret=-5019) [2024-02-19 19:03:34.323494] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02A-0-0] [lt=7] execute failed(ret=-5019, executor={ObIExecutor:, sql:"SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882"}, retry_cnt=0) [2024-02-19 19:03:34.323511] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02A-0-0] [lt=10] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-02-19 19:03:34.323525] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02A-0-0] [lt=12] result set close failed(ret=-5019) [2024-02-19 19:03:34.323532] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02A-0-0] [lt=6] result set close failed(ret=-5019) [2024-02-19 19:03:34.323537] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02A-0-0] [lt=5] failed to close result(close_ret=-5019, ret=-5019) [2024-02-19 19:03:34.323557] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02A-0-0] [lt=6] failed to process record(executor={ObIExecutor:, sql:"SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882"}, record_ret=-5019, ret=-5019) [2024-02-19 19:03:34.323565] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02A-0-0] [lt=7] failed to process final(executor={ObIExecutor:, sql:"SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882"}, aret=-5019, ret=-5019) [2024-02-19 19:03:34.323574] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=7] 
execute sql failed(ret=-5019, tenant_id=1, sql=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882) [2024-02-19 19:03:34.323582] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=7] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-02-19 19:03:34.323589] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=6] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1) [2024-02-19 19:03:34.323597] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=6] query failed(ret=-5019, conn=0x7fdcf4e20050, start=1708340614322912, sql=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882) [2024-02-19 19:03:34.323606] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=9] read failed(ret=-5019) [2024-02-19 19:03:34.323613] WARN get_my_sql_result_ (ob_table_access_helper.h:329) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=5] GCTX.sql_proxy_ read failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", MTL_ID()=1, tenant_id=1, columns=0x7fdcfffccd78, table=__all_server, condition=where svr_ip='172.1.3.242' and svr_port=2882, sql=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882, columns_str="zone") [2024-02-19 19:03:34.323629] WARN read_single_row (ob_table_access_helper.h:178) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=11] get mysql result failed(ret=-4016, ret="OB_ERR_UNEXPECTED", MTL_ID()=1, columns=0x7fdcfffccd78, table=__all_server, where_condition=where svr_ip='172.1.3.242' and svr_port=2882) [2024-02-19 19:03:34.323688] WARN [COORDINATOR] get_self_zone_name (table_accessor.cpp:517) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=8] get zone from __all_server failed(ret=-4016, ret="OB_ERR_UNEXPECTED", columns=0x7fdcfffccd78, where_condition="where svr_ip='172.1.3.242' and svr_port=2882", zone_name_holder=) [2024-02-19 19:03:34.323698] WARN [COORDINATOR] get_all_ls_election_reference_info (table_accessor.cpp:450) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=9] get self zone name failed(ret=-4016, ret="OB_ERR_UNEXPECTED", all_ls_election_reference_info=[]) [2024-02-19 19:03:34.323705] WARN [COORDINATOR] get_all_ls_election_reference_info (table_accessor.cpp:459) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=7] zone name is empty(ret=-4016, ret="OB_ERR_UNEXPECTED", all_ls_election_reference_info=[]) [2024-02-19 19:03:34.323713] WARN [COORDINATOR] refresh (ob_leader_coordinator.cpp:107) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=6] get all ls election reference info failed(ret=-4016, ret="OB_ERR_UNEXPECTED", MTL_ID()=1) [2024-02-19 19:03:34.325653] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=47] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:34.325683] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) 
[1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=30] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:34.331607] WARN [COORDINATOR] get_ls_election_reference_info (ob_leader_coordinator.cpp:138) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC83-0-0] [lt=154] can not find this ls_id in all_ls_election_reference_info_(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1}, all_ls_election_reference_info=[]) [2024-02-19 19:03:34.331667] WARN [COORDINATOR] refresh_ (election_priority_v1.cpp:144) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC83-0-0] [lt=60] fail to get ls election reference info(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, *this={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}) [2024-02-19 19:03:34.331708] WARN [COORDINATOR] operator() (election_priority_impl.cpp:246) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC83-0-0] [lt=39] refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id_={id:1}, element={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}) [2024-02-19 19:03:34.331734] WARN iterate (ob_tuple.h:272) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC83-0-0] [lt=22] assign element failed(ret=-4018, std::get(tuple)={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}) [2024-02-19 19:03:34.331771] WARN [COORDINATOR] refresh (election_priority_impl.cpp:261) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC83-0-0] [lt=36] refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1}, *this={priority:{is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}}) [2024-02-19 19:03:34.333144] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1009) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=97] Wash compute wash size(is_wash_valid=true, sys_total_wash_size=1388204032, global_cache_size=4161536, tenant_max_wash_size=4161536, tenant_min_wash_size=4161536, tenant_ids_=[512, 500, 999, 506, 507, 508, 509, 510, 1]) [2024-02-19 19:03:34.333220] INFO [COMMON] wash (ob_kvcache_store.cpp:343) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=24] Wash time detail, (compute_wash_size_time=129, refresh_score_time=48, wash_time=5) [2024-02-19 19:03:34.335809] WARN [PALF] recycle_blocks_ 
(palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=30] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:34.335875] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=67] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:34.348534] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A6-0-0] [lt=13] table not exist(tenant_id=1, database_id=201001, table_name=__all_unit, ret=-5019) [2024-02-19 19:03:34.348569] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A6-0-0] [lt=35] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_unit, ret=-5019) [2024-02-19 19:03:34.348584] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A6-0-0] [lt=12] fail to resolve table relation recursively(tenant_id=1, ret=-5019) [2024-02-19 19:03:34.348595] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A6-0-0] [lt=11] resolve table relation factor failed(ret=-5019, table_name=__all_unit) [2024-02-19 19:03:34.348610] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A6-0-0] [lt=10] fail to resolve table(ret=-5019) [2024-02-19 19:03:34.348620] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A6-0-0] [lt=10] fail to resolve sys view(ret=-5019) [2024-02-19 19:03:34.348635] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A6-0-0] [lt=8] Table 'oceanbase.__all_unit' doesn't exist [2024-02-19 19:03:34.348644] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A6-0-0] [lt=9] resolve base or alias table factor failed(ret=-5019) [2024-02-19 19:03:34.348654] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A6-0-0] [lt=8] resolve basic table failed(ret=-5019) [2024-02-19 19:03:34.348663] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A6-0-0] [lt=9] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-02-19 19:03:34.348673] WARN [SQL.RESV] resolve_normal_query 
(ob_select_resolver.cpp:1059) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A6-0-0] [lt=9] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-02-19 19:03:34.348684] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A6-0-0] [lt=9] resolve normal query failed(ret=-5019) [2024-02-19 19:03:34.348695] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A6-0-0] [lt=10] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073) [2024-02-19 19:03:34.348729] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A6-0-0] [lt=24] failed to resolve(ret=-5019) [2024-02-19 19:03:34.348741] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A6-0-0] [lt=12] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:34.348754] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A6-0-0] [lt=10] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:34.348765] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A6-0-0] [lt=9] fail to handle physical plan(ret=-5019) [2024-02-19 19:03:34.348776] WARN [SQL] stmt_query (ob_sql.cpp:175) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A6-0-0] [lt=9] fail to handle text query(stmt=SELECT * from __all_unit where resource_pool_id in (select resource_pool_id from __all_resource_pool where tenant_id = 1), ret=-5019) [2024-02-19 19:03:34.348789] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A6-0-0] [lt=10] executor execute failed(ret=-5019) [2024-02-19 19:03:34.348802] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A6-0-0] [lt=10] execute failed(ret=-5019, executor={ObIExecutor:, sql:"SELECT * from __all_unit where resource_pool_id in (select resource_pool_id from __all_resource_pool where tenant_id = 1)"}, retry_cnt=0) [2024-02-19 19:03:34.348823] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A6-0-0] [lt=16] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-02-19 19:03:34.348842] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A6-0-0] [lt=16] result set close failed(ret=-5019) [2024-02-19 19:03:34.348853] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A6-0-0] [lt=10] result set close failed(ret=-5019) [2024-02-19 19:03:34.348862] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A6-0-0] [lt=7] failed to close result(close_ret=-5019, ret=-5019) [2024-02-19 19:03:34.348889] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A6-0-0] [lt=10] failed to process record(executor={ObIExecutor:, sql:"SELECT * from __all_unit where resource_pool_id in (select resource_pool_id from 
__all_resource_pool where tenant_id = 1)"}, record_ret=-5019, ret=-5019) [2024-02-19 19:03:34.348906] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1106807][OmtNodeBalancer][T0][YB42AC0103F2-000611B9211784A6-0-0] [lt=12] failed to process final(executor={ObIExecutor:, sql:"SELECT * from __all_unit where resource_pool_id in (select resource_pool_id from __all_resource_pool where tenant_id = 1)"}, aret=-5019, ret=-5019) [2024-02-19 19:03:34.348920] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1106807][OmtNodeBalancer][T0][YB42AC0103F2-000611B9211784A6-0-0] [lt=12] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT * from __all_unit where resource_pool_id in (select resource_pool_id from __all_resource_pool where tenant_id = 1)) [2024-02-19 19:03:34.349024] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1106807][OmtNodeBalancer][T0][YB42AC0103F2-000611B9211784A6-0-0] [lt=29] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-02-19 19:03:34.349038] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1106807][OmtNodeBalancer][T0][YB42AC0103F2-000611B9211784A6-0-0] [lt=86] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1) [2024-02-19 19:03:34.349049] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1106807][OmtNodeBalancer][T0][YB42AC0103F2-000611B9211784A6-0-0] [lt=10] query failed(ret=-5019, conn=0x7fdcdc89a050, start=1708340614348262, sql=SELECT * from __all_unit where resource_pool_id in (select resource_pool_id from __all_resource_pool where tenant_id = 1)) [2024-02-19 19:03:34.349063] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1106807][OmtNodeBalancer][T0][YB42AC0103F2-000611B9211784A6-0-0] [lt=13] read failed(ret=-5019) [2024-02-19 19:03:34.349074] WARN [SHARE] read_units (ob_unit_table_operator.cpp:958) [1106807][OmtNodeBalancer][T0][YB42AC0103F2-000611B9211784A6-0-0] [lt=9] execute sql failed(sql=SELECT * from __all_unit where resource_pool_id in (select resource_pool_id from __all_resource_pool where tenant_id = 1), ret=-5019) [2024-02-19 19:03:34.349153] WARN [SHARE] get_units_by_tenant (ob_unit_table_operator.cpp:715) [1106807][OmtNodeBalancer][T0][YB42AC0103F2-000611B9211784A6-0-0] [lt=12] read_units failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", sql=SELECT * from __all_unit where resource_pool_id in (select resource_pool_id from __all_resource_pool where tenant_id = 1)) [2024-02-19 19:03:34.349166] WARN [SHARE] get_sys_unit_count (ob_unit_table_operator.cpp:68) [1106807][OmtNodeBalancer][T0][YB42AC0103F2-000611B9211784A6-0-0] [lt=12] failed to get units by tenant(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-02-19 19:03:34.349176] WARN [SHARE] get_sys_unit_count (ob_unit_getter.cpp:436) [1106807][OmtNodeBalancer][T0][YB42AC0103F2-000611B9211784A6-0-0] [lt=8] ut_operator get sys unit count failed(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-02-19 19:03:34.349187] WARN [SERVER.OMT] run1 (ob_tenant_node_balancer.cpp:88) [1106807][OmtNodeBalancer][T0][YB42AC0103F2-000611B9211784A6-0-0] [lt=8] get sys unit count fail(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-02-19 19:03:34.349197] INFO [SERVER.OMT] run1 (ob_tenant_node_balancer.cpp:102) [1106807][OmtNodeBalancer][T0][YB42AC0103F2-000611B9211784A6-0-0] [lt=9] refresh tenant units(sys_unit_cnt=0, units=[], ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-02-19 19:03:34.349238] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify 
the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:34.349268] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=30] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:34.349851] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A6-0-0] [lt=10] table not exist(tenant_id=1, database_id=201001, table_name=__all_tenant, ret=-5019) [2024-02-19 19:03:34.349877] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A6-0-0] [lt=25] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_tenant, ret=-5019) [2024-02-19 19:03:34.349896] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A6-0-0] [lt=11] fail to resolve table relation recursively(tenant_id=1, ret=-5019) [2024-02-19 19:03:34.349905] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A6-0-0] [lt=9] resolve table relation factor failed(ret=-5019, table_name=__all_tenant) [2024-02-19 19:03:34.349916] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A6-0-0] [lt=8] fail to resolve table(ret=-5019) [2024-02-19 19:03:34.349925] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A6-0-0] [lt=8] fail to resolve sys view(ret=-5019) [2024-02-19 19:03:34.349937] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A6-0-0] [lt=7] Table 'oceanbase.__all_tenant' doesn't exist [2024-02-19 19:03:34.349947] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A6-0-0] [lt=10] resolve base or alias table factor failed(ret=-5019) [2024-02-19 19:03:34.349957] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A6-0-0] [lt=9] resolve basic table failed(ret=-5019) [2024-02-19 19:03:34.349994] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A6-0-0] [lt=10] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-02-19 19:03:34.350003] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A6-0-0] [lt=34] fail to exec 
resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-02-19 19:03:34.350012] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A6-0-0] [lt=8] resolve normal query failed(ret=-5019) [2024-02-19 19:03:34.350021] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A6-0-0] [lt=8] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073) [2024-02-19 19:03:34.350035] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A6-0-0] [lt=9] failed to resolve(ret=-5019) [2024-02-19 19:03:34.350044] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A6-0-0] [lt=8] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:34.350054] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A6-0-0] [lt=8] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:34.350063] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A6-0-0] [lt=8] fail to handle physical plan(ret=-5019) [2024-02-19 19:03:34.350072] WARN [SQL] stmt_query (ob_sql.cpp:175) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A6-0-0] [lt=8] fail to handle text query(stmt=SELECT tenant_id FROM __all_tenant, ret=-5019) [2024-02-19 19:03:34.350082] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A6-0-0] [lt=9] executor execute failed(ret=-5019) [2024-02-19 19:03:34.350091] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A6-0-0] [lt=9] execute failed(ret=-5019, executor={ObIExecutor:, sql:"SELECT tenant_id FROM __all_tenant"}, retry_cnt=0) [2024-02-19 19:03:34.350106] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A6-0-0] [lt=11] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-02-19 19:03:34.350121] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A6-0-0] [lt=12] result set close failed(ret=-5019) [2024-02-19 19:03:34.350130] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A6-0-0] [lt=9] result set close failed(ret=-5019) [2024-02-19 19:03:34.350138] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A6-0-0] [lt=7] failed to close result(close_ret=-5019, ret=-5019) [2024-02-19 19:03:34.350156] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1106807][OmtNodeBalancer][T1][YB42AC0103F2-000611B9211784A6-0-0] [lt=8] failed to process record(executor={ObIExecutor:, sql:"SELECT tenant_id FROM __all_tenant"}, record_ret=-5019, ret=-5019) [2024-02-19 19:03:34.350169] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1106807][OmtNodeBalancer][T0][YB42AC0103F2-000611B9211784A6-0-0] [lt=11] failed to process final(executor={ObIExecutor:, sql:"SELECT tenant_id FROM __all_tenant"}, aret=-5019, ret=-5019) [2024-02-19 19:03:34.350181] WARN [SERVER] 
execute_read_inner (ob_inner_sql_connection.cpp:2003) [1106807][OmtNodeBalancer][T0][YB42AC0103F2-000611B9211784A6-0-0] [lt=10] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT tenant_id FROM __all_tenant) [2024-02-19 19:03:34.350192] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1106807][OmtNodeBalancer][T0][YB42AC0103F2-000611B9211784A6-0-0] [lt=10] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-02-19 19:03:34.350205] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1106807][OmtNodeBalancer][T0][YB42AC0103F2-000611B9211784A6-0-0] [lt=13] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1) [2024-02-19 19:03:34.350215] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1106807][OmtNodeBalancer][T0][YB42AC0103F2-000611B9211784A6-0-0] [lt=8] query failed(ret=-5019, conn=0x7fdcdc9be050, start=1708340614349771, sql=SELECT tenant_id FROM __all_tenant) [2024-02-19 19:03:34.350226] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1106807][OmtNodeBalancer][T0][YB42AC0103F2-000611B9211784A6-0-0] [lt=11] read failed(ret=-5019) [2024-02-19 19:03:34.350235] WARN [SHARE] read_tenants (ob_unit_table_operator.cpp:990) [1106807][OmtNodeBalancer][T0][YB42AC0103F2-000611B9211784A6-0-0] [lt=7] execute sql failed(sql=SELECT tenant_id FROM __all_tenant, ret=-5019) [2024-02-19 19:03:34.350286] WARN [SHARE] get_tenants (ob_unit_table_operator.cpp:109) [1106807][OmtNodeBalancer][T0][YB42AC0103F2-000611B9211784A6-0-0] [lt=12] fail read tenants(sql=SELECT tenant_id FROM __all_tenant, ret=-5019) [2024-02-19 19:03:34.350302] WARN [SHARE] get_tenants (ob_unit_getter.cpp:198) [1106807][OmtNodeBalancer][T0][YB42AC0103F2-000611B9211784A6-0-0] [lt=15] ut_operator get_resource_pools failed(ret=-5019) [2024-02-19 19:03:34.350312] WARN [SERVER.OMT] run1 (ob_tenant_node_balancer.cpp:114) [1106807][OmtNodeBalancer][T0][YB42AC0103F2-000611B9211784A6-0-0] [lt=8] get cluster tenants fail(ret=-5019) [2024-02-19 19:03:34.350323] INFO [SERVER.OMT] run1 (ob_tenant_node_balancer.cpp:119) [1106807][OmtNodeBalancer][T0][YB42AC0103F2-000611B9211784A6-0-0] [lt=10] refresh tenant config(tenants=[], ret=-5019) [2024-02-19 19:03:34.359449] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:34.359493] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=51] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:34.362910] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) 
[1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=14] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:34.362946] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=36] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340614362899}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:34.362967] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=19] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340614362899}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:34.370578] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=50] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:34.370623] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=48] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:34.379670] INFO [COMMON] clean_garbage_node (ob_kvcache_map.cpp:645) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=17] Cache wash clean map node details(ret=0, clean_node_count=0, clean_time=46424, clean_start_pos=975167, clean_num=31457) [2024-02-19 19:03:34.380446] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=25] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340614380434}) [2024-02-19 19:03:34.380469] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=24] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340614362899}}) [2024-02-19 19:03:34.381044] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:34.381068] WARN 
[STORAGE.TRANS] acquire_global_snapshot__ (ob_trans_service_v4.cpp:1472) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=18] acquire global snapshot fail(ret=-4012, gts_ahead=0, expire_ts=1708340614380522, now={mts:1708340612451637}, now0={mts:1708340612451637}, snapshot=-1, uncertain_bound=0) [2024-02-19 19:03:34.381073] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=28] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:34.381097] WARN [STORAGE.TRANS] get_read_snapshot (ob_tx_api.cpp:552) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=30] acquire global snapshot fail(ret=-4012, tx={this:0x7fdcd5ac36f0, tx_id:{txid:0}, state:1, addr:"172.1.3.242:2882", tenant_id:1, session_id:1, xid:NULL, access_mode:0, tx_consistency_type:0, isolation:1, snapshot_version:-1, snapshot_scn:0, active_scn:-1, op_sn:1, alloc_ts:1708340612450346, active_ts:-1, commit_ts:-1, finish_ts:-1, timeout_us:-1, lock_timeout_us:-1, expire_ts:9223372036854775807, coord_id:{id:-1}, parts:[], exec_info_reap_ts:0, commit_version:-1, commit_cb:null, cluster_id:-1, cluster_version:0, flags_.SHADOW:false, flags_.INTERRUPTED:false, flags_.BLOCK:false, flags_.REPLICA:false, can_elr:false, cflict_txs:[], abort_cause:0, commit_expire_ts:0, commit_task_.is_registered():false, ref:1}) [2024-02-19 19:03:34.381142] WARN [SQL.EXE] stmt_setup_snapshot_ (ob_sql_trans_control.cpp:614) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=37] fail to get snapshot(ret=-4012, local_ls_id={id:1}, session={this:0x7fdcdc9240c0, id:1, tenant:"sys", tenant_id:1, effective_tenant:"sys", effective_tenant_id:1, database:"oceanbase", user:"root@%", consistency_level:3, session_state:0, tx:0x7fdcd5ac36f0}) [2024-02-19 19:03:34.381164] WARN [SQL.EXE] start_stmt (ob_sql_trans_control.cpp:481) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=21] fail to exec stmt_setup_snapshot_(session, das_ctx, plan, plan_ctx, txs)(ret=-4012, session_id=1, *tx_desc={this:0x7fdcd5ac36f0, tx_id:{txid:0}, state:1, addr:"172.1.3.242:2882", tenant_id:1, session_id:1, xid:NULL, access_mode:0, tx_consistency_type:0, isolation:1, snapshot_version:-1, snapshot_scn:0, active_scn:-1, op_sn:1, alloc_ts:1708340612450346, active_ts:-1, commit_ts:-1, finish_ts:-1, timeout_us:-1, lock_timeout_us:-1, expire_ts:9223372036854775807, coord_id:{id:-1}, parts:[], exec_info_reap_ts:0, commit_version:-1, commit_cb:null, cluster_id:-1, cluster_version:0, flags_.SHADOW:false, flags_.INTERRUPTED:false, flags_.BLOCK:false, flags_.REPLICA:false, can_elr:false, cflict_txs:[], abort_cause:0, commit_expire_ts:0, commit_task_.is_registered():false, ref:1}) [2024-02-19 19:03:34.381192] INFO [SQL.EXE] start_stmt (ob_sql_trans_control.cpp:530) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=25] start stmt(ret=-4012, auto_commit=true, session_id=1, snapshot={this:0x7fdd2afcbab0, valid:false, source:0, core:{version:-1, tx_id:{txid:0}, scn:-1}, uncertain_bound:0, snapshot_lsid:{id:-1}, parts:[]}, savepoint=0, tx_desc={this:0x7fdcd5ac36f0, 
tx_id:{txid:0}, state:1, addr:"172.1.3.242:2882", tenant_id:1, session_id:1, xid:NULL, access_mode:0, tx_consistency_type:0, isolation:1, snapshot_version:-1, snapshot_scn:0, active_scn:-1, op_sn:1, alloc_ts:1708340612450346, active_ts:-1, commit_ts:-1, finish_ts:-1, timeout_us:-1, lock_timeout_us:-1, expire_ts:9223372036854775807, coord_id:{id:-1}, parts:[], exec_info_reap_ts:0, commit_version:-1, commit_cb:null, cluster_id:-1, cluster_version:0, flags_.SHADOW:false, flags_.INTERRUPTED:false, flags_.BLOCK:false, flags_.REPLICA:false, can_elr:false, cflict_txs:[], abort_cause:0, commit_expire_ts:0, commit_task_.is_registered():false, ref:1}, plan_type=1, stmt_type=1, has_for_update=false, query_start_time=1708340612451423, use_das=false, session={this:0x7fdcdc9240c0, id:1, tenant:"sys", tenant_id:1, effective_tenant:"sys", effective_tenant_id:1, database:"oceanbase", user:"root@%", consistency_level:3, session_state:0, tx:0x7fdcd5ac36f0}, plan=0x7fdcda010050, consistency_level_in_plan_ctx=3, trans_result={incomplete:false, parts:[], touched_ls_list:[], cflict_txs:[]}) [2024-02-19 19:03:34.381233] WARN [SQL] start_stmt (ob_result_set.cpp:282) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=40] fail to start stmt(ret=-4012, phy_plan->get_dependency_table()=[{table_id:1, schema_version:0, object_type:1, is_db_explicit:false, is_existed:true}]) [2024-02-19 19:03:34.381247] WARN [SQL] do_open_plan (ob_result_set.cpp:451) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=11] fail start stmt(ret=-4012) [2024-02-19 19:03:34.381255] WARN [SQL] open (ob_result_set.cpp:150) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=8] execute plan failed(ret=-4012) [2024-02-19 19:03:34.381263] WARN [SERVER] open (ob_inner_sql_result.cpp:146) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=7] open result set failed(ret=-4012) [2024-02-19 19:03:34.381272] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:607) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=6] result set open failed(ret=-4012, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}) [2024-02-19 19:03:34.381282] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=9] execute failed(ret=-4012, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=0) [2024-02-19 19:03:34.381292] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=6] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-4012, err_:"OB_TIMEOUT", retry_type:0, client_ret:-4012}, need_retry=false) [2024-02-19 19:03:34.381316] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=11] result set close failed(ret=-4012) [2024-02-19 19:03:34.381322] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=5] result set close failed(ret=-4012) [2024-02-19 19:03:34.381327] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=5] failed to close result(close_ret=-4012, ret=-4012) 
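
Note on the -4012 records in this stretch: OB_TIMEOUT here is a downstream symptom of the GTS outage, not an independent fault. The acquire_global_snapshot fail record above makes the arithmetic explicit:

    expire_ts - now = 1708340614380522 - 1708340612451637 = 1928885 us, about 1.93 s

so the inner SQL against __all_core_table waited roughly 1.9 s for a read snapshot that never arrived (snapshot stays -1 throughout) and then gave up; the "slow inner sql ... process_time=1929932" record just below shows the same duration from the executor's side.
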
[2024-02-19 19:03:34.381347] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78584-0-0] [lt=6] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4012, ret=-4012) [2024-02-19 19:03:34.381357] INFO [SERVER] process_final (ob_inner_sql_connection.cpp:574) [1106708][SerScheQueue1][T0][YB42AC0103F2-000611B922B78584-0-0] [lt=7] slow inner sql(last_ret=-4012, sql={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, process_time=1929932) [2024-02-19 19:03:34.381366] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1106708][SerScheQueue1][T0][YB42AC0103F2-000611B922B78584-0-0] [lt=8] failed to process final(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, aret=-4012, ret=-4012) [2024-02-19 19:03:34.381380] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1106708][SerScheQueue1][T0][YB42AC0103F2-000611B922B78584-0-0] [lt=9] execute sql failed(ret=-4012, tenant_id=1, sql=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-02-19 19:03:34.381393] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1106708][SerScheQueue1][T0][YB42AC0103F2-000611B922B78584-0-0] [lt=10] retry_while_no_tenant_resource failed(ret=-4012, tenant_id=1) [2024-02-19 19:03:34.381403] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1106708][SerScheQueue1][T0][YB42AC0103F2-000611B922B78584-0-0] [lt=10] execute_read failed(ret=-4012, cluster_id=1, tenant_id=1) [2024-02-19 19:03:34.381415] WARN [COMMON.MYSQLP] read_without_check_sys_variable (ob_sql_client_decorator.cpp:119) [1106708][SerScheQueue1][T0][YB42AC0103F2-000611B922B78584-0-0] [lt=8] failed to read without check sys variable(ret=-4012, sql="SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name", tenant_id=1, check_sys_variable=false, snapshot_timestamp=-1) [2024-02-19 19:03:34.381429] WARN [SHARE] load (ob_core_table_proxy.cpp:436) [1106708][SerScheQueue1][T0][YB42AC0103F2-000611B922B78584-0-0] [lt=9] execute sql failed(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, sql=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-02-19 19:03:34.381519] WARN [SHARE] load (ob_core_table_proxy.cpp:368) [1106708][SerScheQueue1][T0][YB42AC0103F2-000611B922B78584-0-0] [lt=12] load failed(ret=-4012, for_update=false) [2024-02-19 19:03:34.381532] WARN [SHARE] get (ob_global_stat_proxy.cpp:321) [1106708][SerScheQueue1][T0][YB42AC0103F2-000611B922B78584-0-0] [lt=11] core_table load failed(ret=-4012) [2024-02-19 19:03:34.381541] WARN [SHARE] get_baseline_schema_version (ob_global_stat_proxy.cpp:287) [1106708][SerScheQueue1][T0][YB42AC0103F2-000611B922B78584-0-0] [lt=8] get failed(ret=-4012) [2024-02-19 19:03:34.381551] WARN [SHARE.SCHEMA] get_baseline_schema_version (ob_schema_service_sql_impl.cpp:795) [1106708][SerScheQueue1][T0][YB42AC0103F2-000611B922B78584-0-0] [lt=8] get_baseline_schema_version failed(ret=-4012, ret="OB_TIMEOUT", schema_status={tenant_id:1, 
snapshot_timestamp:-1, readable_schema_version:-1}) [2024-02-19 19:03:34.381572] WARN [SHARE.SCHEMA] get_baseline_schema_version (ob_multi_version_schema_service.cpp:4009) [1106708][SerScheQueue1][T0][YB42AC0103F2-000611B922B78584-0-0] [lt=19] get baseline schema version failed(ret=-4012, ret="OB_TIMEOUT", schema_status={tenant_id:1, snapshot_timestamp:-1, readable_schema_version:-1}) [2024-02-19 19:03:34.381584] WARN [SERVER] try_load_baseline_schema_version_ (ob_server_schema_updater.cpp:512) [1106708][SerScheQueue1][T0][YB42AC0103F2-000611B922B78584-0-0] [lt=10] fail to update baseline schema version(tmp_ret=-4012, tmp_ret="OB_TIMEOUT", *tenant_id=1) [2024-02-19 19:03:34.381600] WARN [SERVER] batch_process_tasks (ob_server_schema_updater.cpp:229) [1106708][SerScheQueue1][T0][YB42AC0103F2-000611B922B78584-0-0] [lt=9] fail to process refresh task(ret=-4023, ret="OB_EAGAIN", tasks.at(0)={type:1, did_retry:true, schema_info:{schema_version:-1, tenant_id:0, sequence_id:18446744073709551615}}) [2024-02-19 19:03:34.381632] WARN [SERVER] batch_process_tasks (ob_uniq_task_queue.h:498) [1106708][SerScheQueue1][T0][Y0-0000000000000000-0-0] [lt=28] fail to batch process task(ret=-4023) [2024-02-19 19:03:34.381640] WARN [SERVER] run1 (ob_uniq_task_queue.h:449) [1106708][SerScheQueue1][T0][Y0-0000000000000000-0-0] [lt=8] fail to batch execute task(ret=-4023, tasks.count()=1) [2024-02-19 19:03:34.384491] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=21] get wrs ts(ls_id={id:1}, delta_ns=-1706042771800913533, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807) [2024-02-19 19:03:34.384529] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=39] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1) [2024-02-19 19:03:34.392645] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:34.392691] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=48] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:34.402756] INFO [PALF] submit_broadcast_leader_info_ (log_config_mgr.cpp:468) [1107532][T1_LogLoop][T1][Y0-0000000000000000-0-0] [lt=37] submit_prepare_meta_req success(ret=0, palf_id=1, 
self="172.1.3.242:2882", proposal_id=138) [2024-02-19 19:03:34.402884] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:34.402914] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=29] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:34.413075] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=47] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:34.413119] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=46] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:34.423313] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=42] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:34.423355] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=39] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, 
oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:34.433489] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=45] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:34.433524] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=38] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:34.443670] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=13] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:34.443701] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=31] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:34.443700] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=26] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:34.443716] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=14] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:34.443732] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:34.444328] INFO [STORAGE.TRANS] 
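
The -4038 loop above is the heart of the outage: ObTimestampAccess on this node is in FOLLOWER mode for tenant 1 while the cached GTS leader is the node's own address (leader and sender are both "172.1.3.242:2882"), so every local GTS request fails with OB_NOT_MASTER and is retried on the spot. A quick way to check whether log stream 1 of tenant 1 has a usable leader anywhere in the cluster is a probe like the following; GV$OB_LOG_STAT and its role column are assumed to exist as in OceanBase 4.x, so verify against your version first:

    SELECT tenant_id, ls_id, svr_ip, svr_port, role
      FROM oceanbase.GV$OB_LOG_STAT
     WHERE tenant_id = 1 AND ls_id = 1;

If no replica reports LEADER, the GTS requests seen here can never succeed, which matches the endless retry pattern below.
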
[... six identical ObTimestampAccess-FOLLOWER / get gts fail retry cycles on RSAsyncTask2 (19:03:34.444328 through .447709) omitted ...]
[2024-02-19 19:03:34.448150] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:527) [1106788][RpcIO][T0][Y0-0000000000000000-0-0] [lt=22] [RPC EASY STAT](log_str=conn count=1/1, request done=19522/19522, request doing=1/0)
[... one further retry cycle (19:03:34.448312 through .448340) omitted ...]
[2024-02-19 19:03:34.448474] INFO [SERVER] try_reload_schema (ob_server_schema_updater.cpp:435) [1108363][LeaseHB][T0][Y0-0000000000000000-0-0] [lt=12] schedule fetch new schema task(ret=0, ret="OB_SUCCESS", schema_info={schema_version:-1, tenant_id:0, sequence_id:18446744073709551615}) [2024-02-19 19:03:34.448495] INFO [SERVER] do_heartbeat_event (ob_heartbeat.cpp:188) [1108363][LeaseHB][T0][Y0-0000000000000000-0-0] [lt=22] try reload schema success(schema_version=1, refresh_schema_info={schema_version:-1, tenant_id:0, sequence_id:18446744073709551615}, schema_ret=0) [2024-02-19 19:03:34.448548] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:527) [1106789][RpcIO][T0][Y0-0000000000000000-0-0] [lt=26] [RPC EASY STAT](log_str=conn count=1/1, request done=19522/19522, request doing=0/0) [2024-02-19 19:03:34.449010] INFO [SERVER] process_refresh_task (ob_server_schema_updater.cpp:254) [1106708][SerScheQueue1][T0][YB42AC0103F2-000611B922B78585-0-0] [lt=6] [REFRESH_SCHEMA] start to process schema refresh task(ret=0, ret="OB_SUCCESS", schema_info={schema_version:-1, tenant_id:0, sequence_id:18446744073709551615}) [2024-02-19 19:03:34.449039] WARN [SERVER] process_refresh_task (ob_server_schema_updater.cpp:267) [1106708][SerScheQueue1][T0][YB42AC0103F2-000611B922B78585-0-0] [lt=27] rootservice is not in full service, try again(ret=-4023, ret="OB_EAGAIN", GCTX.root_service_->in_service()=true, GCTX.root_service_->is_full_service()=false)
[2024-02-19 19:03:34.450016] INFO [STORAGE.TRANS] in_leader_serving_state (ob_trans_ctx_mgr_v4.cpp:881) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=12] ObLSTxCtxMgr not master(this={this:0x7fdce3204030, ls_id:{id:1}, tenant_id:1, state:"F_WORKING", total_tx_ctx_count:0, leader_takeover_ts:{mts:0}, is_leader_serving:false, max_replay_commit_version:1707751112415295196, ls_retain_ctx_mgr:{retain_ctx_list_.size():0}, aggre_rec_log_ts:-1, prev_aggre_rec_log_ts:-1, online_ts:0, uref:1073741826})
[... identical get gts fail retry cycles, now alternating between SerScheQueue1 and RSAsyncTask2, repeat roughly every 0.5-1 ms from 19:03:34.450065 through .467395, interleaved with further PALF WARN/ERROR pairs at .454238/.454267 and .464582/.464617 and with a repeat of the TsMgr handle_request/post/query_gts_ OB_NOT_MASTER triple at .463772-.463813; omitted ...]
[2024-02-19 19:03:34.467677] INFO [COMMON] replace_fragment_node (ob_kvcache_map.cpp:695) [1106689][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=34] Cache replace map node details(ret=0, replace_node_count=0, replace_time=21079, replace_start_pos=1242512, replace_num=15728)
[... the get gts fail spam continues from 19:03:34.467964 through .476026, with one more PALF WARN/ERROR pair at .475175/.475218; omitted ...]
[2024-02-19 19:03:34.476615]
WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=12] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:34.476649] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=24] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:34.477236] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=11] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:34.477270] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=22] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:34.478857] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=11] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:34.478891] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=20] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:34.479473] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=8] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:34.479506] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=24] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:34.480509] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=11] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:34.480543] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=23] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340614480532}) [2024-02-19 19:03:34.480564] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=20] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340614463763}}) [2024-02-19 19:03:34.480594] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=22] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:34.481494] WARN [STORAGE.TRANS] 
get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=11] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:34.481523] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=16] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:34.482106] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=20] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:34.482134] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=7] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:34.482715] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=8] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:34.482744] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=17] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:34.483319] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=6] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:34.483349] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=18] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:34.483919] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=16] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:34.483948] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=16] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:34.484518] WARN [STORAGE.TRANS] get_cluster_service_master_ (ob_tenant_weak_read_service.cpp:287) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=19] tenant schema is not ready, need wait(ret=-4076, ret="OB_NEED_WAIT", superior_tenant_id=1) [2024-02-19 19:03:34.484540] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=18] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:34.484543] WARN [STORAGE.TRANS] post_cluster_heartbeat_rpc_ (ob_tenant_weak_read_service.cpp:797) 
[1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=24] get cluster service master fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, cluster_service_tablet_id={id:226}) [2024-02-19 19:03:34.484568] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=10] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:34.484572] WARN [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:438) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=25] tenant weak read service cluster heartbeat RPC fail(rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.1.3.242:2882", cluster_service_tablet_id={id:226}) [2024-02-19 19:03:34.484591] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:756) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version=1708336686671726824, valid_part_count=1, total_part_count=1, generate_timestamp=1708340614484504) [2024-02-19 19:03:34.484621] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:766) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=29] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1708340614284532, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-02-19 19:03:34.484699] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=19] get wrs ts(ls_id={id:1}, delta_ns=-1706042771800810683, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807) [2024-02-19 19:03:34.484716] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1) [2024-02-19 19:03:34.485692] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=9] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:34.485733] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=16] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:34.485751] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=36] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, 
log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:34.485772] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=21] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:34.486429] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=11] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:34.486476] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=19] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:34.487056] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=12] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:34.496238] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=363] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:34.496287] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=54] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:34.506517] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=28] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:34.506555] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] 
[lt=41] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:34.516687] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:34.516730] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=47] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:34.526863] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=33] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:34.526916] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=56] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:34.537044] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=36] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, 
log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:34.537079] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=38] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:34.548996] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:34.549030] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:34.556690] INFO [ARCHIVE] stop (ob_archive_scheduler_service.cpp:137) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=13] stop archive scheduler service [2024-02-19 19:03:34.557734] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAD-0-0] [lt=6] table not exist(tenant_id=1, database_id=201001, table_name=__all_backup_info, ret=-5019) [2024-02-19 19:03:34.557757] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAD-0-0] [lt=22] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_backup_info, ret=-5019) [2024-02-19 19:03:34.557771] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAD-0-0] [lt=12] fail to resolve table relation recursively(tenant_id=1, ret=-5019) [2024-02-19 19:03:34.557782] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAD-0-0] [lt=11] resolve table relation factor failed(ret=-5019, table_name=__all_backup_info) [2024-02-19 19:03:34.557796] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAD-0-0] [lt=10] fail to resolve table(ret=-5019) [2024-02-19 19:03:34.557805] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAD-0-0] [lt=8] fail to resolve sys view(ret=-5019) [2024-02-19 19:03:34.557820] WARN 
resolve_basic_table (ob_dml_resolver.cpp:1527) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAD-0-0] [lt=9] Table 'oceanbase.__all_backup_info' doesn't exist [2024-02-19 19:03:34.557830] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAD-0-0] [lt=9] resolve basic table failed(ret=-5019) [2024-02-19 19:03:34.557840] WARN [SQL.RESV] resolve_table_list (ob_update_resolver.cpp:423) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAD-0-0] [lt=8] failed to resolve table(ret=-5019) [2024-02-19 19:03:34.557848] WARN [SQL.RESV] resolve (ob_update_resolver.cpp:76) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAD-0-0] [lt=8] resolve table failed(ret=-5019) [2024-02-19 19:03:34.557859] WARN [SQL.RESV] stmt_resolver_func (ob_resolver.cpp:155) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAD-0-0] [lt=8] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3074) [2024-02-19 19:03:34.557878] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAD-0-0] [lt=11] failed to resolve(ret=-5019) [2024-02-19 19:03:34.557890] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAD-0-0] [lt=10] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:34.557901] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAD-0-0] [lt=9] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:34.557911] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAD-0-0] [lt=9] fail to handle physical plan(ret=-5019) [2024-02-19 19:03:34.557922] WARN [SQL] stmt_query (ob_sql.cpp:175) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAD-0-0] [lt=8] fail to handle text query(stmt=update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882', ret=-5019) [2024-02-19 19:03:34.557933] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAD-0-0] [lt=9] executor execute failed(ret=-5019) [2024-02-19 19:03:34.557944] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAD-0-0] [lt=9] execute failed(ret=-5019, executor={ObIExecutor:, sql:"update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882'"}, retry_cnt=0) [2024-02-19 19:03:34.557964] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAD-0-0] [lt=13] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-02-19 19:03:34.558013] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAD-0-0] [lt=45] result set close failed(ret=-5019) [2024-02-19 19:03:34.558024] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAD-0-0] [lt=10] result set close failed(ret=-5019) [2024-02-19 19:03:34.558033] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAD-0-0] [lt=7] failed to close result(close_ret=-5019, ret=-5019) [2024-02-19 19:03:34.558060] WARN 
[SERVER] query (ob_inner_sql_connection.cpp:763) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAD-0-0] [lt=9] failed to process record(executor={ObIExecutor:, sql:"update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882'"}, record_ret=-5019, ret=-5019) [2024-02-19 19:03:34.558075] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1106733][BackupLease][T0][YB42AC0103F2-000611B923978EAD-0-0] [lt=12] failed to process final(executor={ObIExecutor:, sql:"update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882'"}, aret=-5019, ret=-5019) [2024-02-19 19:03:34.558087] WARN [SERVER] execute_write_inner (ob_inner_sql_connection.cpp:1818) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=10] execute sql failed(ret=-5019, tenant_id=1, sql=update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882') [2024-02-19 19:03:34.558099] INFO [SERVER] execute_write_inner (ob_inner_sql_connection.cpp:1900) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=10] execute write sql(ret=-5019, tenant_id=1, affected_rows=0, sql=update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882') [2024-02-19 19:03:34.558139] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=10] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-02-19 19:03:34.558151] WARN [SERVER] execute_write (ob_inner_sql_connection.cpp:1786) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=11] execute_write failed(ret=-5019, tenant_id=1, sql=update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882', is_user_sql=false) [2024-02-19 19:03:34.558162] WARN [SERVER] execute_write (ob_inner_sql_connection.cpp:1775) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=9] execute_write failed(ret=-5019, tenant_id=1, sql="update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882'") [2024-02-19 19:03:34.558172] WARN [COMMON.MYSQLP] write (ob_mysql_proxy.cpp:133) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=8] execute sql failed(ret=-5019, conn=0x7fdd189bc050, start=1708340614556816, sql=update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882') [2024-02-19 19:03:34.558226] WARN [SERVER] clean_backup_scheduler_leader (ob_backup_operator.cpp:348) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=12] execute sql failed(ret=-5019, sql=update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882') [2024-02-19 19:03:34.558239] WARN [SERVER] clean_backup_scheduler_leader (ob_backup_manager.cpp:517) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=12] failed to clean backup scheduler leader(ret=-5019) [2024-02-19 19:03:34.559159] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=27] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, 
disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:34.559182] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=23] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:34.564381] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=12] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:34.564423] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=42] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340614564367}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:34.564450] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=24] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340614564367}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:34.569326] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=25] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:34.569364] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=38] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:34.579370] WARN [COORDINATOR] get_ls_election_reference_info (ob_leader_coordinator.cpp:138) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC84-0-0] [lt=172] can not find this ls_id in all_ls_election_reference_info_(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1}, all_ls_election_reference_info=[]) [2024-02-19 19:03:34.579411] WARN [COORDINATOR] refresh_ (election_priority_v1.cpp:144) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC84-0-0] [lt=44] fail to get ls election reference info(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, *this={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, 
serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}) [2024-02-19 19:03:34.579429] WARN [COORDINATOR] operator() (election_priority_impl.cpp:246) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC84-0-0] [lt=17] refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id_={id:1}, element={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}) [2024-02-19 19:03:34.579442] WARN iterate (ob_tuple.h:272) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC84-0-0] [lt=11] assign element failed(ret=-4018, std::get(tuple)={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}) [2024-02-19 19:03:34.579451] WARN [COORDINATOR] refresh (election_priority_impl.cpp:261) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC84-0-0] [lt=9] refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1}, *this={priority:{is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}}) [2024-02-19 19:03:34.579527] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:34.579558] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=30] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:34.580340] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1009) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=45] Wash compute wash size(is_wash_valid=true, sys_total_wash_size=1386106880, global_cache_size=4161536, tenant_max_wash_size=4161536, tenant_min_wash_size=4161536, tenant_ids_=[512, 500, 999, 506, 507, 508, 509, 510, 1]) [2024-02-19 19:03:34.580417] INFO [COMMON] wash (ob_kvcache_store.cpp:343) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=25] Wash time detail, (compute_wash_size_time=109, refresh_score_time=47, wash_time=5) [2024-02-19 19:03:34.581030] INFO [SHARE.LOCATION] add_update_task 
(ob_ls_location_service.cpp:449) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=34] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340614581016}) [2024-02-19 19:03:34.581049] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=18] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340614564367}}) [2024-02-19 19:03:34.584579] INFO [SQL.PC] update_memory_conf (ob_plan_cache.cpp:1499) [1106739][PlanCacheEvict][T1][Y0-0000000000000000-0-0] [lt=26] update plan cache memory config(ob_plan_cache_percentage=5, ob_plan_cache_evict_high_percentage=90, ob_plan_cache_evict_low_percentage=50, tenant_id=1) [2024-02-19 19:03:34.584606] INFO [SQL.PC] cache_evict (ob_plan_cache.cpp:1130) [1106739][PlanCacheEvict][T1][Y0-0000000000000000-0-0] [lt=25] start lib cache evict(tenant_id=1, mem_hold=2097152, mem_limit=107374180, cache_obj_num=2, cache_node_num=2) [2024-02-19 19:03:34.584618] INFO [SQL.PC] cache_evict (ob_plan_cache.cpp:1147) [1106739][PlanCacheEvict][T1][Y0-0000000000000000-0-0] [lt=10] end lib cache evict(tenant_id=1, cache_evict_num=0, mem_hold=2097152, mem_limit=107374180, cache_obj_num=2, cache_node_num=2) [2024-02-19 19:03:34.584650] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17] get wrs ts(ls_id={id:1}, delta_ns=-1706042771800711696, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807) [2024-02-19 19:03:34.584676] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=24] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1) [2024-02-19 19:03:34.587077] INFO [SQL.PC] runTimerTask (ob_plan_cache_manager.cpp:291) [1106739][PlanCacheEvict][T0][Y0-0000000000000000-0-0] [lt=8] schedule next cache evict task(evict_interval=1000000) [2024-02-19 19:03:34.589653] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:34.589684] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=32] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 
19:03:34.590162] INFO [SQL.PC] runTimerTask (ob_plan_cache_manager.cpp:299) [1106739][PlanCacheEvict][T0][Y0-0000000000000000-0-0] [lt=33] schedule next cache evict task(evict_interval=1000000) [2024-02-19 19:03:34.599803] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=26] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:34.599843] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=43] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:34.609958] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=28] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:34.610254] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=298] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:34.619082] INFO [COMMON] clean_garbage_node (ob_kvcache_map.cpp:645) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=10] Cache wash clean map node details(ret=0, clean_node_count=0, clean_time=38646, clean_start_pos=1006624, clean_num=31457) [2024-02-19 19:03:34.620369] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=28] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, 
log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:34.620395] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=26] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:34.624579] INFO [SHARE] blacklist_loop_ (ob_server_blacklist.cpp:313) [1106781][Blacklist][T0][Y0-0000000000000000-0-0] [lt=42] blacklist_loop exec finished(cost_time=30, is_enabled=true, send_cnt=0) [2024-02-19 19:03:34.630527] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=26] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:34.630573] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=47] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:34.636985] INFO [STORAGE] scheduler_ls_ha_handler_ (ob_storage_ha_service.cpp:186) [1108342][T1_HAService][T1][Y0-0000000000000000-0-0] [lt=28] start do ls ha handler(ls_id_array_=[{id:1}]) [2024-02-19 19:03:34.640718] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=44] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:34.640753] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 
0x7fdd4c1eddc3
[2024-02-19 19:03:34.650941] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=43] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:34.650988] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=68] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:34.660612] INFO [STORAGE] gc_tables_in_queue (ob_tenant_meta_mem_mgr.cpp:351) [1107037][T1_T3mGC][T1][Y0-0000000000000000-0-0] [lt=66] Recycle 0 table(ret=0, allocator_={used:2532285, total:3058518}, tablet_pool_={typeid(T).name():"N9oceanbase7storage8ObTabletE", sizeof(T):2432, used_obj_cnt:980, free_obj_hold_cnt:1, allocator used:2448576, allocator total:2485504}, sstable_pool_={typeid(T).name():"N9oceanbase12blocksstable9ObSSTableE", sizeof(T):1024, used_obj_cnt:2027, free_obj_hold_cnt:2, allocator used:2207552, allocator total:2289280}, memtable_pool_={typeid(T).name():"N9oceanbase8memtable10ObMemtableE", sizeof(T):1856, used_obj_cnt:0, free_obj_hold_cnt:0, allocator used:0, allocator total:0}, tablet count=980, min_minor_cnt=0, pinned_tablet_cnt=0)
[2024-02-19 19:03:34.661299] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=50] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:34.661324] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=25] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:34.665033] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=13] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0)
[2024-02-19 19:03:34.665070] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=36] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340614665021}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:34.665094] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=21] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340614665021}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:34.666902] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C06-0-0] [lt=13] table not exist(tenant_id=1, database_id=201001, table_name=__all_merge_info, ret=-5019)
[2024-02-19 19:03:34.666915] INFO do_work (ob_rl_mgr.cpp:704) [1106705][rl_mgr0][T0][Y0-0000000000000000-0-0] [lt=26] swc wakeup.(stat_period_=1000000, ready=false)
[2024-02-19 19:03:34.666941] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C06-0-0] [lt=37] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_merge_info, ret=-5019)
[2024-02-19 19:03:34.666954] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C06-0-0] [lt=12] fail to resolve table relation recursively(tenant_id=1, ret=-5019)
[2024-02-19 19:03:34.666964] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C06-0-0] [lt=8] resolve table relation factor failed(ret=-5019, table_name=__all_merge_info)
[2024-02-19 19:03:34.666978] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C06-0-0] [lt=9] fail to resolve table(ret=-5019)
[2024-02-19 19:03:34.666991] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C06-0-0] [lt=12] fail to resolve sys view(ret=-5019)
[2024-02-19 19:03:34.667007] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C06-0-0] [lt=10] Table 'oceanbase.__all_merge_info' doesn't exist
[2024-02-19 19:03:34.667022] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C06-0-0] [lt=14] resolve base or alias table factor failed(ret=-5019)
[2024-02-19 19:03:34.667032] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C06-0-0] [lt=8] resolve basic table failed(ret=-5019)
[2024-02-19 19:03:34.667040] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C06-0-0] [lt=8] fail to exec resolve_table(*table_node, table_item)(ret=-5019)
[2024-02-19 19:03:34.667048] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C06-0-0] [lt=7] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019)
[2024-02-19 19:03:34.667063] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C06-0-0] [lt=13] resolve normal query failed(ret=-5019)
[2024-02-19 19:03:34.667072] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C06-0-0] [lt=9] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073)
[2024-02-19 19:03:34.667096] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C06-0-0] [lt=13] failed to resolve(ret=-5019)
[2024-02-19 19:03:34.667107] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C06-0-0] [lt=11] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-02-19 19:03:34.667124] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C06-0-0] [lt=14] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-02-19 19:03:34.667139] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C06-0-0] [lt=14] fail to handle physical plan(ret=-5019)
[2024-02-19 19:03:34.667150] WARN [SQL] stmt_query (ob_sql.cpp:175) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C06-0-0] [lt=8] fail to handle text query(stmt=SELECT * FROM __all_merge_info WHERE tenant_id = '1', ret=-5019)
[2024-02-19 19:03:34.667166] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C06-0-0] [lt=15] executor execute failed(ret=-5019)
[2024-02-19 19:03:34.667184] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C06-0-0] [lt=17] execute failed(ret=-5019, executor={ObIExecutor:, sql:"SELECT * FROM __all_merge_info WHERE tenant_id = '1'"}, retry_cnt=0)
[2024-02-19 19:03:34.667208] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C06-0-0] [lt=16] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false)
[2024-02-19 19:03:34.667233] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C06-0-0] [lt=22] result set close failed(ret=-5019)
[2024-02-19 19:03:34.667248] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C06-0-0] [lt=14] result set close failed(ret=-5019)
[2024-02-19 19:03:34.667258] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C06-0-0] [lt=9] failed to close result(close_ret=-5019, ret=-5019)
[2024-02-19 19:03:34.667290] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C06-0-0] [lt=14] failed to process record(executor={ObIExecutor:, sql:"SELECT * FROM __all_merge_info WHERE tenant_id = '1'"}, record_ret=-5019, ret=-5019)
[2024-02-19 19:03:34.667309] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C06-0-0] [lt=18] failed to process final(executor={ObIExecutor:, sql:"SELECT * FROM __all_merge_info WHERE tenant_id = '1'"}, aret=-5019, ret=-5019)
[2024-02-19 19:03:34.667321] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1107631][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=10] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT * FROM __all_merge_info WHERE tenant_id = '1')
[2024-02-19 19:03:34.667337] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1107631][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=15] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1)
[2024-02-19 19:03:34.667348] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1107631][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=10] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1)
[2024-02-19 19:03:34.667364] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1107631][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=14] query failed(ret=-5019, conn=0x7fdcf4ef4050, start=1708340614666709, sql=SELECT * FROM __all_merge_info WHERE tenant_id = '1')
[2024-02-19 19:03:34.667379] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1107631][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=15] read failed(ret=-5019)
[2024-02-19 19:03:34.667387] WARN [SHARE] load_global_merge_info (ob_global_merge_table_operator.cpp:48) [1107631][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=6] fail to execute sql(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, meta_tenant_id=1, sql=SELECT * FROM __all_merge_info WHERE tenant_id = '1')
[2024-02-19 19:03:34.667452] WARN [STORAGE] refresh_merge_info (ob_tenant_freeze_info_mgr.cpp:789) [1107631][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=13] failed to load global merge info(ret=-5019, ret="OB_TABLE_NOT_EXIST", global_merge_info={tenant_id:1, cluster:{name:"cluster", value:0, need_update:false}, frozen_scn:{name:"frozen_scn", value:1, need_update:false}, global_broadcast_scn:{name:"global_broadcast_scn", value:1, need_update:false}, last_merged_scn:{name:"last_merged_scn", value:1, need_update:false}, is_merge_error:{name:"is_merge_error", value:0, need_update:false}, merge_status:{name:"merge_status", value:0, need_update:false}, error_type:{name:"error_type", value:0, need_update:false}, suspend_merging:{name:"suspend_merging", value:0, need_update:false}, merge_start_time:{name:"merge_start_time", value:0, need_update:false}, last_merged_time:{name:"last_merged_time", value:0, need_update:false}})
[2024-02-19 19:03:34.667477] WARN [STORAGE] runTimerTask (ob_tenant_freeze_info_mgr.cpp:884) [1107631][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=25] fail to refresh merge info(tmp_ret=-5019, tmp_ret="OB_TABLE_NOT_EXIST")
[2024-02-19 19:03:34.668108] INFO [STORAGE.TRANS] in_leader_serving_state (ob_trans_ctx_mgr_v4.cpp:881) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=7] ObLSTxCtxMgr not master(this={this:0x7fdce3204030, ls_id:{id:1}, tenant_id:1, state:"F_WORKING", total_tx_ctx_count:0, leader_takeover_ts:{mts:0}, is_leader_serving:false, max_replay_commit_version:1707751112415295196, ls_retain_ctx_mgr:{retain_ctx_list_.size():0}, aggre_rec_log_ts:-1, prev_aggre_rec_log_ts:-1, online_ts:0, uref:1073741826})
[2024-02-19 19:03:34.668427] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:589) [1106796][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=18] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/1, request doing=0/0)
[2024-02-19 19:03:34.668966] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:589) [1106795][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=25] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/6, request doing=0/0)
[2024-02-19 19:03:34.668971] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:589) [1106798][MysqlUnix][T0][Y0-0000000000000000-0-0] [lt=43] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
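The long run of WARN lines above is one failure propagating up a single call stack, not many independent problems: the inner SQL on oceanbase.__all_merge_info fails in the resolver with ret=-5019 (OB_TABLE_NOT_EXIST, printed in MySQL style as "Table ... doesn't exist"), every layer from resolve_basic_table up through ob_mysql_proxy re-logs the same return code, and need_retry=false because a missing table is not retryable. A hedged way to confirm the table really is absent from the schema service, sketched against MySQL-mode information_schema (whether inner tables are exposed there can vary by OceanBase version):

    -- Run in the sys tenant; an empty result is consistent with ret=-5019 above.
    SELECT table_schema, table_name
      FROM information_schema.tables
     WHERE table_schema = 'oceanbase'
       AND table_name IN ('__all_merge_info', '__all_weak_read_service',
                          '__all_ls_meta_table', '__all_server');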
[2024-02-19 19:03:34.670104] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:565) [1106792][BatchIO][T0][Y0-0000000000000000-0-0] [lt=19] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-02-19 19:03:34.670150] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:527) [1106800][RpcUnix][T0][Y0-0000000000000000-0-0] [lt=13] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-02-19 19:03:34.670545] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:565) [1106791][BatchIO][T0][Y0-0000000000000000-0-0] [lt=10] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-02-19 19:03:34.670576] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:565) [1106793][BatchIO][T0][Y0-0000000000000000-0-0] [lt=12] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-02-19 19:03:34.671460] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=42] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:34.671492] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=32] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:34.681115] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=26] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340614681092})
[2024-02-19 19:03:34.681157] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=45] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340614665021}})
[2024-02-19 19:03:34.681604] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=25] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:34.681637] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=33] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:34.684667] WARN [STORAGE.TRANS] post_cluster_heartbeat_rpc_ (ob_tenant_weak_read_service.cpp:797) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=18] get cluster service master fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, cluster_service_tablet_id={id:226})
[2024-02-19 19:03:34.684688] WARN [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:438) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=21] tenant weak read service cluster heartbeat RPC fail(rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.1.3.242:2882", cluster_service_tablet_id={id:226})
[2024-02-19 19:03:34.684703] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:756) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version=1708336686671726824, valid_part_count=1, total_part_count=1, generate_timestamp=1708340614684653)
[2024-02-19 19:03:34.684714] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:766) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1708340614484640, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-02-19 19:03:34.684731] INFO [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:738) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] [SELF_CHECK] current server is WRS leader, need start CLUSTER weak read service(tenant_id=1, serve_leader_epoch=0, cur_leader_epoch=138, cluster_service_tablet_id_={id:226}, in_service=false, can_update_version=false, start_service_tstamp_=0, error_count_for_change_leader_=0, last_error_tstamp_for_change_leader_=0)
[2024-02-19 19:03:34.684755] INFO [STORAGE.TRANS] start_service (ob_tenant_weak_read_cluster_service.cpp:336) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] begin start service(tenant_id=1, is_in_service()=false, can_update_version=false)
[2024-02-19 19:03:34.684764] INFO [STORAGE.TRANS] start_service (ob_tenant_weak_read_cluster_service.cpp:338) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9] start TenantWeakReadClusterService(tenant_id=1)
[2024-02-19 19:03:34.685598] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F0-0-0] [lt=9] table not exist(tenant_id=1, database_id=201001, table_name=__all_weak_read_service, ret=-5019)
[2024-02-19 19:03:34.685625] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F0-0-0] [lt=25] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_weak_read_service, ret=-5019)
[2024-02-19 19:03:34.685638] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F0-0-0] [lt=10] fail to resolve table relation recursively(tenant_id=1, ret=-5019)
[2024-02-19 19:03:34.685649] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F0-0-0] [lt=11] resolve table relation factor failed(ret=-5019, table_name=__all_weak_read_service)
[2024-02-19 19:03:34.685662] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F0-0-0] [lt=9] fail to resolve table(ret=-5019)
[2024-02-19 19:03:34.685671] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F0-0-0] [lt=9] fail to resolve sys view(ret=-5019)
[2024-02-19 19:03:34.685690] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F0-0-0] [lt=8] Table 'oceanbase.__all_weak_read_service' doesn't exist
[2024-02-19 19:03:34.685699] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F0-0-0] [lt=8] resolve base or alias table factor failed(ret=-5019)
[2024-02-19 19:03:34.685708] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F0-0-0] [lt=7] resolve basic table failed(ret=-5019)
[2024-02-19 19:03:34.685715] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F0-0-0] [lt=7] fail to exec resolve_table(*table_node, table_item)(ret=-5019)
[2024-02-19 19:03:34.685721] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F0-0-0] [lt=6] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019)
[2024-02-19 19:03:34.685735] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F0-0-0] [lt=12] resolve normal query failed(ret=-5019)
[2024-02-19 19:03:34.685743] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F0-0-0] [lt=6] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073)
[2024-02-19 19:03:34.685758] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F0-0-0] [lt=7] failed to resolve(ret=-5019)
[2024-02-19 19:03:34.685766] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F0-0-0] [lt=8] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-02-19 19:03:34.685775] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F0-0-0] [lt=7] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-02-19 19:03:34.685782] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F0-0-0] [lt=6] fail to handle physical plan(ret=-5019)
[2024-02-19 19:03:34.685793] WARN [SQL] stmt_query (ob_sql.cpp:175) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F0-0-0] [lt=9] fail to handle text query(stmt=select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = '', ret=-5019)
[2024-02-19 19:03:34.685807] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F0-0-0] [lt=12] executor execute failed(ret=-5019)
[2024-02-19 19:03:34.685815] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F0-0-0] [lt=8] execute failed(ret=-5019, executor={ObIExecutor:, sql:"select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = ''"}, retry_cnt=0)
[2024-02-19 19:03:34.685833] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F0-0-0] [lt=12] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false)
[2024-02-19 19:03:34.685851] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F0-0-0] [lt=15] result set close failed(ret=-5019)
[2024-02-19 19:03:34.685858] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F0-0-0] [lt=6] result set close failed(ret=-5019)
[2024-02-19 19:03:34.685867] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F0-0-0] [lt=8] failed to close result(close_ret=-5019, ret=-5019)
[2024-02-19 19:03:34.685885] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F0-0-0] [lt=6] failed to process record(executor={ObIExecutor:, sql:"select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = ''"}, record_ret=-5019, ret=-5019)
[2024-02-19 19:03:34.685898] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F0-0-0] [lt=12] failed to process final(executor={ObIExecutor:, sql:"select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = ''"}, aret=-5019, ret=-5019)
[2024-02-19 19:03:34.685907] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7] execute sql failed(ret=-5019, tenant_id=1, sql=select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = '')
[2024-02-19 19:03:34.685919] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1)
[2024-02-19 19:03:34.685926] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1)
[2024-02-19 19:03:34.685933] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6] query failed(ret=-5019, conn=0x7fdcdc89a050, start=1708340614685422, sql=select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = '')
[2024-02-19 19:03:34.685943] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9] read failed(ret=-5019)
[2024-02-19 19:03:34.685954] WARN [STORAGE.TRANS] query_cluster_version_range_ (ob_tenant_weak_read_cluster_service.cpp:192) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8] execute sql read fail(ret=-5019, ret="OB_TABLE_NOT_EXIST", exec_tenant_id=1, tenant_id=1, sql=select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = '')
[2024-02-19 19:03:34.686041] WARN [STORAGE.TRANS] start_service (ob_tenant_weak_read_cluster_service.cpp:367) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=24] query cluster version range from WRS table fail(ret=-5019, ret="OB_TABLE_NOT_EXIST")
[2024-02-19 19:03:34.686056] INFO [STORAGE.TRANS] start_service (ob_tenant_weak_read_cluster_service.cpp:415) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] start service done(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, in_service=false, leader_epoch=0, current_version=0, delta=1708340614686053, min_version=0, max_version=0, max_stale_time=5000000000, all_valid_server_count=0, total_time=1311, wlock_time=30, check_leader_time=2, query_version_time=0, persist_version_time=0)
[2024-02-19 19:03:34.686070] WARN [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:781) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13] start CLUSTER weak read service fail(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1)
[2024-02-19 19:03:34.686079] INFO [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:791) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] [SELF_CHECK] done(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, need_start_service=true, need_stop_service=false, need_change_leader=false, is_in_service()=false, can_update_version=false, cur_leader_epoch=138, start_service_tstamp_=0, error_count_for_change_leader_=0, last_error_tstamp_for_change_leader_=0)
[2024-02-19 19:03:34.686126] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9] get wrs ts(ls_id={id:1}, delta_ns=-1706042771800609466, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807)
[2024-02-19 19:03:34.686138] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1)
[2024-02-19 19:03:34.687902] INFO [COMMON] replace_fragment_node (ob_kvcache_map.cpp:695) [1106689][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=23] Cache replace map node details(ret=0, replace_node_count=0, replace_time=19668, replace_start_pos=1258240, replace_num=15728)
[2024-02-19 19:03:34.692616] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
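One detail that makes the entries above easier to read: the wide integers in fields such as timestamp, last_log_ts and oldest_timestamp are nanoseconds since the Unix epoch. Trimming nine digits turns timestamp=1707751112415295196 into roughly 2024-02-12 and oldest_timestamp=1707200283752293320 into roughly 2024-02-06, both at least a week older than these 2024-02-19 entries, which is consistent with replay having stalled once the clog disk filled. A small conversion sketch in MySQL-mode SQL (FROM_UNIXTIME and DIV are standard there; the rendered time follows the session time_zone):

    -- Convert the nanosecond SCN-style values quoted in the log into readable times.
    SELECT FROM_UNIXTIME(1707751112415295196 DIV 1000000000) AS weak_read_ts,
           FROM_UNIXTIME(1707200283752293320 DIV 1000000000) AS oldest_clog_block_ts;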
[2024-02-19 19:03:34.692652] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=38] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:34.702827] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=48] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:34.702880] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=57] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:34.713007] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:34.713042] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=39] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:34.716644] INFO [SERVER] decide_disk_size (ob_server_utils.cpp:202) [1106802][LogLoop][T0][Y0-0000000000000000-0-0] [lt=14] decide disk size finished(dir="/backup/oceanbase/data/sstable", suggested_disk_size=8589934592, suggested_disk_percentage=0, default_disk_percentage=60, total_space=246944890880, free_space=220974178304, disk_size=8589934592)
[2024-02-19 19:03:34.716671] INFO [SERVER] decide_disk_size (ob_server_utils.cpp:202) [1106802][LogLoop][T0][Y0-0000000000000000-0-0] [lt=31] decide disk size finished(dir="/backup/oceanbase/data/clog", suggested_disk_size=8589934592, suggested_disk_percentage=0, default_disk_percentage=30, total_space=246944890880, free_space=220974178304, disk_size=8589934592)
[2024-02-19 19:03:34.716681] INFO [SERVER] cal_all_part_disk_size (ob_server_utils.cpp:164) [1106802][LogLoop][T0][Y0-0000000000000000-0-0] [lt=8] decide_all_disk_size succ(data_dir="/backup/oceanbase/data/sstable", clog_dir="/backup/oceanbase/data/clog", suggested_data_disk_size=8589934592, suggested_data_disk_percentage=0, data_default_disk_percentage=60, clog_default_disk_percentage=30, shared_mode=true, data_disk_size=8589934592, log_disk_size=8589934592)
[2024-02-19 19:03:34.723159] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=27] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:34.723200] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=44] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:34.733336] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=27] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:34.733478] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=160] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:34.743859] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=43] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:34.743892] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:34.749516] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E1-0-0] [lt=8] table not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, ret=-5019)
[2024-02-19 19:03:34.749554] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E1-0-0] [lt=37] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, ret=-5019)
[2024-02-19 19:03:34.749568] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E1-0-0] [lt=11] fail to resolve table relation recursively(tenant_id=1, ret=-5019)
[2024-02-19 19:03:34.749579] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E1-0-0] [lt=11] resolve table relation factor failed(ret=-5019, table_name=__all_ls_meta_table)
[2024-02-19 19:03:34.749593] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E1-0-0] [lt=9] fail to resolve table(ret=-5019)
[2024-02-19 19:03:34.749602] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E1-0-0] [lt=9] fail to resolve sys view(ret=-5019)
[2024-02-19 19:03:34.749615] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E1-0-0] [lt=7] Table 'oceanbase.__all_ls_meta_table' doesn't exist
[2024-02-19 19:03:34.749624] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E1-0-0] [lt=8] resolve base or alias table factor failed(ret=-5019)
[2024-02-19 19:03:34.749633] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E1-0-0] [lt=8] resolve basic table failed(ret=-5019)
[2024-02-19 19:03:34.749644] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E1-0-0] [lt=9] fail to exec resolve_table(*table_node, table_item)(ret=-5019)
[2024-02-19 19:03:34.749656] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E1-0-0] [lt=12] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019)
[2024-02-19 19:03:34.749667] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E1-0-0] [lt=9] resolve normal query failed(ret=-5019)
[2024-02-19 19:03:34.749677] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E1-0-0] [lt=9] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073)
[2024-02-19 19:03:34.749695] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E1-0-0] [lt=10] failed to resolve(ret=-5019)
[2024-02-19 19:03:34.749707] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E1-0-0] [lt=10] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-02-19 19:03:34.749719] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E1-0-0] [lt=9] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-02-19 19:03:34.749734] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E1-0-0] [lt=14] fail to handle physical plan(ret=-5019)
[2024-02-19 19:03:34.749745] WARN [SQL] stmt_query (ob_sql.cpp:175) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E1-0-0] [lt=9] fail to handle text query(stmt=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port, ret=-5019)
[2024-02-19 19:03:34.749756] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E1-0-0] [lt=9] executor execute failed(ret=-5019)
[2024-02-19 19:03:34.749767] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E1-0-0] [lt=9] execute failed(ret=-5019, executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, retry_cnt=0)
[2024-02-19 19:03:34.749787] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E1-0-0] [lt=12] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false)
[2024-02-19 19:03:34.749807] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E1-0-0] [lt=16] result set close failed(ret=-5019)
[2024-02-19 19:03:34.749816] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E1-0-0] [lt=8] result set close failed(ret=-5019)
[2024-02-19 19:03:34.749825] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E1-0-0] [lt=8] failed to close result(close_ret=-5019, ret=-5019)
[2024-02-19 19:03:34.749852] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E1-0-0] [lt=9] failed to process record(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, record_ret=-5019, ret=-5019)
[2024-02-19 19:03:34.749865] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E1-0-0] [lt=12] failed to process final(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, aret=-5019, ret=-5019)
[2024-02-19 19:03:34.749877] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E1-0-0] [lt=9] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port)
[2024-02-19 19:03:34.749897] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E1-0-0] [lt=18] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1)
[2024-02-19 19:03:34.749907] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E1-0-0] [lt=9] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1)
[2024-02-19 19:03:34.749924] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E1-0-0] [lt=15] query failed(ret=-5019, conn=0x7fdcdc9be050, start=1708340614749295, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port)
[2024-02-19 19:03:34.749938] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E1-0-0] [lt=13] read failed(ret=-5019)
[2024-02-19 19:03:34.749950] WARN [SHARE.PT] get_by_tenant (ob_persistent_ls_table.cpp:612) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E1-0-0] [lt=9] execute sql failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port)
[2024-02-19 19:03:34.750090] WARN [SHARE.PT] get_by_tenant (ob_ls_table_operator.cpp:252) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E1-0-0] [lt=16] get all ls info by persistent_ls_ failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1)
[2024-02-19 19:03:34.750113] WARN [SHARE] inner_open_ (ob_ls_table_iterator.cpp:104) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E1-0-0] [lt=21] fail to get ls infos by tenant(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, inner_table_only=true)
[2024-02-19 19:03:34.750124] WARN [SHARE] next (ob_ls_table_iterator.cpp:71) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E1-0-0] [lt=11] fail to open iterator(ret=-5019, ret="OB_TABLE_NOT_EXIST")
[2024-02-19 19:03:34.750135] WARN [SERVER] build_replica_map_ (ob_tenant_meta_checker.cpp:331) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E1-0-0] [lt=9] ls table iterator next failed(ret=-5019, ret="OB_TABLE_NOT_EXIST")
[2024-02-19 19:03:34.750148] WARN [SERVER] check_ls_table_ (ob_tenant_meta_checker.cpp:213) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E1-0-0] [lt=10] build replica map from ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1)
[2024-02-19 19:03:34.750160] WARN [SERVER] check_ls_table (ob_tenant_meta_checker.cpp:193) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E1-0-0] [lt=11] check ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1)
[2024-02-19 19:03:34.750171] WARN [SERVER] runTimerTask (ob_tenant_meta_checker.cpp:43) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E1-0-0] [lt=9] fail to check ls meta table(ret=-5019, ret="OB_TABLE_NOT_EXIST")
[2024-02-19 19:03:34.754034] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=27] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
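The PALF WARN/ERROR pair that recurs throughout this window is a fixed disk-budget condition rather than a transient fault: tenant_1's clog directory has log_disk_size = 2048 MB, the 80% recycling threshold works out to 2048 * 0.80 = 1638 MB (the warn_size in the message) and the 95% stop-writing limit to 2048 * 0.95 = 1945 MB (the limit_size), and used_size already sits at the 1945 MB limit, so no block can be recycled (the single log stream's base LSN has not advanced) and the pair repeats roughly every 10 ms until the budget changes. A hedged remediation sketch, assuming standard OceanBase 4.x statements; the unit name below is an illustrative placeholder, so check your own configuration first:

    -- Server-level clog budget (the decide_disk_size entries above suggest 8 GB for this host).
    ALTER SYSTEM SET log_disk_size = '8G';
    -- The tenant-level budget comes from its resource unit; enlarging LOG_DISK_SIZE there
    -- is what actually gives tenant_1's PALF more room ('sys_unit_config' is a placeholder).
    ALTER RESOURCE UNIT sys_unit_config LOG_DISK_SIZE = '4G';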
[2024-02-19 19:03:34.754073] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=40] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:34.764204] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=31] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:34.764245] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=50] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:34.765746] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=14] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0)
[2024-02-19 19:03:34.765772] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=25] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340614765737}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:34.765790] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=16] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340614765737}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:34.774373] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=28] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:34.774418] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=47] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:34.781306] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=24] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340614781289})
[2024-02-19 19:03:34.781341] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=37] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340614765737}})
[2024-02-19 19:03:34.784549] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=30] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:34.784585] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:34.784834] WARN [STORAGE.TRANS] post_cluster_heartbeat_rpc_ (ob_tenant_weak_read_service.cpp:797) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14] get cluster service master fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, cluster_service_tablet_id={id:226})
[2024-02-19 19:03:34.784879] WARN [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:438) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=44] tenant weak read service cluster heartbeat RPC fail(rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.1.3.242:2882", cluster_service_tablet_id={id:226})
[2024-02-19 19:03:34.784919] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340614784896})
[2024-02-19 19:03:34.784967] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:756) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=45] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version=1708336686671726824, valid_part_count=1, total_part_count=1, generate_timestamp=1708340614784818)
[2024-02-19 19:03:34.784981] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:766) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1708340614684721, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-02-19 19:03:34.785060] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=18] get wrs ts(ls_id={id:1}, delta_ns=-1706042771800511008, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807)
[2024-02-19 19:03:34.785083] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=22] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1)
[2024-02-19 19:03:34.794727] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:34.794780] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=55] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:34.804911] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=38] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:34.804981] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=71] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:34.815135] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=43] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:34.815171] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=40] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:34.820806] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1009) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=90] Wash compute wash size(is_wash_valid=true, sys_total_wash_size=1386106880, global_cache_size=4161536, tenant_max_wash_size=4161536, tenant_min_wash_size=4161536, tenant_ids_=[512, 500, 999, 506, 507, 508, 509, 510, 1])
[2024-02-19 19:03:34.820948] INFO [COMMON] wash (ob_kvcache_store.cpp:343) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=69] Wash time detail, (compute_wash_size_time=154, refresh_score_time=48, wash_time=27)
[2024-02-19 19:03:34.822848] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02B-0-0] [lt=6] table not exist(tenant_id=1, database_id=201001, table_name=__all_server, ret=-5019)
[2024-02-19 19:03:34.822881] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02B-0-0] [lt=32] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_server, ret=-5019)
[2024-02-19 19:03:34.822896] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02B-0-0] [lt=12] fail to resolve table relation recursively(tenant_id=1, ret=-5019)
[2024-02-19 19:03:34.822908] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02B-0-0] [lt=10] resolve table relation factor failed(ret=-5019, table_name=__all_server)
[2024-02-19 19:03:34.822926] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02B-0-0] [lt=13] fail to resolve table(ret=-5019)
[2024-02-19 19:03:34.822939] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02B-0-0] [lt=13] fail to resolve sys view(ret=-5019)
[2024-02-19 19:03:34.822956] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02B-0-0] [lt=11] Table 'oceanbase.__all_server' doesn't exist
[2024-02-19 19:03:34.822969] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02B-0-0] [lt=12] resolve base or alias table factor failed(ret=-5019)
[2024-02-19 19:03:34.822978] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02B-0-0] [lt=9] resolve basic table failed(ret=-5019)
[2024-02-19 19:03:34.822998] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02B-0-0] [lt=19] fail to exec resolve_table(*table_node, table_item)(ret=-5019)
[2024-02-19 19:03:34.823007] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02B-0-0] [lt=8] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019)
[2024-02-19 19:03:34.823017] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02B-0-0] [lt=8] resolve normal query failed(ret=-5019)
[2024-02-19 19:03:34.823026] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02B-0-0] [lt=7] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073)
[2024-02-19 19:03:34.823042] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02B-0-0] [lt=9] failed to resolve(ret=-5019)
[2024-02-19 19:03:34.823052] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02B-0-0] [lt=9] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-02-19 19:03:34.823074] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02B-0-0] [lt=20] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-02-19 19:03:34.823088] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02B-0-0] [lt=12] fail to handle physical plan(ret=-5019)
[2024-02-19 19:03:34.823102] WARN [SQL] stmt_query (ob_sql.cpp:175) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02B-0-0] [lt=11] fail to handle text query(stmt=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882, ret=-5019)
[2024-02-19 19:03:34.823117] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02B-0-0] [lt=12] executor execute failed(ret=-5019)
[2024-02-19 19:03:34.823131] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02B-0-0] [lt=13] execute failed(ret=-5019, executor={ObIExecutor:, sql:"SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882"}, retry_cnt=0)
[2024-02-19 19:03:34.823164] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02B-0-0] [lt=16] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false)
[2024-02-19 19:03:34.823186] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02B-0-0] [lt=21] result set close failed(ret=-5019)
[2024-02-19 19:03:34.823195] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02B-0-0] [lt=8] result set close failed(ret=-5019)
[2024-02-19 19:03:34.823203] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02B-0-0] [lt=7] failed to close result(close_ret=-5019, ret=-5019)
[2024-02-19 19:03:34.823226] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02B-0-0] [lt=8] failed to process record(executor={ObIExecutor:, sql:"SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882"}, record_ret=-5019, ret=-5019)
[2024-02-19 19:03:34.823237] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02B-0-0] [lt=10] failed to process final(executor={ObIExecutor:, sql:"SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882"}, aret=-5019, ret=-5019)
[2024-02-19 19:03:34.823248] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=9] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882)
[2024-02-19 19:03:34.823258] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=9] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1)
[2024-02-19 19:03:34.823267] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=8] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1)
[2024-02-19 19:03:34.823277] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=8] query failed(ret=-5019, conn=0x7fdcdc924050, start=1708340614822621, sql=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882)
[2024-02-19 19:03:34.823288] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=11] read failed(ret=-5019)
[2024-02-19 19:03:34.823298] WARN get_my_sql_result_ (ob_table_access_helper.h:329) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=8] GCTX.sql_proxy_ read failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", MTL_ID()=1, tenant_id=1, columns=0x7fdcfffccd78, table=__all_server, condition=where svr_ip='172.1.3.242' and svr_port=2882, sql=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882, columns_str="zone")
[2024-02-19 19:03:34.823316] WARN read_single_row (ob_table_access_helper.h:178) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=13] get mysql result failed(ret=-4016, ret="OB_ERR_UNEXPECTED", MTL_ID()=1, columns=0x7fdcfffccd78, table=__all_server, where_condition=where svr_ip='172.1.3.242' and svr_port=2882)
[2024-02-19 19:03:34.823397] WARN [COORDINATOR] get_self_zone_name (table_accessor.cpp:517) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=7] get zone from __all_server failed(ret=-4016, ret="OB_ERR_UNEXPECTED", columns=0x7fdcfffccd78, where_condition="where svr_ip='172.1.3.242' and svr_port=2882", zone_name_holder=)
[2024-02-19 19:03:34.823418] WARN [COORDINATOR] get_all_ls_election_reference_info (table_accessor.cpp:450) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=18] get self zone name failed(ret=-4016, ret="OB_ERR_UNEXPECTED", all_ls_election_reference_info=[])
[2024-02-19 19:03:34.823433] WARN [COORDINATOR] get_all_ls_election_reference_info (table_accessor.cpp:459) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=14] zone name is empty(ret=-4016, ret="OB_ERR_UNEXPECTED", all_ls_election_reference_info=[])
[2024-02-19 19:03:34.823446] WARN [COORDINATOR] refresh (ob_leader_coordinator.cpp:107) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=11] get all ls election reference info failed(ret=-4016, ret="OB_ERR_UNEXPECTED", MTL_ID()=1)
[2024-02-19 19:03:34.825278] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=26] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:34.825304] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=27] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:34.828746] WARN [COORDINATOR] get_ls_election_reference_info (ob_leader_coordinator.cpp:138) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC85-0-0] [lt=87] can not find this ls_id in all_ls_election_reference_info_(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1}, all_ls_election_reference_info=[])
[2024-02-19 19:03:34.828781] WARN [COORDINATOR] refresh_ (election_priority_v1.cpp:144) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC85-0-0] [lt=36] fail to get ls election reference info(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, *this={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807})
[2024-02-19 19:03:34.828799] WARN [COORDINATOR] operator() (election_priority_impl.cpp:246) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC85-0-0] [lt=16] refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id_={id:1}, element={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807})
[2024-02-19 19:03:34.828816] WARN iterate (ob_tuple.h:272)
[1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC85-0-0] [lt=15] assign element failed(ret=-4018, std::get(tuple)={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}) [2024-02-19 19:03:34.828833] WARN [COORDINATOR] refresh (election_priority_impl.cpp:261) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC85-0-0] [lt=16] refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1}, *this={priority:{is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}}) [2024-02-19 19:03:34.835429] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=45] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:34.835461] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:34.845568] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=26] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:34.845675] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=108] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:34.855804] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=39] 
there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:34.855838] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:34.858797] INFO [COMMON] clean_garbage_node (ob_kvcache_map.cpp:645) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=24] Cache wash clean map node details(ret=0, clean_node_count=0, clean_time=37820, clean_start_pos=1038081, clean_num=31457) [2024-02-19 19:03:34.860526] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=22] block manager free block(macro_id=[1176](ver=0,mode=0,seq=15489151), io_fd={first_id:15489151, second_id:1176, device_handle:null}) [2024-02-19 19:03:34.860555] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=28] block manager free block(macro_id=[1081](ver=0,mode=0,seq=15489155), io_fd={first_id:15489155, second_id:1081, device_handle:null}) [2024-02-19 19:03:34.860565] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[2425](ver=0,mode=0,seq=15489153), io_fd={first_id:15489153, second_id:2425, device_handle:null}) [2024-02-19 19:03:34.860573] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2761](ver=0,mode=0,seq=15489152), io_fd={first_id:15489152, second_id:2761, device_handle:null}) [2024-02-19 19:03:34.860580] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[73](ver=0,mode=0,seq=15489154), io_fd={first_id:15489154, second_id:73, device_handle:null}) [2024-02-19 19:03:34.860587] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[1994](ver=0,mode=0,seq=15489157), io_fd={first_id:15489157, second_id:1994, device_handle:null}) [2024-02-19 19:03:34.860596] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[3674](ver=0,mode=0,seq=15489156), io_fd={first_id:15489156, second_id:3674, device_handle:null}) [2024-02-19 19:03:34.860606] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[219](ver=0,mode=0,seq=15489158), io_fd={first_id:15489158, second_id:219, device_handle:null}) 
[2024-02-19 19:03:34.860616] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[1563](ver=0,mode=0,seq=15489159), io_fd={first_id:15489159, second_id:1563, device_handle:null}) [2024-02-19 19:03:34.860631] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=13] block manager free block(macro_id=[2571](ver=0,mode=0,seq=15489161), io_fd={first_id:15489161, second_id:2571, device_handle:null}) [2024-02-19 19:03:34.860638] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[3243](ver=0,mode=0,seq=15489160), io_fd={first_id:15489160, second_id:3243, device_handle:null}) [2024-02-19 19:03:34.860646] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[1804](ver=0,mode=0,seq=15489162), io_fd={first_id:15489162, second_id:1804, device_handle:null}) [2024-02-19 19:03:34.860654] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3389](ver=0,mode=0,seq=15489164), io_fd={first_id:15489164, second_id:3389, device_handle:null}) [2024-02-19 19:03:34.860661] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[2381](ver=0,mode=0,seq=15489166), io_fd={first_id:15489166, second_id:2381, device_handle:null}) [2024-02-19 19:03:34.860669] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[365](ver=0,mode=0,seq=15489165), io_fd={first_id:15489165, second_id:365, device_handle:null}) [2024-02-19 19:03:34.860676] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[1205](ver=0,mode=0,seq=15489163), io_fd={first_id:15489163, second_id:1205, device_handle:null}) [2024-02-19 19:03:34.860683] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[1614](ver=0,mode=0,seq=15487990), io_fd={first_id:15487990, second_id:1614, device_handle:null}) [2024-02-19 19:03:34.860690] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[3294](ver=0,mode=0,seq=15487991), io_fd={first_id:15487991, second_id:3294, device_handle:null}) [2024-02-19 19:03:34.860697] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[3966](ver=0,mode=0,seq=15487992), io_fd={first_id:15487992, second_id:3966, device_handle:null}) [2024-02-19 19:03:34.860705] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3535](ver=0,mode=0,seq=15487993), io_fd={first_id:15487993, second_id:3535, device_handle:null}) [2024-02-19 19:03:34.860714] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[3104](ver=0,mode=0,seq=15487994), io_fd={first_id:15487994, second_id:3104, device_handle:null}) 
[2024-02-19 19:03:34.860724] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[3009](ver=0,mode=0,seq=15487995), io_fd={first_id:15487995, second_id:3009, device_handle:null}) [2024-02-19 19:03:34.860733] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[2001](ver=0,mode=0,seq=15487996), io_fd={first_id:15487996, second_id:2001, device_handle:null}) [2024-02-19 19:03:34.860746] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=11] block manager free block(macro_id=[2673](ver=0,mode=0,seq=15487997), io_fd={first_id:15487997, second_id:2673, device_handle:null}) [2024-02-19 19:03:34.860753] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[3827](ver=0,mode=0,seq=15487998), io_fd={first_id:15487998, second_id:3827, device_handle:null}) [2024-02-19 19:03:34.860760] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[204](ver=0,mode=0,seq=15487999), io_fd={first_id:15487999, second_id:204, device_handle:null}) [2024-02-19 19:03:34.860768] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[3060](ver=0,mode=0,seq=15488000), io_fd={first_id:15488000, second_id:3060, device_handle:null}) [2024-02-19 19:03:34.860775] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[708](ver=0,mode=0,seq=15488001), io_fd={first_id:15488001, second_id:708, device_handle:null}) [2024-02-19 19:03:34.860783] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3396](ver=0,mode=0,seq=15488002), io_fd={first_id:15488002, second_id:3396, device_handle:null}) [2024-02-19 19:03:34.860791] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[2724](ver=0,mode=0,seq=15488003), io_fd={first_id:15488003, second_id:2724, device_handle:null}) [2024-02-19 19:03:34.860798] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[3732](ver=0,mode=0,seq=15488004), io_fd={first_id:15488004, second_id:3732, device_handle:null}) [2024-02-19 19:03:34.860805] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[540](ver=0,mode=0,seq=15488005), io_fd={first_id:15488005, second_id:540, device_handle:null}) [2024-02-19 19:03:34.860813] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2220](ver=0,mode=0,seq=15488006), io_fd={first_id:15488006, second_id:2220, device_handle:null}) [2024-02-19 19:03:34.860820] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[1285](ver=0,mode=0,seq=15488007), io_fd={first_id:15488007, second_id:1285, device_handle:null}) 
[2024-02-19 19:03:34.860828] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2629](ver=0,mode=0,seq=15488008), io_fd={first_id:15488008, second_id:2629, device_handle:null}) [2024-02-19 19:03:34.860847] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=16] block manager free block(macro_id=[277](ver=0,mode=0,seq=15488009), io_fd={first_id:15488009, second_id:277, device_handle:null}) [2024-02-19 19:03:34.860865] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=15] block manager free block(macro_id=[3301](ver=0,mode=0,seq=15488010), io_fd={first_id:15488010, second_id:3301, device_handle:null}) [2024-02-19 19:03:34.860883] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=16] block manager free block(macro_id=[445](ver=0,mode=0,seq=15488011), io_fd={first_id:15488011, second_id:445, device_handle:null}) [2024-02-19 19:03:34.860900] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=14] block manager free block(macro_id=[1694](ver=0,mode=0,seq=15488012), io_fd={first_id:15488012, second_id:1694, device_handle:null}) [2024-02-19 19:03:34.860917] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=14] block manager free block(macro_id=[255](ver=0,mode=0,seq=15488013), io_fd={first_id:15488013, second_id:255, device_handle:null}) [2024-02-19 19:03:34.860969] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=48] block manager free block(macro_id=[1767](ver=0,mode=0,seq=15488014), io_fd={first_id:15488014, second_id:1767, device_handle:null}) [2024-02-19 19:03:34.860983] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=12] block manager free block(macro_id=[1935](ver=0,mode=0,seq=15488015), io_fd={first_id:15488015, second_id:1935, device_handle:null}) [2024-02-19 19:03:34.860995] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[3279](ver=0,mode=0,seq=15489168), io_fd={first_id:15489168, second_id:3279, device_handle:null}) [2024-02-19 19:03:34.861008] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=10] block manager free block(macro_id=[2271](ver=0,mode=0,seq=15489167), io_fd={first_id:15489167, second_id:2271, device_handle:null}) [2024-02-19 19:03:34.861019] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[2680](ver=0,mode=0,seq=15488018), io_fd={first_id:15488018, second_id:2680, device_handle:null}) [2024-02-19 19:03:34.861033] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[905](ver=0,mode=0,seq=15488019), io_fd={first_id:15488019, second_id:905, device_handle:null}) [2024-02-19 19:03:34.861045] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[3929](ver=0,mode=0,seq=15488020), io_fd={first_id:15488020, second_id:3929, device_handle:null}) 
[2024-02-19 19:03:34.861057] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=10] block manager free block(macro_id=[2921](ver=0,mode=0,seq=15488021), io_fd={first_id:15488021, second_id:2921, device_handle:null}) [2024-02-19 19:03:34.861068] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[2417](ver=0,mode=0,seq=15488022), io_fd={first_id:15488022, second_id:2417, device_handle:null}) [2024-02-19 19:03:34.861081] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[1073](ver=0,mode=0,seq=15488023), io_fd={first_id:15488023, second_id:1073, device_handle:null}) [2024-02-19 19:03:34.861092] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[306](ver=0,mode=0,seq=15488024), io_fd={first_id:15488024, second_id:306, device_handle:null}) [2024-02-19 19:03:34.861104] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[2994](ver=0,mode=0,seq=15488025), io_fd={first_id:15488025, second_id:2994, device_handle:null}) [2024-02-19 19:03:34.861116] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[2154](ver=0,mode=0,seq=15488026), io_fd={first_id:15488026, second_id:2154, device_handle:null}) [2024-02-19 19:03:34.861153] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=34] block manager free block(macro_id=[810](ver=0,mode=0,seq=15488027), io_fd={first_id:15488027, second_id:810, device_handle:null}) [2024-02-19 19:03:34.861165] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[3330](ver=0,mode=0,seq=15488028), io_fd={first_id:15488028, second_id:3330, device_handle:null}) [2024-02-19 19:03:34.861177] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[4002](ver=0,mode=0,seq=15488029), io_fd={first_id:15488029, second_id:4002, device_handle:null}) [2024-02-19 19:03:34.861188] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[978](ver=0,mode=0,seq=15488030), io_fd={first_id:15488030, second_id:978, device_handle:null}) [2024-02-19 19:03:34.861196] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[1219](ver=0,mode=0,seq=15488031), io_fd={first_id:15488031, second_id:1219, device_handle:null}) [2024-02-19 19:03:34.861204] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[379](ver=0,mode=0,seq=15488032), io_fd={first_id:15488032, second_id:379, device_handle:null}) [2024-02-19 19:03:34.861212] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[1723](ver=0,mode=0,seq=15488033), io_fd={first_id:15488033, second_id:1723, device_handle:null}) [2024-02-19 
19:03:34.861220] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[1051](ver=0,mode=0,seq=15488034), io_fd={first_id:15488034, second_id:1051, device_handle:null}) [2024-02-19 19:03:34.861227] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[883](ver=0,mode=0,seq=15488035), io_fd={first_id:15488035, second_id:883, device_handle:null}) [2024-02-19 19:03:34.861235] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[547](ver=0,mode=0,seq=15488036), io_fd={first_id:15488036, second_id:547, device_handle:null}) [2024-02-19 19:03:34.861243] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3476](ver=0,mode=0,seq=15488037), io_fd={first_id:15488037, second_id:3476, device_handle:null}) [2024-02-19 19:03:34.861251] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3644](ver=0,mode=0,seq=15488038), io_fd={first_id:15488038, second_id:3644, device_handle:null}) [2024-02-19 19:03:34.861259] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3308](ver=0,mode=0,seq=15488039), io_fd={first_id:15488039, second_id:3308, device_handle:null}) [2024-02-19 19:03:34.861267] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3980](ver=0,mode=0,seq=15488040), io_fd={first_id:15488040, second_id:3980, device_handle:null}) [2024-02-19 19:03:34.861275] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[620](ver=0,mode=0,seq=15488041), io_fd={first_id:15488041, second_id:620, device_handle:null}) [2024-02-19 19:03:34.861282] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[1628](ver=0,mode=0,seq=15488042), io_fd={first_id:15488042, second_id:1628, device_handle:null}) [2024-02-19 19:03:34.861290] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[2468](ver=0,mode=0,seq=15488043), io_fd={first_id:15488043, second_id:2468, device_handle:null}) [2024-02-19 19:03:34.861298] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3812](ver=0,mode=0,seq=15488044), io_fd={first_id:15488044, second_id:3812, device_handle:null}) [2024-02-19 19:03:34.861306] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3140](ver=0,mode=0,seq=15488045), io_fd={first_id:15488045, second_id:3140, device_handle:null}) [2024-02-19 19:03:34.861313] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[1701](ver=0,mode=0,seq=15488046), io_fd={first_id:15488046, second_id:1701, device_handle:null}) [2024-02-19 
19:03:34.861320] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[3549](ver=0,mode=0,seq=15488047), io_fd={first_id:15488047, second_id:3549, device_handle:null}) [2024-02-19 19:03:34.861327] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[2709](ver=0,mode=0,seq=15488048), io_fd={first_id:15488048, second_id:2709, device_handle:null}) [2024-02-19 19:03:34.861334] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[4053](ver=0,mode=0,seq=15488049), io_fd={first_id:15488049, second_id:4053, device_handle:null}) [2024-02-19 19:03:34.861343] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[3717](ver=0,mode=0,seq=15489169), io_fd={first_id:15489169, second_id:3717, device_handle:null}) [2024-02-19 19:03:34.861351] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[1029](ver=0,mode=0,seq=15489170), io_fd={first_id:15489170, second_id:1029, device_handle:null}) [2024-02-19 19:03:34.861359] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3958](ver=0,mode=0,seq=15488052), io_fd={first_id:15488052, second_id:3958, device_handle:null}) [2024-02-19 19:03:34.861372] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=12] block manager free block(macro_id=[2782](ver=0,mode=0,seq=15488053), io_fd={first_id:15488053, second_id:2782, device_handle:null}) [2024-02-19 19:03:34.861380] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[1102](ver=0,mode=0,seq=15488054), io_fd={first_id:15488054, second_id:1102, device_handle:null}) [2024-02-19 19:03:34.861387] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2278](ver=0,mode=0,seq=15488055), io_fd={first_id:15488055, second_id:2278, device_handle:null}) [2024-02-19 19:03:34.861397] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[934](ver=0,mode=0,seq=15488056), io_fd={first_id:15488056, second_id:934, device_handle:null}) [2024-02-19 19:03:34.861405] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[1007](ver=0,mode=0,seq=15488057), io_fd={first_id:15488057, second_id:1007, device_handle:null}) [2024-02-19 19:03:34.861413] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[671](ver=0,mode=0,seq=15488058), io_fd={first_id:15488058, second_id:671, device_handle:null}) [2024-02-19 19:03:34.861423] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[3863](ver=0,mode=0,seq=15488059), io_fd={first_id:15488059, second_id:3863, device_handle:null}) [2024-02-19 
19:03:34.861432] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[503](ver=0,mode=0,seq=15488060), io_fd={first_id:15488060, second_id:503, device_handle:null}) [2024-02-19 19:03:34.861442] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[1511](ver=0,mode=0,seq=15488061), io_fd={first_id:15488061, second_id:1511, device_handle:null}) [2024-02-19 19:03:34.861451] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[3096](ver=0,mode=0,seq=15488062), io_fd={first_id:15488062, second_id:3096, device_handle:null}) [2024-02-19 19:03:34.861459] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[2592](ver=0,mode=0,seq=15488063), io_fd={first_id:15488063, second_id:2592, device_handle:null}) [2024-02-19 19:03:34.861467] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3936](ver=0,mode=0,seq=15488064), io_fd={first_id:15488064, second_id:3936, device_handle:null}) [2024-02-19 19:03:34.861481] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=12] block manager free block(macro_id=[2760](ver=0,mode=0,seq=15488065), io_fd={first_id:15488065, second_id:2760, device_handle:null}) [2024-02-19 19:03:34.861489] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3768](ver=0,mode=0,seq=15488066), io_fd={first_id:15488066, second_id:3768, device_handle:null}) [2024-02-19 19:03:34.861496] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[145](ver=0,mode=0,seq=15488067), io_fd={first_id:15488067, second_id:145, device_handle:null}) [2024-02-19 19:03:34.861505] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[481](ver=0,mode=0,seq=15488068), io_fd={first_id:15488068, second_id:481, device_handle:null}) [2024-02-19 19:03:34.861514] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[1825](ver=0,mode=0,seq=15488069), io_fd={first_id:15488069, second_id:1825, device_handle:null}) [2024-02-19 19:03:34.861523] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[1153](ver=0,mode=0,seq=15488070), io_fd={first_id:15488070, second_id:1153, device_handle:null}) [2024-02-19 19:03:34.861532] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[2161](ver=0,mode=0,seq=15488071), io_fd={first_id:15488071, second_id:2161, device_handle:null}) [2024-02-19 19:03:34.861541] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[3505](ver=0,mode=0,seq=15488072), io_fd={first_id:15488072, second_id:3505, device_handle:null}) [2024-02-19 
19:03:34.861551] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[1657](ver=0,mode=0,seq=15488073), io_fd={first_id:15488073, second_id:1657, device_handle:null}) [2024-02-19 19:03:34.861560] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[2402](ver=0,mode=0,seq=15488074), io_fd={first_id:15488074, second_id:2402, device_handle:null}) [2024-02-19 19:03:34.861568] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2738](ver=0,mode=0,seq=15488075), io_fd={first_id:15488075, second_id:2738, device_handle:null}) [2024-02-19 19:03:34.861575] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[459](ver=0,mode=0,seq=15488076), io_fd={first_id:15488076, second_id:459, device_handle:null}) [2024-02-19 19:03:34.861584] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[3147](ver=0,mode=0,seq=15488077), io_fd={first_id:15488077, second_id:3147, device_handle:null}) [2024-02-19 19:03:34.861591] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[2307](ver=0,mode=0,seq=15488078), io_fd={first_id:15488078, second_id:2307, device_handle:null}) [2024-02-19 19:03:34.861601] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[2979](ver=0,mode=0,seq=15488079), io_fd={first_id:15488079, second_id:2979, device_handle:null}) [2024-02-19 19:03:34.861608] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3483](ver=0,mode=0,seq=15489171), io_fd={first_id:15489171, second_id:3483, device_handle:null}) [2024-02-19 19:03:34.861615] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[2475](ver=0,mode=0,seq=15488081), io_fd={first_id:15488081, second_id:2475, device_handle:null}) [2024-02-19 19:03:34.861629] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=13] block manager free block(macro_id=[2716](ver=0,mode=0,seq=15489172), io_fd={first_id:15489172, second_id:2716, device_handle:null}) [2024-02-19 19:03:34.861637] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[4060](ver=0,mode=0,seq=15488083), io_fd={first_id:15488083, second_id:4060, device_handle:null}) [2024-02-19 19:03:34.861647] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[3220](ver=0,mode=0,seq=15488084), io_fd={first_id:15488084, second_id:3220, device_handle:null}) [2024-02-19 19:03:34.861655] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[868](ver=0,mode=0,seq=15488085), io_fd={first_id:15488085, second_id:868, device_handle:null}) [2024-02-19 
19:03:34.861663] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[364](ver=0,mode=0,seq=15488086), io_fd={first_id:15488086, second_id:364, device_handle:null}) [2024-02-19 19:03:34.861672] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[3052](ver=0,mode=0,seq=15488087), io_fd={first_id:15488087, second_id:3052, device_handle:null}) [2024-02-19 19:03:34.861681] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[1708](ver=0,mode=0,seq=15488088), io_fd={first_id:15488088, second_id:1708, device_handle:null}) [2024-02-19 19:03:34.861691] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[2380](ver=0,mode=0,seq=15488089), io_fd={first_id:15488089, second_id:2380, device_handle:null}) [2024-02-19 19:03:34.861700] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[3388](ver=0,mode=0,seq=15488090), io_fd={first_id:15488090, second_id:3388, device_handle:null}) [2024-02-19 19:03:34.861715] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=13] block manager free block(macro_id=[1109](ver=0,mode=0,seq=15488091), io_fd={first_id:15488091, second_id:1109, device_handle:null}) [2024-02-19 19:03:34.861723] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[1445](ver=0,mode=0,seq=15488092), io_fd={first_id:15488092, second_id:1445, device_handle:null}) [2024-02-19 19:03:34.861730] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[2117](ver=0,mode=0,seq=15488093), io_fd={first_id:15488093, second_id:2117, device_handle:null}) [2024-02-19 19:03:34.861740] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[2957](ver=0,mode=0,seq=15488094), io_fd={first_id:15488094, second_id:2957, device_handle:null}) [2024-02-19 19:03:34.861748] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[269](ver=0,mode=0,seq=15488095), io_fd={first_id:15488095, second_id:269, device_handle:null}) [2024-02-19 19:03:34.861756] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3198](ver=0,mode=0,seq=15488096), io_fd={first_id:15488096, second_id:3198, device_handle:null}) [2024-02-19 19:03:34.861763] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[1854](ver=0,mode=0,seq=15488097), io_fd={first_id:15488097, second_id:1854, device_handle:null}) [2024-02-19 19:03:34.861770] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[510](ver=0,mode=0,seq=15488098), io_fd={first_id:15488098, second_id:510, device_handle:null}) [2024-02-19 
19:03:34.861780] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[1686](ver=0,mode=0,seq=15488099), io_fd={first_id:15488099, second_id:1686, device_handle:null}) [2024-02-19 19:03:34.861789] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[2190](ver=0,mode=0,seq=15488100), io_fd={first_id:15488100, second_id:2190, device_handle:null}) [2024-02-19 19:03:34.861797] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3534](ver=0,mode=0,seq=15488101), io_fd={first_id:15488101, second_id:3534, device_handle:null}) [2024-02-19 19:03:34.861806] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[2862](ver=0,mode=0,seq=15488102), io_fd={first_id:15488102, second_id:2862, device_handle:null}) [2024-02-19 19:03:34.861814] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3870](ver=0,mode=0,seq=15488103), io_fd={first_id:15488103, second_id:3870, device_handle:null}) [2024-02-19 19:03:34.861823] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[1182](ver=0,mode=0,seq=15488104), io_fd={first_id:15488104, second_id:1182, device_handle:null}) [2024-02-19 19:03:34.861831] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2767](ver=0,mode=0,seq=15488105), io_fd={first_id:15488105, second_id:2767, device_handle:null}) [2024-02-19 19:03:34.861841] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[3271](ver=0,mode=0,seq=15488106), io_fd={first_id:15488106, second_id:3271, device_handle:null}) [2024-02-19 19:03:34.861849] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2263](ver=0,mode=0,seq=15488107), io_fd={first_id:15488107, second_id:2263, device_handle:null}) [2024-02-19 19:03:34.861856] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[415](ver=0,mode=0,seq=15488108), io_fd={first_id:15488108, second_id:415, device_handle:null}) [2024-02-19 19:03:34.861863] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3775](ver=0,mode=0,seq=15488109), io_fd={first_id:15488109, second_id:3775, device_handle:null}) [2024-02-19 19:03:34.861873] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[1087](ver=0,mode=0,seq=15488110), io_fd={first_id:15488110, second_id:1087, device_handle:null}) [2024-02-19 19:03:34.861881] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2168](ver=0,mode=0,seq=15488111), io_fd={first_id:15488111, second_id:2168, device_handle:null}) [2024-02-19 
19:03:34.861888] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[824](ver=0,mode=0,seq=15488112), io_fd={first_id:15488112, second_id:824, device_handle:null}) [2024-02-19 19:03:34.861895] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[1832](ver=0,mode=0,seq=15488113), io_fd={first_id:15488113, second_id:1832, device_handle:null}) [2024-02-19 19:03:34.861902] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[2672](ver=0,mode=0,seq=15489174), io_fd={first_id:15489174, second_id:2672, device_handle:null}) [2024-02-19 19:03:34.861911] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3344](ver=0,mode=0,seq=15489173), io_fd={first_id:15489173, second_id:3344, device_handle:null}) [2024-02-19 19:03:34.861918] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2000](ver=0,mode=0,seq=15488116), io_fd={first_id:15488116, second_id:2000, device_handle:null}) [2024-02-19 19:03:34.861928] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[3008](ver=0,mode=0,seq=15488117), io_fd={first_id:15488117, second_id:3008, device_handle:null}) [2024-02-19 19:03:34.861936] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[1905](ver=0,mode=0,seq=15488118), io_fd={first_id:15488118, second_id:1905, device_handle:null}) [2024-02-19 19:03:34.861955] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=16] block manager free block(macro_id=[393](ver=0,mode=0,seq=15488119), io_fd={first_id:15488119, second_id:393, device_handle:null}) [2024-02-19 19:03:34.861966] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=10] block manager free block(macro_id=[4089](ver=0,mode=0,seq=15488120), io_fd={first_id:15488120, second_id:4089, device_handle:null}) [2024-02-19 19:03:34.861988] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=19] block manager free block(macro_id=[897](ver=0,mode=0,seq=15488121), io_fd={first_id:15488121, second_id:897, device_handle:null}) [2024-02-19 19:03:34.861999] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[2577](ver=0,mode=0,seq=15488122), io_fd={first_id:15488122, second_id:2577, device_handle:null}) [2024-02-19 19:03:34.862017] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=15] block manager free block(macro_id=[3921](ver=0,mode=0,seq=15488123), io_fd={first_id:15488123, second_id:3921, device_handle:null}) [2024-02-19 19:03:34.862029] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[130](ver=0,mode=0,seq=15488124), io_fd={first_id:15488124, second_id:130, device_handle:null}) [2024-02-19 
19:03:34.862043] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=12] block manager free block(macro_id=[1642](ver=0,mode=0,seq=15488125), io_fd={first_id:15488125, second_id:1642, device_handle:null}) [2024-02-19 19:03:34.862052] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[298](ver=0,mode=0,seq=15488126), io_fd={first_id:15488126, second_id:298, device_handle:null}) [2024-02-19 19:03:34.862059] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[2314](ver=0,mode=0,seq=15488127), io_fd={first_id:15488127, second_id:2314, device_handle:null}) [2024-02-19 19:03:34.862068] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[3154](ver=0,mode=0,seq=15488128), io_fd={first_id:15488128, second_id:3154, device_handle:null}) [2024-02-19 19:03:34.862077] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[3826](ver=0,mode=0,seq=15488129), io_fd={first_id:15488129, second_id:3826, device_handle:null}) [2024-02-19 19:03:34.862087] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[3490](ver=0,mode=0,seq=15488130), io_fd={first_id:15488130, second_id:3490, device_handle:null}) [2024-02-19 19:03:34.862096] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[1211](ver=0,mode=0,seq=15488131), io_fd={first_id:15488131, second_id:1211, device_handle:null}) [2024-02-19 19:03:34.862106] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[3227](ver=0,mode=0,seq=15488132), io_fd={first_id:15488132, second_id:3227, device_handle:null}) [2024-02-19 19:03:34.862115] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[2723](ver=0,mode=0,seq=15488133), io_fd={first_id:15488133, second_id:2723, device_handle:null}) [2024-02-19 19:03:34.862124] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[3395](ver=0,mode=0,seq=15488134), io_fd={first_id:15488134, second_id:3395, device_handle:null}) [2024-02-19 19:03:34.862132] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[612](ver=0,mode=0,seq=15488135), io_fd={first_id:15488135, second_id:612, device_handle:null}) [2024-02-19 19:03:34.862142] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[2124](ver=0,mode=0,seq=15488136), io_fd={first_id:15488136, second_id:2124, device_handle:null}) [2024-02-19 19:03:34.862150] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2460](ver=0,mode=0,seq=15488137), io_fd={first_id:15488137, second_id:2460, device_handle:null}) [2024-02-19 
19:03:34.862159] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[1116](ver=0,mode=0,seq=15488138), io_fd={first_id:15488138, second_id:1116, device_handle:null}) [2024-02-19 19:03:34.862168] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[3132](ver=0,mode=0,seq=15488139), io_fd={first_id:15488139, second_id:3132, device_handle:null}) [2024-02-19 19:03:34.862175] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[108](ver=0,mode=0,seq=15488140), io_fd={first_id:15488140, second_id:108, device_handle:null}) [2024-02-19 19:03:34.862185] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[349](ver=0,mode=0,seq=15488141), io_fd={first_id:15488141, second_id:349, device_handle:null}) [2024-02-19 19:03:34.862193] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3037](ver=0,mode=0,seq=15488142), io_fd={first_id:15488142, second_id:3037, device_handle:null}) [2024-02-19 19:03:34.862209] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=13] block manager free block(macro_id=[13](ver=0,mode=0,seq=15488143), io_fd={first_id:15488143, second_id:13, device_handle:null}) [2024-02-19 19:03:34.862220] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[2029](ver=0,mode=0,seq=15488144), io_fd={first_id:15488144, second_id:2029, device_handle:null}) [2024-02-19 19:03:34.862231] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[1357](ver=0,mode=0,seq=15488145), io_fd={first_id:15488145, second_id:1357, device_handle:null}) [2024-02-19 19:03:34.862243] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[685](ver=0,mode=0,seq=15488146), io_fd={first_id:15488146, second_id:685, device_handle:null}) [2024-02-19 19:03:34.862253] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[2365](ver=0,mode=0,seq=15488147), io_fd={first_id:15488147, second_id:2365, device_handle:null}) [2024-02-19 19:03:34.862270] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=14] block manager free block(macro_id=[1861](ver=0,mode=0,seq=15489175), io_fd={first_id:15489175, second_id:1861, device_handle:null}) [2024-02-19 19:03:34.862281] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[2533](ver=0,mode=0,seq=15489176), io_fd={first_id:15489176, second_id:2533, device_handle:null}) [2024-02-19 19:03:34.862297] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=14] block manager free block(macro_id=[3782](ver=0,mode=0,seq=15488150), io_fd={first_id:15488150, second_id:3782, device_handle:null}) [2024-02-19 
19:03:34.862308] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[1262](ver=0,mode=0,seq=15488151), io_fd={first_id:15488151, second_id:1262, device_handle:null}) [2024-02-19 19:03:34.862324] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=14] block manager free block(macro_id=[3950](ver=0,mode=0,seq=15488152), io_fd={first_id:15488152, second_id:3950, device_handle:null}) [2024-02-19 19:03:34.862335] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[3614](ver=0,mode=0,seq=15488153), io_fd={first_id:15488153, second_id:3614, device_handle:null}) [2024-02-19 19:03:34.862351] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=13] block manager free block(macro_id=[2942](ver=0,mode=0,seq=15488154), io_fd={first_id:15488154, second_id:2942, device_handle:null}) [2024-02-19 19:03:34.862363] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=10] block manager free block(macro_id=[2270](ver=0,mode=0,seq=15488155), io_fd={first_id:15488155, second_id:2270, device_handle:null}) [2024-02-19 19:03:34.862377] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=12] block manager free block(macro_id=[1598](ver=0,mode=0,seq=15488156), io_fd={first_id:15488156, second_id:1598, device_handle:null}) [2024-02-19 19:03:34.862389] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[254](ver=0,mode=0,seq=15488157), io_fd={first_id:15488157, second_id:254, device_handle:null}) [2024-02-19 19:03:34.862401] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[926](ver=0,mode=0,seq=15488158), io_fd={first_id:15488158, second_id:926, device_handle:null}) [2024-02-19 19:03:34.862418] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=14] block manager free block(macro_id=[758](ver=0,mode=0,seq=15488159), io_fd={first_id:15488159, second_id:758, device_handle:null}) [2024-02-19 19:03:34.862436] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=16] block manager free block(macro_id=[2102](ver=0,mode=0,seq=15488160), io_fd={first_id:15488160, second_id:2102, device_handle:null}) [2024-02-19 19:03:34.862453] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=14] block manager free block(macro_id=[422](ver=0,mode=0,seq=15488161), io_fd={first_id:15488161, second_id:422, device_handle:null}) [2024-02-19 19:03:34.862469] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=15] block manager free block(macro_id=[3351](ver=0,mode=0,seq=15488162), io_fd={first_id:15488162, second_id:3351, device_handle:null}) [2024-02-19 19:03:34.862520] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=48] block manager free block(macro_id=[2175](ver=0,mode=0,seq=15488163), io_fd={first_id:15488163, second_id:2175, device_handle:null}) [2024-02-19 
19:03:34.862533] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=10] block manager free block(macro_id=[3519](ver=0,mode=0,seq=15488164), io_fd={first_id:15488164, second_id:3519, device_handle:null}) [2024-02-19 19:03:34.862544] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[1167](ver=0,mode=0,seq=15488165), io_fd={first_id:15488165, second_id:1167, device_handle:null}) [2024-02-19 19:03:34.862555] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[1839](ver=0,mode=0,seq=15488166), io_fd={first_id:15488166, second_id:1839, device_handle:null}) [2024-02-19 19:03:34.862566] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[2847](ver=0,mode=0,seq=15488167), io_fd={first_id:15488167, second_id:2847, device_handle:null}) [2024-02-19 19:03:34.862577] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[159](ver=0,mode=0,seq=15488168), io_fd={first_id:15488168, second_id:159, device_handle:null}) [2024-02-19 19:03:34.862588] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[3687](ver=0,mode=0,seq=15488169), io_fd={first_id:15488169, second_id:3687, device_handle:null}) [2024-02-19 19:03:34.862600] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[3015](ver=0,mode=0,seq=15488170), io_fd={first_id:15488170, second_id:3015, device_handle:null}) [2024-02-19 19:03:34.862611] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[327](ver=0,mode=0,seq=15488171), io_fd={first_id:15488171, second_id:327, device_handle:null}) [2024-02-19 19:03:34.862623] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[4023](ver=0,mode=0,seq=15488172), io_fd={first_id:15488172, second_id:4023, device_handle:null}) [2024-02-19 19:03:34.862635] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=10] block manager free block(macro_id=[1335](ver=0,mode=0,seq=15488173), io_fd={first_id:15488173, second_id:1335, device_handle:null}) [2024-02-19 19:03:34.862645] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[1576](ver=0,mode=0,seq=15488174), io_fd={first_id:15488174, second_id:1576, device_handle:null}) [2024-02-19 19:03:34.862656] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[2920](ver=0,mode=0,seq=15488175), io_fd={first_id:15488175, second_id:2920, device_handle:null}) [2024-02-19 19:03:34.862667] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[400](ver=0,mode=0,seq=15488176), io_fd={first_id:15488176, second_id:400, device_handle:null}) [2024-02-19 
19:03:34.862678] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[1408](ver=0,mode=0,seq=15488177), io_fd={first_id:15488177, second_id:1408, device_handle:null}) [2024-02-19 19:03:34.862689] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[64](ver=0,mode=0,seq=15488178), io_fd={first_id:15488178, second_id:64, device_handle:null}) [2024-02-19 19:03:34.862700] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[3760](ver=0,mode=0,seq=15489177), io_fd={first_id:15489177, second_id:3760, device_handle:null}) [2024-02-19 19:03:34.862712] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[2416](ver=0,mode=0,seq=15489178), io_fd={first_id:15489178, second_id:2416, device_handle:null}) [2024-02-19 19:03:34.862723] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=10] block manager free block(macro_id=[1072](ver=0,mode=0,seq=15488181), io_fd={first_id:15488181, second_id:1072, device_handle:null}) [2024-02-19 19:03:34.862735] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=10] block manager free block(macro_id=[3256](ver=0,mode=0,seq=15488182), io_fd={first_id:15488182, second_id:3256, device_handle:null}) [2024-02-19 19:03:34.862747] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=10] block manager free block(macro_id=[568](ver=0,mode=0,seq=15488183), io_fd={first_id:15488183, second_id:568, device_handle:null}) [2024-02-19 19:03:34.862758] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[4001](ver=0,mode=0,seq=15488184), io_fd={first_id:15488184, second_id:4001, device_handle:null}) [2024-02-19 19:03:34.862769] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[1313](ver=0,mode=0,seq=15488185), io_fd={first_id:15488185, second_id:1313, device_handle:null}) [2024-02-19 19:03:34.862780] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[2657](ver=0,mode=0,seq=15488186), io_fd={first_id:15488186, second_id:2657, device_handle:null}) [2024-02-19 19:03:34.862792] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[3665](ver=0,mode=0,seq=15488187), io_fd={first_id:15488187, second_id:3665, device_handle:null}) [2024-02-19 19:03:34.862802] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[977](ver=0,mode=0,seq=15488188), io_fd={first_id:15488188, second_id:977, device_handle:null}) [2024-02-19 19:03:34.862813] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[305](ver=0,mode=0,seq=15488189), io_fd={first_id:15488189, second_id:305, device_handle:null}) [2024-02-19 
19:03:34.862824] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[641](ver=0,mode=0,seq=15488190), io_fd={first_id:15488190, second_id:641, device_handle:null}) [2024-02-19 19:03:34.862835] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=10] block manager free block(macro_id=[3329](ver=0,mode=0,seq=15488191), io_fd={first_id:15488191, second_id:3329, device_handle:null}) [2024-02-19 19:03:34.862846] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[1985](ver=0,mode=0,seq=15488192), io_fd={first_id:15488192, second_id:1985, device_handle:null}) [2024-02-19 19:03:34.862858] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[137](ver=0,mode=0,seq=15488193), io_fd={first_id:15488193, second_id:137, device_handle:null}) [2024-02-19 19:03:34.862867] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[2825](ver=0,mode=0,seq=15488194), io_fd={first_id:15488194, second_id:2825, device_handle:null}) [2024-02-19 19:03:34.862874] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[1481](ver=0,mode=0,seq=15488195), io_fd={first_id:15488195, second_id:1481, device_handle:null}) [2024-02-19 19:03:34.862882] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3497](ver=0,mode=0,seq=15488196), io_fd={first_id:15488196, second_id:3497, device_handle:null}) [2024-02-19 19:03:34.862890] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2153](ver=0,mode=0,seq=15488197), io_fd={first_id:15488197, second_id:2153, device_handle:null}) [2024-02-19 19:03:34.862897] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[3161](ver=0,mode=0,seq=15488198), io_fd={first_id:15488198, second_id:3161, device_handle:null}) [2024-02-19 19:03:34.862904] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2058](ver=0,mode=0,seq=15488199), io_fd={first_id:15488199, second_id:2058, device_handle:null}) [2024-02-19 19:03:34.862912] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[714](ver=0,mode=0,seq=15488200), io_fd={first_id:15488200, second_id:714, device_handle:null}) [2024-02-19 19:03:34.862920] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3402](ver=0,mode=0,seq=15488201), io_fd={first_id:15488201, second_id:3402, device_handle:null}) [2024-02-19 19:03:34.862927] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[882](ver=0,mode=0,seq=15488202), io_fd={first_id:15488202, second_id:882, device_handle:null}) [2024-02-19 
19:03:34.862935] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3570](ver=0,mode=0,seq=15488203), io_fd={first_id:15488203, second_id:3570, device_handle:null}) [2024-02-19 19:03:34.862946] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[3906](ver=0,mode=0,seq=15488204), io_fd={first_id:15488204, second_id:3906, device_handle:null}) [2024-02-19 19:03:34.862957] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[2562](ver=0,mode=0,seq=15488205), io_fd={first_id:15488205, second_id:2562, device_handle:null}) [2024-02-19 19:03:34.862966] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[1890](ver=0,mode=0,seq=15488206), io_fd={first_id:15488206, second_id:1890, device_handle:null}) [2024-02-19 19:03:34.862974] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[546](ver=0,mode=0,seq=15488207), io_fd={first_id:15488207, second_id:546, device_handle:null}) [2024-02-19 19:03:34.862981] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2898](ver=0,mode=0,seq=15488208), io_fd={first_id:15488208, second_id:2898, device_handle:null}) [2024-02-19 19:03:34.862989] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[210](ver=0,mode=0,seq=15488209), io_fd={first_id:15488209, second_id:210, device_handle:null}) [2024-02-19 19:03:34.862996] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[2394](ver=0,mode=0,seq=15488210), io_fd={first_id:15488210, second_id:2394, device_handle:null}) [2024-02-19 19:03:34.863004] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3738](ver=0,mode=0,seq=15488211), io_fd={first_id:15488211, second_id:3738, device_handle:null}) [2024-02-19 19:03:34.863011] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[378](ver=0,mode=0,seq=15488212), io_fd={first_id:15488212, second_id:378, device_handle:null}) [2024-02-19 19:03:34.863019] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[1722](ver=0,mode=0,seq=15488213), io_fd={first_id:15488213, second_id:1722, device_handle:null}) [2024-02-19 19:03:34.863026] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[42](ver=0,mode=0,seq=15488214), io_fd={first_id:15488214, second_id:42, device_handle:null}) [2024-02-19 19:03:34.863033] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[1386](ver=0,mode=0,seq=15488215), io_fd={first_id:15488215, second_id:1386, device_handle:null}) [2024-02-19 
19:03:34.863041] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[4074](ver=0,mode=0,seq=15489179), io_fd={first_id:15489179, second_id:4074, device_handle:null}) [2024-02-19 19:03:34.863048] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2971](ver=0,mode=0,seq=15489180), io_fd={first_id:15489180, second_id:2971, device_handle:null}) [2024-02-19 19:03:34.863056] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[1627](ver=0,mode=0,seq=15488218), io_fd={first_id:15488218, second_id:1627, device_handle:null}) [2024-02-19 19:03:34.863064] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[1795](ver=0,mode=0,seq=15488219), io_fd={first_id:15488219, second_id:1795, device_handle:null}) [2024-02-19 19:03:34.863071] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[3139](ver=0,mode=0,seq=15488220), io_fd={first_id:15488220, second_id:3139, device_handle:null}) [2024-02-19 19:03:34.863078] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[451](ver=0,mode=0,seq=15488221), io_fd={first_id:15488221, second_id:451, device_handle:null}) [2024-02-19 19:03:34.863086] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[115](ver=0,mode=0,seq=15488222), io_fd={first_id:15488222, second_id:115, device_handle:null}) [2024-02-19 19:03:34.863093] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[3475](ver=0,mode=0,seq=15488223), io_fd={first_id:15488223, second_id:3475, device_handle:null}) [2024-02-19 19:03:34.863101] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2131](ver=0,mode=0,seq=15488224), io_fd={first_id:15488224, second_id:2131, device_handle:null}) [2024-02-19 19:03:34.863108] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[1459](ver=0,mode=0,seq=15488225), io_fd={first_id:15488225, second_id:1459, device_handle:null}) [2024-02-19 19:03:34.863115] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2467](ver=0,mode=0,seq=15488226), io_fd={first_id:15488226, second_id:2467, device_handle:null}) [2024-02-19 19:03:34.863123] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[1123](ver=0,mode=0,seq=15488227), io_fd={first_id:15488227, second_id:1123, device_handle:null}) [2024-02-19 19:03:34.863131] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3811](ver=0,mode=0,seq=15488228), io_fd={first_id:15488228, second_id:3811, device_handle:null}) [2024-02-19 
19:03:34.863138] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[1291](ver=0,mode=0,seq=15488229), io_fd={first_id:15488229, second_id:1291, device_handle:null}) [2024-02-19 19:03:34.863146] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3979](ver=0,mode=0,seq=15488230), io_fd={first_id:15488230, second_id:3979, device_handle:null}) [2024-02-19 19:03:34.863153] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[3643](ver=0,mode=0,seq=15488231), io_fd={first_id:15488231, second_id:3643, device_handle:null}) [2024-02-19 19:03:34.863160] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2299](ver=0,mode=0,seq=15488232), io_fd={first_id:15488232, second_id:2299, device_handle:null}) [2024-02-19 19:03:34.863168] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[955](ver=0,mode=0,seq=15488233), io_fd={first_id:15488233, second_id:955, device_handle:null}) [2024-02-19 19:03:34.863175] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[3884](ver=0,mode=0,seq=15488234), io_fd={first_id:15488234, second_id:3884, device_handle:null}) [2024-02-19 19:03:34.863183] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2540](ver=0,mode=0,seq=15488235), io_fd={first_id:15488235, second_id:2540, device_handle:null}) [2024-02-19 19:03:34.863190] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[2708](ver=0,mode=0,seq=15488236), io_fd={first_id:15488236, second_id:2708, device_handle:null}) [2024-02-19 19:03:34.863197] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[20](ver=0,mode=0,seq=15488237), io_fd={first_id:15488237, second_id:20, device_handle:null}) [2024-02-19 19:03:34.863205] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[4052](ver=0,mode=0,seq=15488238), io_fd={first_id:15488238, second_id:4052, device_handle:null}) [2024-02-19 19:03:34.863212] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[1364](ver=0,mode=0,seq=15488239), io_fd={first_id:15488239, second_id:1364, device_handle:null}) [2024-02-19 19:03:34.863220] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[1028](ver=0,mode=0,seq=15488240), io_fd={first_id:15488240, second_id:1028, device_handle:null}) [2024-02-19 19:03:34.863228] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3716](ver=0,mode=0,seq=15488241), io_fd={first_id:15488241, second_id:3716, device_handle:null}) [2024-02-19 
19:03:34.863235] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3044](ver=0,mode=0,seq=15488242), io_fd={first_id:15488242, second_id:3044, device_handle:null}) [2024-02-19 19:03:34.863243] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2372](ver=0,mode=0,seq=15488243), io_fd={first_id:15488243, second_id:2372, device_handle:null}) [2024-02-19 19:03:34.863250] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[692](ver=0,mode=0,seq=15488244), io_fd={first_id:15488244, second_id:692, device_handle:null}) [2024-02-19 19:03:34.863257] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2876](ver=0,mode=0,seq=15488245), io_fd={first_id:15488245, second_id:2876, device_handle:null}) [2024-02-19 19:03:34.863265] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[1532](ver=0,mode=0,seq=15488246), io_fd={first_id:15488246, second_id:1532, device_handle:null}) [2024-02-19 19:03:34.863272] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[2204](ver=0,mode=0,seq=15488247), io_fd={first_id:15488247, second_id:2204, device_handle:null}) [2024-02-19 19:03:34.863279] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3548](ver=0,mode=0,seq=15488248), io_fd={first_id:15488248, second_id:3548, device_handle:null}) [2024-02-19 19:03:34.863287] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[524](ver=0,mode=0,seq=15489181), io_fd={first_id:15489181, second_id:524, device_handle:null}) [2024-02-19 19:03:34.863295] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[1868](ver=0,mode=0,seq=15489182), io_fd={first_id:15489182, second_id:1868, device_handle:null}) [2024-02-19 19:03:34.863303] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3453](ver=0,mode=0,seq=15488251), io_fd={first_id:15488251, second_id:3453, device_handle:null}) [2024-02-19 19:03:34.863311] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3621](ver=0,mode=0,seq=15488252), io_fd={first_id:15488252, second_id:3621, device_handle:null}) [2024-02-19 19:03:34.863318] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[1269](ver=0,mode=0,seq=15488253), io_fd={first_id:15488253, second_id:1269, device_handle:null}) [2024-02-19 19:03:34.863326] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[261](ver=0,mode=0,seq=15488254), io_fd={first_id:15488254, second_id:261, device_handle:null}) [2024-02-19 
19:03:34.863334] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2949](ver=0,mode=0,seq=15488255), io_fd={first_id:15488255, second_id:2949, device_handle:null}) [2024-02-19 19:03:34.863341] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[429](ver=0,mode=0,seq=15488256), io_fd={first_id:15488256, second_id:429, device_handle:null}) [2024-02-19 19:03:34.863349] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[1437](ver=0,mode=0,seq=15488257), io_fd={first_id:15488257, second_id:1437, device_handle:null}) [2024-02-19 19:03:34.863356] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[2781](ver=0,mode=0,seq=15488258), io_fd={first_id:15488258, second_id:2781, device_handle:null}) [2024-02-19 19:03:34.863364] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[1678](ver=0,mode=0,seq=15488259), io_fd={first_id:15488259, second_id:1678, device_handle:null}) [2024-02-19 19:03:34.863371] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[334](ver=0,mode=0,seq=15488260), io_fd={first_id:15488260, second_id:334, device_handle:null}) [2024-02-19 19:03:34.863379] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[1846](ver=0,mode=0,seq=15488261), io_fd={first_id:15488261, second_id:1846, device_handle:null}) [2024-02-19 19:03:34.863387] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3190](ver=0,mode=0,seq=15488262), io_fd={first_id:15488262, second_id:3190, device_handle:null}) [2024-02-19 19:03:34.863395] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2854](ver=0,mode=0,seq=15488263), io_fd={first_id:15488263, second_id:2854, device_handle:null}) [2024-02-19 19:03:34.863402] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[2182](ver=0,mode=0,seq=15488264), io_fd={first_id:15488264, second_id:2182, device_handle:null}) [2024-02-19 19:03:34.863409] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3526](ver=0,mode=0,seq=15488265), io_fd={first_id:15488265, second_id:3526, device_handle:null}) [2024-02-19 19:03:34.863417] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[166](ver=0,mode=0,seq=15488266), io_fd={first_id:15488266, second_id:166, device_handle:null}) [2024-02-19 19:03:34.863424] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[1174](ver=0,mode=0,seq=15488267), io_fd={first_id:15488267, second_id:1174, device_handle:null}) [2024-02-19 
19:03:34.863431] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3862](ver=0,mode=0,seq=15488268), io_fd={first_id:15488268, second_id:3862, device_handle:null}) [2024-02-19 19:03:34.863439] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[2518](ver=0,mode=0,seq=15488269), io_fd={first_id:15488269, second_id:2518, device_handle:null}) [2024-02-19 19:03:34.863446] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[670](ver=0,mode=0,seq=15488270), io_fd={first_id:15488270, second_id:670, device_handle:null}) [2024-02-19 19:03:34.863453] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2686](ver=0,mode=0,seq=15488271), io_fd={first_id:15488271, second_id:2686, device_handle:null}) [2024-02-19 19:03:34.863461] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3935](ver=0,mode=0,seq=15488272), io_fd={first_id:15488272, second_id:3935, device_handle:null}) [2024-02-19 19:03:34.863469] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[1247](ver=0,mode=0,seq=15488273), io_fd={first_id:15488273, second_id:1247, device_handle:null}) [2024-02-19 19:03:34.863476] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[1415](ver=0,mode=0,seq=15488274), io_fd={first_id:15488274, second_id:1415, device_handle:null}) [2024-02-19 19:03:34.863483] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2759](ver=0,mode=0,seq=15488275), io_fd={first_id:15488275, second_id:2759, device_handle:null}) [2024-02-19 19:03:34.863491] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[71](ver=0,mode=0,seq=15488276), io_fd={first_id:15488276, second_id:71, device_handle:null}) [2024-02-19 19:03:34.863498] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[3767](ver=0,mode=0,seq=15488277), io_fd={first_id:15488277, second_id:3767, device_handle:null}) [2024-02-19 19:03:34.863506] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3095](ver=0,mode=0,seq=15488278), io_fd={first_id:15488278, second_id:3095, device_handle:null}) [2024-02-19 19:03:34.863513] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[407](ver=0,mode=0,seq=15488279), io_fd={first_id:15488279, second_id:407, device_handle:null}) [2024-02-19 19:03:34.863520] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3431](ver=0,mode=0,seq=15488280), io_fd={first_id:15488280, second_id:3431, device_handle:null}) [2024-02-19 
19:03:34.863528] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2927](ver=0,mode=0,seq=15488281), io_fd={first_id:15488281, second_id:2927, device_handle:null}) [2024-02-19 19:03:34.863535] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[3599](ver=0,mode=0,seq=15488282), io_fd={first_id:15488282, second_id:3599, device_handle:null}) [2024-02-19 19:03:34.863543] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[575](ver=0,mode=0,seq=15488283), io_fd={first_id:15488283, second_id:575, device_handle:null}) [2024-02-19 19:03:34.863551] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3504](ver=0,mode=0,seq=15489183), io_fd={first_id:15489183, second_id:3504, device_handle:null}) [2024-02-19 19:03:34.863558] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2160](ver=0,mode=0,seq=15489184), io_fd={first_id:15489184, second_id:2160, device_handle:null}) [2024-02-19 19:03:34.863566] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[816](ver=0,mode=0,seq=15488286), io_fd={first_id:15488286, second_id:816, device_handle:null}) [2024-02-19 19:03:34.863573] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[3672](ver=0,mode=0,seq=15488287), io_fd={first_id:15488287, second_id:3672, device_handle:null}) [2024-02-19 19:03:34.863580] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[984](ver=0,mode=0,seq=15488288), io_fd={first_id:15488288, second_id:984, device_handle:null}) [2024-02-19 19:03:34.863588] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[1320](ver=0,mode=0,seq=15488289), io_fd={first_id:15488289, second_id:1320, device_handle:null}) [2024-02-19 19:03:34.863595] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3336](ver=0,mode=0,seq=15488290), io_fd={first_id:15488290, second_id:3336, device_handle:null}) [2024-02-19 19:03:34.863603] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2664](ver=0,mode=0,seq=15488291), io_fd={first_id:15488291, second_id:2664, device_handle:null}) [2024-02-19 19:03:34.863610] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[1992](ver=0,mode=0,seq=15488292), io_fd={first_id:15488292, second_id:1992, device_handle:null}) [2024-02-19 19:03:34.863618] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[1656](ver=0,mode=0,seq=15488293), io_fd={first_id:15488293, second_id:1656, device_handle:null}) [2024-02-19 
19:03:34.863625] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2496](ver=0,mode=0,seq=15488294), io_fd={first_id:15488294, second_id:2496, device_handle:null}) [2024-02-19 19:03:34.863633] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[1824](ver=0,mode=0,seq=15488295), io_fd={first_id:15488295, second_id:1824, device_handle:null}) [2024-02-19 19:03:34.863641] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3168](ver=0,mode=0,seq=15488296), io_fd={first_id:15488296, second_id:3168, device_handle:null}) [2024-02-19 19:03:34.863648] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2832](ver=0,mode=0,seq=15488297), io_fd={first_id:15488297, second_id:2832, device_handle:null}) [2024-02-19 19:03:34.863656] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[385](ver=0,mode=0,seq=15488298), io_fd={first_id:15488298, second_id:385, device_handle:null}) [2024-02-19 19:03:34.863663] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[1897](ver=0,mode=0,seq=15488299), io_fd={first_id:15488299, second_id:1897, device_handle:null}) [2024-02-19 19:03:34.863671] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2233](ver=0,mode=0,seq=15488300), io_fd={first_id:15488300, second_id:2233, device_handle:null}) [2024-02-19 19:03:34.863678] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[217](ver=0,mode=0,seq=15488301), io_fd={first_id:15488301, second_id:217, device_handle:null}) [2024-02-19 19:03:34.863685] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3577](ver=0,mode=0,seq=15488302), io_fd={first_id:15488302, second_id:3577, device_handle:null}) [2024-02-19 19:03:34.863693] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3913](ver=0,mode=0,seq=15488303), io_fd={first_id:15488303, second_id:3913, device_handle:null}) [2024-02-19 19:03:34.863700] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[2569](ver=0,mode=0,seq=15488304), io_fd={first_id:15488304, second_id:2569, device_handle:null}) [2024-02-19 19:03:34.863707] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[1225](ver=0,mode=0,seq=15488305), io_fd={first_id:15488305, second_id:1225, device_handle:null}) [2024-02-19 19:03:34.863715] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2737](ver=0,mode=0,seq=15488306), io_fd={first_id:15488306, second_id:2737, device_handle:null}) [2024-02-19 
19:03:34.863723] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[4081](ver=0,mode=0,seq=15488307), io_fd={first_id:15488307, second_id:4081, device_handle:null}) [2024-02-19 19:03:34.863730] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2401](ver=0,mode=0,seq=15488308), io_fd={first_id:15488308, second_id:2401, device_handle:null}) [2024-02-19 19:03:34.863738] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[1298](ver=0,mode=0,seq=15488309), io_fd={first_id:15488309, second_id:1298, device_handle:null}) [2024-02-19 19:03:34.863745] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[2642](ver=0,mode=0,seq=15488310), io_fd={first_id:15488310, second_id:2642, device_handle:null}) [2024-02-19 19:03:34.863752] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[122](ver=0,mode=0,seq=15488311), io_fd={first_id:15488311, second_id:122, device_handle:null}) [2024-02-19 19:03:34.863760] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2810](ver=0,mode=0,seq=15488312), io_fd={first_id:15488312, second_id:2810, device_handle:null}) [2024-02-19 19:03:34.863767] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[2474](ver=0,mode=0,seq=15488313), io_fd={first_id:15488313, second_id:2474, device_handle:null}) [2024-02-19 19:03:34.863774] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[458](ver=0,mode=0,seq=15488314), io_fd={first_id:15488314, second_id:458, device_handle:null}) [2024-02-19 19:03:34.863782] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3146](ver=0,mode=0,seq=15488315), io_fd={first_id:15488315, second_id:3146, device_handle:null}) [2024-02-19 19:03:34.863790] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3482](ver=0,mode=0,seq=15488316), io_fd={first_id:15488316, second_id:3482, device_handle:null}) [2024-02-19 19:03:34.863797] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[2978](ver=0,mode=0,seq=15489185), io_fd={first_id:15489185, second_id:2978, device_handle:null}) [2024-02-19 19:03:34.863805] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[1634](ver=0,mode=0,seq=15488318), io_fd={first_id:15488318, second_id:1634, device_handle:null}) [2024-02-19 19:03:34.863813] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[1970](ver=0,mode=0,seq=15489186), io_fd={first_id:15489186, second_id:1970, device_handle:null}) [2024-02-19 
19:03:34.863820] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[626](ver=0,mode=0,seq=15488320), io_fd={first_id:15488320, second_id:626, device_handle:null}) [2024-02-19 19:03:34.863828] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3314](ver=0,mode=0,seq=15488321), io_fd={first_id:15488321, second_id:3314, device_handle:null}) [2024-02-19 19:03:34.863836] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2211](ver=0,mode=0,seq=15488322), io_fd={first_id:15488322, second_id:2211, device_handle:null}) [2024-02-19 19:03:34.863843] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[1035](ver=0,mode=0,seq=15488323), io_fd={first_id:15488323, second_id:1035, device_handle:null}) [2024-02-19 19:03:34.863851] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3723](ver=0,mode=0,seq=15488324), io_fd={first_id:15488324, second_id:3723, device_handle:null}) [2024-02-19 19:03:34.863858] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3387](ver=0,mode=0,seq=15488325), io_fd={first_id:15488325, second_id:3387, device_handle:null}) [2024-02-19 19:03:34.863866] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2715](ver=0,mode=0,seq=15488326), io_fd={first_id:15488326, second_id:2715, device_handle:null}) [2024-02-19 19:03:34.863874] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[1371](ver=0,mode=0,seq=15488327), io_fd={first_id:15488327, second_id:1371, device_handle:null}) [2024-02-19 19:03:34.863882] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[4059](ver=0,mode=0,seq=15488328), io_fd={first_id:15488328, second_id:4059, device_handle:null}) [2024-02-19 19:03:34.863889] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[27](ver=0,mode=0,seq=15488329), io_fd={first_id:15488329, second_id:27, device_handle:null}) [2024-02-19 19:03:34.863897] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[3051](ver=0,mode=0,seq=15488330), io_fd={first_id:15488330, second_id:3051, device_handle:null}) [2024-02-19 19:03:34.863905] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[363](ver=0,mode=0,seq=15488331), io_fd={first_id:15488331, second_id:363, device_handle:null}) [2024-02-19 19:03:34.863913] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3891](ver=0,mode=0,seq=15488332), io_fd={first_id:15488332, second_id:3891, device_handle:null}) [2024-02-19 
19:03:34.863920] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2547](ver=0,mode=0,seq=15488333), io_fd={first_id:15488333, second_id:2547, device_handle:null}) [2024-02-19 19:03:34.863928] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[531](ver=0,mode=0,seq=15488334), io_fd={first_id:15488334, second_id:531, device_handle:null}) [2024-02-19 19:03:34.863936] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[1875](ver=0,mode=0,seq=15488335), io_fd={first_id:15488335, second_id:1875, device_handle:null}) [2024-02-19 19:03:34.863949] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[3219](ver=0,mode=0,seq=15488336), io_fd={first_id:15488336, second_id:3219, device_handle:null}) [2024-02-19 19:03:34.863960] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[195](ver=0,mode=0,seq=15488337), io_fd={first_id:15488337, second_id:195, device_handle:null}) [2024-02-19 19:03:34.863971] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[436](ver=0,mode=0,seq=15488338), io_fd={first_id:15488338, second_id:436, device_handle:null}) [2024-02-19 19:03:34.863978] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[1948](ver=0,mode=0,seq=15488339), io_fd={first_id:15488339, second_id:1948, device_handle:null}) [2024-02-19 19:03:34.863986] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[604](ver=0,mode=0,seq=15488340), io_fd={first_id:15488340, second_id:604, device_handle:null}) [2024-02-19 19:03:34.863993] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[268](ver=0,mode=0,seq=15488341), io_fd={first_id:15488341, second_id:268, device_handle:null}) [2024-02-19 19:03:34.864002] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[3628](ver=0,mode=0,seq=15488342), io_fd={first_id:15488342, second_id:3628, device_handle:null}) [2024-02-19 19:03:34.864009] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[2956](ver=0,mode=0,seq=15488343), io_fd={first_id:15488343, second_id:2956, device_handle:null}) [2024-02-19 19:03:34.864016] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2620](ver=0,mode=0,seq=15488344), io_fd={first_id:15488344, second_id:2620, device_handle:null}) [2024-02-19 19:03:34.864024] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[1276](ver=0,mode=0,seq=15488345), io_fd={first_id:15488345, second_id:1276, device_handle:null}) [2024-02-19 
19:03:34.864032] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[772](ver=0,mode=0,seq=15488346), io_fd={first_id:15488346, second_id:772, device_handle:null})
[2024-02-19 19:03:34.864039] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[2116](ver=0,mode=0,seq=15488347), io_fd={first_id:15488347, second_id:2116, device_handle:null})
[... ~230 further near-identical [STORAGE.BLKMGR] do_sweep INFO entries omitted (19:03:34.864047 through 19:03:34.866044): the background sweep frees one macro block per entry, seq 15488348-15488579, with a handful of out-of-order seq 15489187-15489200 entries interleaved ...]
19:03:34.866054] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[888](ver=0,mode=0,seq=15488580), io_fd={first_id:15488580, second_id:888, device_handle:null}) [2024-02-19 19:03:34.866065] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[1896](ver=0,mode=0,seq=15488581), io_fd={first_id:15488581, second_id:1896, device_handle:null}) [2024-02-19 19:03:34.866075] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[552](ver=0,mode=0,seq=15488582), io_fd={first_id:15488582, second_id:552, device_handle:null}) [2024-02-19 19:03:34.866086] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[3240](ver=0,mode=0,seq=15488583), io_fd={first_id:15488583, second_id:3240, device_handle:null}) [2024-02-19 19:03:34.866097] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[3481](ver=0,mode=0,seq=15488584), io_fd={first_id:15488584, second_id:3481, device_handle:null}) [2024-02-19 19:03:34.866152] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=52] block manager free block(macro_id=[961](ver=0,mode=0,seq=15488585), io_fd={first_id:15488585, second_id:961, device_handle:null}) [2024-02-19 19:03:34.866243] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=88] block manager free block(macro_id=[2641](ver=0,mode=0,seq=15488586), io_fd={first_id:15488586, second_id:2641, device_handle:null}) [2024-02-19 19:03:34.866252] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[1129](ver=0,mode=0,seq=15489201), io_fd={first_id:15489201, second_id:1129, device_handle:null}) [2024-02-19 19:03:34.866260] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2473](ver=0,mode=0,seq=15489202), io_fd={first_id:15489202, second_id:2473, device_handle:null}) [2024-02-19 19:03:34.866268] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2809](ver=0,mode=0,seq=15488589), io_fd={first_id:15488589, second_id:2809, device_handle:null}) [2024-02-19 19:03:34.866428] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=10] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:34.866451] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=23] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340614866418}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:34.866442] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=172] block manager free block(macro_id=[3050](ver=0,mode=0,seq=15488590), io_fd={first_id:15488590, second_id:3050, device_handle:null}) [2024-02-19 19:03:34.866475] WARN 
[STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=22] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340614866418}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:34.866487] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=42] block manager free block(macro_id=[1706](ver=0,mode=0,seq=15488591), io_fd={first_id:15488591, second_id:1706, device_handle:null}) [2024-02-19 19:03:34.866496] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[1538](ver=0,mode=0,seq=15488592), io_fd={first_id:15488592, second_id:1538, device_handle:null}) [2024-02-19 19:03:34.866551] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[698](ver=0,mode=0,seq=15488593), io_fd={first_id:15488593, second_id:698, device_handle:null}) [2024-02-19 19:03:34.866605] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=52] block manager free block(macro_id=[2042](ver=0,mode=0,seq=15488594), io_fd={first_id:15488594, second_id:2042, device_handle:null}) [2024-02-19 19:03:34.866614] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[26](ver=0,mode=0,seq=15488595), io_fd={first_id:15488595, second_id:26, device_handle:null}) [2024-02-19 19:03:34.866730] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=113] block manager free block(macro_id=[4058](ver=0,mode=0,seq=15488596), io_fd={first_id:15488596, second_id:4058, device_handle:null}) [2024-02-19 19:03:34.866738] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3722](ver=0,mode=0,seq=15488597), io_fd={first_id:15488597, second_id:3722, device_handle:null}) [2024-02-19 19:03:34.866746] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3963](ver=0,mode=0,seq=15488598), io_fd={first_id:15488598, second_id:3963, device_handle:null}) [2024-02-19 19:03:34.866753] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[1275](ver=0,mode=0,seq=15488599), io_fd={first_id:15488599, second_id:1275, device_handle:null}) [2024-02-19 19:03:34.866768] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=13] block manager free block(macro_id=[2787](ver=0,mode=0,seq=15488600), io_fd={first_id:15488600, second_id:2787, device_handle:null}) [2024-02-19 19:03:34.866776] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[1443](ver=0,mode=0,seq=15488601), io_fd={first_id:15488601, second_id:1443, device_handle:null}) [2024-02-19 19:03:34.866784] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[435](ver=0,mode=0,seq=15488602), io_fd={first_id:15488602, second_id:435, device_handle:null}) 
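The [STORAGE.TRANS] entries above are the first anomaly in this window: tenant 1's timestamp service on 172.1.3.242:2882 is running as a FOLLOWER, so the TsMgr thread's local GTS request bounces with OB_NOT_MASTER. In OceanBase 4.x the GTS is served by the leader of the tenant's log stream 1, so the next step is to find (or confirm the absence of) that leader. A hedged diagnostic sketch; the view and column names follow the OceanBase 4.x documentation and are an assumption, not something taken from this log:

    -- Locate the replica that currently leads log stream 1 of tenant 1
    -- (the log stream whose leader provides the GTS timestamp service).
    -- If no row reports ROLE = 'LEADER', the OB_NOT_MASTER retries will
    -- continue until an election succeeds.
    SELECT tenant_id, ls_id, svr_ip, svr_port, role
      FROM oceanbase.GV$OB_LOG_STAT
     WHERE tenant_id = 1 AND ls_id = 1;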
[... ~156 similar do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr] "block manager free block" INFO entries, 19:03:34.866792 .. 19:03:34.868496, seq=15488603-15488758 with out-of-order seq=15489203-15489210 interleaved ...]
[... 3 similar do_sweep INFO entries, 19:03:34.868503 .. 19:03:34.868522, seq=15488759-15488761 ...]
[2024-02-19 19:03:34.868509] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=28] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[... 4 similar do_sweep INFO entries, 19:03:34.868529 .. 19:03:34.868553, seq=15488762 and 15488764, plus out-of-order seq=15489211-15489212 ...]
[2024-02-19 19:03:34.868538] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=30] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[... 66 similar do_sweep INFO entries, 19:03:34.868560 .. 19:03:34.869100, seq=15488766-15488831 with out-of-order seq=15489213-15489216; one further entry at 19:03:34.869107 is cut off by the end of this excerpt ...]
block(macro_id=[3246](ver=0,mode=0,seq=15488832), io_fd={first_id:15488832, second_id:3246, device_handle:null}) [2024-02-19 19:03:34.869115] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[894](ver=0,mode=0,seq=15488833), io_fd={first_id:15488833, second_id:894, device_handle:null}) [2024-02-19 19:03:34.869122] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[1566](ver=0,mode=0,seq=15488834), io_fd={first_id:15488834, second_id:1566, device_handle:null}) [2024-02-19 19:03:34.869129] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2742](ver=0,mode=0,seq=15488835), io_fd={first_id:15488835, second_id:2742, device_handle:null}) [2024-02-19 19:03:34.869137] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[4086](ver=0,mode=0,seq=15488836), io_fd={first_id:15488836, second_id:4086, device_handle:null}) [2024-02-19 19:03:34.869144] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[3991](ver=0,mode=0,seq=15488837), io_fd={first_id:15488837, second_id:3991, device_handle:null}) [2024-02-19 19:03:34.869151] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[2479](ver=0,mode=0,seq=15488838), io_fd={first_id:15488838, second_id:2479, device_handle:null}) [2024-02-19 19:03:34.869158] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[3151](ver=0,mode=0,seq=15488839), io_fd={first_id:15488839, second_id:3151, device_handle:null}) [2024-02-19 19:03:34.869165] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[799](ver=0,mode=0,seq=15488840), io_fd={first_id:15488840, second_id:799, device_handle:null}) [2024-02-19 19:03:34.869173] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2983](ver=0,mode=0,seq=15488841), io_fd={first_id:15488841, second_id:2983, device_handle:null}) [2024-02-19 19:03:34.869180] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3655](ver=0,mode=0,seq=15488842), io_fd={first_id:15488842, second_id:3655, device_handle:null}) [2024-02-19 19:03:34.869187] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[2311](ver=0,mode=0,seq=15488843), io_fd={first_id:15488843, second_id:2311, device_handle:null}) [2024-02-19 19:03:34.869194] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[631](ver=0,mode=0,seq=15488844), io_fd={first_id:15488844, second_id:631, device_handle:null}) [2024-02-19 19:03:34.869202] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free 
block(macro_id=[1975](ver=0,mode=0,seq=15488845), io_fd={first_id:15488845, second_id:1975, device_handle:null}) [2024-02-19 19:03:34.869209] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[3319](ver=0,mode=0,seq=15488846), io_fd={first_id:15488846, second_id:3319, device_handle:null}) [2024-02-19 19:03:34.869216] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[3560](ver=0,mode=0,seq=15488847), io_fd={first_id:15488847, second_id:3560, device_handle:null}) [2024-02-19 19:03:34.869223] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[3728](ver=0,mode=0,seq=15488848), io_fd={first_id:15488848, second_id:3728, device_handle:null}) [2024-02-19 19:03:34.869230] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[3392](ver=0,mode=0,seq=15488849), io_fd={first_id:15488849, second_id:3392, device_handle:null}) [2024-02-19 19:03:34.869238] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3896](ver=0,mode=0,seq=15488850), io_fd={first_id:15488850, second_id:3896, device_handle:null}) [2024-02-19 19:03:34.869245] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[1208](ver=0,mode=0,seq=15488851), io_fd={first_id:15488851, second_id:1208, device_handle:null}) [2024-02-19 19:03:34.869252] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[536](ver=0,mode=0,seq=15488852), io_fd={first_id:15488852, second_id:536, device_handle:null}) [2024-02-19 19:03:34.869259] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[1880](ver=0,mode=0,seq=15488853), io_fd={first_id:15488853, second_id:1880, device_handle:null}) [2024-02-19 19:03:34.869267] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3129](ver=0,mode=0,seq=15488854), io_fd={first_id:15488854, second_id:3129, device_handle:null}) [2024-02-19 19:03:34.869274] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[1785](ver=0,mode=0,seq=15488855), io_fd={first_id:15488855, second_id:1785, device_handle:null}) [2024-02-19 19:03:34.869282] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[441](ver=0,mode=0,seq=15488856), io_fd={first_id:15488856, second_id:441, device_handle:null}) [2024-02-19 19:03:34.869289] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[3633](ver=0,mode=0,seq=15488857), io_fd={first_id:15488857, second_id:3633, device_handle:null}) [2024-02-19 19:03:34.869297] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free 
block(macro_id=[2961](ver=0,mode=0,seq=15488858), io_fd={first_id:15488858, second_id:2961, device_handle:null}) [2024-02-19 19:03:34.869304] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[2289](ver=0,mode=0,seq=15488859), io_fd={first_id:15488859, second_id:2289, device_handle:null}) [2024-02-19 19:03:34.869311] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[1617](ver=0,mode=0,seq=15489218), io_fd={first_id:15489218, second_id:1617, device_handle:null}) [2024-02-19 19:03:34.869318] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[273](ver=0,mode=0,seq=15489217), io_fd={first_id:15489217, second_id:273, device_handle:null}) [2024-02-19 19:03:34.869325] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[2625](ver=0,mode=0,seq=15488862), io_fd={first_id:15488862, second_id:2625, device_handle:null}) [2024-02-19 19:03:34.869332] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[1281](ver=0,mode=0,seq=15488863), io_fd={first_id:15488863, second_id:1281, device_handle:null}) [2024-02-19 19:03:34.869339] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[3969](ver=0,mode=0,seq=15488864), io_fd={first_id:15488864, second_id:3969, device_handle:null}) [2024-02-19 19:03:34.869346] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[3465](ver=0,mode=0,seq=15488865), io_fd={first_id:15488865, second_id:3465, device_handle:null}) [2024-02-19 19:03:34.869353] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[1449](ver=0,mode=0,seq=15488866), io_fd={first_id:15488866, second_id:1449, device_handle:null}) [2024-02-19 19:03:34.869360] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2793](ver=0,mode=0,seq=15488867), io_fd={first_id:15488867, second_id:2793, device_handle:null}) [2024-02-19 19:03:34.869367] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3801](ver=0,mode=0,seq=15488868), io_fd={first_id:15488868, second_id:3801, device_handle:null}) [2024-02-19 19:03:34.869374] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[1113](ver=0,mode=0,seq=15488869), io_fd={first_id:15488869, second_id:1113, device_handle:null}) [2024-02-19 19:03:34.869381] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[4042](ver=0,mode=0,seq=15488870), io_fd={first_id:15488870, second_id:4042, device_handle:null}) [2024-02-19 19:03:34.869388] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free 
block(macro_id=[1522](ver=0,mode=0,seq=15488871), io_fd={first_id:15488871, second_id:1522, device_handle:null}) [2024-02-19 19:03:34.869405] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=14] block manager free block(macro_id=[1186](ver=0,mode=0,seq=15488872), io_fd={first_id:15488872, second_id:1186, device_handle:null}) [2024-02-19 19:03:34.869415] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[3874](ver=0,mode=0,seq=15488873), io_fd={first_id:15488873, second_id:3874, device_handle:null}) [2024-02-19 19:03:34.869423] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2194](ver=0,mode=0,seq=15488874), io_fd={first_id:15488874, second_id:2194, device_handle:null}) [2024-02-19 19:03:34.869453] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=28] block manager free block(macro_id=[1018](ver=0,mode=0,seq=15488875), io_fd={first_id:15488875, second_id:1018, device_handle:null}) [2024-02-19 19:03:34.869461] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[682](ver=0,mode=0,seq=15488876), io_fd={first_id:15488876, second_id:682, device_handle:null}) [2024-02-19 19:03:34.869473] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=10] block manager free block(macro_id=[2026](ver=0,mode=0,seq=15488877), io_fd={first_id:15488877, second_id:2026, device_handle:null}) [2024-02-19 19:03:34.869481] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[923](ver=0,mode=0,seq=15488878), io_fd={first_id:15488878, second_id:923, device_handle:null}) [2024-02-19 19:03:34.869492] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[3611](ver=0,mode=0,seq=15488879), io_fd={first_id:15488879, second_id:3611, device_handle:null}) [2024-02-19 19:03:34.869499] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2267](ver=0,mode=0,seq=15488880), io_fd={first_id:15488880, second_id:2267, device_handle:null}) [2024-02-19 19:03:34.869511] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=10] block manager free block(macro_id=[3779](ver=0,mode=0,seq=15488881), io_fd={first_id:15488881, second_id:3779, device_handle:null}) [2024-02-19 19:03:34.869518] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[2435](ver=0,mode=0,seq=15488882), io_fd={first_id:15488882, second_id:2435, device_handle:null}) [2024-02-19 19:03:34.869528] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[1427](ver=0,mode=0,seq=15488883), io_fd={first_id:15488883, second_id:1427, device_handle:null}) [2024-02-19 19:03:34.869544] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=12] block manager free 
block(macro_id=[83](ver=0,mode=0,seq=15488884), io_fd={first_id:15488884, second_id:83, device_handle:null}) [2024-02-19 19:03:34.869561] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=15] block manager free block(macro_id=[419](ver=0,mode=0,seq=15488885), io_fd={first_id:15488885, second_id:419, device_handle:null}) [2024-02-19 19:03:34.869578] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=14] block manager free block(macro_id=[3107](ver=0,mode=0,seq=15488886), io_fd={first_id:15488886, second_id:3107, device_handle:null}) [2024-02-19 19:03:34.869594] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=14] block manager free block(macro_id=[2603](ver=0,mode=0,seq=15488887), io_fd={first_id:15488887, second_id:2603, device_handle:null}) [2024-02-19 19:03:34.869611] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=14] block manager free block(macro_id=[3947](ver=0,mode=0,seq=15488888), io_fd={first_id:15488888, second_id:3947, device_handle:null}) [2024-02-19 19:03:34.869628] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=14] block manager free block(macro_id=[1259](ver=0,mode=0,seq=15488889), io_fd={first_id:15488889, second_id:1259, device_handle:null}) [2024-02-19 19:03:34.869646] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=15] block manager free block(macro_id=[3275](ver=0,mode=0,seq=15488890), io_fd={first_id:15488890, second_id:3275, device_handle:null}) [2024-02-19 19:03:34.869662] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=14] block manager free block(macro_id=[587](ver=0,mode=0,seq=15488891), io_fd={first_id:15488891, second_id:587, device_handle:null}) [2024-02-19 19:03:34.869679] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=15] block manager free block(macro_id=[1931](ver=0,mode=0,seq=15489220), io_fd={first_id:15489220, second_id:1931, device_handle:null}) [2024-02-19 19:03:34.869690] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[251](ver=0,mode=0,seq=15488893), io_fd={first_id:15488893, second_id:251, device_handle:null}) [2024-02-19 19:03:34.869707] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=14] block manager free block(macro_id=[2939](ver=0,mode=0,seq=15489219), io_fd={first_id:15489219, second_id:2939, device_handle:null}) [2024-02-19 19:03:34.869718] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[1595](ver=0,mode=0,seq=15488895), io_fd={first_id:15488895, second_id:1595, device_handle:null}) [2024-02-19 19:03:34.869734] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=13] block manager free block(macro_id=[1836](ver=0,mode=0,seq=15488896), io_fd={first_id:15488896, second_id:1836, device_handle:null}) [2024-02-19 19:03:34.869746] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free 
block(macro_id=[3180](ver=0,mode=0,seq=15488897), io_fd={first_id:15488897, second_id:3180, device_handle:null}) [2024-02-19 19:03:34.869758] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=10] block manager free block(macro_id=[3684](ver=0,mode=0,seq=15488898), io_fd={first_id:15488898, second_id:3684, device_handle:null}) [2024-02-19 19:03:34.869775] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=15] block manager free block(macro_id=[324](ver=0,mode=0,seq=15488899), io_fd={first_id:15488899, second_id:324, device_handle:null}) [2024-02-19 19:03:34.869787] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[1332](ver=0,mode=0,seq=15488900), io_fd={first_id:15488900, second_id:1332, device_handle:null}) [2024-02-19 19:03:34.869804] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=14] block manager free block(macro_id=[4020](ver=0,mode=0,seq=15488901), io_fd={first_id:15488901, second_id:4020, device_handle:null}) [2024-02-19 19:03:34.869814] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[2676](ver=0,mode=0,seq=15488902), io_fd={first_id:15488902, second_id:2676, device_handle:null}) [2024-02-19 19:03:34.869832] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=15] block manager free block(macro_id=[2844](ver=0,mode=0,seq=15488903), io_fd={first_id:15488903, second_id:2844, device_handle:null}) [2024-02-19 19:03:34.869844] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=10] block manager free block(macro_id=[2749](ver=0,mode=0,seq=15488904), io_fd={first_id:15488904, second_id:2749, device_handle:null}) [2024-02-19 19:03:34.869856] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[61](ver=0,mode=0,seq=15488905), io_fd={first_id:15488905, second_id:61, device_handle:null}) [2024-02-19 19:03:34.869867] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[4093](ver=0,mode=0,seq=15488906), io_fd={first_id:15488906, second_id:4093, device_handle:null}) [2024-02-19 19:03:34.869879] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[2917](ver=0,mode=0,seq=15488907), io_fd={first_id:15488907, second_id:2917, device_handle:null}) [2024-02-19 19:03:34.869895] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=14] block manager free block(macro_id=[1909](ver=0,mode=0,seq=15488908), io_fd={first_id:15488908, second_id:1909, device_handle:null}) [2024-02-19 19:03:34.869913] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=15] block manager free block(macro_id=[2245](ver=0,mode=0,seq=15488909), io_fd={first_id:15488909, second_id:2245, device_handle:null}) [2024-02-19 19:03:34.869930] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=14] block manager free 
block(macro_id=[397](ver=0,mode=0,seq=15488910), io_fd={first_id:15488910, second_id:397, device_handle:null}) [2024-02-19 19:03:34.869951] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=18] block manager free block(macro_id=[1741](ver=0,mode=0,seq=15488911), io_fd={first_id:15488911, second_id:1741, device_handle:null}) [2024-02-19 19:03:34.869977] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=15] block manager free block(macro_id=[3757](ver=0,mode=0,seq=15488912), io_fd={first_id:15488912, second_id:3757, device_handle:null}) [2024-02-19 19:03:34.869990] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=10] block manager free block(macro_id=[733](ver=0,mode=0,seq=15488913), io_fd={first_id:15488913, second_id:733, device_handle:null}) [2024-02-19 19:03:34.869999] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[3662](ver=0,mode=0,seq=15488914), io_fd={first_id:15488914, second_id:3662, device_handle:null}) [2024-02-19 19:03:34.870008] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[974](ver=0,mode=0,seq=15488915), io_fd={first_id:15488915, second_id:974, device_handle:null}) [2024-02-19 19:03:34.870017] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[1814](ver=0,mode=0,seq=15488916), io_fd={first_id:15488916, second_id:1814, device_handle:null}) [2024-02-19 19:03:34.870026] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[2990](ver=0,mode=0,seq=15488917), io_fd={first_id:15488917, second_id:2990, device_handle:null}) [2024-02-19 19:03:34.870034] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[1887](ver=0,mode=0,seq=15488918), io_fd={first_id:15488918, second_id:1887, device_handle:null}) [2024-02-19 19:03:34.870043] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[3231](ver=0,mode=0,seq=15488919), io_fd={first_id:15488919, second_id:3231, device_handle:null}) [2024-02-19 19:03:34.870052] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[1047](ver=0,mode=0,seq=15488920), io_fd={first_id:15488920, second_id:1047, device_handle:null}) [2024-02-19 19:03:34.870061] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[375](ver=0,mode=0,seq=15488921), io_fd={first_id:15488921, second_id:375, device_handle:null}) [2024-02-19 19:03:34.870075] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=13] block manager free block(macro_id=[3735](ver=0,mode=0,seq=15488922), io_fd={first_id:15488922, second_id:3735, device_handle:null}) [2024-02-19 19:03:34.870083] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free 
block(macro_id=[3063](ver=0,mode=0,seq=15488923), io_fd={first_id:15488923, second_id:3063, device_handle:null}) [2024-02-19 19:03:34.870094] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[39](ver=0,mode=0,seq=15488924), io_fd={first_id:15488924, second_id:39, device_handle:null}) [2024-02-19 19:03:34.870102] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[4071](ver=0,mode=0,seq=15488925), io_fd={first_id:15488925, second_id:4071, device_handle:null}) [2024-02-19 19:03:34.870110] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2223](ver=0,mode=0,seq=15488926), io_fd={first_id:15488926, second_id:2223, device_handle:null}) [2024-02-19 19:03:34.870123] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=11] block manager free block(macro_id=[3567](ver=0,mode=0,seq=15489222), io_fd={first_id:15489222, second_id:3567, device_handle:null}) [2024-02-19 19:03:34.870131] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2895](ver=0,mode=0,seq=15489221), io_fd={first_id:15489221, second_id:2895, device_handle:null}) [2024-02-19 19:03:34.870144] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=10] block manager free block(macro_id=[207](ver=0,mode=0,seq=15488929), io_fd={first_id:15488929, second_id:207, device_handle:null}) [2024-02-19 19:03:34.870159] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=13] block manager free block(macro_id=[1551](ver=0,mode=0,seq=15488930), io_fd={first_id:15488930, second_id:1551, device_handle:null}) [2024-02-19 19:03:34.870167] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3903](ver=0,mode=0,seq=15488931), io_fd={first_id:15488931, second_id:3903, device_handle:null}) [2024-02-19 19:03:34.870178] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[1215](ver=0,mode=0,seq=15488932), io_fd={first_id:15488932, second_id:1215, device_handle:null}) [2024-02-19 19:03:34.870191] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=11] block manager free block(macro_id=[2800](ver=0,mode=0,seq=15488933), io_fd={first_id:15488933, second_id:2800, device_handle:null}) [2024-02-19 19:03:34.870204] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=11] block manager free block(macro_id=[112](ver=0,mode=0,seq=15488934), io_fd={first_id:15488934, second_id:112, device_handle:null}) [2024-02-19 19:03:34.870211] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[1624](ver=0,mode=0,seq=15488935), io_fd={first_id:15488935, second_id:1624, device_handle:null}) [2024-02-19 19:03:34.870223] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=10] block manager free 
block(macro_id=[1960](ver=0,mode=0,seq=15488936), io_fd={first_id:15488936, second_id:1960, device_handle:null}) [2024-02-19 19:03:34.870230] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[3304](ver=0,mode=0,seq=15488937), io_fd={first_id:15488937, second_id:3304, device_handle:null}) [2024-02-19 19:03:34.870241] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[952](ver=0,mode=0,seq=15488938), io_fd={first_id:15488938, second_id:952, device_handle:null}) [2024-02-19 19:03:34.870249] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3640](ver=0,mode=0,seq=15488939), io_fd={first_id:15488939, second_id:3640, device_handle:null}) [2024-02-19 19:03:34.870259] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[3136](ver=0,mode=0,seq=15488940), io_fd={first_id:15488940, second_id:3136, device_handle:null}) [2024-02-19 19:03:34.870267] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[448](ver=0,mode=0,seq=15488941), io_fd={first_id:15488941, second_id:448, device_handle:null}) [2024-02-19 19:03:34.870280] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=11] block manager free block(macro_id=[1792](ver=0,mode=0,seq=15488942), io_fd={first_id:15488942, second_id:1792, device_handle:null}) [2024-02-19 19:03:34.870287] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[3808](ver=0,mode=0,seq=15488943), io_fd={first_id:15488943, second_id:3808, device_handle:null}) [2024-02-19 19:03:34.870298] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[1120](ver=0,mode=0,seq=15488944), io_fd={first_id:15488944, second_id:1120, device_handle:null}) [2024-02-19 19:03:34.870309] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[784](ver=0,mode=0,seq=15488945), io_fd={first_id:15488945, second_id:784, device_handle:null}) [2024-02-19 19:03:34.870321] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=10] block manager free block(macro_id=[3472](ver=0,mode=0,seq=15488946), io_fd={first_id:15488946, second_id:3472, device_handle:null}) [2024-02-19 19:03:34.870328] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[2369](ver=0,mode=0,seq=15488947), io_fd={first_id:15488947, second_id:2369, device_handle:null}) [2024-02-19 19:03:34.870339] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[1193](ver=0,mode=0,seq=15488948), io_fd={first_id:15488948, second_id:1193, device_handle:null}) [2024-02-19 19:03:34.870347] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free 
block(macro_id=[2201](ver=0,mode=0,seq=15488949), io_fd={first_id:15488949, second_id:2201, device_handle:null}) [2024-02-19 19:03:34.870358] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[17](ver=0,mode=0,seq=15488950), io_fd={first_id:15488950, second_id:17, device_handle:null}) [2024-02-19 19:03:34.870366] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[4049](ver=0,mode=0,seq=15488951), io_fd={first_id:15488951, second_id:4049, device_handle:null}) [2024-02-19 19:03:34.870375] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[2705](ver=0,mode=0,seq=15488952), io_fd={first_id:15488952, second_id:2705, device_handle:null}) [2024-02-19 19:03:34.870385] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[689](ver=0,mode=0,seq=15488953), io_fd={first_id:15488953, second_id:689, device_handle:null}) [2024-02-19 19:03:34.870394] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[2033](ver=0,mode=0,seq=15488954), io_fd={first_id:15488954, second_id:2033, device_handle:null}) [2024-02-19 19:03:34.870402] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3377](ver=0,mode=0,seq=15488955), io_fd={first_id:15488955, second_id:3377, device_handle:null}) [2024-02-19 19:03:34.870412] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[1697](ver=0,mode=0,seq=15488956), io_fd={first_id:15488956, second_id:1697, device_handle:null}) [2024-02-19 19:03:34.870423] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[3041](ver=0,mode=0,seq=15488957), io_fd={first_id:15488957, second_id:3041, device_handle:null}) [2024-02-19 19:03:34.870432] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[3282](ver=0,mode=0,seq=15488958), io_fd={first_id:15488958, second_id:3282, device_handle:null}) [2024-02-19 19:03:34.870444] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=10] block manager free block(macro_id=[2106](ver=0,mode=0,seq=15489223), io_fd={first_id:15489223, second_id:2106, device_handle:null}) [2024-02-19 19:03:34.870452] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2442](ver=0,mode=0,seq=15489224), io_fd={first_id:15489224, second_id:2442, device_handle:null}) [2024-02-19 19:03:34.870464] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=10] block manager free block(macro_id=[2778](ver=0,mode=0,seq=15488961), io_fd={first_id:15488961, second_id:2778, device_handle:null}) [2024-02-19 19:03:34.870472] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free 
block(macro_id=[90](ver=0,mode=0,seq=15488962), io_fd={first_id:15488962, second_id:90, device_handle:null}) [2024-02-19 19:03:34.870480] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[930](ver=0,mode=0,seq=15488963), io_fd={first_id:15488963, second_id:930, device_handle:null}) [2024-02-19 19:03:34.870488] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2274](ver=0,mode=0,seq=15488964), io_fd={first_id:15488964, second_id:2274, device_handle:null}) [2024-02-19 19:03:34.870496] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3618](ver=0,mode=0,seq=15488965), io_fd={first_id:15488965, second_id:3618, device_handle:null}) [2024-02-19 19:03:34.870507] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[1602](ver=0,mode=0,seq=15488966), io_fd={first_id:15488966, second_id:1602, device_handle:null}) [2024-02-19 19:03:34.870516] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[2610](ver=0,mode=0,seq=15488967), io_fd={first_id:15488967, second_id:2610, device_handle:null}) [2024-02-19 19:03:34.870528] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=10] block manager free block(macro_id=[3954](ver=0,mode=0,seq=15488968), io_fd={first_id:15488968, second_id:3954, device_handle:null}) [2024-02-19 19:03:34.870536] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[1507](ver=0,mode=0,seq=15488969), io_fd={first_id:15488969, second_id:1507, device_handle:null}) [2024-02-19 19:03:34.870545] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[3019](ver=0,mode=0,seq=15488970), io_fd={first_id:15488970, second_id:3019, device_handle:null}) [2024-02-19 19:03:34.870553] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[331](ver=0,mode=0,seq=15488971), io_fd={first_id:15488971, second_id:331, device_handle:null}) [2024-02-19 19:03:34.870561] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[1339](ver=0,mode=0,seq=15488972), io_fd={first_id:15488972, second_id:1339, device_handle:null}) [2024-02-19 19:03:34.870570] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[3355](ver=0,mode=0,seq=15488973), io_fd={first_id:15488973, second_id:3355, device_handle:null}) [2024-02-19 19:03:34.870577] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2683](ver=0,mode=0,seq=15488974), io_fd={first_id:15488974, second_id:2683, device_handle:null}) [2024-02-19 19:03:34.870585] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free 
block(macro_id=[2347](ver=0,mode=0,seq=15488975), io_fd={first_id:15488975, second_id:2347, device_handle:null}) [2024-02-19 19:03:34.870592] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[3187](ver=0,mode=0,seq=15488976), io_fd={first_id:15488976, second_id:3187, device_handle:null}) [2024-02-19 19:03:34.870601] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[2515](ver=0,mode=0,seq=15488977), io_fd={first_id:15488977, second_id:2515, device_handle:null}) [2024-02-19 19:03:34.870610] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[3859](ver=0,mode=0,seq=15488978), io_fd={first_id:15488978, second_id:3859, device_handle:null}) [2024-02-19 19:03:34.870620] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[3523](ver=0,mode=0,seq=15488979), io_fd={first_id:15488979, second_id:3523, device_handle:null}) [2024-02-19 19:03:34.870627] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2179](ver=0,mode=0,seq=15488980), io_fd={first_id:15488980, second_id:2179, device_handle:null}) [2024-02-19 19:03:34.870637] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[2420](ver=0,mode=0,seq=15488981), io_fd={first_id:15488981, second_id:2420, device_handle:null}) [2024-02-19 19:03:34.870644] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[1244](ver=0,mode=0,seq=15488982), io_fd={first_id:15488982, second_id:1244, device_handle:null}) [2024-02-19 19:03:34.870658] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=12] block manager free block(macro_id=[2252](ver=0,mode=0,seq=15488983), io_fd={first_id:15488983, second_id:2252, device_handle:null}) [2024-02-19 19:03:34.870666] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[908](ver=0,mode=0,seq=15488984), io_fd={first_id:15488984, second_id:908, device_handle:null}) [2024-02-19 19:03:34.870675] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[236](ver=0,mode=0,seq=15488985), io_fd={first_id:15488985, second_id:236, device_handle:null}) [2024-02-19 19:03:34.870682] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[3428](ver=0,mode=0,seq=15488986), io_fd={first_id:15488986, second_id:3428, device_handle:null}) [2024-02-19 19:03:34.870689] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[2084](ver=0,mode=0,seq=15488987), io_fd={first_id:15488987, second_id:2084, device_handle:null}) [2024-02-19 19:03:34.870703] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=12] block manager free 
block(macro_id=[3333](ver=0,mode=0,seq=15488988), io_fd={first_id:15488988, second_id:3333, device_handle:null}) [2024-02-19 19:03:34.870712] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[813](ver=0,mode=0,seq=15488989), io_fd={first_id:15488989, second_id:813, device_handle:null}) [2024-02-19 19:03:34.870722] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[2157](ver=0,mode=0,seq=15488990), io_fd={first_id:15488990, second_id:2157, device_handle:null}) [2024-02-19 19:03:34.870729] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3165](ver=0,mode=0,seq=15488991), io_fd={first_id:15488991, second_id:3165, device_handle:null}) [2024-02-19 19:03:34.870742] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=11] block manager free block(macro_id=[1821](ver=0,mode=0,seq=15488992), io_fd={first_id:15488992, second_id:1821, device_handle:null}) [2024-02-19 19:03:34.870750] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[1149](ver=0,mode=0,seq=15488993), io_fd={first_id:15488993, second_id:1149, device_handle:null}) [2024-02-19 19:03:34.870783] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=31] block manager free block(macro_id=[477](ver=0,mode=0,seq=15489226), io_fd={first_id:15489226, second_id:477, device_handle:null}) [2024-02-19 19:03:34.870791] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3837](ver=0,mode=0,seq=15488995), io_fd={first_id:15488995, second_id:3837, device_handle:null}) [2024-02-19 19:03:34.870801] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[1485](ver=0,mode=0,seq=15489225), io_fd={first_id:15489225, second_id:1485, device_handle:null}) [2024-02-19 19:03:34.870808] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[141](ver=0,mode=0,seq=15488997), io_fd={first_id:15488997, second_id:141, device_handle:null}) [2024-02-19 19:03:34.870818] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[2829](ver=0,mode=0,seq=15488998), io_fd={first_id:15488998, second_id:2829, device_handle:null}) [2024-02-19 19:03:34.870826] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3669](ver=0,mode=0,seq=15488999), io_fd={first_id:15488999, second_id:3669, device_handle:null}) [2024-02-19 19:03:34.870839] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=11] block manager free block(macro_id=[981](ver=0,mode=0,seq=15489000), io_fd={first_id:15489000, second_id:981, device_handle:null}) [2024-02-19 19:03:34.870847] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free 
block(macro_id=[2325](ver=0,mode=0,seq=15489001), io_fd={first_id:15489001, second_id:2325, device_handle:null}) [2024-02-19 19:03:34.870855] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[309](ver=0,mode=0,seq=15489002), io_fd={first_id:15489002, second_id:309, device_handle:null}) [2024-02-19 19:03:34.870868] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=12] block manager free block(macro_id=[1653](ver=0,mode=0,seq=15489003), io_fd={first_id:15489003, second_id:1653, device_handle:null}) [2024-02-19 19:03:34.870876] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2661](ver=0,mode=0,seq=15489004), io_fd={first_id:15489004, second_id:2661, device_handle:null}) [2024-02-19 19:03:34.870885] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[4005](ver=0,mode=0,seq=15489005), io_fd={first_id:15489005, second_id:4005, device_handle:null}) [2024-02-19 19:03:34.870894] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[2902](ver=0,mode=0,seq=15489006), io_fd={first_id:15489006, second_id:2902, device_handle:null}) [2024-02-19 19:03:34.870903] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[214](ver=0,mode=0,seq=15489007), io_fd={first_id:15489007, second_id:214, device_handle:null}) [2024-02-19 19:03:34.870910] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3070](ver=0,mode=0,seq=15489008), io_fd={first_id:15489008, second_id:3070, device_handle:null}) [2024-02-19 19:03:34.870918] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[382](ver=0,mode=0,seq=15489009), io_fd={first_id:15489009, second_id:382, device_handle:null}) [2024-02-19 19:03:34.870928] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[3406](ver=0,mode=0,seq=15489010), io_fd={first_id:15489010, second_id:3406, device_handle:null}) [2024-02-19 19:03:34.870938] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[2734](ver=0,mode=0,seq=15489011), io_fd={first_id:15489011, second_id:2734, device_handle:null}) [2024-02-19 19:03:34.870950] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[2062](ver=0,mode=0,seq=15489012), io_fd={first_id:15489012, second_id:2062, device_handle:null}) [2024-02-19 19:03:34.870966] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=13] block manager free block(macro_id=[1390](ver=0,mode=0,seq=15489013), io_fd={first_id:15489013, second_id:1390, device_handle:null}) [2024-02-19 19:03:34.870975] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free 
block(macro_id=[4078](ver=0,mode=0,seq=15489014), io_fd={first_id:15489014, second_id:4078, device_handle:null}) [2024-02-19 19:03:34.870989] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=13] block manager free block(macro_id=[718](ver=0,mode=0,seq=15489015), io_fd={first_id:15489015, second_id:718, device_handle:null}) [2024-02-19 19:03:34.870998] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[3742](ver=0,mode=0,seq=15489016), io_fd={first_id:15489016, second_id:3742, device_handle:null}) [2024-02-19 19:03:34.871006] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[550](ver=0,mode=0,seq=15489017), io_fd={first_id:15489017, second_id:550, device_handle:null}) [2024-02-19 19:03:34.871015] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[1894](ver=0,mode=0,seq=15489018), io_fd={first_id:15489018, second_id:1894, device_handle:null}) [2024-02-19 19:03:34.871023] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3238](ver=0,mode=0,seq=15489019), io_fd={first_id:15489019, second_id:3238, device_handle:null}) [2024-02-19 19:03:34.871037] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=12] block manager free block(macro_id=[2230](ver=0,mode=0,seq=15489020), io_fd={first_id:15489020, second_id:2230, device_handle:null}) [2024-02-19 19:03:34.871045] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[886](ver=0,mode=0,seq=15489021), io_fd={first_id:15489021, second_id:886, device_handle:null}) [2024-02-19 19:03:34.871053] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[1127](ver=0,mode=0,seq=15489022), io_fd={first_id:15489022, second_id:1127, device_handle:null}) [2024-02-19 19:03:34.871069] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=14] block manager free block(macro_id=[2639](ver=0,mode=0,seq=15489023), io_fd={first_id:15489023, second_id:2639, device_handle:null}) [2024-02-19 19:03:34.871077] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2975](ver=0,mode=0,seq=15489024), io_fd={first_id:15489024, second_id:2975, device_handle:null}) [2024-02-19 19:03:34.871087] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[119](ver=0,mode=0,seq=15489025), io_fd={first_id:15489025, second_id:119, device_handle:null}) [2024-02-19 19:03:34.871094] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[791](ver=0,mode=0,seq=15489026), io_fd={first_id:15489026, second_id:791, device_handle:null}) [2024-02-19 19:03:34.871108] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=12] block manager free 
block(macro_id=[1799](ver=0,mode=0,seq=15489027), io_fd={first_id:15489027, second_id:1799, device_handle:null}) [2024-02-19 19:03:34.871116] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[1872](ver=0,mode=0,seq=15489028), io_fd={first_id:15489028, second_id:1872, device_handle:null}) [2024-02-19 19:03:34.871125] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[3888](ver=0,mode=0,seq=15489228), io_fd={first_id:15489228, second_id:3888, device_handle:null}) [2024-02-19 19:03:34.871134] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[2376](ver=0,mode=0,seq=15489227), io_fd={first_id:15489227, second_id:2376, device_handle:null}) [2024-02-19 19:03:34.871141] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[1704](ver=0,mode=0,seq=15489031), io_fd={first_id:15489031, second_id:1704, device_handle:null}) [2024-02-19 19:03:34.871155] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=12] block manager free block(macro_id=[2712](ver=0,mode=0,seq=15489032), io_fd={first_id:15489032, second_id:2712, device_handle:null}) [2024-02-19 19:03:34.871163] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3121](ver=0,mode=0,seq=15489033), io_fd={first_id:15489033, second_id:3121, device_handle:null}) [2024-02-19 19:03:34.871178] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=13] block manager free block(macro_id=[2785](ver=0,mode=0,seq=15489034), io_fd={first_id:15489034, second_id:2785, device_handle:null}) [2024-02-19 19:03:34.871186] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[1441](ver=0,mode=0,seq=15489035), io_fd={first_id:15489035, second_id:1441, device_handle:null}) [2024-02-19 19:03:34.871195] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[601](ver=0,mode=0,seq=15489036), io_fd={first_id:15489036, second_id:601, device_handle:null}) [2024-02-19 19:03:34.871202] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3961](ver=0,mode=0,seq=15489037), io_fd={first_id:15489037, second_id:3961, device_handle:null}) [2024-02-19 19:03:34.871210] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[937](ver=0,mode=0,seq=15489038), io_fd={first_id:15489038, second_id:937, device_handle:null}) [2024-02-19 19:03:34.871218] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2690](ver=0,mode=0,seq=15489039), io_fd={first_id:15489039, second_id:2690, device_handle:null}) [2024-02-19 19:03:34.871225] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free 
block(macro_id=[1010](ver=0,mode=0,seq=15489040), io_fd={first_id:15489040, second_id:1010, device_handle:null}) [2024-02-19 19:03:34.871235] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[2354](ver=0,mode=0,seq=15489041), io_fd={first_id:15489041, second_id:2354, device_handle:null}) [2024-02-19 19:03:34.871243] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3362](ver=0,mode=0,seq=15489042), io_fd={first_id:15489042, second_id:3362, device_handle:null}) [2024-02-19 19:03:34.871250] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[170](ver=0,mode=0,seq=15489043), io_fd={first_id:15489043, second_id:170, device_handle:null}) [2024-02-19 19:03:34.871258] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3530](ver=0,mode=0,seq=15489044), io_fd={first_id:15489044, second_id:3530, device_handle:null}) [2024-02-19 19:03:34.871268] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[1850](ver=0,mode=0,seq=15489045), io_fd={first_id:15489045, second_id:1850, device_handle:null}) [2024-02-19 19:03:34.871276] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[3194](ver=0,mode=0,seq=15489046), io_fd={first_id:15489046, second_id:3194, device_handle:null}) [2024-02-19 19:03:34.871301] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=23] block manager free block(macro_id=[1251](ver=0,mode=0,seq=15489047), io_fd={first_id:15489047, second_id:1251, device_handle:null}) [2024-02-19 19:03:34.871309] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[243](ver=0,mode=0,seq=15489048), io_fd={first_id:15489048, second_id:243, device_handle:null}) [2024-02-19 19:03:34.871317] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2427](ver=0,mode=0,seq=15489049), io_fd={first_id:15489049, second_id:2427, device_handle:null}) [2024-02-19 19:03:34.871325] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3771](ver=0,mode=0,seq=15489050), io_fd={first_id:15489050, second_id:3771, device_handle:null}) [2024-02-19 19:03:34.871333] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[1755](ver=0,mode=0,seq=15489051), io_fd={first_id:15489051, second_id:1755, device_handle:null}) [2024-02-19 19:03:34.871341] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3099](ver=0,mode=0,seq=15489052), io_fd={first_id:15489052, second_id:3099, device_handle:null}) [2024-02-19 19:03:34.871348] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free 
block(macro_id=[2763](ver=0,mode=0,seq=15489053), io_fd={first_id:15489053, second_id:2763, device_handle:null}) [2024-02-19 19:03:34.871356] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3172](ver=0,mode=0,seq=15489054), io_fd={first_id:15489054, second_id:3172, device_handle:null}) [2024-02-19 19:03:34.871364] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3676](ver=0,mode=0,seq=15489055), io_fd={first_id:15489055, second_id:3676, device_handle:null}) [2024-02-19 19:03:34.871374] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[126](ver=0,mode=0,seq=15489056), io_fd={first_id:15489056, second_id:126, device_handle:null}) [2024-02-19 19:03:34.871382] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[703](ver=0,mode=0,seq=15489057), io_fd={first_id:15489057, second_id:703, device_handle:null}) [2024-02-19 19:03:34.871390] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[440](ver=0,mode=0,seq=15489058), io_fd={first_id:15489058, second_id:440, device_handle:null}) [2024-02-19 19:03:34.871398] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3632](ver=0,mode=0,seq=15489059), io_fd={first_id:15489059, second_id:3632, device_handle:null}) [2024-02-19 19:03:34.871406] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3369](ver=0,mode=0,seq=15489060), io_fd={first_id:15489060, second_id:3369, device_handle:null}) [2024-02-19 19:03:34.871413] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[681](ver=0,mode=0,seq=15489230), io_fd={first_id:15489230, second_id:681, device_handle:null}) [2024-02-19 19:03:34.871421] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3033](ver=0,mode=0,seq=15489229), io_fd={first_id:15489229, second_id:3033, device_handle:null}) [2024-02-19 19:03:34.871429] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3873](ver=0,mode=0,seq=15489063), io_fd={first_id:15489063, second_id:3873, device_handle:null}) [2024-02-19 19:03:34.871436] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[1185](ver=0,mode=0,seq=15489064), io_fd={first_id:15489064, second_id:1185, device_handle:null}) [2024-02-19 19:03:34.871444] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[177](ver=0,mode=0,seq=15489065), io_fd={first_id:15489065, second_id:177, device_handle:null}) [2024-02-19 19:03:34.871452] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free 
block(macro_id=[1930](ver=0,mode=0,seq=15489066), io_fd={first_id:15489066, second_id:1930, device_handle:null}) [2024-02-19 19:03:34.871461] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[2938](ver=0,mode=0,seq=15489067), io_fd={first_id:15489067, second_id:2938, device_handle:null}) [2024-02-19 19:03:34.871470] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[3610](ver=0,mode=0,seq=15489068), io_fd={first_id:15489068, second_id:3610, device_handle:null}) [2024-02-19 19:03:34.871478] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3946](ver=0,mode=0,seq=15489069), io_fd={first_id:15489069, second_id:3946, device_handle:null}) [2024-02-19 19:03:34.871486] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[82](ver=0,mode=0,seq=15489070), io_fd={first_id:15489070, second_id:82, device_handle:null}) [2024-02-19 19:03:34.871494] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3778](ver=0,mode=0,seq=15489071), io_fd={first_id:15489071, second_id:3778, device_handle:null}) [2024-02-19 19:03:34.871502] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2507](ver=0,mode=0,seq=15489072), io_fd={first_id:15489072, second_id:2507, device_handle:null}) [2024-02-19 19:03:34.871510] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[1667](ver=0,mode=0,seq=15489073), io_fd={first_id:15489073, second_id:1667, device_handle:null}) [2024-02-19 19:03:34.871518] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3011](ver=0,mode=0,seq=15489074), io_fd={first_id:15489074, second_id:3011, device_handle:null}) [2024-02-19 19:03:34.871526] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[995](ver=0,mode=0,seq=15489075), io_fd={first_id:15489075, second_id:995, device_handle:null}) [2024-02-19 19:03:34.871534] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3420](ver=0,mode=0,seq=15489076), io_fd={first_id:15489076, second_id:3420, device_handle:null}) [2024-02-19 19:03:34.871541] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[2580](ver=0,mode=0,seq=15489077), io_fd={first_id:15489077, second_id:2580, device_handle:null}) [2024-02-19 19:03:34.871551] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[469](ver=0,mode=0,seq=15489078), io_fd={first_id:15489078, second_id:469, device_handle:null}) [2024-02-19 19:03:34.871560] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free 
block(macro_id=[1477](ver=0,mode=0,seq=15489079), io_fd={first_id:15489079, second_id:1477, device_handle:null}) [2024-02-19 19:03:34.871567] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[206](ver=0,mode=0,seq=15489080), io_fd={first_id:15489080, second_id:206, device_handle:null}) [2024-02-19 19:03:34.871575] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[1886](ver=0,mode=0,seq=15489081), io_fd={first_id:15489081, second_id:1886, device_handle:null}) [2024-02-19 19:03:34.871582] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[3566](ver=0,mode=0,seq=15489082), io_fd={first_id:15489082, second_id:3566, device_handle:null}) [2024-02-19 19:03:34.871590] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[1718](ver=0,mode=0,seq=15489083), io_fd={first_id:15489083, second_id:1718, device_handle:null}) [2024-02-19 19:03:34.871597] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[3734](ver=0,mode=0,seq=15489084), io_fd={first_id:15489084, second_id:3734, device_handle:null}) [2024-02-19 19:03:34.871604] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[2390](ver=0,mode=0,seq=15489085), io_fd={first_id:15489085, second_id:2390, device_handle:null}) [2024-02-19 19:03:34.871611] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[2799](ver=0,mode=0,seq=15489086), io_fd={first_id:15489086, second_id:2799, device_handle:null}) [2024-02-19 19:03:34.871621] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[3471](ver=0,mode=0,seq=15489087), io_fd={first_id:15489087, second_id:3471, device_handle:null}) [2024-02-19 19:03:34.871629] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[111](ver=0,mode=0,seq=15489088), io_fd={first_id:15489088, second_id:111, device_handle:null}) [2024-02-19 19:03:34.871646] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=15] block manager free block(macro_id=[447](ver=0,mode=0,seq=15489089), io_fd={first_id:15489089, second_id:447, device_handle:null}) [2024-02-19 19:03:34.871654] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3135](ver=0,mode=0,seq=15489090), io_fd={first_id:15489090, second_id:3135, device_handle:null}) [2024-02-19 19:03:34.871661] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[1623](ver=0,mode=0,seq=15489091), io_fd={first_id:15489091, second_id:1623, device_handle:null}) [2024-02-19 19:03:34.871671] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free 
block(macro_id=[520](ver=0,mode=0,seq=15489092), io_fd={first_id:15489092, second_id:520, device_handle:null}) [2024-02-19 19:03:34.871680] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[184](ver=0,mode=0,seq=15489093), io_fd={first_id:15489093, second_id:184, device_handle:null}) [2024-02-19 19:03:34.871693] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=11] block manager free block(macro_id=[3785](ver=0,mode=0,seq=15489094), io_fd={first_id:15489094, second_id:3785, device_handle:null}) [2024-02-19 19:03:34.871701] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3595](ver=0,mode=0,seq=15489095), io_fd={first_id:15489095, second_id:3595, device_handle:null}) [2024-02-19 19:03:34.871710] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[235](ver=0,mode=0,seq=15489231), io_fd={first_id:15489231, second_id:235, device_handle:null}) [2024-02-19 19:03:34.871719] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[2828](ver=0,mode=0,seq=15489232), io_fd={first_id:15489232, second_id:2828, device_handle:null}) [2024-02-19 19:03:34.871729] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[2492](ver=0,mode=0,seq=15489098), io_fd={first_id:15489098, second_id:2492, device_handle:null}) [2024-02-19 19:03:34.871737] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[454](ver=0,mode=0,seq=15489099), io_fd={first_id:15489099, second_id:454, device_handle:null}) [2024-02-19 19:03:34.871750] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=11] block manager free block(macro_id=[1535](ver=0,mode=0,seq=15489100), io_fd={first_id:15489100, second_id:1535, device_handle:null}) [2024-02-19 19:03:34.871758] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[4055](ver=0,mode=0,seq=15489101), io_fd={first_id:15489101, second_id:4055, device_handle:null}) [2024-02-19 19:03:34.871767] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[2543](ver=0,mode=0,seq=15489102), io_fd={first_id:15489102, second_id:2543, device_handle:null}) [2024-02-19 19:03:34.871776] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[3551](ver=0,mode=0,seq=15489103), io_fd={first_id:15489103, second_id:3551, device_handle:null}) [2024-02-19 19:03:34.871786] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[2280](ver=0,mode=0,seq=15489104), io_fd={first_id:15489104, second_id:2280, device_handle:null}) [2024-02-19 19:03:34.871796] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free 
block(macro_id=[1440](ver=0,mode=0,seq=15489105), io_fd={first_id:15489105, second_id:1440, device_handle:null}) [2024-02-19 19:03:34.871803] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[3456](ver=0,mode=0,seq=15489106), io_fd={first_id:15489106, second_id:3456, device_handle:null}) [2024-02-19 19:03:34.871812] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[505](ver=0,mode=0,seq=15489107), io_fd={first_id:15489107, second_id:505, device_handle:null}) [2024-02-19 19:03:34.871821] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[410](ver=0,mode=0,seq=15489108), io_fd={first_id:15489108, second_id:410, device_handle:null}) [2024-02-19 19:03:34.871829] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[746](ver=0,mode=0,seq=15489109), io_fd={first_id:15489109, second_id:746, device_handle:null}) [2024-02-19 19:03:34.871836] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] block manager free block(macro_id=[2236](ver=0,mode=0,seq=15489110), io_fd={first_id:15489110, second_id:2236, device_handle:null}) [2024-02-19 19:03:34.871845] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[461](ver=0,mode=0,seq=15489111), io_fd={first_id:15489111, second_id:461, device_handle:null}) [2024-02-19 19:03:34.871856] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[797](ver=0,mode=0,seq=15489112), io_fd={first_id:15489112, second_id:797, device_handle:null}) [2024-02-19 19:03:34.871864] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[1301](ver=0,mode=0,seq=15489113), io_fd={first_id:15489113, second_id:1301, device_handle:null}) [2024-02-19 19:03:34.871878] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=12] block manager free block(macro_id=[2309](ver=0,mode=0,seq=15489114), io_fd={first_id:15489114, second_id:2309, device_handle:null}) [2024-02-19 19:03:34.871886] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3894](ver=0,mode=0,seq=15489115), io_fd={first_id:15489115, second_id:3894, device_handle:null}) [2024-02-19 19:03:34.871895] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[1206](ver=0,mode=0,seq=15489116), io_fd={first_id:15489116, second_id:1206, device_handle:null}) [2024-02-19 19:03:34.871904] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=7] block manager free block(macro_id=[4062](ver=0,mode=0,seq=15489117), io_fd={first_id:15489117, second_id:4062, device_handle:null}) [2024-02-19 19:03:34.871912] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free 
block(macro_id=[366](ver=0,mode=0,seq=15489118), io_fd={first_id:15489118, second_id:366, device_handle:null}) [2024-02-19 19:03:34.871925] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=11] block manager free block(macro_id=[3631](ver=0,mode=0,seq=15489119), io_fd={first_id:15489119, second_id:3631, device_handle:null}) [2024-02-19 19:03:34.871935] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[1279](ver=0,mode=0,seq=15489120), io_fd={first_id:15489120, second_id:1279, device_handle:null}) [2024-02-19 19:03:34.871947] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[3967](ver=0,mode=0,seq=15489121), io_fd={first_id:15489121, second_id:3967, device_handle:null}) [2024-02-19 19:03:34.871964] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=15] block manager free block(macro_id=[3295](ver=0,mode=0,seq=15489122), io_fd={first_id:15489122, second_id:3295, device_handle:null}) [2024-02-19 19:03:34.871983] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=14] block manager free block(macro_id=[3536](ver=0,mode=0,seq=15489123), io_fd={first_id:15489123, second_id:3536, device_handle:null}) [2024-02-19 19:03:34.871999] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=15] block manager free block(macro_id=[1520](ver=0,mode=0,seq=15489124), io_fd={first_id:15489124, second_id:1520, device_handle:null}) [2024-02-19 19:03:34.872011] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[81](ver=0,mode=0,seq=15489125), io_fd={first_id:15489125, second_id:81, device_handle:null}) [2024-02-19 19:03:34.872023] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=10] block manager free block(macro_id=[2265](ver=0,mode=0,seq=15489126), io_fd={first_id:15489126, second_id:2265, device_handle:null}) [2024-02-19 19:03:34.872042] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=17] block manager free block(macro_id=[3178](ver=0,mode=0,seq=15489233), io_fd={first_id:15489233, second_id:3178, device_handle:null}) [2024-02-19 19:03:34.872053] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[1498](ver=0,mode=0,seq=15489128), io_fd={first_id:15489128, second_id:1498, device_handle:null}) [2024-02-19 19:03:34.872070] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=14] block manager free block(macro_id=[1308](ver=0,mode=0,seq=15489234), io_fd={first_id:15489234, second_id:1308, device_handle:null}) [2024-02-19 19:03:34.872081] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[2389](ver=0,mode=0,seq=15489130), io_fd={first_id:15489130, second_id:2389, device_handle:null}) [2024-02-19 19:03:34.872098] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=14] block manager free 
block(macro_id=[2893](ver=0,mode=0,seq=15489131), io_fd={first_id:15489131, second_id:2893, device_handle:null}) [2024-02-19 19:03:34.872109] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[278](ver=0,mode=0,seq=15489132), io_fd={first_id:15489132, second_id:278, device_handle:null}) [2024-02-19 19:03:34.872125] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=14] block manager free block(macro_id=[1695](ver=0,mode=0,seq=15489133), io_fd={first_id:15489133, second_id:1695, device_handle:null}) [2024-02-19 19:03:34.872142] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=14] block manager free block(macro_id=[928](ver=0,mode=0,seq=15489134), io_fd={first_id:15489134, second_id:928, device_handle:null}) [2024-02-19 19:03:34.872160] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=15] block manager free block(macro_id=[3784](ver=0,mode=0,seq=15489135), io_fd={first_id:15489135, second_id:3784, device_handle:null}) [2024-02-19 19:03:34.872171] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=10] block manager free block(macro_id=[3952](ver=0,mode=0,seq=15489136), io_fd={first_id:15489136, second_id:3952, device_handle:null}) [2024-02-19 19:03:34.872189] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=15] block manager free block(macro_id=[256](ver=0,mode=0,seq=15489137), io_fd={first_id:15489137, second_id:256, device_handle:null}) [2024-02-19 19:03:34.872200] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=8] block manager free block(macro_id=[2944](ver=0,mode=0,seq=15489138), io_fd={first_id:15489138, second_id:2944, device_handle:null}) [2024-02-19 19:03:34.872217] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=14] block manager free block(macro_id=[2849](ver=0,mode=0,seq=15489139), io_fd={first_id:15489139, second_id:2849, device_handle:null}) [2024-02-19 19:03:34.872229] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[1505](ver=0,mode=0,seq=15489140), io_fd={first_id:15489140, second_id:1505, device_handle:null}) [2024-02-19 19:03:34.872246] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=14] block manager free block(macro_id=[402](ver=0,mode=0,seq=15489141), io_fd={first_id:15489141, second_id:402, device_handle:null}) [2024-02-19 19:03:34.872257] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[3163](ver=0,mode=0,seq=15489142), io_fd={first_id:15489142, second_id:3163, device_handle:null}) [2024-02-19 19:03:34.872274] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=15] block manager free block(macro_id=[307](ver=0,mode=0,seq=15489143), io_fd={first_id:15489143, second_id:307, device_handle:null}) [2024-02-19 19:03:34.872286] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free 
block(macro_id=[3068](ver=0,mode=0,seq=15489144), io_fd={first_id:15489144, second_id:3068, device_handle:null}) [2024-02-19 19:03:34.872306] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=17] block manager free block(macro_id=[1125](ver=0,mode=0,seq=15489145), io_fd={first_id:15489145, second_id:1125, device_handle:null}) [2024-02-19 19:03:34.872317] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=9] block manager free block(macro_id=[3477](ver=0,mode=0,seq=15489146), io_fd={first_id:15489146, second_id:3477, device_handle:null}) [2024-02-19 19:03:34.872334] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=13] block manager free block(macro_id=[3550](ver=0,mode=0,seq=15489147), io_fd={first_id:15489147, second_id:3550, device_handle:null}) [2024-02-19 19:03:34.872346] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=10] block manager free block(macro_id=[22](ver=0,mode=0,seq=15489148), io_fd={first_id:15489148, second_id:22, device_handle:null}) [2024-02-19 19:03:34.872359] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=11] block manager free block(macro_id=[1439](ver=0,mode=0,seq=15489149), io_fd={first_id:15489149, second_id:1439, device_handle:null}) [2024-02-19 19:03:34.872367] INFO [STORAGE.BLKMGR] do_sweep (ob_block_manager.cpp:901) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=6] block manager free block(macro_id=[3623](ver=0,mode=0,seq=15489150), io_fd={first_id:15489150, second_id:3623, device_handle:null}) [2024-02-19 19:03:34.873475] INFO [STORAGE.BLKMGR] mark_and_sweep (ob_block_manager.cpp:946) [1106757][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=5] finish once mark and sweep(ret=0, marker_status={total_block_count:4096, reserved_block_count:2, linked_block_count:2, tmp_file_count:0, data_block_count:2419, index_block_count:496, ids_block_count:0, disk_block_count:0, bloomfiter_count:0, hold_count:0, pending_free_count:1177, free_count:1177, mark_cost_time:7849, sweep_cost_time:12022, start_time:"2024-02-19 19:03:34.852504", last_end_time:"2024-02-19 19:03:34.872375", hold_info:nothing}, map_cnt=2917) [2024-02-19 19:03:34.878654] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=27] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:34.878687] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 
0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:34.881740] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=24] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340614881728}) [2024-02-19 19:03:34.881752] WARN [SHARE.LOCATION] batch_process_tasks (ob_ls_location_service.cpp:485) [1106741][SysLocAsyncUp0][T0][YB42AC0103F2-000611B9212AA0E9-0-0] [lt=19] tenant schema is not ready, need wait(ret=0, ret="OB_SUCCESS", superior_tenant_id=1, tasks=[{cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340614881728}]) [2024-02-19 19:03:34.881765] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=25] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340614866418}}) [2024-02-19 19:03:34.884981] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=31] get wrs ts(ls_id={id:1}, delta_ns=-1706042771800411233, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807) [2024-02-19 19:03:34.885018] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=36] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=1, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1) [2024-02-19 19:03:34.886984] INFO [COMMON] print_io_status (ob_io_struct.cpp:619) [1106661][IO_TUNING0][T0][Y0-0000000000000000-0-0] [lt=25] [IO STATUS](tenant_ids=[1, 500], send_thread_count=2, send_queues=[0, 0]) [2024-02-19 19:03:34.889168] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=38] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:34.889206] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=173] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:34.902671] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=29] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", 
disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:34.902717] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=49] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:34.905325] INFO [COMMON] replace_fragment_node (ob_kvcache_map.cpp:695) [1106689][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=15] Cache replace map node details(ret=0, replace_node_count=0, replace_time=17269, replace_start_pos=1273968, replace_num=15728) [2024-02-19 19:03:34.912855] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=33] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:34.912898] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=52] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:34.923060] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=48] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:34.923105] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=50] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 
0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:34.928952] WARN [SERVER] batch_process_tasks (ob_ls_table_updater.cpp:333) [1106712][LSSysTblUp0][T0][YB42AC0103F2-000611B9216D2E20-0-0] [lt=41] tenant schema is not ready, need wait(ret=-4076, ret="OB_NEED_WAIT", superior_tenant_id=1, task={tenant_id:1, ls_id:{id:1}, add_timestamp:1708337390831403}) [2024-02-19 19:03:34.933237] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=32] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:34.933273] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=39] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:34.940727] WARN [SERVER] get_network_speed_from_sysfs (ob_server.cpp:2113) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=11] get invalid Ethernet speed, use default(devname="ens18") [2024-02-19 19:03:34.940776] WARN [SERVER] runTimerTask (ob_server.cpp:2632) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=52] ObRefreshNetworkSpeedTask reload bandwidth throttle limit failed(ret=-4002, ret="OB_INVALID_ARGUMENT") [2024-02-19 19:03:34.943422] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=30] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:34.943465] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=46] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:34.953601] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=33] there is not any block can be 
recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:34.953646] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=47] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:34.963785] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=36] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:34.963823] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=41] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:34.967383] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=15] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:34.967428] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=47] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340614967366}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:34.967484] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=52] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340614967366}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:34.967502] WARN [STORAGE.TRANS] operator() (ob_ts_mgr.h:225) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=14] refresh gts failed(ret=-4038, ret="OB_NOT_MASTER", gts_tenant_info={v:1}) [2024-02-19 19:03:34.967515] INFO [STORAGE.TRANS] operator() (ob_ts_mgr.h:229) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=13] refresh gts functor(ret=-4038, ret="OB_NOT_MASTER", 
gts_tenant_info={v:1}) [2024-02-19 19:03:34.973967] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=38] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:34.974040] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=76] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:34.982203] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=18] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340614982182}) [2024-02-19 19:03:34.982240] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=37] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340614967366}}) [2024-02-19 19:03:34.984211] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=54] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:34.984253] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=43] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:34.985621] WARN [STORAGE.TRANS] post_cluster_heartbeat_rpc_ (ob_tenant_weak_read_service.cpp:797) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=22] get cluster service master fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, cluster_service_tablet_id={id:226}) [2024-02-19 19:03:34.985651] WARN [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:438) 
[1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=29] tenant weak read service cluster heartbeat RPC fail(rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.1.3.242:2882", cluster_service_tablet_id={id:226}) [2024-02-19 19:03:34.985674] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:756) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=20] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version=1708336686671726824, valid_part_count=1, total_part_count=1, generate_timestamp=1708340614985604) [2024-02-19 19:03:34.985689] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:766) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1708340614784997, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-02-19 19:03:34.985763] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14] get wrs ts(ls_id={id:1}, delta_ns=-1706042771800310526, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807) [2024-02-19 19:03:34.985781] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1) [2024-02-19 19:03:34.994395] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=32] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:34.994430] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=41] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:34.995732] INFO [STORAGE] runTimerTask (ob_checkpoint_service.cpp:129) [1108331][T1_TxCkpt][T1][Y0-0000000000000000-0-0] [lt=8] ====== checkpoint timer task ====== [2024-02-19 19:03:34.995784] INFO [CLOG] get_min_unapplied_log_ts_ns (ob_log_apply_service.cpp:729) [1108331][T1_TxCkpt][T1][Y0-0000000000000000-0-0] [lt=23] get_min_unapplied_log_ts_ns(log_ts=1707751112415295197, this={ls_id_:{id:1}, role_:1, proposal_id_:138, palf_committed_end_lsn_:{lsn:0}, last_check_log_ts_ns_:1707751112415295196, 
max_applied_cb_ts_ns_:1707751112415295196}) [2024-02-19 19:03:34.995816] INFO [CLOG] get_min_unreplayed_log_info (ob_replay_status.cpp:971) [1108331][T1_TxCkpt][T1][Y0-0000000000000000-0-0] [lt=23] get_min_unreplayed_log_info(lsn={lsn:25325337226}, log_ts=1707751112415295197, this={ls_id_:{id:1}, is_enabled_:true, is_submit_blocked_:false, role_:1, err_info_:{lsn_:{lsn:18446744073709551615}, scn_:0, log_type_:0, is_submit_err_:false, err_ts_:0, err_ret_:0}, ref_cnt_:2, post_barrier_lsn_:{lsn:18446744073709551615}, pending_task_count_:0, submit_log_task_:{ObReplayServiceSubmitTask:{type_:1, enqueue_ts_:1708337375831694, err_info_:{has_fatal_error_:false, fail_ts_:0, fail_cost_:503671052, ret_code_:0}}, next_to_submit_lsn_:{lsn:25325337226}, committed_end_lsn_:{lsn:25325337226}, next_to_submit_log_ts_:1707751112415295197, base_lsn_:{lsn:23419564032}, base_log_ts_:1707209832548318068}}) [2024-02-19 19:03:34.996859] INFO [STORAGE.TRANS] get_rec_log_ts (ob_trans_ctx_mgr_v4.cpp:1283) [1108331][T1_TxCkpt][T1][Y0-0000000000000000-0-0] [lt=27] succ to get rec log ts(*this={this:0x7fdce3204030, ls_id:{id:1}, tenant_id:1, state:"F_WORKING", total_tx_ctx_count:0, leader_takeover_ts:{mts:0}, is_leader_serving:false, max_replay_commit_version:1707751112415295196, ls_retain_ctx_mgr:{retain_ctx_list_.size():0}, aggre_rec_log_ts:-1, prev_aggre_rec_log_ts:-1, online_ts:0, uref:1073741825}, aggre_rec_log_ts=9223372036854775807) [2024-02-19 19:03:34.996885] INFO [STORAGE.TRANS] get_rec_log_ts (ob_tx_ctx_memtable.cpp:231) [1108331][T1_TxCkpt][T1][Y0-0000000000000000-0-0] [lt=27] tx ctx memtable get rec log ts(this={ObITable:{this:0x7fdce5f6e080, key:{tablet_id:{id:49401}, column_group_idx:0, table_type:"TX_CTX_MEMTABLE", log_ts_range:{start_log_ts:1, end_log_ts:1708337131277985}}, ref_cnt:2, upper_trans_version:-4007, timestamp:0}, this:0x7fdce5f6e080, snapshot_version:1708337131277985, ls_id:{id:1}, is_frozen:false}, rec_log_ts=9223372036854775807) [2024-02-19 19:03:34.996910] INFO [STORAGE.TABLELOCK] get_rec_log_ts (ob_lock_memtable.cpp:739) [1108331][T1_TxCkpt][T1][Y0-0000000000000000-0-0] [lt=18] rec_log_ts of ObLockMemtable is (rec_log_ts_=9223372036854775807, flushed_log_ts_=1707033175148098668, freeze_log_ts_=0, max_committed_log_ts_=-1, is_frozen_=false, ls_id_={id:1}) [2024-02-19 19:03:34.996938] INFO [STORAGE.TRANS] get_rec_log_ts (ob_ls_tx_service.cpp:437) [1108331][T1_TxCkpt][T1][Y0-0000000000000000-0-0] [lt=10] [CHECKPOINT] ObLSTxService::get_rec_log_ts(common_checkpoint_type="TX_DATA_MEMTABLE_TYPE", common_checkpoints_[min_rec_log_ts_common_checkpoint_type_index]={ObIMemtableMgr:{Memtables:this:0x7fdce89de180, ref_cnt:1, is_inited:true, tablet_id:{id:49402}, freezer:0x7fdce89e30d0, table_type:1, memtable_head:0, memtable_tail:2, t3m:0x7fdd18bce030, tables:[0x7fdce5eea080, 0x7fdce5eea360, null, null, null, null, null, null, null, null, null, null, null, null, null, null]}, is_freezing:false, ls_id:{id:1}, tx_data_table:0x7fdce89e4550, ls_tablet_svr:0x7fdce89de160, slice_allocator:0x7fdce89e4590}, min_rec_log_ts=1707209832548318068, ls_id_={id:1}) [2024-02-19 19:03:34.998710] INFO [STORAGE.TRANS] get_rec_log_ts (ob_id_service.cpp:300) [1108331][T1_TxCkpt][T1][Y0-0000000000000000-0-0] [lt=37] get rec log ts(service_type_=0, rec_log_ts=9223372036854775807) [2024-02-19 19:03:34.998728] INFO [STORAGE.TRANS] get_rec_log_ts (ob_id_service.cpp:300) [1108331][T1_TxCkpt][T1][Y0-0000000000000000-0-0] [lt=20] get rec log ts(service_type_=1, rec_log_ts=9223372036854775807) [2024-02-19 19:03:34.998738] INFO 
[STORAGE.TRANS] get_rec_log_ts (ob_id_service.cpp:300) [1108331][T1_TxCkpt][T1][Y0-0000000000000000-0-0] [lt=6] get rec log ts(service_type_=2, rec_log_ts=9223372036854775807) [2024-02-19 19:03:34.998749] INFO [STORAGE] update_clog_checkpoint (ob_checkpoint_executor.cpp:158) [1108331][T1_TxCkpt][T1][Y0-0000000000000000-0-0] [lt=5] [CHECKPOINT] clog checkpoint no change(checkpoint_ts=1707209832548318068, checkpoint_ts_in_ls_meta=1707209832548318068, ls_id={id:1}, service_type="TRANS_SERVICE") [2024-02-19 19:03:34.998770] INFO [STORAGE] runTimerTask (ob_checkpoint_service.cpp:166) [1108331][T1_TxCkpt][T1][Y0-0000000000000000-0-0] [lt=18] succeed to update_clog_checkpoint(ret=0, ls_cnt=1) [2024-02-19 19:03:35.004550] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=27] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.004588] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=40] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.014715] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=29] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.014753] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=42] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.024892] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=29] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", 
disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.024963] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=74] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.035124] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=36] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.035160] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=39] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.045309] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=29] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.045346] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=41] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.055483] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=31] there is not any block 
can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.055530] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=49] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.059441] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1009) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=57] Wash compute wash size(is_wash_valid=true, sys_total_wash_size=1386106880, global_cache_size=4161536, tenant_max_wash_size=4161536, tenant_min_wash_size=4161536, tenant_ids_=[512, 500, 999, 506, 507, 508, 509, 510, 1]) [2024-02-19 19:03:35.059679] INFO [COMMON] wash (ob_kvcache_store.cpp:343) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=149] Wash time detail, (compute_wash_size_time=286, refresh_score_time=82, wash_time=7) [2024-02-19 19:03:35.065641] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=28] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.065673] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=36] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.068216] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=9] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:35.068251] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=34] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340615068205}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:35.068272] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) 
[1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=19] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340615068205}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:35.075814] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=39] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.075881] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=76] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.081382] WARN [COORDINATOR] get_ls_election_reference_info (ob_leader_coordinator.cpp:138) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC86-0-0] [lt=105] can not find this ls_id in all_ls_election_reference_info_(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1}, all_ls_election_reference_info=[]) [2024-02-19 19:03:35.081423] WARN [COORDINATOR] refresh_ (election_priority_v1.cpp:144) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC86-0-0] [lt=42] fail to get ls election reference info(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, *this={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}) [2024-02-19 19:03:35.081447] WARN [COORDINATOR] operator() (election_priority_impl.cpp:246) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC86-0-0] [lt=21] refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id_={id:1}, element={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}) [2024-02-19 19:03:35.081466] WARN iterate (ob_tuple.h:272) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC86-0-0] [lt=16] assign element failed(ret=-4018, std::get(tuple)={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}) [2024-02-19 19:03:35.081485] WARN [COORDINATOR] refresh (election_priority_impl.cpp:261) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC86-0-0] [lt=19] refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, 
ls_id={id:1}, *this={priority:{is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}}) [2024-02-19 19:03:35.082251] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=12] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340615082238}) [2024-02-19 19:03:35.082272] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=21] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340615068205}}) [2024-02-19 19:03:35.085746] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=20] get wrs ts(ls_id={id:1}, delta_ns=-1706042771800210099, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807) [2024-02-19 19:03:35.085771] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=26] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1) [2024-02-19 19:03:35.090166] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.090211] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=48] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.100331] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=36] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.100363] 
ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.100715] INFO [COMMON] clean_garbage_node (ob_kvcache_map.cpp:645) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=32] Cache wash clean map node details(ret=0, clean_node_count=0, clean_time=41000, clean_start_pos=1069538, clean_num=31457) [2024-02-19 19:03:35.110532] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=27] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.110578] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=49] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.120175] INFO [SERVER] runTimerTask (ob_eliminate_task.cpp:199) [1107573][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=40] sql audit evict task end(evict_high_mem_level=32212254, evict_high_size_level=90000, evict_batch_count=0, elapse_time=1, size_used=14925, mem_used=31196160) [2024-02-19 19:03:35.120726] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=40] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.120754] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=29] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 
0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.123734] INFO [COMMON] replace_fragment_node (ob_kvcache_map.cpp:695) [1106689][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=54] Cache replace map node details(ret=0, replace_node_count=0, replace_time=18238, replace_start_pos=1289696, replace_num=15728) [2024-02-19 19:03:35.130883] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=27] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.130927] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=46] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.141055] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.141096] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=44] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.151266] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=32] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.151325] ERROR [PALF] try_recycle_blocks 
(palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=62] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.161518] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=36] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.161588] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=73] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.168855] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=12] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:35.168906] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=51] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340615168841}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:35.169025] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=113] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340615168841}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:35.171727] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.171769] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=43] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, 
limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.181913] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=38] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.181956] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=45] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.182637] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=13] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340615182613}) [2024-02-19 19:03:35.182664] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=26] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340615168841}}) [2024-02-19 19:03:35.185779] WARN [STORAGE.TRANS] post_cluster_heartbeat_rpc_ (ob_tenant_weak_read_service.cpp:797) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=28] get cluster service master fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, cluster_service_tablet_id={id:226}) [2024-02-19 19:03:35.185817] WARN [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:438) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=40] tenant weak read service cluster heartbeat RPC fail(rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.1.3.242:2882", cluster_service_tablet_id={id:226}) [2024-02-19 19:03:35.185836] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:756) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version=1708336686671726824, valid_part_count=1, total_part_count=1, generate_timestamp=1708340615185758) [2024-02-19 19:03:35.185846] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:766) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1708340614985701, 
cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-02-19 19:03:35.185907] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9] get wrs ts(ls_id={id:1}, delta_ns=-1706042771800109812, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807) [2024-02-19 19:03:35.185919] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1) [2024-02-19 19:03:35.192120] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=63] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.192161] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=45] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.202408] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=131] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.202453] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=49] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.212594] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=44] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been 
advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.212634] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=42] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.213669] INFO [CLOG] get_replay_process (ob_replay_status.cpp:1029) [1107640][T1_ReplayProces][T1][Y0-0000000000000000-0-0] [lt=15] replay status is not follower(max_replayed_lsn={lsn:25325337226}, base_lsn={lsn:23419564032}, this={ls_id_:{id:1}, is_enabled_:true, is_submit_blocked_:false, role_:1, err_info_:{lsn_:{lsn:18446744073709551615}, scn_:0, log_type_:0, is_submit_err_:false, err_ts_:0, err_ret_:0}, ref_cnt_:1, post_barrier_lsn_:{lsn:18446744073709551615}, pending_task_count_:0, submit_log_task_:{ObReplayServiceSubmitTask:{type_:1, enqueue_ts_:1708337375831694, err_info_:{has_fatal_error_:false, fail_ts_:0, fail_cost_:503671052, ret_code_:0}}, next_to_submit_lsn_:{lsn:25325337226}, committed_end_lsn_:{lsn:25325337226}, next_to_submit_log_ts_:1707751112415295197, base_lsn_:{lsn:23419564032}, base_log_ts_:1707209832548318068}}) [2024-02-19 19:03:35.213711] INFO [CLOG] operator() (ob_log_replay_service.cpp:1463) [1107640][T1_ReplayProces][T1][Y0-0000000000000000-0-0] [lt=40] get_replay_process success(id={id:1}, replayed_log_size=1905773194, unreplayed_log_size=0) [2024-02-19 19:03:35.213768] INFO [CLOG] runTimerTask (ob_log_replay_service.cpp:152) [1107640][T1_ReplayProces][T1][Y0-0000000000000000-0-0] [lt=15] dump tenant replay process(tenant_id=1, unreplayed_log_size(MB)=0, estimate_time(second)=0, replayed_log_size(MB)=1817, last_replayed_log_size(MB)=1817, round_cost_time(second)=10, pending_replay_log_size(MB)=0) [2024-02-19 19:03:35.214232] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D9-0-0] [lt=8] table not exist(tenant_id=1, database_id=201001, table_name=__all_sys_parameter, ret=-5019) [2024-02-19 19:03:35.214264] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D9-0-0] [lt=29] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_sys_parameter, ret=-5019) [2024-02-19 19:03:35.214277] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D9-0-0] [lt=13] fail to resolve table relation recursively(tenant_id=1, ret=-5019) [2024-02-19 19:03:35.214288] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D9-0-0] [lt=10] resolve table relation factor failed(ret=-5019, table_name=__all_sys_parameter) [2024-02-19 19:03:35.214300] WARN [SQL.RESV] 
inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D9-0-0] [lt=7] fail to resolve table(ret=-5019) [2024-02-19 19:03:35.214311] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D9-0-0] [lt=11] fail to resolve sys view(ret=-5019) [2024-02-19 19:03:35.214322] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D9-0-0] [lt=6] Table 'oceanbase.__all_sys_parameter' doesn't exist [2024-02-19 19:03:35.214336] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D9-0-0] [lt=13] resolve base or alias table factor failed(ret=-5019) [2024-02-19 19:03:35.214344] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D9-0-0] [lt=7] resolve basic table failed(ret=-5019) [2024-02-19 19:03:35.214350] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D9-0-0] [lt=6] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-02-19 19:03:35.214357] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D9-0-0] [lt=6] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-02-19 19:03:35.214371] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D9-0-0] [lt=7] resolve normal query failed(ret=-5019) [2024-02-19 19:03:35.214388] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D9-0-0] [lt=15] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073) [2024-02-19 19:03:35.214407] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D9-0-0] [lt=10] failed to resolve(ret=-5019) [2024-02-19 19:03:35.214426] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D9-0-0] [lt=15] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:35.214438] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D9-0-0] [lt=11] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:35.214454] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D9-0-0] [lt=13] fail to handle physical plan(ret=-5019) [2024-02-19 19:03:35.214465] WARN [SQL] stmt_query (ob_sql.cpp:175) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D9-0-0] [lt=8] fail to handle text query(stmt=select config_version, zone, svr_type, svr_ip, svr_port, name, data_type, value, info, section, scope, source, edit_level from __all_sys_parameter, ret=-5019) [2024-02-19 19:03:35.214479] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D9-0-0] [lt=13] executor execute failed(ret=-5019) [2024-02-19 19:03:35.214487] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D9-0-0] [lt=7] execute failed(ret=-5019, executor={ObIExecutor:, sql:"select config_version, zone, svr_type, svr_ip, svr_port, name, data_type, value, info, section, scope, source, edit_level from __all_sys_parameter"}, 
retry_cnt=0) [2024-02-19 19:03:35.214506] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D9-0-0] [lt=13] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-02-19 19:03:35.214519] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D9-0-0] [lt=11] result set close failed(ret=-5019) [2024-02-19 19:03:35.214528] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D9-0-0] [lt=8] result set close failed(ret=-5019) [2024-02-19 19:03:35.214534] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D9-0-0] [lt=5] failed to close result(close_ret=-5019, ret=-5019) [2024-02-19 19:03:35.214553] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1106658][ConfigMgr][T1][YB42AC0103F2-000611B922D790D9-0-0] [lt=6] failed to process record(executor={ObIExecutor:, sql:"select config_version, zone, svr_type, svr_ip, svr_port, name, data_type, value, info, section, scope, source, edit_level from __all_sys_parameter"}, record_ret=-5019, ret=-5019) [2024-02-19 19:03:35.214568] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1106658][ConfigMgr][T0][YB42AC0103F2-000611B922D790D9-0-0] [lt=12] failed to process final(executor={ObIExecutor:, sql:"select config_version, zone, svr_type, svr_ip, svr_port, name, data_type, value, info, section, scope, source, edit_level from __all_sys_parameter"}, aret=-5019, ret=-5019) [2024-02-19 19:03:35.214576] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1106658][ConfigMgr][T0][YB42AC0103F2-000611B922D790D9-0-0] [lt=6] execute sql failed(ret=-5019, tenant_id=1, sql=select config_version, zone, svr_type, svr_ip, svr_port, name, data_type, value, info, section, scope, source, edit_level from __all_sys_parameter) [2024-02-19 19:03:35.214588] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1106658][ConfigMgr][T0][YB42AC0103F2-000611B922D790D9-0-0] [lt=11] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-02-19 19:03:35.214595] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1106658][ConfigMgr][T0][YB42AC0103F2-000611B922D790D9-0-0] [lt=6] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1) [2024-02-19 19:03:35.214605] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1106658][ConfigMgr][T0][YB42AC0103F2-000611B922D790D9-0-0] [lt=9] query failed(ret=-5019, conn=0x7fdd189bc050, start=1708340615213957, sql=select config_version, zone, svr_type, svr_ip, svr_port, name, data_type, value, info, section, scope, source, edit_level from __all_sys_parameter) [2024-02-19 19:03:35.214616] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1106658][ConfigMgr][T0][YB42AC0103F2-000611B922D790D9-0-0] [lt=10] read failed(ret=-5019) [2024-02-19 19:03:35.214628] WARN [SHARE] update_local (ob_config_manager.cpp:322) [1106658][ConfigMgr][T0][YB42AC0103F2-000611B922D790D9-0-0] [lt=10] read config from __all_sys_parameter failed(sqlstr="select config_version, zone, svr_type, svr_ip, svr_port, name, data_type, value, info, section, scope, source, edit_level from __all_sys_parameter", ret=-5019) [2024-02-19 19:03:35.214685] WARN [SHARE] update_local (ob_config_manager.cpp:356) [1106658][ConfigMgr][T0][YB42AC0103F2-000611B922D790D9-0-0] [lt=6] 
Read system config from inner table error(ret=-5019) [2024-02-19 19:03:35.214700] WARN [SHARE] runTimerTask (ob_config_manager.cpp:455) [1106658][ConfigMgr][T0][YB42AC0103F2-000611B922D790D9-0-0] [lt=14] Update local config failed(ret=-5019) [2024-02-19 19:03:35.222751] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=29] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.222803] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=54] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.232240] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1106758][DiskUseReport][T1][YB42AC0103F2-000611B92347831E-0-0] [lt=13] table not exist(tenant_id=1, database_id=201001, table_name=__all_space_usage, ret=-5019) [2024-02-19 19:03:35.232264] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1106758][DiskUseReport][T1][YB42AC0103F2-000611B92347831E-0-0] [lt=38] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_space_usage, ret=-5019) [2024-02-19 19:03:35.232274] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1106758][DiskUseReport][T1][YB42AC0103F2-000611B92347831E-0-0] [lt=9] fail to resolve table relation recursively(tenant_id=1, ret=-5019) [2024-02-19 19:03:35.232282] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1106758][DiskUseReport][T1][YB42AC0103F2-000611B92347831E-0-0] [lt=7] resolve table relation factor failed(ret=-5019, table_name=__all_space_usage) [2024-02-19 19:03:35.232291] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1106758][DiskUseReport][T1][YB42AC0103F2-000611B92347831E-0-0] [lt=6] fail to resolve table(ret=-5019) [2024-02-19 19:03:35.232298] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1106758][DiskUseReport][T1][YB42AC0103F2-000611B92347831E-0-0] [lt=6] fail to resolve sys view(ret=-5019) [2024-02-19 19:03:35.232309] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1106758][DiskUseReport][T1][YB42AC0103F2-000611B92347831E-0-0] [lt=6] Table 'oceanbase.__all_space_usage' doesn't exist [2024-02-19 19:03:35.232323] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1106758][DiskUseReport][T1][YB42AC0103F2-000611B92347831E-0-0] [lt=13] resolve base or alias table factor failed(ret=-5019) [2024-02-19 19:03:35.232332] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1106758][DiskUseReport][T1][YB42AC0103F2-000611B92347831E-0-0] [lt=9] resolve basic table failed(ret=-5019) 
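The try_recycle_blocks ERROR that repeats throughout this stretch reports a single stuck state: tenant 1's clog disk has reached its hard limit, and the PalfGC thread finds no block it can recycle, apparently because the base LSN is pinned by the checkpoint that the "clog checkpoint no change" entry above shows is not advancing. The warn and limit sizes in those entries follow directly from the disk options printed alongside them; a minimal sketch of that arithmetic, using only numbers copied from the log:

    # Minimal sketch; every value below is copied from the ERROR entries above.
    total_mb = 2048                     # log_disk_size(MB)
    warn_pct = 80                       # log_disk_utilization_threshold(%)
    limit_pct = 95                      # log_disk_utilization_limit_threshold(%)

    warn_mb = total_mb * warn_pct // 100    # 1638, matches warn_size(MB)=1638
    limit_mb = total_mb * limit_pct // 100  # 1945, matches limit_size(MB)=1945

    used_mb = 1945                      # used_size(MB) from the same entries
    assert (warn_mb, limit_mb) == (1638, 1945)
    print(used_mb >= limit_mb)          # True: the clog disk sits at its hard limit

used_size equals limit_size, which is why the message escalates to ERROR roughly every 10 ms and why the same limits appear under disk_opts_for_stopping_writing in the matching WARN lines: until the checkpoint advances and blocks can be recycled, the log disk has no headroom.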
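With one WARN/ERROR pair dominating the excerpt, it can be easier to collapse the log into per-signature counts before reading further. A short, hypothetical helper, not an OceanBase tool: the observer.log file name is an assumption, and the regex is derived only from the "[timestamp] LEVEL [MODULE] function (file:line)" prefix visible in the entries here.

    import collections
    import re

    # Hypothetical helper (assumed file name, format inferred from this log):
    # collapse an observer.log excerpt into per-signature counts so the
    # dominant failure stands out. The [MODULE] tag is optional because some
    # WARN lines (e.g. resolve_basic_table, iterate) omit it.
    ENTRY = re.compile(
        r'\[(\d{4}-\d{2}-\d{2} [\d:.]+)\] '   # timestamp
        r'(WARN|INFO|ERROR) '                 # level
        r'(?:\[[\w.]+\] )?'                   # optional [MODULE] tag
        r'(\w+) \(([\w.]+):\d+\)'             # function (file:line)
    )

    counts = collections.Counter()
    with open('observer.log') as f:           # assumed path
        for line in f:
            for _ts, level, func, src in ENTRY.findall(line):
                counts[(level, func, src)] += 1

    for (level, func, src), n in counts.most_common(5):
        print(f'{n:6d}  {level:5s} {func} ({src})')

Run over this excerpt, it should surface recycle_blocks_ and try_recycle_blocks from palf_env_impl.cpp as the dominant pair, with the ret=-5019 resolver chain from ob_dml_resolver.cpp and ob_select_resolver.cpp close behind.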
[2024-02-19 19:03:35.232341] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1106758][DiskUseReport][T1][YB42AC0103F2-000611B92347831E-0-0] [lt=8] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-02-19 19:03:35.232350] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1106758][DiskUseReport][T1][YB42AC0103F2-000611B92347831E-0-0] [lt=8] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-02-19 19:03:35.232362] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1106758][DiskUseReport][T1][YB42AC0103F2-000611B92347831E-0-0] [lt=9] resolve normal query failed(ret=-5019) [2024-02-19 19:03:35.232372] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1106758][DiskUseReport][T1][YB42AC0103F2-000611B92347831E-0-0] [lt=9] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073) [2024-02-19 19:03:35.232391] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1106758][DiskUseReport][T1][YB42AC0103F2-000611B92347831E-0-0] [lt=10] failed to resolve(ret=-5019) [2024-02-19 19:03:35.232399] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1106758][DiskUseReport][T1][YB42AC0103F2-000611B92347831E-0-0] [lt=9] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:35.232408] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1106758][DiskUseReport][T1][YB42AC0103F2-000611B92347831E-0-0] [lt=7] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:35.232415] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1106758][DiskUseReport][T1][YB42AC0103F2-000611B92347831E-0-0] [lt=5] fail to handle physical plan(ret=-5019) [2024-02-19 19:03:35.232423] WARN [SQL] stmt_query (ob_sql.cpp:175) [1106758][DiskUseReport][T1][YB42AC0103F2-000611B92347831E-0-0] [lt=6] fail to handle text query(stmt=SELECT DISTINCT tenant_id FROM __all_space_usage WHERE svr_ip = '172.1.3.242' and svr_port = 2882 and start_seq = 0 ORDER BY tenant_id, ret=-5019) [2024-02-19 19:03:35.232431] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1106758][DiskUseReport][T1][YB42AC0103F2-000611B92347831E-0-0] [lt=6] executor execute failed(ret=-5019) [2024-02-19 19:03:35.232438] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1106758][DiskUseReport][T1][YB42AC0103F2-000611B92347831E-0-0] [lt=7] execute failed(ret=-5019, executor={ObIExecutor:, sql:"SELECT DISTINCT tenant_id FROM __all_space_usage WHERE svr_ip = '172.1.3.242' and svr_port = 2882 and start_seq = 0 ORDER BY tenant_id"}, retry_cnt=0) [2024-02-19 19:03:35.232453] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1106758][DiskUseReport][T1][YB42AC0103F2-000611B92347831E-0-0] [lt=10] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-02-19 19:03:35.232466] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1106758][DiskUseReport][T1][YB42AC0103F2-000611B92347831E-0-0] [lt=11] result set close failed(ret=-5019) [2024-02-19 19:03:35.232474] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1106758][DiskUseReport][T1][YB42AC0103F2-000611B92347831E-0-0] [lt=6] result set close failed(ret=-5019) [2024-02-19 19:03:35.232480] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1106758][DiskUseReport][T1][YB42AC0103F2-000611B92347831E-0-0] [lt=5] failed to close result(close_ret=-5019, ret=-5019) [2024-02-19 
19:03:35.232509] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1106758][DiskUseReport][T1][YB42AC0103F2-000611B92347831E-0-0] [lt=6] failed to process record(executor={ObIExecutor:, sql:"SELECT DISTINCT tenant_id FROM __all_space_usage WHERE svr_ip = '172.1.3.242' and svr_port = 2882 and start_seq = 0 ORDER BY tenant_id"}, record_ret=-5019, ret=-5019) [2024-02-19 19:03:35.232524] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1106758][DiskUseReport][T0][YB42AC0103F2-000611B92347831E-0-0] [lt=13] failed to process final(executor={ObIExecutor:, sql:"SELECT DISTINCT tenant_id FROM __all_space_usage WHERE svr_ip = '172.1.3.242' and svr_port = 2882 and start_seq = 0 ORDER BY tenant_id"}, aret=-5019, ret=-5019) [2024-02-19 19:03:35.232537] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1106758][DiskUseReport][T0][Y0-0000000000000000-0-0] [lt=11] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT DISTINCT tenant_id FROM __all_space_usage WHERE svr_ip = '172.1.3.242' and svr_port = 2882 and start_seq = 0 ORDER BY tenant_id) [2024-02-19 19:03:35.232548] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1106758][DiskUseReport][T0][Y0-0000000000000000-0-0] [lt=10] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-02-19 19:03:35.232558] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1106758][DiskUseReport][T0][Y0-0000000000000000-0-0] [lt=9] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1) [2024-02-19 19:03:35.232569] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1106758][DiskUseReport][T0][Y0-0000000000000000-0-0] [lt=9] query failed(ret=-5019, conn=0x7fdcf4ef4050, start=1708340615232022, sql=SELECT DISTINCT tenant_id FROM __all_space_usage WHERE svr_ip = '172.1.3.242' and svr_port = 2882 and start_seq = 0 ORDER BY tenant_id) [2024-02-19 19:03:35.232579] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1106758][DiskUseReport][T0][Y0-0000000000000000-0-0] [lt=9] read failed(ret=-5019) [2024-02-19 19:03:35.232586] WARN [SHARE] get_all_tenant_ids (ob_disk_usage_table_operator.cpp:270) [1106758][DiskUseReport][T0][Y0-0000000000000000-0-0] [lt=6] fail to read result(ret=-5019, sql=SELECT DISTINCT tenant_id FROM __all_space_usage WHERE svr_ip = '172.1.3.242' and svr_port = 2882 and start_seq = 0 ORDER BY tenant_id) [2024-02-19 19:03:35.232595] WARN [SHARE] get_all_tenant_ids (ob_disk_usage_table_operator.cpp:287) [1106758][DiskUseReport][T0][Y0-0000000000000000-0-0] [lt=8] fail to get all tenant ids from __all_space_usage(ret=-5019) [2024-02-19 19:03:35.232664] WARN [STORAGE] execute_gc_disk_usage (ob_disk_usage_reporter.cpp:474) [1106758][DiskUseReport][T0][Y0-0000000000000000-0-0] [lt=5] failed to get all tenant ids(ret=-5019) [2024-02-19 19:03:35.232675] WARN [STORAGE] runTimerTask (ob_disk_usage_reporter.cpp:105) [1106758][DiskUseReport][T0][Y0-0000000000000000-0-0] [lt=12] fail to gc tenant stat(ret=-5019) [2024-02-19 19:03:35.232752] INFO [PALF] get_disk_usage (palf_env_impl.cpp:820) [1106758][DiskUseReport][T1][Y0-0000000000000000-0-0] [lt=8] get_disk_usage(ret=0, capacity(MB):=2048, used(MB):=1945) [2024-02-19 19:03:35.239039] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=28] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", 
disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.239079] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=41] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.241142] WARN [SHARE] update_tenant_space_usage (ob_disk_usage_table_operator.cpp:70) [1106758][DiskUseReport][T0][Y0-0000000000000000-0-0] [lt=10] invalid argument(ret=-4002, tenant_id=1, svr_ip="172.1.3.242", svr_port=2882, file_type=0, data_size=6113198080, used_size=6113198080, seq_num=0) [2024-02-19 19:03:35.241172] WARN [STORAGE] report_tenant_disk_usage (ob_disk_usage_reporter.cpp:193) [1106758][DiskUseReport][T0][Y0-0000000000000000-0-0] [lt=30] failed to update disk usage of log and meta(ret=-4002, pair.first={file_type:0, tenant_id:1}) [2024-02-19 19:03:35.241188] WARN [STORAGE] runTimerTask (ob_disk_usage_reporter.cpp:108) [1106758][DiskUseReport][T0][Y0-0000000000000000-0-0] [lt=13] Failed to report tenant disk usage(ret=-4002) [2024-02-19 19:03:35.243556] INFO [LIB] runTimerTask (ob_work_queue.cpp:24) [1106715][ObTimer][T0][Y0-0000000000000000-0-0] [lt=50] add async task(this=tasktype:N9oceanbase10rootserver13ObRootService19ObRefreshServerTaskE) [2024-02-19 19:03:35.244480] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2E-0-0] [lt=15] table not exist(tenant_id=1, database_id=201001, table_name=__all_server, ret=-5019) [2024-02-19 19:03:35.244504] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2E-0-0] [lt=23] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_server, ret=-5019) [2024-02-19 19:03:35.244514] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2E-0-0] [lt=8] fail to resolve table relation recursively(tenant_id=1, ret=-5019) [2024-02-19 19:03:35.244522] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2E-0-0] [lt=7] resolve table relation factor failed(ret=-5019, table_name=__all_server) [2024-02-19 19:03:35.244530] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2E-0-0] [lt=6] fail to resolve table(ret=-5019) [2024-02-19 19:03:35.244537] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2E-0-0] [lt=6] fail to resolve sys view(ret=-5019) [2024-02-19 19:03:35.244547] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2E-0-0] [lt=5] Table 'oceanbase.__all_server' doesn't exist [2024-02-19 19:03:35.244553] WARN 
[SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2E-0-0] [lt=6] resolve base or alias table factor failed(ret=-5019) [2024-02-19 19:03:35.244560] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2E-0-0] [lt=6] resolve basic table failed(ret=-5019) [2024-02-19 19:03:35.244574] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2E-0-0] [lt=13] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-02-19 19:03:35.244583] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2E-0-0] [lt=8] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-02-19 19:03:35.244592] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2E-0-0] [lt=8] resolve normal query failed(ret=-5019) [2024-02-19 19:03:35.244601] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2E-0-0] [lt=7] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073) [2024-02-19 19:03:35.244620] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2E-0-0] [lt=13] failed to resolve(ret=-5019) [2024-02-19 19:03:35.244630] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2E-0-0] [lt=9] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:35.244639] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2E-0-0] [lt=7] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:35.244648] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2E-0-0] [lt=8] fail to handle physical plan(ret=-5019) [2024-02-19 19:03:35.244657] WARN [SQL] stmt_query (ob_sql.cpp:175) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2E-0-0] [lt=7] fail to handle text query(stmt=SELECT time_to_usec(gmt_modified) AS last_hb_time, id, zone, svr_ip, svr_port, inner_port, status, with_rootserver, block_migrate_in_time, build_version, stop_time, start_service_time, with_partition FROM __all_server, ret=-5019) [2024-02-19 19:03:35.244669] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2E-0-0] [lt=11] executor execute failed(ret=-5019) [2024-02-19 19:03:35.244677] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2E-0-0] [lt=6] execute failed(ret=-5019, executor={ObIExecutor:, sql:"SELECT time_to_usec(gmt_modified) AS last_hb_time, id, zone, svr_ip, svr_port, inner_port, status, with_rootserver, block_migrate_in_time, build_version, stop_time, start_service_time, with_partition FROM __all_server"}, retry_cnt=0) [2024-02-19 19:03:35.244698] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2E-0-0] [lt=15] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-02-19 19:03:35.244722] WARN [SERVER] inner_close 
(ob_inner_sql_result.cpp:211) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2E-0-0] [lt=21] result set close failed(ret=-5019) [2024-02-19 19:03:35.244734] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2E-0-0] [lt=8] result set close failed(ret=-5019) [2024-02-19 19:03:35.244748] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2E-0-0] [lt=15] failed to close result(close_ret=-5019, ret=-5019) [2024-02-19 19:03:35.244773] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1106717][RSAsyncTask0][T1][YB42AC0103F2-000611B922978A2E-0-0] [lt=8] failed to process record(executor={ObIExecutor:, sql:"SELECT time_to_usec(gmt_modified) AS last_hb_time, id, zone, svr_ip, svr_port, inner_port, status, with_rootserver, block_migrate_in_time, build_version, stop_time, start_service_time, with_partition FROM __all_server"}, record_ret=-5019, ret=-5019) [2024-02-19 19:03:35.244793] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1106717][RSAsyncTask0][T0][YB42AC0103F2-000611B922978A2E-0-0] [lt=17] failed to process final(executor={ObIExecutor:, sql:"SELECT time_to_usec(gmt_modified) AS last_hb_time, id, zone, svr_ip, svr_port, inner_port, status, with_rootserver, block_migrate_in_time, build_version, stop_time, start_service_time, with_partition FROM __all_server"}, aret=-5019, ret=-5019) [2024-02-19 19:03:35.244804] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1106717][RSAsyncTask0][T0][Y0-0000000000000000-0-0] [lt=9] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT time_to_usec(gmt_modified) AS last_hb_time, id, zone, svr_ip, svr_port, inner_port, status, with_rootserver, block_migrate_in_time, build_version, stop_time, start_service_time, with_partition FROM __all_server) [2024-02-19 19:03:35.244821] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1106717][RSAsyncTask0][T0][Y0-0000000000000000-0-0] [lt=14] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-02-19 19:03:35.244830] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1106717][RSAsyncTask0][T0][Y0-0000000000000000-0-0] [lt=9] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1) [2024-02-19 19:03:35.244847] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1106717][RSAsyncTask0][T0][Y0-0000000000000000-0-0] [lt=15] query failed(ret=-5019, conn=0x7fdcdc89a050, start=1708340615244277, sql=SELECT time_to_usec(gmt_modified) AS last_hb_time, id, zone, svr_ip, svr_port, inner_port, status, with_rootserver, block_migrate_in_time, build_version, stop_time, start_service_time, with_partition FROM __all_server) [2024-02-19 19:03:35.244865] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1106717][RSAsyncTask0][T0][Y0-0000000000000000-0-0] [lt=17] read failed(ret=-5019) [2024-02-19 19:03:35.245043] WARN [SHARE] run2 (ob_async_task_queue.cpp:148) [1106717][RSAsyncTask0][T0][Y0-0000000000000000-0-0] [lt=9] task process failed, start retry(max retry time=0, retry interval=1000000, ret=-5019) [2024-02-19 19:03:35.249209] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, 
log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.249239] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=31] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.259360] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=25] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.259395] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=38] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.261066] INFO [SHARE] run_loop_ (ob_bg_thread_monitor.cpp:331) [1109111][BGThreadMonitor][T0][Y0-0000000000000000-0-0] [lt=51] current monitor number(seq_=-1) [2024-02-19 19:03:35.269538] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=27] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.269573] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=25] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:35.269610] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=34] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340615269565}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:35.269589] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) 
[1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=51] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.269626] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=15] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340615269565}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:35.279735] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=46] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.279778] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=47] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.283076] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=14] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340615283051}) [2024-02-19 19:03:35.283117] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=42] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340615269565}}) [2024-02-19 19:03:35.286042] WARN [STORAGE.TRANS] post_cluster_heartbeat_rpc_ (ob_tenant_weak_read_service.cpp:797) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15] get cluster service master fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, cluster_service_tablet_id={id:226}) [2024-02-19 19:03:35.286076] WARN [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:438) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=33] tenant weak read service cluster heartbeat RPC fail(rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.1.3.242:2882", cluster_service_tablet_id={id:226}) [2024-02-19 19:03:35.286114] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=28] add update task 
in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340615286104}) [2024-02-19 19:03:35.286135] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:756) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version=1708336686671726824, valid_part_count=1, total_part_count=1, generate_timestamp=1708340615286022) [2024-02-19 19:03:35.286155] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:766) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=18] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1708340615185854, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-02-19 19:03:35.286182] INFO [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:738) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] [SELF_CHECK] current server is WRS leader, need start CLUSTER weak read service(tenant_id=1, serve_leader_epoch=0, cur_leader_epoch=138, cluster_service_tablet_id_={id:226}, in_service=false, can_update_version=false, start_service_tstamp_=0, error_count_for_change_leader_=0, last_error_tstamp_for_change_leader_=0) [2024-02-19 19:03:35.286222] INFO [STORAGE.TRANS] start_service (ob_tenant_weak_read_cluster_service.cpp:336) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=22] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] begin start service(tenant_id=1, is_in_service()=false, can_update_version=false) [2024-02-19 19:03:35.286239] INFO [STORAGE.TRANS] start_service (ob_tenant_weak_read_cluster_service.cpp:338) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15] start TenantWeakReadClusterService(tenant_id=1) [2024-02-19 19:03:35.287251] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F1-0-0] [lt=16] table not exist(tenant_id=1, database_id=201001, table_name=__all_weak_read_service, ret=-5019) [2024-02-19 19:03:35.287276] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F1-0-0] [lt=23] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_weak_read_service, ret=-5019) [2024-02-19 19:03:35.287286] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F1-0-0] [lt=8] fail to resolve table relation recursively(tenant_id=1, ret=-5019) [2024-02-19 19:03:35.287293] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F1-0-0] [lt=7] resolve table relation factor failed(ret=-5019, table_name=__all_weak_read_service) [2024-02-19 19:03:35.287304] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F1-0-0] [lt=7] fail to resolve table(ret=-5019) [2024-02-19 19:03:35.287321] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F1-0-0] [lt=15] fail to resolve sys view(ret=-5019) [2024-02-19 19:03:35.287336] WARN resolve_basic_table 
(ob_dml_resolver.cpp:1527) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F1-0-0] [lt=8] Table 'oceanbase.__all_weak_read_service' doesn't exist [2024-02-19 19:03:35.287346] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F1-0-0] [lt=8] resolve base or alias table factor failed(ret=-5019) [2024-02-19 19:03:35.287372] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F1-0-0] [lt=24] resolve basic table failed(ret=-5019) [2024-02-19 19:03:35.287383] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F1-0-0] [lt=12] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-02-19 19:03:35.287390] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F1-0-0] [lt=6] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-02-19 19:03:35.287398] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F1-0-0] [lt=6] resolve normal query failed(ret=-5019) [2024-02-19 19:03:35.287416] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F1-0-0] [lt=9] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073) [2024-02-19 19:03:35.287437] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F1-0-0] [lt=13] failed to resolve(ret=-5019) [2024-02-19 19:03:35.287446] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F1-0-0] [lt=9] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:35.287462] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F1-0-0] [lt=13] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:35.287469] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F1-0-0] [lt=6] fail to handle physical plan(ret=-5019) [2024-02-19 19:03:35.287481] WARN [SQL] stmt_query (ob_sql.cpp:175) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F1-0-0] [lt=10] fail to handle text query(stmt=select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = '', ret=-5019) [2024-02-19 19:03:35.287491] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F1-0-0] [lt=8] executor execute failed(ret=-5019) [2024-02-19 19:03:35.287509] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F1-0-0] [lt=16] execute failed(ret=-5019, executor={ObIExecutor:, sql:"select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = ''"}, retry_cnt=0) [2024-02-19 19:03:35.287529] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F1-0-0] [lt=14] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-02-19 19:03:35.287549] WARN 
[SERVER] inner_close (ob_inner_sql_result.cpp:211) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F1-0-0] [lt=15] result set close failed(ret=-5019) [2024-02-19 19:03:35.287558] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F1-0-0] [lt=9] result set close failed(ret=-5019) [2024-02-19 19:03:35.287574] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F1-0-0] [lt=14] failed to close result(close_ret=-5019, ret=-5019) [2024-02-19 19:03:35.287599] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F1-0-0] [lt=9] failed to process record(executor={ObIExecutor:, sql:"select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = ''"}, record_ret=-5019, ret=-5019) [2024-02-19 19:03:35.287618] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F1-0-0] [lt=17] failed to process final(executor={ObIExecutor:, sql:"select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = ''"}, aret=-5019, ret=-5019) [2024-02-19 19:03:35.287637] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16] execute sql failed(ret=-5019, tenant_id=1, sql=select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = '') [2024-02-19 19:03:35.287647] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-02-19 19:03:35.287658] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1) [2024-02-19 19:03:35.287665] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6] query failed(ret=-5019, conn=0x7fdcdc9be050, start=1708340615287044, sql=select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = '') [2024-02-19 19:03:35.287676] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10] read failed(ret=-5019) [2024-02-19 19:03:35.287693] WARN [STORAGE.TRANS] query_cluster_version_range_ (ob_tenant_weak_read_cluster_service.cpp:192) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13] execute sql read fail(ret=-5019, ret="OB_TABLE_NOT_EXIST", exec_tenant_id=1, tenant_id=1, sql=select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = '') [2024-02-19 19:03:35.287761] WARN [STORAGE.TRANS] start_service (ob_tenant_weak_read_cluster_service.cpp:367) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13] query cluster version range from WRS table fail(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-02-19 19:03:35.287777] INFO [STORAGE.TRANS] start_service (ob_tenant_weak_read_cluster_service.cpp:415) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] start service done(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, in_service=false, leader_epoch=0, 
current_version=0, delta=1708340615287774, min_version=0, max_version=0, max_stale_time=5000000000, all_valid_server_count=0, total_time=1571, wlock_time=53, check_leader_time=2, query_version_time=0, persist_version_time=0) [2024-02-19 19:03:35.287794] WARN [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:781) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16] start CLUSTER weak read service fail(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1) [2024-02-19 19:03:35.287802] INFO [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:791) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] [SELF_CHECK] done(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, need_start_service=true, need_stop_service=false, need_change_leader=false, is_in_service()=false, can_update_version=false, cur_leader_epoch=138, start_service_tstamp_=0, error_count_for_change_leader_=0, last_error_tstamp_for_change_leader_=0) [2024-02-19 19:03:35.287859] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13] get wrs ts(ls_id={id:1}, delta_ns=-1706042771800008209, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807) [2024-02-19 19:03:35.287877] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1) [2024-02-19 19:03:35.289907] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=26] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.289952] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=47] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.290716] INFO [STORAGE] operator() (ob_tenant_freezer.cpp:124) [1107575][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=10] ====== tenant freeze timer task ====== [2024-02-19 19:03:35.291783] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D9-0-0] [lt=21] table not exist(tenant_id=1, database_id=201001, table_name=__all_freeze_info, ret=-5019) [2024-02-19 19:03:35.291805] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) 
[1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D9-0-0] [lt=20] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_freeze_info, ret=-5019) [2024-02-19 19:03:35.291817] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D9-0-0] [lt=10] fail to resolve table relation recursively(tenant_id=1, ret=-5019) [2024-02-19 19:03:35.291827] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D9-0-0] [lt=9] resolve table relation factor failed(ret=-5019, table_name=__all_freeze_info) [2024-02-19 19:03:35.291841] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D9-0-0] [lt=9] fail to resolve table(ret=-5019) [2024-02-19 19:03:35.291851] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D9-0-0] [lt=10] fail to resolve sys view(ret=-5019) [2024-02-19 19:03:35.291866] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D9-0-0] [lt=8] Table 'oceanbase.__all_freeze_info' doesn't exist [2024-02-19 19:03:35.291876] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D9-0-0] [lt=9] resolve base or alias table factor failed(ret=-5019) [2024-02-19 19:03:35.291886] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D9-0-0] [lt=9] resolve basic table failed(ret=-5019) [2024-02-19 19:03:35.291905] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D9-0-0] [lt=18] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-02-19 19:03:35.291914] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D9-0-0] [lt=8] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-02-19 19:03:35.291921] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D9-0-0] [lt=6] resolve normal query failed(ret=-5019) [2024-02-19 19:03:35.291928] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D9-0-0] [lt=6] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073) [2024-02-19 19:03:35.291942] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D9-0-0] [lt=7] failed to resolve(ret=-5019) [2024-02-19 19:03:35.291950] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D9-0-0] [lt=7] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:35.291959] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D9-0-0] [lt=6] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:35.291965] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D9-0-0] [lt=6] fail to handle physical plan(ret=-5019) [2024-02-19 19:03:35.291973] WARN [SQL] stmt_query (ob_sql.cpp:175) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D9-0-0] [lt=5] fail to handle text query(stmt=SELECT * FROM __all_freeze_info ORDER 
BY frozen_scn DESC LIMIT 1, ret=-5019) [2024-02-19 19:03:35.291982] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D9-0-0] [lt=7] executor execute failed(ret=-5019) [2024-02-19 19:03:35.291989] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D9-0-0] [lt=7] execute failed(ret=-5019, executor={ObIExecutor:, sql:"SELECT * FROM __all_freeze_info ORDER BY frozen_scn DESC LIMIT 1"}, retry_cnt=0) [2024-02-19 19:03:35.292004] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D9-0-0] [lt=9] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-02-19 19:03:35.292018] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D9-0-0] [lt=11] result set close failed(ret=-5019) [2024-02-19 19:03:35.292028] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D9-0-0] [lt=9] result set close failed(ret=-5019) [2024-02-19 19:03:35.292034] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D9-0-0] [lt=6] failed to close result(close_ret=-5019, ret=-5019) [2024-02-19 19:03:35.292053] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D9-0-0] [lt=6] failed to process record(executor={ObIExecutor:, sql:"SELECT * FROM __all_freeze_info ORDER BY frozen_scn DESC LIMIT 1"}, record_ret=-5019, ret=-5019) [2024-02-19 19:03:35.292064] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1107575][T1_Occam][T1][YB42AC0103F2-000611B9223790D9-0-0] [lt=9] failed to process final(executor={ObIExecutor:, sql:"SELECT * FROM __all_freeze_info ORDER BY frozen_scn DESC LIMIT 1"}, aret=-5019, ret=-5019) [2024-02-19 19:03:35.292076] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1107575][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=10] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT * FROM __all_freeze_info ORDER BY frozen_scn DESC LIMIT 1) [2024-02-19 19:03:35.292087] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1107575][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=10] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-02-19 19:03:35.292097] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1107575][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=9] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1) [2024-02-19 19:03:35.292107] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1107575][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=9] query failed(ret=-5019, conn=0x7fdcdc924050, start=1708340615291592, sql=SELECT * FROM __all_freeze_info ORDER BY frozen_scn DESC LIMIT 1) [2024-02-19 19:03:35.292119] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1107575][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=12] read failed(ret=-5019) [2024-02-19 19:03:35.292127] WARN [SHARE] get_freeze_info (ob_freeze_info_proxy.cpp:68) [1107575][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=6] fail to execute sql(ret=-5019, ret="OB_TABLE_NOT_EXIST", sql=SELECT * FROM __all_freeze_info ORDER BY frozen_scn DESC LIMIT 1, tenant_id=1) [2024-02-19 19:03:35.292212] WARN [STORAGE] get_global_frozen_scn_ (ob_tenant_freezer.cpp:1086) 
[1107575][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=13] get_frozen_scn failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1) [2024-02-19 19:03:35.292223] WARN [STORAGE] do_major_if_need_ (ob_tenant_freezer.cpp:1188) [1107575][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=10] fail to get global frozen version(ret=-5019) [2024-02-19 19:03:35.292230] WARN [STORAGE] check_and_freeze_normal_data_ (ob_tenant_freezer.cpp:379) [1107575][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=6] [TenantFreezer] fail to do major freeze(tmp_ret=-5019) [2024-02-19 19:03:35.292254] INFO [STORAGE] check_and_freeze_tx_data_ (ob_tenant_freezer.cpp:419) [1107575][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=6] [TenantFreezer] Trigger Tx Data Table Self Freeze. (tenant_info_.tenant_id_=1, tenant_tx_data_mem_used=430988896, self_freeze_max_limit_=214748364, hold_memory=1718894592, self_freeze_tenant_hold_limit_=429496729, self_freeze_min_limit_=21474836) [2024-02-19 19:03:35.292555] INFO [STORAGE] do_tx_data_table_freeze_ (ob_tenant_freezer_rpc.cpp:73) [1108354][T1_TNT_L0_G0][T1][YB42AC0103F2-000611B9223790DA-0-0] [lt=9] start tx data table self freeze task in rpc handle thread(arg_=freeze_type:3) [2024-02-19 19:03:35.292603] INFO [STORAGE] self_freeze_task (ob_tx_data_table.cpp:794) [1108354][T1_TNT_L0_G0][T1][YB42AC0103F2-000611B9223790DA-0-0] [lt=30] start tx data table self freeze task(get_ls_id()={id:1}) [2024-02-19 19:03:35.292624] INFO [STORAGE] freeze (ob_tx_data_memtable_mgr.cpp:192) [1108354][T1_TNT_L0_G0][T1][YB42AC0103F2-000611B9223790DA-0-0] [lt=17] start freeze tx data memtable(ls_id_={id:1}) [2024-02-19 19:03:35.292643] INFO [STORAGE] freeze_ (ob_tx_data_memtable_mgr.cpp:228) [1108354][T1_TNT_L0_G0][T1][YB42AC0103F2-000611B9223790DA-0-0] [lt=15] There is a freezed memetable existed. 
Try freeze after flushing it.(ret=-4023, ret="OB_EAGAIN", get_memtable_count_()=2) [2024-02-19 19:03:35.292662] WARN [STORAGE] freeze (ob_tx_data_memtable_mgr.cpp:206) [1108354][T1_TNT_L0_G0][T1][YB42AC0103F2-000611B9223790DA-0-0] [lt=18] freeze tx data memtable fail.(ret=-4023, ret="OB_EAGAIN") [2024-02-19 19:03:35.292678] WARN [STORAGE] flush (ob_tx_data_memtable_mgr.cpp:430) [1108354][T1_TNT_L0_G0][T1][YB42AC0103F2-000611B9223790DA-0-0] [lt=16] freeze failed(ret=-4023, ret="OB_EAGAIN", this=0x7fdce89de180) [2024-02-19 19:03:35.292695] WARN [STORAGE] self_freeze_task (ob_tx_data_table.cpp:798) [1108354][T1_TNT_L0_G0][T1][YB42AC0103F2-000611B9223790DA-0-0] [lt=16] self freeze of tx data memtable failed.(ret=-4023, ret="OB_EAGAIN", ls_id={id:1}, memtable_mgr_={ObIMemtableMgr:{Memtables:this:0x7fdce89de180, ref_cnt:1, is_inited:true, tablet_id:{id:49402}, freezer:0x7fdce89e30d0, table_type:1, memtable_head:0, memtable_tail:2, t3m:0x7fdd18bce030, tables:[0x7fdce5eea080, 0x7fdce5eea360, null, null, null, null, null, null, null, null, null, null, null, null, null, null]}, is_freezing:false, ls_id:{id:1}, tx_data_table:0x7fdce89e4550, ls_tablet_svr:0x7fdce89de160, slice_allocator:0x7fdce89e4590}) [2024-02-19 19:03:35.292736] INFO [STORAGE] self_freeze_task (ob_tx_data_table.cpp:801) [1108354][T1_TNT_L0_G0][T1][YB42AC0103F2-000611B9223790DA-0-0] [lt=39] finish tx data table self freeze task(ret=-4023, ret="OB_EAGAIN", get_ls_id()={id:1}) [2024-02-19 19:03:35.292752] WARN [STORAGE] do_tx_data_table_freeze_ (ob_tenant_freezer_rpc.cpp:102) [1108354][T1_TNT_L0_G0][T1][YB42AC0103F2-000611B9223790DA-0-0] [lt=15] freeze tx data table failed.(ret=-4023, ret="OB_EAGAIN", arg_=freeze_type:3) [2024-02-19 19:03:35.292769] INFO [STORAGE] do_tx_data_table_freeze_ (ob_tenant_freezer_rpc.cpp:115) [1108354][T1_TNT_L0_G0][T1][YB42AC0103F2-000611B9223790DA-0-0] [lt=16] finish self freeze task in rpc handle thread(ret=-4023, ret="OB_EAGAIN", arg_=freeze_type:3) [2024-02-19 19:03:35.292784] WARN [STORAGE] process (ob_tenant_freezer_rpc.cpp:56) [1108354][T1_TNT_L0_G0][T1][YB42AC0103F2-000611B9223790DA-0-0] [lt=10] do tx data table freeze failed.(ret=-4023, ret="OB_EAGAIN", arg_=freeze_type:3) [2024-02-19 19:03:35.292952] INFO [STORAGE] rpc_callback (ob_tenant_freezer.cpp:990) [1106789][RpcIO][T0][Y0-0000000000000000-0-0] [lt=24] [TenantFreezer] call back of tenant freezer request [2024-02-19 19:03:35.300138] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=65] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.300172] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=36] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 
0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.301338] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1009) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=30] Wash compute wash size(is_wash_valid=true, sys_total_wash_size=1386106880, global_cache_size=4161536, tenant_max_wash_size=4161536, tenant_min_wash_size=4161536, tenant_ids_=[512, 500, 999, 506, 507, 508, 509, 510, 1]) [2024-02-19 19:03:35.301434] INFO [COMMON] wash (ob_kvcache_store.cpp:343) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=44] Wash time detail, (compute_wash_size_time=155, refresh_score_time=47, wash_time=5) [2024-02-19 19:03:35.310305] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=29] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.310335] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=31] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.320453] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=27] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.320486] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.322136] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02C-0-0] [lt=8] table not exist(tenant_id=1, database_id=201001, table_name=__all_server, ret=-5019) [2024-02-19 19:03:35.322166] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) 
[1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02C-0-0] [lt=29] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_server, ret=-5019) [2024-02-19 19:03:35.322230] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02C-0-0] [lt=60] fail to resolve table relation recursively(tenant_id=1, ret=-5019) [2024-02-19 19:03:35.322240] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02C-0-0] [lt=10] resolve table relation factor failed(ret=-5019, table_name=__all_server) [2024-02-19 19:03:35.322254] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02C-0-0] [lt=9] fail to resolve table(ret=-5019) [2024-02-19 19:03:35.322268] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02C-0-0] [lt=14] fail to resolve sys view(ret=-5019) [2024-02-19 19:03:35.322285] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02C-0-0] [lt=11] Table 'oceanbase.__all_server' doesn't exist [2024-02-19 19:03:35.322298] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02C-0-0] [lt=11] resolve base or alias table factor failed(ret=-5019) [2024-02-19 19:03:35.322309] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02C-0-0] [lt=12] resolve basic table failed(ret=-5019) [2024-02-19 19:03:35.322321] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02C-0-0] [lt=10] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-02-19 19:03:35.322332] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02C-0-0] [lt=11] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-02-19 19:03:35.322345] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02C-0-0] [lt=11] resolve normal query failed(ret=-5019) [2024-02-19 19:03:35.322357] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02C-0-0] [lt=11] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073) [2024-02-19 19:03:35.322380] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02C-0-0] [lt=12] failed to resolve(ret=-5019) [2024-02-19 19:03:35.322394] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02C-0-0] [lt=13] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:35.322410] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02C-0-0] [lt=12] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:35.322423] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02C-0-0] [lt=12] fail to handle physical plan(ret=-5019) [2024-02-19 19:03:35.322438] WARN [SQL] stmt_query (ob_sql.cpp:175) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02C-0-0] [lt=12] fail to handle text query(stmt=SELECT zone FROM __all_server where 
svr_ip='172.1.3.242' and svr_port=2882, ret=-5019) [2024-02-19 19:03:35.322451] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02C-0-0] [lt=12] executor execute failed(ret=-5019) [2024-02-19 19:03:35.322466] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02C-0-0] [lt=13] execute failed(ret=-5019, executor={ObIExecutor:, sql:"SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882"}, retry_cnt=0) [2024-02-19 19:03:35.322488] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02C-0-0] [lt=16] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-02-19 19:03:35.322510] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02C-0-0] [lt=19] result set close failed(ret=-5019) [2024-02-19 19:03:35.322522] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02C-0-0] [lt=11] result set close failed(ret=-5019) [2024-02-19 19:03:35.322534] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02C-0-0] [lt=11] failed to close result(close_ret=-5019, ret=-5019) [2024-02-19 19:03:35.322561] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02C-0-0] [lt=9] failed to process record(executor={ObIExecutor:, sql:"SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882"}, record_ret=-5019, ret=-5019) [2024-02-19 19:03:35.322578] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02C-0-0] [lt=16] failed to process final(executor={ObIExecutor:, sql:"SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882"}, aret=-5019, ret=-5019) [2024-02-19 19:03:35.322594] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=13] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882) [2024-02-19 19:03:35.322609] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=14] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-02-19 19:03:35.322623] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=12] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1) [2024-02-19 19:03:35.322637] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=13] query failed(ret=-5019, conn=0x7fdd189bc050, start=1708340615321900, sql=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882) [2024-02-19 19:03:35.322652] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=15] read failed(ret=-5019) [2024-02-19 19:03:35.322666] WARN get_my_sql_result_ (ob_table_access_helper.h:329) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=11] GCTX.sql_proxy_ read failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", MTL_ID()=1, tenant_id=1, columns=0x7fdcfffccd78, table=__all_server, condition=where svr_ip='172.1.3.242' and svr_port=2882, 
sql=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882, columns_str="zone") [2024-02-19 19:03:35.322688] WARN read_single_row (ob_table_access_helper.h:178) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=17] get mysql result failed(ret=-4016, ret="OB_ERR_UNEXPECTED", MTL_ID()=1, columns=0x7fdcfffccd78, table=__all_server, where_condition=where svr_ip='172.1.3.242' and svr_port=2882) [2024-02-19 19:03:35.322770] WARN [COORDINATOR] get_self_zone_name (table_accessor.cpp:517) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=15] get zone from __all_server failed(ret=-4016, ret="OB_ERR_UNEXPECTED", columns=0x7fdcfffccd78, where_condition="where svr_ip='172.1.3.242' and svr_port=2882", zone_name_holder=) [2024-02-19 19:03:35.322791] WARN [COORDINATOR] get_all_ls_election_reference_info (table_accessor.cpp:450) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=19] get self zone name failed(ret=-4016, ret="OB_ERR_UNEXPECTED", all_ls_election_reference_info=[]) [2024-02-19 19:03:35.322805] WARN [COORDINATOR] get_all_ls_election_reference_info (table_accessor.cpp:459) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=13] zone name is empty(ret=-4016, ret="OB_ERR_UNEXPECTED", all_ls_election_reference_info=[]) [2024-02-19 19:03:35.322819] WARN [COORDINATOR] refresh (ob_leader_coordinator.cpp:107) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=12] get all ls election reference info failed(ret=-4016, ret="OB_ERR_UNEXPECTED", MTL_ID()=1) [2024-02-19 19:03:35.328975] WARN [COORDINATOR] get_ls_election_reference_info (ob_leader_coordinator.cpp:138) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC87-0-0] [lt=127] can not find this ls_id in all_ls_election_reference_info_(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1}, all_ls_election_reference_info=[]) [2024-02-19 19:03:35.329005] WARN [COORDINATOR] refresh_ (election_priority_v1.cpp:144) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC87-0-0] [lt=30] fail to get ls election reference info(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, *this={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}) [2024-02-19 19:03:35.329023] WARN [COORDINATOR] operator() (election_priority_impl.cpp:246) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC87-0-0] [lt=17] refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id_={id:1}, element={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}) [2024-02-19 19:03:35.329045] WARN iterate (ob_tuple.h:272) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC87-0-0] [lt=20] assign element failed(ret=-4018, std::get(tuple)={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}) [2024-02-19 19:03:35.329056] WARN [COORDINATOR] refresh (election_priority_impl.cpp:261) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC87-0-0] [lt=10] refresh priority failed(ret=-4018, 
ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1}, *this={priority:{is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}}) [2024-02-19 19:03:35.329590] INFO [CLOG] do_fetch_log_ (ob_remote_fetch_log.cpp:154) [1107644][T1_LogRessvr][T1][YB42AC0103F2-000611B921578198-0-0] [lt=50] print do_fetch_log_(lsn={lsn:18446744073709551615}, max_fetch_lsn={lsn:18446744073709551615}, need_schedule=false, proposal_id=-1, last_fetch_ts=-1, size=0, ls={ls_meta:{tenant_id:1, ls_id:{id:1}, replica_type:0, ls_create_status:1, clog_checkpoint_ts:1707209832548318068, clog_base_lsn:{lsn:23419564032}, rebuild_seq:0, migration_status:0, gc_state_:1, offline_ts_ns_:-1, restore_status:{status:0}, replayable_point:-1, tablet_change_checkpoint_ts:1707751112415295196, all_id_meta:{id_meta:[{limited_id:1707751122157059767, latest_log_ts:1707751105505586716}, {limited_id:46000001, latest_log_ts:1707741702196260609}, {limited_id:290000001, latest_log_ts:1707637636773992411}]}}, log_handler:{role:1, proposal_id:138, palf_env_:0x7fdd02a44030, is_in_stop_state_:false, is_inited_:true}, restore_handler:{is_inited:true, is_in_stop_state:false, id:1, proposal_id:9223372036854775807, role:2, parent:null, context:{issued:false, last_fetch_ts:-1, max_submit_lsn:{lsn:18446744073709551615}, max_fetch_lsn:{lsn:18446744073709551615}, error_context:{ret_code:0, trace_id:Y0-0000000000000000-0-0}}}, is_inited:true, tablet_gc_handler:{tablet_persist_trigger:0, is_inited:true}}) [2024-02-19 19:03:35.330596] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=28] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.330619] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=23] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.338007] INFO [COMMON] clean_garbage_node (ob_kvcache_map.cpp:645) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=18] Cache wash clean map node details(ret=0, clean_node_count=0, clean_time=36547, clean_start_pos=1100995, clean_num=31457) [2024-02-19 19:03:35.340733] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=25] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", 
disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.340759] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=27] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.342894] INFO [COMMON] replace_fragment_node (ob_kvcache_map.cpp:695) [1106689][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=27] Cache replace map node details(ret=0, replace_node_count=0, replace_time=19025, replace_start_pos=1305424, replace_num=15728) [2024-02-19 19:03:35.350877] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=26] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.350912] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=39] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.361136] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=28] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.361182] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=48] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 
0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.370216] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=14] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:35.370256] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=40] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340615370204}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:35.370276] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=19] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340615370204}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:35.371328] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.371364] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=36] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.381513] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=50] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.381549] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=38] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.383614] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) 
[1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=21] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340615383592}) [2024-02-19 19:03:35.383649] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=35] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340615370204}}) [2024-02-19 19:03:35.386132] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15] get wrs ts(ls_id={id:1}, delta_ns=-1706042771799909278, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807) [2024-02-19 19:03:35.386160] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=26] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1) [2024-02-19 19:03:35.391686] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=26] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.391730] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=46] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.401859] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.401892] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) 
BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.412016] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.412064] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=52] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.422181] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=26] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.422214] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=36] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.430734] INFO [STORAGE.TRANS] dump_mapper_info (ob_lock_wait_mgr.h:63) [1108319][T1_LockWaitMgr][T1][Y0-0000000000000000-0-0] [lt=27] report RowHolderMapper summary info(count=0, bkt_cnt=252) [2024-02-19 19:03:35.432362] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=27] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.432392] ERROR [PALF] 
try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=32] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.436524] WARN [STORAGE.TRANS] acquire_global_snapshot__ (ob_trans_service_v4.cpp:1472) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=14] acquire global snapshot fail(ret=-4012, gts_ahead=0, expire_ts=1708340615435151, now={mts:1708340585505381}, now0={mts:1708340585505381}, snapshot=-1, uncertain_bound=0) [2024-02-19 19:03:35.436570] WARN [STORAGE.TRANS] get_read_snapshot (ob_tx_api.cpp:552) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=46] acquire global snapshot fail(ret=-4012, tx={this:0x7fdcd5aa11f0, tx_id:{txid:0}, state:1, addr:"172.1.3.242:2882", tenant_id:1, session_id:1, xid:NULL, access_mode:0, tx_consistency_type:0, isolation:1, snapshot_version:-1, snapshot_scn:0, active_scn:-1, op_sn:1, alloc_ts:1708340585504583, active_ts:-1, commit_ts:-1, finish_ts:-1, timeout_us:-1, lock_timeout_us:-1, expire_ts:9223372036854775807, coord_id:{id:-1}, parts:[], exec_info_reap_ts:0, commit_version:-1, commit_cb:null, cluster_id:-1, cluster_version:0, flags_.SHADOW:false, flags_.INTERRUPTED:false, flags_.BLOCK:false, flags_.REPLICA:false, can_elr:false, cflict_txs:[], abort_cause:0, commit_expire_ts:0, commit_task_.is_registered():false, ref:1}) [2024-02-19 19:03:35.436618] WARN [SQL.EXE] stmt_setup_snapshot_ (ob_sql_trans_control.cpp:614) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=39] fail to get snapshot(ret=-4012, local_ls_id={id:1}, session={this:0x7fdd425960c0, id:1, tenant:"sys", tenant_id:1, effective_tenant:"sys", effective_tenant_id:1, database:"oceanbase", user:"root@%", consistency_level:3, session_state:0, tx:0x7fdcd5aa11f0}) [2024-02-19 19:03:35.436648] WARN [SQL.EXE] start_stmt (ob_sql_trans_control.cpp:481) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=30] fail to exec stmt_setup_snapshot_(session, das_ctx, plan, plan_ctx, txs)(ret=-4012, session_id=1, *tx_desc={this:0x7fdcd5aa11f0, tx_id:{txid:0}, state:1, addr:"172.1.3.242:2882", tenant_id:1, session_id:1, xid:NULL, access_mode:0, tx_consistency_type:0, isolation:1, snapshot_version:-1, snapshot_scn:0, active_scn:-1, op_sn:1, alloc_ts:1708340585504583, active_ts:-1, commit_ts:-1, finish_ts:-1, timeout_us:-1, lock_timeout_us:-1, expire_ts:9223372036854775807, coord_id:{id:-1}, parts:[], exec_info_reap_ts:0, commit_version:-1, commit_cb:null, cluster_id:-1, cluster_version:0, flags_.SHADOW:false, flags_.INTERRUPTED:false, flags_.BLOCK:false, flags_.REPLICA:false, can_elr:false, cflict_txs:[], abort_cause:0, commit_expire_ts:0, commit_task_.is_registered():false, ref:1}) [2024-02-19 19:03:35.436672] INFO [SQL.EXE] start_stmt (ob_sql_trans_control.cpp:530) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=22] start stmt(ret=-4012, auto_commit=true, session_id=1, snapshot={this:0x7fdd290d4120, valid:false, source:0, core:{version:-1, tx_id:{txid:0}, scn:-1}, uncertain_bound:0, snapshot_lsid:{id:-1}, parts:[]}, savepoint=0, tx_desc={this:0x7fdcd5aa11f0, 
tx_id:{txid:0}, state:1, addr:"172.1.3.242:2882", tenant_id:1, session_id:1, xid:NULL, access_mode:0, tx_consistency_type:0, isolation:1, snapshot_version:-1, snapshot_scn:0, active_scn:-1, op_sn:1, alloc_ts:1708340585504583, active_ts:-1, commit_ts:-1, finish_ts:-1, timeout_us:-1, lock_timeout_us:-1, expire_ts:9223372036854775807, coord_id:{id:-1}, parts:[], exec_info_reap_ts:0, commit_version:-1, commit_cb:null, cluster_id:-1, cluster_version:0, flags_.SHADOW:false, flags_.INTERRUPTED:false, flags_.BLOCK:false, flags_.REPLICA:false, can_elr:false, cflict_txs:[], abort_cause:0, commit_expire_ts:0, commit_task_.is_registered():false, ref:1}, plan_type=1, stmt_type=1, has_for_update=false, query_start_time=1708340585505181, use_das=false, session={this:0x7fdd425960c0, id:1, tenant:"sys", tenant_id:1, effective_tenant:"sys", effective_tenant_id:1, database:"oceanbase", user:"root@%", consistency_level:3, session_state:0, tx:0x7fdcd5aa11f0}, plan=0x7fdcda010050, consistency_level_in_plan_ctx=3, trans_result={incomplete:false, parts:[], touched_ls_list:[], cflict_txs:[]}) [2024-02-19 19:03:35.436712] WARN [SQL] start_stmt (ob_result_set.cpp:282) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=38] fail to start stmt(ret=-4012, phy_plan->get_dependency_table()=[{table_id:1, schema_version:0, object_type:1, is_db_explicit:false, is_existed:true}]) [2024-02-19 19:03:35.436726] WARN [SQL] do_open_plan (ob_result_set.cpp:451) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=10] fail start stmt(ret=-4012) [2024-02-19 19:03:35.436734] WARN [SQL] open (ob_result_set.cpp:150) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=6] execute plan failed(ret=-4012) [2024-02-19 19:03:35.436745] WARN [SERVER] open (ob_inner_sql_result.cpp:146) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=9] open result set failed(ret=-4012) [2024-02-19 19:03:35.436754] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:607) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=6] result set open failed(ret=-4012, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_schema_status' ORDER BY row_id, column_name"}) [2024-02-19 19:03:35.436767] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=13] execute failed(ret=-4012, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_schema_status' ORDER BY row_id, column_name"}, retry_cnt=0) [2024-02-19 19:03:35.436778] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=7] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-4012, err_:"OB_TIMEOUT", retry_type:0, client_ret:-4012}, need_retry=false) [2024-02-19 19:03:35.436818] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=22] result set close failed(ret=-4012) [2024-02-19 19:03:35.436828] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=9] result set close failed(ret=-4012) [2024-02-19 19:03:35.436836] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=7] failed to close result(close_ret=-4012, ret=-4012) [2024-02-19 
19:03:35.436867] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1106719][RSAsyncTask2][T1][YB42AC0103F2-000611B922878A4F-0-0] [lt=9] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_schema_status' ORDER BY row_id, column_name"}, record_ret=-4012, ret=-4012) [2024-02-19 19:03:35.436883] INFO [SERVER] process_final (ob_inner_sql_connection.cpp:574) [1106719][RSAsyncTask2][T0][YB42AC0103F2-000611B922878A4F-0-0] [lt=11] slow inner sql(last_ret=-4012, sql={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_schema_status' ORDER BY row_id, column_name"}, process_time=29931699) [2024-02-19 19:03:35.436894] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1106719][RSAsyncTask2][T0][YB42AC0103F2-000611B922878A4F-0-0] [lt=11] failed to process final(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_schema_status' ORDER BY row_id, column_name"}, aret=-4012, ret=-4012) [2024-02-19 19:03:35.436908] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1106719][RSAsyncTask2][T0][YB42AC0103F2-000611B922878A4F-0-0] [lt=10] execute sql failed(ret=-4012, tenant_id=1, sql=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_schema_status' ORDER BY row_id, column_name) [2024-02-19 19:03:35.436920] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1106719][RSAsyncTask2][T0][YB42AC0103F2-000611B922878A4F-0-0] [lt=11] retry_while_no_tenant_resource failed(ret=-4012, tenant_id=1) [2024-02-19 19:03:35.436953] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1106719][RSAsyncTask2][T0][YB42AC0103F2-000611B922878A4F-0-0] [lt=31] execute_read failed(ret=-4012, cluster_id=1, tenant_id=1) [2024-02-19 19:03:35.436973] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1106719][RSAsyncTask2][T0][YB42AC0103F2-000611B922878A4F-0-0] [lt=17] query failed(ret=-4012, conn=0x7fdd42596050, start=1708340585505152, sql=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_schema_status' ORDER BY row_id, column_name) [2024-02-19 19:03:35.436986] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1106719][RSAsyncTask2][T0][YB42AC0103F2-000611B922878A4F-0-0] [lt=12] read failed(ret=-4012) [2024-02-19 19:03:35.436998] WARN [SHARE] load (ob_core_table_proxy.cpp:436) [1106719][RSAsyncTask2][T0][YB42AC0103F2-000611B922878A4F-0-0] [lt=9] execute sql failed(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, sql=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_schema_status' ORDER BY row_id, column_name) [2024-02-19 19:03:35.437084] WARN [SHARE] load (ob_core_table_proxy.cpp:368) [1106719][RSAsyncTask2][T0][YB42AC0103F2-000611B922878A4F-0-0] [lt=9] load failed(ret=-4012, for_update=false) [2024-02-19 19:03:35.437096] WARN [SHARE] load_refresh_schema_status (ob_schema_status_proxy.cpp:221) [1106719][RSAsyncTask2][T0][YB42AC0103F2-000611B922878A4F-0-0] [lt=11] fail to load core table(ret=-4012) [2024-02-19 19:03:35.437106] INFO [SHARE] load_refresh_schema_status (ob_schema_status_proxy.cpp:265) [1106719][RSAsyncTask2][T0][YB42AC0103F2-000611B922878A4F-0-0] [lt=9] [SCHEMA_STATUS] load refreshed schema status(ret=-4012) [2024-02-19 19:03:35.437235] WARN [SHARE] run2 (ob_async_task_queue.cpp:148) [1106719][RSAsyncTask2][T0][YB42AC0103F2-000611B922878A4F-0-0] 
[lt=8] task process failed, start retry(max retry time=0, retry interval=1000000, ret=-4012) [2024-02-19 19:03:35.437329] INFO [SHARE] store (ob_rs_mgr.cpp:128) [1106720][RSAsyncTask3][T0][Y0-000611B92307888A-0-0] [lt=12] store rs list succeed(agent=0, addr_list=[{server:"172.1.3.242:2882", role:1, sql_port:2881}], force=true) [2024-02-19 19:03:35.443798] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=24] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:35.443830] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=33] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.443845] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=13] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:35.444429] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=10] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:35.444432] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=17] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:35.444441] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=12] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.444451] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=9] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:35.444453] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=20] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.444466] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=12] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:35.444554] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=26] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.444577] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=24] clog disk space is almost full(total_size(MB)=2048, 
used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.445058] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=10] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:35.445078] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=18] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.445089] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=10] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:35.445344] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=7] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:35.445363] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=19] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.445372] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=8] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:35.445688] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=8] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:35.445712] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=23] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.445728] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=14] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:35.445931] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=7] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:35.446082] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=150] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.446096] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=13] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", 
tmp_ret=-4038) [2024-02-19 19:03:35.446328] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=16] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:35.446351] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=21] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.446363] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=12] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:35.446713] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=11] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:35.446738] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=23] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.446752] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=13] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:35.447046] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=12] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:35.447068] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=21] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.447079] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=9] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:35.447645] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=12] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:35.447657] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=7] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:35.447665] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=18] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.447670] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=13] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.447675] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) 
[1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=10] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:35.447679] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=7] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:35.448230] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=7] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:35.448251] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=20] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.448265] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=13] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:35.448410] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=7] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:35.448434] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=24] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.448445] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=10] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:35.448682] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:527) [1106788][RpcIO][T0][Y0-0000000000000000-0-0] [lt=20] [RPC EASY STAT](log_str=conn count=1/1, request done=19527/19527, request doing=0/0) [2024-02-19 19:03:35.448864] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=26] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:35.448879] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=14] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.448888] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=8] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:35.448916] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:527) [1106789][RpcIO][T0][Y0-0000000000000000-0-0] [lt=17] [RPC EASY STAT](log_str=conn count=1/1, request done=19527/19527, request doing=0/0) [2024-02-19 19:03:35.449108] INFO [STORAGE.TRANS] get_number (ob_timestamp_access.cpp:49) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=9] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:35.449132] WARN 
[STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=23] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.449145] WARN [STORAGE.TRANS] get_gts (ob_gts_source.cpp:234) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=12] get_gts_from_local_timestamp_service fail(leader="172.1.3.242:2882", server="172.1.3.242:2882", tmp_ret=-4038) [2024-02-19 19:03:35.449477] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=7] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.450087] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=17] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.450340] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=18] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.450713] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=36] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.450943] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=28] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.451312] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=19] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.451553] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=18] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.451920] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=19] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.452264] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=21] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.452537] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=26] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.452878] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ 
(ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=19] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.453149] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=21] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.453488] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=24] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.453771] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=31] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.454098] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=20] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.454396] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=27] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.454701] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=20] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.454725] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=39] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.454751] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=30] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.454999] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=22] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.455433] WARN [STORAGE.TRANS] 
get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=23] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.455599] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=17] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.456038] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=18] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.456283] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=18] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.456639] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=23] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.456899] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=29] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.457249] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=17] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.457512] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=17] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.457829] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=20] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.458126] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=20] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.458468] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=21] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.458702] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=19] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.459050] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ 
(ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=19] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.459303] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=15] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.459635] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=20] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.459966] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=18] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.460209] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=20] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.460838] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=16] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.460843] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=22] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.461438] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=18] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.461444] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=18] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.462039] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=24] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.462105] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=120] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.462665] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=24] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.462715] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) 
[1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=21] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.463280] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=31] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.463329] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=22] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.463886] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=18] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.463945] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=27] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.464520] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=24] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.464584] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=24] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.464882] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=31] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.464908] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.465147] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=21] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER") [2024-02-19 19:03:35.465177] WARN [STORAGE.TRANS] 
get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=12] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[2024-02-19 19:03:35.465734] WARN [STORAGE.TRANS] get_gts_from_local_timestamp_service_ (ob_gts_source.cpp:292) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=23] global_timestamp_service get gts fail(leader="172.1.3.242:2882", tmp_gts=0, ret=-4038, ret="OB_NOT_MASTER")
[... the same OB_NOT_MASTER warning repeats from threads [1106708][SerScheQueue1] and [1107631][T1_FreInfoReloa] at sub-millisecond intervals through 19:03:35.475681, identical apart from timestamps and lt values and interleaved with the TsMgr entries kept below; the repeats are elided ...]
[2024-02-19 19:03:35.471018] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=9] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0)
[2024-02-19 19:03:35.471039] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=20] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340615471009}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:35.471059] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=17] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340615471009}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:35.471082] INFO [STORAGE.TRANS] refresh_gts (ob_gts_source.cpp:520) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=20] refresh gts(ret=-4038, ret="OB_NOT_MASTER", tenant_id=1, need_refresh=false, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340615471009}})
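NOTE: every ret code in this excerpt is paired with its symbolic name in the log output itself. A convenience mapping collected from those ret= fields (only the codes seen in this excerpt, not the full OceanBase error table):

    # ret codes and the names this excerpt pairs them with
    OB_RET_NAMES = {
        -4018: "OB_ENTRY_NOT_EXIST",   # election reference info lookups below
        -4038: "OB_NOT_MASTER",        # all of the GTS traffic above
        -4076: "OB_NEED_WAIT",         # weak-read cluster heartbeats below
        -5019: "OB_TABLE_NOT_EXIST",   # __all_backup_info resolution below
    }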
[2024-02-19 19:03:35.475049] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=45] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:35.475079] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=30] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[... this recycle_blocks_ WARN / try_recycle_blocks ERROR pair from [1107529][T1_PalfGC] recurs about every 10 ms for the rest of this excerpt (through 19:03:35.750043) with identical sizes and backtrace; all but the first and final occurrences are elided ...]
[2024-02-19 19:03:35.484174] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=23] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340615484154})
[2024-02-19 19:03:35.484215] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=42] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340615471009}})
[2024-02-19
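NOTE: the warn/limit sizes in the try_recycle_blocks ERROR are just the two disk_options_wrapper thresholds applied to the 2048 MB tenant log disk. A quick check (a sketch; integer division stands in for the server's rounding):

    log_disk_size_mb = 2048
    warn_size_mb  = log_disk_size_mb * 80 // 100   # 1638, matches warn_size(MB)=1638
    limit_size_mb = log_disk_size_mb * 95 // 100   # 1945, matches limit_size(MB)=1945
    used_size_mb  = 1945                           # used_size(MB)=1945 in the ERROR line
    assert used_size_mb >= limit_size_mb           # at the stop-writing limit

With usage at the 95% limit and no block recyclable (oldest_timestamp is days old), the usual directions are enlarging the tenant's log_disk_size or letting checkpointing advance the base LSN so blocks become reclaimable; the exact commands depend on the OceanBase version.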
19:03:35.486168] WARN [STORAGE.TRANS] get_cluster_service_master_ (ob_tenant_weak_read_service.cpp:287) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=21] tenant schema is not ready, need wait(ret=-4076, ret="OB_NEED_WAIT", superior_tenant_id=1)
[2024-02-19 19:03:35.486188] WARN [STORAGE.TRANS] post_cluster_heartbeat_rpc_ (ob_tenant_weak_read_service.cpp:797) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=20] get cluster service master fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, cluster_service_tablet_id={id:226})
[2024-02-19 19:03:35.486200] WARN [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:438) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10] tenant weak read service cluster heartbeat RPC fail(rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.1.3.242:2882", cluster_service_tablet_id={id:226})
[2024-02-19 19:03:35.486213] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:756) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version=1708336686671726824, valid_part_count=1, total_part_count=1, generate_timestamp=1708340615486154)
[2024-02-19 19:03:35.486223] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:766) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1708340615286168, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-02-19 19:03:35.486304] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15] get wrs ts(ls_id={id:1}, delta_ns=-1706042771799809168, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807)
[2024-02-19 19:03:35.486322] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1)
[2024-02-19 19:03:35.538997] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1009) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=69] Wash compute wash size(is_wash_valid=true, sys_total_wash_size=1386106880, global_cache_size=4161536, tenant_max_wash_size=4161536, tenant_min_wash_size=4161536, tenant_ids_=[512, 500, 999, 506, 507, 508, 509, 510, 1])
[2024-02-19 19:03:35.539147] INFO [COMMON] wash (ob_kvcache_store.cpp:343) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=79] Wash time detail, (compute_wash_size_time=173, refresh_score_time=66, wash_time=5)
[2024-02-19 19:03:35.558437] INFO [ARCHIVE] stop (ob_archive_scheduler_service.cpp:137) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=9] stop archive scheduler service
[2024-02-19 19:03:35.559393] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAE-0-0] [lt=7] table not exist(tenant_id=1, database_id=201001, table_name=__all_backup_info, ret=-5019)
[2024-02-19 19:03:35.559411] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAE-0-0] [lt=17] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_backup_info, ret=-5019)
[2024-02-19 19:03:35.559422] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAE-0-0] [lt=9] fail to resolve table relation recursively(tenant_id=1, ret=-5019)
[2024-02-19 19:03:35.559430] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAE-0-0] [lt=7] resolve table relation factor failed(ret=-5019, table_name=__all_backup_info)
[2024-02-19 19:03:35.559440] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAE-0-0] [lt=7] fail to resolve table(ret=-5019)
[2024-02-19 19:03:35.559447] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAE-0-0] [lt=6] fail to resolve sys view(ret=-5019)
[2024-02-19 19:03:35.559457] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAE-0-0] [lt=6] Table 'oceanbase.__all_backup_info' doesn't exist
[2024-02-19 19:03:35.559464] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAE-0-0] [lt=6] resolve basic table failed(ret=-5019)
[2024-02-19 19:03:35.559471] WARN [SQL.RESV] resolve_table_list (ob_update_resolver.cpp:423) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAE-0-0] [lt=6] failed to resolve table(ret=-5019)
[2024-02-19 19:03:35.559478] WARN [SQL.RESV] resolve (ob_update_resolver.cpp:76) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAE-0-0] [lt=7] resolve table failed(ret=-5019)
[2024-02-19 19:03:35.559486] WARN [SQL.RESV] stmt_resolver_func (ob_resolver.cpp:155) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAE-0-0] [lt=6] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3074)
[2024-02-19 19:03:35.559500] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAE-0-0] [lt=7] failed to resolve(ret=-5019)
[2024-02-19 19:03:35.559508] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAE-0-0] [lt=7] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-02-19 19:03:35.559517] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAE-0-0] [lt=6] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-02-19 19:03:35.559524] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAE-0-0] [lt=6] fail to handle physical plan(ret=-5019)
[2024-02-19 19:03:35.559532] WARN [SQL] stmt_query (ob_sql.cpp:175) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAE-0-0] [lt=6] fail to handle text query(stmt=update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882', ret=-5019)
[2024-02-19 19:03:35.559541] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAE-0-0] [lt=7] executor execute failed(ret=-5019)
[2024-02-19 19:03:35.559548] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAE-0-0] [lt=7] execute failed(ret=-5019, executor={ObIExecutor:, sql:"update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882'"}, retry_cnt=0)
[2024-02-19 19:03:35.559563] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAE-0-0] [lt=10] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false)
[2024-02-19 19:03:35.559577] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAE-0-0] [lt=12] result set close failed(ret=-5019)
[2024-02-19 19:03:35.559584] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAE-0-0] [lt=6] result set close failed(ret=-5019)
[2024-02-19 19:03:35.559590] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAE-0-0] [lt=6] failed to close result(close_ret=-5019, ret=-5019)
[2024-02-19 19:03:35.559610] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1106733][BackupLease][T1][YB42AC0103F2-000611B923978EAE-0-0] [lt=6] failed to process record(executor={ObIExecutor:, sql:"update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882'"}, record_ret=-5019, ret=-5019)
[2024-02-19 19:03:35.559622] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1106733][BackupLease][T0][YB42AC0103F2-000611B923978EAE-0-0] [lt=8] failed to process final(executor={ObIExecutor:, sql:"update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882'"}, aret=-5019, ret=-5019)
[2024-02-19 19:03:35.559631] WARN [SERVER] execute_write_inner (ob_inner_sql_connection.cpp:1818) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=8] execute sql failed(ret=-5019, tenant_id=1, sql=update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882')
[2024-02-19 19:03:35.559639] INFO [SERVER] execute_write_inner (ob_inner_sql_connection.cpp:1900) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=7] execute write sql(ret=-5019, tenant_id=1, affected_rows=0, sql=update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882')
[2024-02-19 19:03:35.559666] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=6] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1)
[2024-02-19 19:03:35.559674] WARN [SERVER] execute_write (ob_inner_sql_connection.cpp:1786) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=7] execute_write failed(ret=-5019, tenant_id=1, sql=update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882', is_user_sql=false)
[2024-02-19 19:03:35.559681] WARN [SERVER] execute_write (ob_inner_sql_connection.cpp:1775) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=7] execute_write failed(ret=-5019, tenant_id=1, sql="update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882'")
[2024-02-19 19:03:35.559689] WARN [COMMON.MYSQLP] write (ob_mysql_proxy.cpp:133) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=7] execute sql failed(ret=-5019, conn=0x7fdcf4ef4050, start=1708340615558523, sql=update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882')
[2024-02-19 19:03:35.559725] WARN [SERVER] clean_backup_scheduler_leader (ob_backup_operator.cpp:348) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=8] execute sql failed(ret=-5019, sql=update __all_backup_info set value = '' where name = 'backup_scheduler_leader' and value='172.1.3.242:2882')
[2024-02-19 19:03:35.559734] WARN [SERVER] clean_backup_scheduler_leader (ob_backup_manager.cpp:517) [1106733][BackupLease][T0][Y0-0000000000000000-0-0] [lt=8] failed to clean backup scheduler leader(ret=-5019)
[2024-02-19 19:03:35.563748] INFO [COMMON] replace_fragment_node (ob_kvcache_map.cpp:695) [1106689][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=24] Cache replace map node details(ret=0, replace_node_count=0, replace_time=20740, replace_start_pos=1321152, replace_num=15728)
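NOTE: the whole cascade above has a single root cause, the first -5019 from check_table_exist_or_not on __all_backup_info; every later frame only propagates the same ret. A grep-style sketch for pulling that root line out of a log like this one (the file name is hypothetical):

    # print the first -5019 line that actually names the missing table
    with open("observer.log") as f:
        for line in f:
            if "ret=-5019" in line and "table_name=" in line:
                print(line.strip())
                break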
[2024-02-19 19:03:35.571710] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=16] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0)
[2024-02-19 19:03:35.571744] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=35] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340615571699}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:35.571771] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=24] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340615571699}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:35.576784] INFO [COMMON] clean_garbage_node (ob_kvcache_map.cpp:645) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=26] Cache wash clean map node details(ret=0, clean_node_count=0, clean_time=37605, clean_start_pos=1132452, clean_num=31457)
[2024-02-19 19:03:35.578636] WARN [COORDINATOR] get_ls_election_reference_info (ob_leader_coordinator.cpp:138) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC88-0-0] [lt=100] can not find this ls_id in all_ls_election_reference_info_(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1}, all_ls_election_reference_info=[])
[2024-02-19 19:03:35.578665] WARN [COORDINATOR] refresh_ (election_priority_v1.cpp:144) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC88-0-0] [lt=28] fail to get ls election reference info(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, *this={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807})
[2024-02-19 19:03:35.578700] WARN [COORDINATOR] operator() (election_priority_impl.cpp:246) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC88-0-0] [lt=34] refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id_={id:1}, element={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807})
[2024-02-19 19:03:35.578715] WARN iterate (ob_tuple.h:272) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC88-0-0] [lt=13] assign element failed(ret=-4018, std::get(tuple)={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807})
[2024-02-19 19:03:35.578725] WARN [COORDINATOR] refresh (election_priority_impl.cpp:261) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC88-0-0] [lt=10] refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1}, *this={priority:{is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}})
[2024-02-19 19:03:35.584435] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=15] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340615584421})
[2024-02-19 19:03:35.584462] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=27] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340615571699}})
[2024-02-19 19:03:35.586280] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14] get wrs ts(ls_id={id:1}, delta_ns=-1706042771799709412, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807)
[2024-02-19 19:03:35.586303] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=23] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1)
[2024-02-19 19:03:35.590341] INFO [SQL.PC] update_memory_conf (ob_plan_cache.cpp:1499) [1106739][PlanCacheEvict][T1][Y0-0000000000000000-0-0] [lt=18] update plan cache memory config(ob_plan_cache_percentage=5, ob_plan_cache_evict_high_percentage=90, ob_plan_cache_evict_low_percentage=50, tenant_id=1)
[2024-02-19 19:03:35.590369] INFO [SQL.PC] cache_evict (ob_plan_cache.cpp:1130) [1106739][PlanCacheEvict][T1][Y0-0000000000000000-0-0] [lt=25] start lib cache evict(tenant_id=1, mem_hold=2097152, mem_limit=107374180, cache_obj_num=2, cache_node_num=2)
[2024-02-19 19:03:35.590381] INFO [SQL.PC] cache_evict (ob_plan_cache.cpp:1147) [1106739][PlanCacheEvict][T1][Y0-0000000000000000-0-0] [lt=10] end lib cache evict(tenant_id=1, cache_evict_num=0, mem_hold=2097152, mem_limit=107374180, cache_obj_num=2, cache_node_num=2)
[2024-02-19 19:03:35.593792] INFO [SQL.PC] runTimerTask (ob_plan_cache_manager.cpp:291) [1106739][PlanCacheEvict][T0][Y0-0000000000000000-0-0] [lt=8] schedule next cache evict task(evict_interval=1000000)
[2024-02-19 19:03:35.596600] INFO [SQL.PC] runTimerTask (ob_plan_cache_manager.cpp:299) [1106739][PlanCacheEvict][T0][Y0-0000000000000000-0-0] [lt=30] schedule next cache evict task(evict_interval=1000000)
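NOTE: mem_limit in the cache_evict lines is consistent with ob_plan_cache_percentage=5 applied to a roughly 2 GiB tenant memory limit (a back-of-envelope check, not the server's exact computation):

    mem_limit = 107374180                          # from the cache_evict lines above
    implied_tenant_memory = mem_limit * 100 // 5   # invert the 5% setting
    print(implied_tenant_memory)                   # 2147483600 bytes, i.e. ~2 GiB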
[2024-02-19 19:03:35.637144] INFO [STORAGE] scheduler_ls_ha_handler_ (ob_storage_ha_service.cpp:186) [1108342][T1_HAService][T1][Y0-0000000000000000-0-0] [lt=40] start do ls ha handler(ls_id_array_=[{id:1}])
[2024-02-19 19:03:35.660388] INFO [PALF] log_loop_ (log_loop_thread.cpp:106) [1107532][T1_LogLoop][T1][Y0-0000000000000000-0-0] [lt=41] LogLoopThread round_cost_time(round_cost_time=3)
[2024-02-19 19:03:35.667024] INFO do_work (ob_rl_mgr.cpp:704) [1106705][rl_mgr0][T0][Y0-0000000000000000-0-0] [lt=22] swc wakeup.(stat_period_=1000000, ready=false)
[2024-02-19 19:03:35.668964] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:589) [1106795][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=30] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/6, request doing=0/0)
[2024-02-19 19:03:35.668966] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:589) [1106796][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=20] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/1, request doing=0/0)
[2024-02-19 19:03:35.669025] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:589) [1106798][MysqlUnix][T0][Y0-0000000000000000-0-0] [lt=13] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-02-19 19:03:35.670152] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:565) [1106792][BatchIO][T0][Y0-0000000000000000-0-0] [lt=18] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-02-19 19:03:35.670207] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:527) [1106800][RpcUnix][T0][Y0-0000000000000000-0-0] [lt=15] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-02-19 19:03:35.670564] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:565) [1106791][BatchIO][T0][Y0-0000000000000000-0-0] [lt=18] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-02-19 19:03:35.670593] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:565) [1106793][BatchIO][T0][Y0-0000000000000000-0-0] [lt=8] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-02-19 19:03:35.672336] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=11] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0)
[2024-02-19 19:03:35.672372] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=32] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340615672327}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:35.672398] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=24] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340615672327}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:35.684493] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=14] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340615684469})
[2024-02-19 19:03:35.684531] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1106708][SerScheQueue1][T1][YB42AC0103F2-000611B922B78585-0-0] [lt=40] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340615672327}})
[2024-02-19 19:03:35.686277] WARN [STORAGE.TRANS] post_cluster_heartbeat_rpc_ (ob_tenant_weak_read_service.cpp:797) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17] get cluster service master fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, cluster_service_tablet_id={id:226})
[2024-02-19 19:03:35.686299] WARN [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:438) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=22] tenant weak read service cluster heartbeat RPC fail(rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.1.3.242:2882", cluster_service_tablet_id={id:226})
[2024-02-19 19:03:35.686319] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:756) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version=1708336686671726824, valid_part_count=1, total_part_count=1, generate_timestamp=1708340615686263)
[2024-02-19 19:03:35.686333] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:766) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1708340615486238, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-02-19 19:03:35.686414] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=28] get wrs ts(ls_id={id:1}, delta_ns=-1706042771799609476, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807)
[2024-02-19 19:03:35.686433] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1)
block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.709241] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=43] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.716699] INFO [CLOG] run_loop_ (ob_server_log_block_mgr.cpp:587) [1106802][LogLoop][T0][Y0-0000000000000000-0-0] [lt=10] ObServerLogBlockMGR run loop(ret=0, this={dir::"/backup/oceanbase/data/clog/log_pool", dir_fd:15, meta_fd:16, log_pool_meta:{curr_total_size:8589934592, next_total_size:8589934592, status:0}, min_block_id:1275, max_block_id:1372, is_inited:true}) [2024-02-19 19:03:35.716773] INFO [SERVER] decide_disk_size (ob_server_utils.cpp:202) [1106802][LogLoop][T0][Y0-0000000000000000-0-0] [lt=39] decide disk size finished(dir="/backup/oceanbase/data/sstable", suggested_disk_size=8589934592, suggested_disk_percentage=0, default_disk_percentage=60, total_space=246944890880, free_space=220974178304, disk_size=8589934592) [2024-02-19 19:03:35.716789] INFO [SERVER] decide_disk_size (ob_server_utils.cpp:202) [1106802][LogLoop][T0][Y0-0000000000000000-0-0] [lt=17] decide disk size finished(dir="/backup/oceanbase/data/clog", suggested_disk_size=8589934592, suggested_disk_percentage=0, default_disk_percentage=30, total_space=246944890880, free_space=220974178304, disk_size=8589934592) [2024-02-19 19:03:35.716809] INFO [SERVER] cal_all_part_disk_size (ob_server_utils.cpp:164) [1106802][LogLoop][T0][Y0-0000000000000000-0-0] [lt=18] decide_all_disk_size succ(data_dir="/backup/oceanbase/data/sstable", clog_dir="/backup/oceanbase/data/clog", suggested_data_disk_size=8589934592, suggested_data_disk_percentage=0, data_default_disk_percentage=60, clog_default_disk_percentage=30, shared_mode=true, data_disk_size=8589934592, log_disk_size=8589934592) [2024-02-19 19:03:35.719399] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=33] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.719429] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=31] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, 
warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.729577] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=27] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.729611] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.739766] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=27] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.739812] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=48] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.749953] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=48] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.750043] ERROR [PALF] try_recycle_blocks 
(palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=91] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.751421] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E2-0-0] [lt=9] table not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, ret=-5019) [2024-02-19 19:03:35.751455] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E2-0-0] [lt=32] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, ret=-5019) [2024-02-19 19:03:35.751469] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E2-0-0] [lt=13] fail to resolve table relation recursively(tenant_id=1, ret=-5019) [2024-02-19 19:03:35.751480] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E2-0-0] [lt=11] resolve table relation factor failed(ret=-5019, table_name=__all_ls_meta_table) [2024-02-19 19:03:35.751497] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E2-0-0] [lt=12] fail to resolve table(ret=-5019) [2024-02-19 19:03:35.751514] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E2-0-0] [lt=15] fail to resolve sys view(ret=-5019) [2024-02-19 19:03:35.751529] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E2-0-0] [lt=9] Table 'oceanbase.__all_ls_meta_table' doesn't exist [2024-02-19 19:03:35.751545] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E2-0-0] [lt=15] resolve base or alias table factor failed(ret=-5019) [2024-02-19 19:03:35.751555] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E2-0-0] [lt=8] resolve basic table failed(ret=-5019) [2024-02-19 19:03:35.751564] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E2-0-0] [lt=9] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-02-19 19:03:35.751574] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E2-0-0] [lt=8] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-02-19 19:03:35.751583] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E2-0-0] [lt=9] resolve normal query failed(ret=-5019) [2024-02-19 19:03:35.751594] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E2-0-0] [lt=10] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073) [2024-02-19 19:03:35.751613] WARN [SQL] generate_stmt (ob_sql.cpp:2167) 
[1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E2-0-0] [lt=9] failed to resolve(ret=-5019) [2024-02-19 19:03:35.751628] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E2-0-0] [lt=14] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:35.751641] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E2-0-0] [lt=9] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:35.751650] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E2-0-0] [lt=9] fail to handle physical plan(ret=-5019) [2024-02-19 19:03:35.751661] WARN [SQL] stmt_query (ob_sql.cpp:175) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E2-0-0] [lt=8] fail to handle text query(stmt=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port, ret=-5019) [2024-02-19 19:03:35.751679] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E2-0-0] [lt=15] executor execute failed(ret=-5019) [2024-02-19 19:03:35.751689] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E2-0-0] [lt=9] execute failed(ret=-5019, executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, retry_cnt=0) [2024-02-19 19:03:35.751714] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E2-0-0] [lt=18] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-02-19 19:03:35.751751] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E2-0-0] [lt=21] result set close failed(ret=-5019) [2024-02-19 19:03:35.751768] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E2-0-0] [lt=16] result set close failed(ret=-5019) [2024-02-19 19:03:35.751778] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E2-0-0] [lt=8] failed to close result(close_ret=-5019, ret=-5019) [2024-02-19 19:03:35.751803] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E2-0-0] [lt=9] failed to process record(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, record_ret=-5019, ret=-5019) [2024-02-19 19:03:35.751822] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E2-0-0] [lt=17] failed to process final(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, aret=-5019, ret=-5019) [2024-02-19 19:03:35.751835] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E2-0-0] [lt=10] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port) [2024-02-19 19:03:35.751847] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) 
[1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E2-0-0] [lt=10] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-02-19 19:03:35.751865] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E2-0-0] [lt=17] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1) [2024-02-19 19:03:35.751876] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E2-0-0] [lt=10] query failed(ret=-5019, conn=0x7fdcdc89a050, start=1708340615751189, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port) [2024-02-19 19:03:35.751888] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E2-0-0] [lt=12] read failed(ret=-5019) [2024-02-19 19:03:35.751905] WARN [SHARE.PT] get_by_tenant (ob_persistent_ls_table.cpp:612) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E2-0-0] [lt=15] execute sql failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port) [2024-02-19 19:03:35.751994] WARN [SHARE.PT] get_by_tenant (ob_ls_table_operator.cpp:252) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E2-0-0] [lt=11] get all ls info by persistent_ls_ failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1) [2024-02-19 19:03:35.752026] WARN [SHARE] inner_open_ (ob_ls_table_iterator.cpp:104) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E2-0-0] [lt=31] fail to get ls infos by tenant(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, inner_table_only=true) [2024-02-19 19:03:35.752037] WARN [SHARE] next (ob_ls_table_iterator.cpp:71) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E2-0-0] [lt=11] fail to open iterator(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-02-19 19:03:35.752047] WARN [SERVER] build_replica_map_ (ob_tenant_meta_checker.cpp:331) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E2-0-0] [lt=8] ls table iterator next failed(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-02-19 19:03:35.752058] WARN [SERVER] check_ls_table_ (ob_tenant_meta_checker.cpp:213) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E2-0-0] [lt=8] build replica map from ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1) [2024-02-19 19:03:35.752069] WARN [SERVER] check_ls_table (ob_tenant_meta_checker.cpp:193) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E2-0-0] [lt=9] check ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1) [2024-02-19 19:03:35.752088] WARN [SERVER] runTimerTask (ob_tenant_meta_checker.cpp:43) [1108321][T1_LSMetaCh][T1][YB42AC0103F2-000611B9221790E2-0-0] [lt=17] fail to check ls meta table(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-02-19 19:03:35.760177] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=29] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.760219] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) 
[1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=44] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.770369] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=32] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.770428] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=62] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.772920] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=26] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-02-19 19:03:35.772968] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=46] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340615772909}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:35.772990] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=19] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340615772909}, range_size:1, sender:"172.1.3.242:2882"}) [2024-02-19 19:03:35.773006] INFO [STORAGE.TRANS] statistics (ob_gts_source.cpp:70) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=13] gts statistics(tenant_id=1, gts_rpc_cnt=0, get_gts_cache_cnt=6616, get_gts_with_stc_cnt=15948, try_get_gts_cache_cnt=0, try_get_gts_with_stc_cnt=0, wait_gts_elapse_cnt=0, try_wait_gts_elapse_cnt=0) [2024-02-19 19:03:35.777954] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1009) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=234] Wash compute wash size(is_wash_valid=true, sys_total_wash_size=1386106880, global_cache_size=4161536, tenant_max_wash_size=4161536, tenant_min_wash_size=4161536, tenant_ids_=[512, 500, 999, 506, 507, 508, 509, 510, 1]) [2024-02-19 19:03:35.778139] INFO [COMMON] wash (ob_kvcache_store.cpp:343) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=105] Wash time detail, (compute_wash_size_time=225, refresh_score_time=74, wash_time=7) [2024-02-19 19:03:35.780561] WARN 
[PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.780587] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=25] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.782478] INFO [COMMON] replace_fragment_node (ob_kvcache_map.cpp:695) [1106689][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=59] Cache replace map node details(ret=0, replace_node_count=0, replace_time=18477, replace_start_pos=1336880, replace_num=15728) [2024-02-19 19:03:35.784638] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=18] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340615784624}) [2024-02-19 19:03:35.784661] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=24] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340615772909}}) [2024-02-19 19:03:35.786422] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=25] get wrs ts(ls_id={id:1}, delta_ns=-1706042771799509447, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807) [2024-02-19 19:03:35.786444] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=20] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1) [2024-02-19 19:03:35.790699] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=26] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.790730] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) 
[1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=31] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.800840] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=32] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.800871] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=33] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.810896] INFO [COMMON] clean_garbage_node (ob_kvcache_map.cpp:645) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=40] Cache wash clean map node details(ret=0, clean_node_count=0, clean_time=32712, clean_start_pos=1163909, clean_num=31457) [2024-02-19 19:03:35.810974] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=27] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.810994] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=19] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.821110] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=24] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", 
log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.821154] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=46] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.822749] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02D-0-0] [lt=12] table not exist(tenant_id=1, database_id=201001, table_name=__all_server, ret=-5019) [2024-02-19 19:03:35.822779] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02D-0-0] [lt=41] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_server, ret=-5019) [2024-02-19 19:03:35.822790] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02D-0-0] [lt=9] fail to resolve table relation recursively(tenant_id=1, ret=-5019) [2024-02-19 19:03:35.822801] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02D-0-0] [lt=8] resolve table relation factor failed(ret=-5019, table_name=__all_server) [2024-02-19 19:03:35.822811] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02D-0-0] [lt=11] fail to resolve table(ret=-5019) [2024-02-19 19:03:35.822817] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02D-0-0] [lt=5] fail to resolve sys view(ret=-5019) [2024-02-19 19:03:35.822826] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02D-0-0] [lt=6] Table 'oceanbase.__all_server' doesn't exist [2024-02-19 19:03:35.822834] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02D-0-0] [lt=6] resolve base or alias table factor failed(ret=-5019) [2024-02-19 19:03:35.822842] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02D-0-0] [lt=7] resolve basic table failed(ret=-5019) [2024-02-19 19:03:35.822851] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02D-0-0] [lt=8] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-02-19 19:03:35.822858] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02D-0-0] [lt=6] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-02-19 19:03:35.822866] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) 
[1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02D-0-0] [lt=6] resolve normal query failed(ret=-5019) [2024-02-19 19:03:35.822873] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02D-0-0] [lt=7] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073) [2024-02-19 19:03:35.822894] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02D-0-0] [lt=15] failed to resolve(ret=-5019) [2024-02-19 19:03:35.822902] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02D-0-0] [lt=8] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:35.822911] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02D-0-0] [lt=6] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-02-19 19:03:35.822920] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02D-0-0] [lt=8] fail to handle physical plan(ret=-5019) [2024-02-19 19:03:35.822933] WARN [SQL] stmt_query (ob_sql.cpp:175) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02D-0-0] [lt=10] fail to handle text query(stmt=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882, ret=-5019) [2024-02-19 19:03:35.822951] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02D-0-0] [lt=16] executor execute failed(ret=-5019) [2024-02-19 19:03:35.822962] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02D-0-0] [lt=9] execute failed(ret=-5019, executor={ObIExecutor:, sql:"SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882"}, retry_cnt=0) [2024-02-19 19:03:35.822987] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02D-0-0] [lt=18] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-02-19 19:03:35.823011] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02D-0-0] [lt=21] result set close failed(ret=-5019) [2024-02-19 19:03:35.823019] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02D-0-0] [lt=8] result set close failed(ret=-5019) [2024-02-19 19:03:35.823034] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02D-0-0] [lt=14] failed to close result(close_ret=-5019, ret=-5019) [2024-02-19 19:03:35.823062] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02D-0-0] [lt=9] failed to process record(executor={ObIExecutor:, sql:"SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882"}, record_ret=-5019, ret=-5019) [2024-02-19 19:03:35.823091] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1107047][T1_Occam][T1][YB42AC0103F2-000611B922F7A02D-0-0] [lt=17] failed to process final(executor={ObIExecutor:, sql:"SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882"}, aret=-5019, ret=-5019) [2024-02-19 19:03:35.823112] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] 
[lt=18] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882) [2024-02-19 19:03:35.823124] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=10] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-02-19 19:03:35.823133] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=9] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1) [2024-02-19 19:03:35.823144] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=9] query failed(ret=-5019, conn=0x7fdcdc9be050, start=1708340615822484, sql=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882) [2024-02-19 19:03:35.823156] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=12] read failed(ret=-5019) [2024-02-19 19:03:35.823168] WARN get_my_sql_result_ (ob_table_access_helper.h:329) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=9] GCTX.sql_proxy_ read failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", MTL_ID()=1, tenant_id=1, columns=0x7fdcfffccd78, table=__all_server, condition=where svr_ip='172.1.3.242' and svr_port=2882, sql=SELECT zone FROM __all_server where svr_ip='172.1.3.242' and svr_port=2882, columns_str="zone") [2024-02-19 19:03:35.823188] WARN read_single_row (ob_table_access_helper.h:178) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=15] get mysql result failed(ret=-4016, ret="OB_ERR_UNEXPECTED", MTL_ID()=1, columns=0x7fdcfffccd78, table=__all_server, where_condition=where svr_ip='172.1.3.242' and svr_port=2882) [2024-02-19 19:03:35.823278] WARN [COORDINATOR] get_self_zone_name (table_accessor.cpp:517) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=11] get zone from __all_server failed(ret=-4016, ret="OB_ERR_UNEXPECTED", columns=0x7fdcfffccd78, where_condition="where svr_ip='172.1.3.242' and svr_port=2882", zone_name_holder=) [2024-02-19 19:03:35.823299] WARN [COORDINATOR] get_all_ls_election_reference_info (table_accessor.cpp:450) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=20] get self zone name failed(ret=-4016, ret="OB_ERR_UNEXPECTED", all_ls_election_reference_info=[]) [2024-02-19 19:03:35.823309] WARN [COORDINATOR] get_all_ls_election_reference_info (table_accessor.cpp:459) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=9] zone name is empty(ret=-4016, ret="OB_ERR_UNEXPECTED", all_ls_election_reference_info=[]) [2024-02-19 19:03:35.823320] WARN [COORDINATOR] refresh (ob_leader_coordinator.cpp:107) [1107047][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=8] get all ls election reference info failed(ret=-4016, ret="OB_ERR_UNEXPECTED", MTL_ID()=1) [2024-02-19 19:03:35.828774] WARN [COORDINATOR] get_ls_election_reference_info (ob_leader_coordinator.cpp:138) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC89-0-0] [lt=205] can not find this ls_id in all_ls_election_reference_info_(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1}, all_ls_election_reference_info=[]) [2024-02-19 19:03:35.828813] WARN [COORDINATOR] refresh_ (election_priority_v1.cpp:144) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC89-0-0] [lt=39] fail to get ls election reference info(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, *this={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], 
is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}) [2024-02-19 19:03:35.828837] WARN [COORDINATOR] operator() (election_priority_impl.cpp:246) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC89-0-0] [lt=23] refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id_={id:1}, element={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}) [2024-02-19 19:03:35.828856] WARN iterate (ob_tuple.h:272) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC89-0-0] [lt=16] assign element failed(ret=-4018, std::get(tuple)={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}) [2024-02-19 19:03:35.828869] WARN [COORDINATOR] refresh (election_priority_impl.cpp:261) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC89-0-0] [lt=13] refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1}, *this={priority:{is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}}) [2024-02-19 19:03:35.831279] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.831312] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=33] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.831951] INFO [STORAGE] runTimerTask (ob_checkpoint_service.cpp:326) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=13] ====== check clog disk timer task ====== [2024-02-19 19:03:35.831972] INFO [PALF] get_disk_usage (palf_env_impl.cpp:820) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=18] get_disk_usage(ret=0, capacity(MB):=2048, used(MB):=1945) [2024-02-19 19:03:35.833097] INFO [STORAGE.TRANS] get_rec_log_ts (ob_trans_ctx_mgr_v4.cpp:1283) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=10] succ to get rec log ts(*this={this:0x7fdce3204030, ls_id:{id:1}, tenant_id:1, 
state:"F_WORKING", total_tx_ctx_count:0, leader_takeover_ts:{mts:0}, is_leader_serving:false, max_replay_commit_version:1707751112415295196, ls_retain_ctx_mgr:{retain_ctx_list_.size():0}, aggre_rec_log_ts:-1, prev_aggre_rec_log_ts:-1, online_ts:0, uref:1073741825}, aggre_rec_log_ts=9223372036854775807) [2024-02-19 19:03:35.833130] INFO [STORAGE.TRANS] get_rec_log_ts (ob_tx_ctx_memtable.cpp:231) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=31] tx ctx memtable get rec log ts(this={ObITable:{this:0x7fdce5f6e080, key:{tablet_id:{id:49401}, column_group_idx:0, table_type:"TX_CTX_MEMTABLE", log_ts_range:{start_log_ts:1, end_log_ts:1708337131277985}}, ref_cnt:2, upper_trans_version:-4007, timestamp:0}, this:0x7fdce5f6e080, snapshot_version:1708337131277985, ls_id:{id:1}, is_frozen:false}, rec_log_ts=9223372036854775807) [2024-02-19 19:03:35.833162] INFO [STORAGE.TABLELOCK] get_rec_log_ts (ob_lock_memtable.cpp:739) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=25] rec_log_ts of ObLockMemtable is (rec_log_ts_=9223372036854775807, flushed_log_ts_=1707033175148098668, freeze_log_ts_=0, max_committed_log_ts_=-1, is_frozen_=false, ls_id_={id:1}) [2024-02-19 19:03:35.833175] INFO [STORAGE.TRANS] get_rec_log_ts (ob_ls_tx_service.cpp:437) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=11] [CHECKPOINT] ObLSTxService::get_rec_log_ts(common_checkpoint_type="TX_DATA_MEMTABLE_TYPE", common_checkpoints_[min_rec_log_ts_common_checkpoint_type_index]={ObIMemtableMgr:{Memtables:this:0x7fdce89de180, ref_cnt:1, is_inited:true, tablet_id:{id:49402}, freezer:0x7fdce89e30d0, table_type:1, memtable_head:0, memtable_tail:2, t3m:0x7fdd18bce030, tables:[0x7fdce5eea080, 0x7fdce5eea360, null, null, null, null, null, null, null, null, null, null, null, null, null, null]}, is_freezing:false, ls_id:{id:1}, tx_data_table:0x7fdce89e4550, ls_tablet_svr:0x7fdce89de160, slice_allocator:0x7fdce89e4590}, min_rec_log_ts=1707209832548318068, ls_id_={id:1}) [2024-02-19 19:03:35.835183] INFO [STORAGE.TRANS] get_rec_log_ts (ob_id_service.cpp:300) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=20] get rec log ts(service_type_=0, rec_log_ts=9223372036854775807) [2024-02-19 19:03:35.835201] INFO [STORAGE.TRANS] get_rec_log_ts (ob_id_service.cpp:300) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=19] get rec log ts(service_type_=1, rec_log_ts=9223372036854775807) [2024-02-19 19:03:35.835212] INFO [STORAGE.TRANS] get_rec_log_ts (ob_id_service.cpp:300) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=9] get rec log ts(service_type_=2, rec_log_ts=9223372036854775807) [2024-02-19 19:03:35.835226] INFO [STORAGE] update_clog_checkpoint (ob_checkpoint_executor.cpp:158) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=8] [CHECKPOINT] clog checkpoint no change(checkpoint_ts=1707209832548318068, checkpoint_ts_in_ls_meta=1707209832548318068, ls_id={id:1}, service_type="TRANS_SERVICE") [2024-02-19 19:03:35.835246] INFO [STORAGE] cannot_recycle_log_over_threshold_ (ob_checkpoint_service.cpp:239) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=14] cannot_recycle_log_size statistics(cannot_recycle_log_size=1905773194, threshold=644245094) [2024-02-19 19:03:35.837378] INFO [PALF] locate_by_lsn_coarsely (palf_handle_impl.cpp:1605) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=10] locate_by_lsn_coarsely(ret=0, ret="OB_SUCCESS", this={palf_id:1, self:"172.1.3.242:2882", has_set_deleted:false}, lsn={lsn:24563027948}, committed_lsn={lsn:25325337226}, 
result_ts_ns=1707530339417374084) [2024-02-19 19:03:35.837678] INFO [STORAGE] advance_checkpoint_by_flush (ob_checkpoint_executor.cpp:226) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=300] advance checkpoint by flush to avoid clog disk full(recycle_ts=1707530339417374084, end_lsn={lsn:25325337226}, clog_checkpoint_lsn={lsn:23419564032}, calcu_recycle_lsn={lsn:24563027948}, ls_->get_ls_id()={id:1}) [2024-02-19 19:03:35.837708] INFO [STORAGE] advance_checkpoint_by_flush (ob_checkpoint_executor.cpp:244) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=26] start flush(recycle_ts=1707530339417374084, ls_->get_clog_checkpoint_ts()=1707209832548318068, ls_->get_ls_id()={id:1}) [2024-02-19 19:03:35.839136] INFO [STORAGE.TRANS] get_rec_log_ts (ob_trans_ctx_mgr_v4.cpp:1283) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=14] succ to get rec log ts(*this={this:0x7fdce3204030, ls_id:{id:1}, tenant_id:1, state:"F_WORKING", total_tx_ctx_count:0, leader_takeover_ts:{mts:0}, is_leader_serving:false, max_replay_commit_version:1707751112415295196, ls_retain_ctx_mgr:{retain_ctx_list_.size():0}, aggre_rec_log_ts:-1, prev_aggre_rec_log_ts:-1, online_ts:0, uref:1073741825}, aggre_rec_log_ts=9223372036854775807) [2024-02-19 19:03:35.839177] INFO [STORAGE.TRANS] get_rec_log_ts (ob_tx_ctx_memtable.cpp:231) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=43] tx ctx memtable get rec log ts(this={ObITable:{this:0x7fdce5f6e080, key:{tablet_id:{id:49401}, column_group_idx:0, table_type:"TX_CTX_MEMTABLE", log_ts_range:{start_log_ts:1, end_log_ts:1708337131277985}}, ref_cnt:2, upper_trans_version:-4007, timestamp:0}, this:0x7fdce5f6e080, snapshot_version:1708337131277985, ls_id:{id:1}, is_frozen:false}, rec_log_ts=9223372036854775807) [2024-02-19 19:03:35.839209] INFO [STORAGE] freeze (ob_tx_data_memtable_mgr.cpp:192) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=26] start freeze tx data memtable(ls_id_={id:1}) [2024-02-19 19:03:35.839225] INFO [STORAGE] freeze_ (ob_tx_data_memtable_mgr.cpp:228) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=13] There is a freezed memetable existed. 
Try freeze after flushing it.(ret=-4023, ret="OB_EAGAIN", get_memtable_count_()=2) [2024-02-19 19:03:35.839239] WARN [STORAGE] freeze (ob_tx_data_memtable_mgr.cpp:206) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=13] freeze tx data memtable fail.(ret=-4023, ret="OB_EAGAIN") [2024-02-19 19:03:35.839262] WARN [STORAGE] flush (ob_tx_data_memtable_mgr.cpp:430) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=22] freeze failed(ret=-4023, ret="OB_EAGAIN", this=0x7fdce89de180) [2024-02-19 19:03:35.839275] WARN [STORAGE.TRANS] flush (ob_ls_tx_service.cpp:451) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=12] obCommonCheckpoint flush failed(tmp_ret=-4023, common_checkpoints_[i]=0x7fdce89de250) [2024-02-19 19:03:35.839289] INFO [STORAGE.TABLELOCK] get_rec_log_ts (ob_lock_memtable.cpp:739) [1108333][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=12] rec_log_ts of ObLockMemtable is (rec_log_ts_=9223372036854775807, flushed_log_ts_=1707033175148098668, freeze_log_ts_=0, max_committed_log_ts_=-1, is_frozen_=false, ls_id_={id:1}) [2024-02-19 19:03:35.841434] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.841465] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=31] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 [2024-02-19 19:03:35.851605] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=31] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}}) [2024-02-19 19:03:35.851640] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=39] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3 
[2024-02-19 19:03:35.863280] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=27] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:35.863343] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=64] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:35.873543] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=14] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0)
[2024-02-19 19:03:35.873483] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:35.873567] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=23] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340615873535}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:35.873591] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=21] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340615873535}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:35.873573] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=92] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:35.883830] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=38] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:35.883864] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:35.884773] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=16] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340615884752})
[2024-02-19 19:03:35.884797] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=25] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340615873535}})
[2024-02-19 19:03:35.884809] WARN [SHARE.LOCATION] batch_process_tasks (ob_ls_location_service.cpp:485) [1106741][SysLocAsyncUp0][T0][YB42AC0103F2-000611B9212AA0F4-0-0] [lt=23] tenant schema is not ready, need wait(ret=0, ret="OB_SUCCESS", superior_tenant_id=1, tasks=[{cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340615884752}])
[2024-02-19 19:03:35.886431] INFO [STORAGE.TRANS] print_stat_ (ob_tenant_weak_read_service.cpp:524) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=19] [WRS] [TENANT_WEAK_READ_SERVICE] [STAT](tenant_id=1, server_version={version:1708336686671726824, total_part_count:1, valid_inner_part_count:1, valid_user_part_count:0}, server_version_delta=3929214697215, in_cluster_service=false, cluster_version=0, min_cluster_version=0, max_cluster_version=0, get_cluster_version_err=0, cluster_version_delta=1708340615886424039, cluster_service_master="0.0.0.0:0", cluster_service_tablet_id={id:226}, post_cluster_heartbeat_count=0, succ_cluster_heartbeat_count=0, cluster_heartbeat_interval=1000000, local_cluster_version=0, local_cluster_delta=1708340615886424039, force_self_check=false, weak_read_refresh_interval=100000)
[2024-02-19 19:03:35.886484] WARN [STORAGE.TRANS] post_cluster_heartbeat_rpc_ (ob_tenant_weak_read_service.cpp:797) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=42] get cluster service master fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, cluster_service_tablet_id={id:226})
[2024-02-19 19:03:35.886511] WARN [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:438) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=25] tenant weak read service cluster heartbeat RPC fail(rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.1.3.242:2882", cluster_service_tablet_id={id:226})
[2024-02-19 19:03:35.886536] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340615886527})
[2024-02-19 19:03:35.886551] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:756) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version=1708336686671726824, valid_part_count=1, total_part_count=1, generate_timestamp=1708340615886470)
[2024-02-19 19:03:35.886564] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:766) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1708340615686360, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-02-19 19:03:35.886596] INFO [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:738) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=19] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] [SELF_CHECK] current server is WRS leader, need start CLUSTER weak read service(tenant_id=1, serve_leader_epoch=0, cur_leader_epoch=138, cluster_service_tablet_id_={id:226}, in_service=false, can_update_version=false, start_service_tstamp_=0, error_count_for_change_leader_=0, last_error_tstamp_for_change_leader_=0)
[2024-02-19 19:03:35.886621] INFO [STORAGE.TRANS] start_service (ob_tenant_weak_read_cluster_service.cpp:336) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] begin start service(tenant_id=1, is_in_service()=false, can_update_version=false)
[2024-02-19 19:03:35.886634] INFO [STORAGE.TRANS] start_service (ob_tenant_weak_read_cluster_service.cpp:338) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12] start TenantWeakReadClusterService(tenant_id=1)
[2024-02-19 19:03:35.887507] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:5885) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F2-0-0] [lt=6] table not exist(tenant_id=1, database_id=201001, table_name=__all_weak_read_service, ret=-5019)
[2024-02-19 19:03:35.887530] WARN [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:5830) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F2-0-0] [lt=22] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_weak_read_service, ret=-5019)
[2024-02-19 19:03:35.887540] WARN [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:5610) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F2-0-0] [lt=9] fail to resolve table relation recursively(tenant_id=1, ret=-5019)
[2024-02-19 19:03:35.887548] WARN [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:5446) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F2-0-0] [lt=7] resolve table relation factor failed(ret=-5019, table_name=__all_weak_read_service)
[2024-02-19 19:03:35.887557] WARN [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:1384) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F2-0-0] [lt=6] fail to resolve table(ret=-5019)
[2024-02-19 19:03:35.887564] WARN [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:1435) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F2-0-0] [lt=7] fail to resolve sys view(ret=-5019)
[2024-02-19 19:03:35.887575] WARN resolve_basic_table (ob_dml_resolver.cpp:1527) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F2-0-0] [lt=6] Table 'oceanbase.__all_weak_read_service' doesn't exist
[2024-02-19 19:03:35.887582] WARN [SQL.RESV] resolve_basic_table (ob_select_resolver.cpp:4185) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F2-0-0] [lt=7] resolve base or alias table factor failed(ret=-5019)
[2024-02-19 19:03:35.887589] WARN [SQL.RESV] resolve_table (ob_dml_resolver.cpp:1932) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F2-0-0] [lt=6] resolve basic table failed(ret=-5019)
[2024-02-19 19:03:35.887595] WARN [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3976) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F2-0-0] [lt=6] fail to exec resolve_table(*table_node, table_item)(ret=-5019)
[2024-02-19 19:03:35.887601] WARN [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1059) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F2-0-0] [lt=6] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019)
[2024-02-19 19:03:35.887608] WARN [SQL.RESV] resolve (ob_select_resolver.cpp:1258) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F2-0-0] [lt=6] resolve normal query failed(ret=-5019)
[2024-02-19 19:03:35.887615] WARN [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:170) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F2-0-0] [lt=5] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3073)
[2024-02-19 19:03:35.887628] WARN [SQL] generate_stmt (ob_sql.cpp:2167) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F2-0-0] [lt=7] failed to resolve(ret=-5019)
[2024-02-19 19:03:35.887635] WARN [SQL] generate_physical_plan (ob_sql.cpp:2291) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F2-0-0] [lt=7] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-02-19 19:03:35.887644] WARN [SQL] handle_physical_plan (ob_sql.cpp:3779) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F2-0-0] [lt=6] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-02-19 19:03:35.887654] WARN [SQL] handle_text_query (ob_sql.cpp:1917) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F2-0-0] [lt=9] fail to handle physical plan(ret=-5019)
[2024-02-19 19:03:35.887662] WARN [SQL] stmt_query (ob_sql.cpp:175) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F2-0-0] [lt=6] fail to handle text query(stmt=select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = '', ret=-5019)
[2024-02-19 19:03:35.887670] WARN [SERVER] do_query (ob_inner_sql_connection.cpp:595) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F2-0-0] [lt=7] executor execute failed(ret=-5019)
[2024-02-19 19:03:35.887678] WARN [SERVER] query (ob_inner_sql_connection.cpp:733) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F2-0-0] [lt=6] execute failed(ret=-5019, executor={ObIExecutor:, sql:"select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = ''"}, retry_cnt=0)
[2024-02-19 19:03:35.887692] WARN [SERVER] after_func (ob_query_retry_ctrl.cpp:830) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F2-0-0] [lt=10] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false)
[2024-02-19 19:03:35.887706] WARN [SERVER] inner_close (ob_inner_sql_result.cpp:211) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F2-0-0] [lt=12] result set close failed(ret=-5019)
[2024-02-19 19:03:35.887713] WARN [SERVER] force_close (ob_inner_sql_result.cpp:191) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F2-0-0] [lt=6] result set close failed(ret=-5019)
[2024-02-19 19:03:35.887719] WARN [SERVER] query (ob_inner_sql_connection.cpp:738) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F2-0-0] [lt=6] failed to close result(close_ret=-5019, ret=-5019)
[2024-02-19 19:03:35.887743] WARN [SERVER] query (ob_inner_sql_connection.cpp:763) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F2-0-0] [lt=7] failed to process record(executor={ObIExecutor:, sql:"select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = ''"}, record_ret=-5019, ret=-5019)
[2024-02-19 19:03:35.887756] WARN [SERVER] query (ob_inner_sql_connection.cpp:780) [1108330][T1_TenantWeakRe][T1][YB42AC0103F2-000611B923A797F2-0-0] [lt=12] failed to process final(executor={ObIExecutor:, sql:"select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = ''"}, aret=-5019, ret=-5019)
[2024-02-19 19:03:35.887769] WARN [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:2003) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10] execute sql failed(ret=-5019, tenant_id=1, sql=select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = '')
[2024-02-19 19:03:35.887791] WARN [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1164) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=22] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1)
[2024-02-19 19:03:35.887801] WARN [SERVER] execute_read (ob_inner_sql_connection.cpp:1943) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9] execute_read failed(ret=-5019, cluster_id=1, tenant_id=1)
[2024-02-19 19:03:35.887812] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:108) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9] query failed(ret=-5019, conn=0x7fdcdc924050, start=1708340615887327, sql=select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = '')
[2024-02-19 19:03:35.887825] WARN [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:63) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12] read failed(ret=-5019)
[2024-02-19 19:03:35.887836] WARN [STORAGE.TRANS] query_cluster_version_range_ (ob_tenant_weak_read_cluster_service.cpp:192) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9] execute sql read fail(ret=-5019, ret="OB_TABLE_NOT_EXIST", exec_tenant_id=1, tenant_id=1, sql=select min_version, max_version from __all_weak_read_service where tenant_id = 1 and level_id = 0 and level_value = '')
[2024-02-19 19:03:35.887901] WARN [STORAGE.TRANS] start_service (ob_tenant_weak_read_cluster_service.cpp:367) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12] query cluster version range from WRS table fail(ret=-5019, ret="OB_TABLE_NOT_EXIST")
[2024-02-19 19:03:35.887917] INFO [STORAGE.TRANS] start_service (ob_tenant_weak_read_cluster_service.cpp:415) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] start service done(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, in_service=false, leader_epoch=0, current_version=0, delta=1708340615887913, min_version=0, max_version=0, max_stale_time=5000000000, all_valid_server_count=0, total_time=1306, wlock_time=33, check_leader_time=2, query_version_time=0, persist_version_time=0)
[2024-02-19 19:03:35.887936] WARN [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:781) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17] start CLUSTER weak read service fail(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1)
[2024-02-19 19:03:35.887949] INFO [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:791) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] [SELF_CHECK] done(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, need_start_service=true, need_stop_service=false, need_change_leader=false, is_in_service()=false, can_update_version=false, cur_leader_epoch=138, start_service_tstamp_=0, error_count_for_change_leader_=0, last_error_tstamp_for_change_leader_=0)
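The resolver cascade above is one failed inner query: ret=-5019 is OB_TABLE_NOT_EXIST (the retry record prints err_:"OB_TABLE_NOT_EXIST"), so the tenant-1 weak read service cannot read __all_weak_read_service, and start_service/self_check fail on every pass. Taken together with the "tenant schema is not ready, need wait" records, this looks like an unrefreshed sys-tenant schema rather than a dropped table. A hedged check, not taken from the log: once schema refresh completes, the same statement should succeed when issued manually from the sys tenant (statement text copied verbatim from the failing inner SQL; only the oceanbase. qualifier is added).

    -- Reproduce the inner query that the weak read service keeps retrying;
    -- while this still fails with -5019, the service will keep cycling.
    SELECT min_version, max_version
      FROM oceanbase.__all_weak_read_service
     WHERE tenant_id = 1 AND level_id = 0 AND level_value = '';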
[2024-02-19 19:03:35.888015] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14] get wrs ts(ls_id={id:1}, delta_ns=-1706042771799407610, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807)
[2024-02-19 19:03:35.888027] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=1, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1)
[2024-02-19 19:03:35.888054] INFO [STORAGE.TRANS] generate_new_version (ob_tenant_weak_read_server_version_mgr.cpp:120) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15] [WRS] update tenant weak read server version(tenant_id=1, server_version={version:1708336686671726824, total_part_count:1, valid_inner_part_count:1, valid_user_part_count:0, epoch_tstamp:1708340615887963}, version_delta=-1706628346055838771)
[2024-02-19 19:03:35.888071] INFO [COMMON] print_io_status (ob_io_struct.cpp:619) [1106661][IO_TUNING0][T0][Y0-0000000000000000-0-0] [lt=25] [IO STATUS](tenant_ids=[1, 500], send_thread_count=2, send_queues=[0, 0])
[2024-02-19 19:03:35.894186] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=28] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:35.894241] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=56] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:35.904386] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:35.904431] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=50] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:35.914560] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:35.914602] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=44] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:35.924773] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:35.924832] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=86] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:35.929491] WARN [SERVER] batch_process_tasks (ob_ls_table_updater.cpp:333) [1106713][LSMetaTblUp0][T0][YB42AC0103F2-000611B9217D2F4A-0-0] [lt=38] tenant schema is not ready, need wait(ret=-4076, ret="OB_NEED_WAIT", superior_tenant_id=1, task={tenant_id:1, ls_id:{id:1}, add_timestamp:1708337390831403})
[2024-02-19 19:03:35.934969] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:35.935020] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=52] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:35.941022] WARN [SERVER] get_network_speed_from_sysfs (ob_server.cpp:2113) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=10] get invalid Ethernet speed, use default(devname="ens18")
[2024-02-19 19:03:35.941054] WARN [SERVER] runTimerTask (ob_server.cpp:2632) [1106653][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=32] ObRefreshNetworkSpeedTask reload bandwidth throttle limit failed(ret=-4002, ret="OB_INVALID_ARGUMENT")
[2024-02-19 19:03:35.945161] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=36] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:35.945191] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=33] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:35.955316] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=26] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:35.955363] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=49] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:35.965611] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=38] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:35.965645] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:35.974191] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=9] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0)
[2024-02-19 19:03:35.974235] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=45] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340615974177}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:35.974260] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=23] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340615974177}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:35.974278] WARN [STORAGE.TRANS] operator() (ob_ts_mgr.h:225) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=13] refresh gts failed(ret=-4038, ret="OB_NOT_MASTER", gts_tenant_info={v:1})
[2024-02-19 19:03:35.974289] INFO [STORAGE.TRANS] operator() (ob_ts_mgr.h:229) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=11] refresh gts functor(ret=-4038, ret="OB_NOT_MASTER", gts_tenant_info={v:1})
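A second recurring loop sits in the TsMgr records: roughly every 100 ms the server posts a GTS request to itself (leader="172.1.3.242:2882"), the local ObTimestampAccess answers as a FOLLOWER, and the refresh fails with ret=-4038 OB_NOT_MASTER, leaving gts:0 in gts_local_cache. That matches LS 1 on this node not holding an effective leadership (serve_leader_epoch=0 in the earlier [SELF_CHECK] record). A hedged way to confirm, assuming an OceanBase 4.x deployment where the GV$OB_LOG_STAT view exists:

    -- Check whether any replica currently reports the leader role for LS 1
    -- of tenant 1 (view and column names assumed from OceanBase 4.x).
    SELECT svr_ip, svr_port, role
      FROM oceanbase.GV$OB_LOG_STAT
     WHERE tenant_id = 1 AND ls_id = 1;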
[2024-02-19 19:03:35.977176] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:35.977207] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=32] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:35.984949] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=14] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340615984918})
[2024-02-19 19:03:35.984986] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=39] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340615974177}})
[2024-02-19 19:03:35.986929] WARN [STORAGE.TRANS] post_cluster_heartbeat_rpc_ (ob_tenant_weak_read_service.cpp:797) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10] get cluster service master fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, cluster_service_tablet_id={id:226})
[2024-02-19 19:03:35.986953] WARN [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:438) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=24] tenant weak read service cluster heartbeat RPC fail(rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.1.3.242:2882", cluster_service_tablet_id={id:226})
[2024-02-19 19:03:35.986975] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:756) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version=1708336686671726824, valid_part_count=1, total_part_count=1, generate_timestamp=1708340615986915)
[2024-02-19 19:03:35.986986] WARN [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:766) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1708340615886583, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-02-19 19:03:35.987046] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8] get wrs ts(ls_id={id:1}, delta_ns=-1706042771799308193, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807)
[2024-02-19 19:03:35.987061] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1)
[2024-02-19 19:03:35.987351] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=28] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:35.987382] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=33] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:35.997521] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=33] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:35.997566] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=48] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:36.004457] INFO [COMMON] replace_fragment_node (ob_kvcache_map.cpp:695) [1106689][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=29] Cache replace map node details(ret=0, replace_node_count=0, replace_time=20079, replace_start_pos=1352608, replace_num=15728)
[2024-02-19 19:03:36.007693] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=29] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:36.007724] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=32] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:36.011536] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1009) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=159] Wash compute wash size(is_wash_valid=true, sys_total_wash_size=1386106880, global_cache_size=4161536, tenant_max_wash_size=4161536, tenant_min_wash_size=4161536, tenant_ids_=[512, 500, 999, 506, 507, 508, 509, 510, 1])
[2024-02-19 19:03:36.011685] INFO [COMMON] wash (ob_kvcache_store.cpp:343) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=77] Wash time detail, (compute_wash_size_time=177, refresh_score_time=68, wash_time=5)
[2024-02-19 19:03:36.017828] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=26] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:36.017856] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=30] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:36.027968] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=26] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:36.028019] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=54] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:36.038130] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=28] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:36.038162] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=35] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:36.041073] INFO [SERVER] loop (ob_safe_destroy_thread.cpp:133) [1106767][SafeDestroy][T0][Y0-0000000000000000-0-0] [lt=12] ObSafeDestroyTaskQueue::loop begin(queue_.size()=0)
[2024-02-19 19:03:36.041101] INFO [SERVER] loop (ob_safe_destroy_thread.cpp:140) [1106767][SafeDestroy][T0][Y0-0000000000000000-0-0] [lt=17] ObSafeDestroyTaskQueue::loop finish(ret=0, queue_.size()=0)
[2024-02-19 19:03:36.044571] INFO [COMMON] clean_garbage_node (ob_kvcache_map.cpp:645) [1106688][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=22] Cache wash clean map node details(ret=0, clean_node_count=0, clean_time=32860, clean_start_pos=1195366, clean_num=31457)
[2024-02-19 19:03:36.048292] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=31] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:36.048326] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:36.058473] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=40] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:36.058513] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=44] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:36.068634] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=27] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:36.068679] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=47] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:36.074872] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=9] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0)
[2024-02-19 19:03:36.074909] WARN [STORAGE.TRANS] post (ob_gts_rpc.cpp:226) [1106784][TsMgr][T1][Y0-0000000000000000-0-0] [lt=37] post local gts request failed(ret=-4038, ret="OB_NOT_MASTER", server="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340616074860}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:36.074929] WARN [STORAGE.TRANS] query_gts_ (ob_gts_source.cpp:605) [1106784][TsMgr][T0][Y0-0000000000000000-0-0] [lt=19] post gts request failed(ret=-4038, ret="OB_NOT_MASTER", leader="172.1.3.242:2882", msg={tenant_id:1, srr:{mts:1708340616074860}, range_size:1, sender:"172.1.3.242:2882"})
[2024-02-19 19:03:36.078997] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=40] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:36.079033] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:36.079171] WARN [COORDINATOR] get_ls_election_reference_info (ob_leader_coordinator.cpp:138) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC8A-0-0] [lt=117] can not find this ls_id in all_ls_election_reference_info_(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1}, all_ls_election_reference_info=[])
[2024-02-19 19:03:36.079202] WARN [COORDINATOR] refresh_ (election_priority_v1.cpp:144) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC8A-0-0] [lt=30] fail to get ls election reference info(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, *this={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807})
[2024-02-19 19:03:36.079217] WARN [COORDINATOR] operator() (election_priority_impl.cpp:246) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC8A-0-0] [lt=14] refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id_={id:1}, element={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807})
[2024-02-19 19:03:36.079234] WARN iterate (ob_tuple.h:272) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC8A-0-0] [lt=14] assign element failed(ret=-4018, std::get(tuple)={is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807})
[2024-02-19 19:03:36.079247] WARN [COORDINATOR] refresh (election_priority_impl.cpp:261) [1109024][T1_TNT_L0_G2][T1][YB42AC0103F2-000611B92267BC8A-0-0] [lt=13] refresh priority failed(ret=-4018, ret="OB_ENTRY_NOT_EXIST", MTL_ID()=1, ls_id={id:1}, *this={priority:{is_valid:false, is_observer_stopped:false, is_server_stopped:false, is_zone_stopped:false, fatal_failures:[], is_primary_region:false, serious_failures:[], is_in_blacklist:false, in_blacklist_reason:, log_ts:0, is_manual_leader:false, zone_priority:9223372036854775807}})
[2024-02-19 19:03:36.085074] INFO [SHARE.LOCATION] add_update_task (ob_ls_location_service.cpp:449) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=17] add update task in local_async_queue_set_(ret=0, ret="OB_SUCCESS", task={cluster_id:1, tenant_id:1, ls_id:{id:1}, add_timestamp:1708340616085059})
[2024-02-19 19:03:36.085109] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:624) [1107631][T1_FreInfoReloa][T1][YB42AC0103F2-000611B922278C07-0-0] [lt=35] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:{mts:0}, gts:0, barrier_ts:0, latest_srr:{mts:1708340616074860}})
[2024-02-19 19:03:36.087004] INFO [STORAGE.TRANS] generate_weak_read_timestamp_ (ob_ls_wrs_handler.cpp:175) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=18] get wrs ts(ls_id={id:1}, delta_ns=-1706042771799208997, timestamp=1707751112415295196, min_tx_service_ts=9223372036854775807)
[2024-02-19 19:03:36.087032] INFO [STORAGE.TRANS] print_stat_info (ob_keep_alive_ls_handler.cpp:210) [1108330][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=26] [Keep Alive Stat] LS Keep Alive Info(tenant_id=1, LS_ID={id:1}, Not_Master_Cnt=0, Near_To_GTS_Cnt=0, Other_Error_Cnt=0, Submit_Succ_Cnt=0, last_log_ts=1707751112323271497, last_lsn={lsn:25325336648}, last_gts=0, min_start_scn=0, min_start_status=1)
[2024-02-19 19:03:36.089181] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=40] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:36.089218] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=38] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:36.099368] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:36.099411] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=45] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:36.109538] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=30] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:36.109572] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=37] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:36.119716] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=26] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:36.119763] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=49] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
[2024-02-19 19:03:36.120348] INFO [SERVER] runTimerTask (ob_eliminate_task.cpp:199) [1107573][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=41] sql audit evict task end(evict_high_mem_level=32212254, evict_high_size_level=90000, evict_batch_count=0, elapse_time=1, size_used=14936, mem_used=31196160)
[2024-02-19 19:03:36.129932] WARN [PALF] recycle_blocks_ (palf_env_impl.cpp:1025) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=57] there is not any block can be recycled, need verify the baselsn of PalfHandleImpl whether has been advanced(ret=0, this={self:"172.1.3.242:2882", log_dir:"/backup/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, disk_opts_for_recycling_blocks:{log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95}, status:1}})
[2024-02-19 19:03:36.129994] ERROR [PALF] try_recycle_blocks (palf_env_impl.cpp:766) [1107529][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=40] clog disk space is almost full(total_size(MB)=2048, used_size(MB)=1945, used_percent(%)=95, warn_size(MB)=1638, warn_percent(%)=80, limit_size(MB)=1945, limit_percent(%)=95, maximum_used_size(MB)=1945, maximum_log_stream=1, oldest_log_stream=1, oldest_timestamp=1707200283752293320) BACKTRACE:0xb61bbbb 0xb60d4f6 0x3d2bb93 0x3d2b871 0x3d2b65c 0x3d2b48e 0x3dc0933 0x3a3a675 0x3a3a29b 0x3a391f1 0xb5fc3ac 0xb5ffbb7 0xb5fa7ea 0x7fdd4c4be14a 0x7fdd4c1eddc3
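The dominant pattern in this excerpt is the T1_PalfGC WARN/ERROR pair repeating roughly every 10 ms: tenant 1's clog quota is exhausted (used_size 1945 MB equals limit_size, i.e. 95% of the 2048 MB log_disk_size; the 80% warn level is 1638 MB), and recycle_blocks_ finds nothing to free because the single log stream's base LSN has not advanced, so PALF stops writing and the garbage-collection check spins. A hedged remediation sketch, assuming OceanBase 4.x syntax (the parameter names are the ones printed in the log's disk_options dump; the unit name and sizes are illustrative, not from the log):

    -- Inspect the current log disk configuration first.
    SHOW PARAMETERS LIKE '%log_disk%';

    -- Preferred: grant the tenant's resource unit more log disk
    -- (unit name is hypothetical; substitute the real one).
    ALTER RESOURCE UNIT sys_unit_config LOG_DISK_SIZE = '8G';

    -- Stopgap: adjust the utilization thresholds the log reports as 80%/95%.
    ALTER SYSTEM SET log_disk_utilization_threshold = 70;
    ALTER SYSTEM SET log_disk_utilization_limit_threshold = 98;

Note that freeing space only helps once checkpointing can advance the base LSN so that blocks become recyclable, which in this log also depends on the replica recovering from the OB_NOT_MASTER condition recorded above.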